AI has the potential to improve the quality of care while lowering costs for patients and hospitals, say Dr Mahiben Maruthappu and Dr Varun Buch. But when will it start to make a difference?
Whether ordering a meal, planning a journey or even doing something as simple as unlocking a phone, artificial intelligence (AI) now powers the key interactions between human beings and technology. Many are heralding AI as the ‘new electricity’ because it has the potential to drive the same kind of revolutionary change that electricity brought about a century ago.
Healthcare is one of the sectors where the need for change is most acute; it faces numerous significant challenges, not least the need to cater to a growing and ageing population while costs steadily climb to consume an ever-increasing proportion of global GDP. However, while AI has enormous potential to bring about positive developments in this area, this is yet to happen, and there is little evidence the technology is having any real impact.
Valuable abilities and benefits
One of AI’s many valuable abilities is finding complex patterns in large amounts of data; show an AI application millions of X-rays and it will quickly learn to detect which ones are abnormal. Similarly, train a machine to spot subtle signs of deterioration in a patient’s vital signs and that machine will soon automate the surveillance currently done by humans.
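The pattern-finding described above can be sketched in a few lines. The example below is purely illustrative: it trains a simple logistic-regression classifier on synthetic data standing in for X-rays, where "abnormal" scans carry a slightly brighter patch. The data, dimensions and learning rate are all assumptions for the sketch, not details from any real imaging system, and real systems use far larger deep networks rather than a linear model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an X-ray dataset: each "scan" is a vector of
# pixel intensities; abnormal scans carry a brighter patch on average.
n, d = 1000, 64
normal = rng.normal(0.0, 1.0, size=(n // 2, d))
abnormal = rng.normal(0.0, 1.0, size=(n // 2, d))
abnormal[:, :8] += 1.5  # the subtle pattern the model must learn to find
X = np.vstack([normal, abnormal])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

# Logistic regression trained by gradient descent -- a tiny stand-in for
# the deep networks used on real imaging data.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of "abnormal"
    w -= 0.1 * (X.T @ (p - y) / n)
    b -= 0.1 * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = np.mean((p > 0.5) == y)
```

Even this toy model separates the two classes well, because the task reduces to finding a consistent statistical pattern across many examples; the same principle, at much larger scale, underlies the clinical applications described here.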
Since a single AI system can be scaled to serve a vast number of patients simultaneously, there are clear cost savings in utilising AI. In addition to reducing cost, AI is expected to improve care quality and consistency. For example, a type of AI known as a ‘deep neural network’ is demonstrating superior performance compared to human clinicians in tasks such as detecting the early signs of blindness, or distinguishing between cancerous and non-cancerous skin moles.
The adoption of AI will also improve patients’ experience of healthcare. An emerging theme in ‘healthcare AI’ relates to applications which assist clinicians with repetitive back-office tasks, giving them more time to interact with patients and actually deliver care. Treatments will also become more personalised; rather than treating large groups of patients with the same therapy, clinicians using AI will be able to pinpoint the most effective therapy for a specific patient based on analysis of their electronic health record, smartphone, wearable data and even their genome.
What are the barriers?
Given these compelling benefits in value, quality and patient experience, it is surprising to find only a handful of AI applications currently approved for clinical use. This discrepancy is due to a number of factors. Firstly, there are clear (and appropriate) barriers relating to data privacy and security. The development of healthcare AI is a collaborative process between healthcare organisations, academic institutions and commercial entities. Data must be anonymised, securely stored and legally shared.
Secondly, there is an emerging ‘digital divide’ in healthcare – where the young and well are more likely to utilise newer technologies, and older and sicker people either don’t have access or don’t have the skills to fully engage with these opportunities. This means that technologies such as artificial intelligence are limited in how impactful they can be and may struggle to gain traction with some population segments. A potential solution could be to routinely support older people, or their caregivers, in engaging with digital health. Furthermore, technology could be redesigned so that, instead of navigating through a series of confusing menus, users can interact more naturally with technology – for example, by having a conversation with a smart assistant – adapting technology to people, rather than people to technology.
Another significant barrier is the general inertia in healthcare when it comes to embracing new technology. Many hospitals still record data using pen and paper and, where they do use technology, there is poor interoperability with other hospitals. This makes finding the large datasets required to train AI very difficult. Furthermore, there is a growing public perception that AI and ‘big data’ are sinister technologies. This is hardly surprising, given recent events such as the first fatality involving a self-driving car in March 2018 or the Facebook data-sharing scandal with Cambridge Analytica.
Who is to blame?
With such negative public opinion, it is difficult to gain patient consent and persuade healthcare professionals to actively participate in the essential testing and validation needed to develop a new medical treatment. The medical profession needs to work hard to reposition AI in healthcare as being definitively different to AI in other sectors.
Finally, what happens when AI makes a mistake and who is to blame? Currently, this important question remains without a clear answer. One of the problems is that AI systems are so complex that they don’t present a series of logical steps through which their recommendations were derived; if it is not possible to see how a decision was made, it is difficult to ascribe blame, or take clear steps to prevent the same mistake happening again. Making machine learning algorithms more interpretable is presently an area of significant research activity within the AI community.
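Part of what that interpretability research aims to recover is per-feature attribution: which inputs drove a given recommendation, and by how much. For a simple linear model this is trivial, as the sketch below shows; for deep networks it is an open research problem. The feature names, weights and readings here are invented for illustration only and have no clinical basis.

```python
import numpy as np

# Toy "patient record": three named inputs and a linear risk score whose
# weights are assumed for illustration, not clinically derived.
features = np.array(["heart_rate", "blood_pressure", "oxygen_sat"])
weights = np.array([0.8, 0.3, -1.2])   # model parameters (assumed)
patient = np.array([1.5, 0.2, -2.0])   # standardised readings (assumed)

# For a linear model, each input's contribution to the score is simply
# weight * value -- the kind of per-feature explanation that
# interpretability research tries to recover for deep networks.
contributions = weights * patient
ranking = features[np.argsort(-np.abs(contributions))]
```

Here the largest contribution comes from the low oxygen saturation, so a clinician could see, and challenge, the reasoning behind the score; it is precisely this transparency that complex deep networks currently lack.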
So, when will AI arrive in healthcare? This is a question rather akin to asking ‘When will the internet arrive?’ in the early nineties. The process of AI integration will involve incremental progress as patients, clinicians and policy makers begin to appreciate the potential for AI to improve the quality, and lower the cost, of care. The development of appropriate frameworks for data sharing, creation of robust performance validation methodology by regulators and improvements in technological infrastructure within healthcare will help to accelerate the transition.
One point, however, is certain – when AI becomes prevalent in healthcare, it will be revolutionary.
This edited article first appeared on IT Portal.