Artificial intelligence in healthcare is often touted as a technology that can transform how tasks are carried out across the NHS. Rachel Dunscombe, CEO of the NHS Digital Academy and director for Tektology, and Jane Rendall, UK managing director for Sectra, examine what needs to happen to make sure AI is used safely in healthcare
CREDIT: This is an edited version of an article that originally appeared on Digital Health
In part two of this feature we look at Rachel and Jane's final four suggestions for components that could underpin a much-needed international model for AI, one that would help healthcare to safely accelerate adoption.
New demographic validation
One local geography might have two demographic minorities, while another, only a few miles away, might have a significant mix of ethnic minorities making up around half the population. Healthcare systems, such as the NHS in the UK, usually buy technology for one locality before extending it to other geographies, and this requires new demographic validation.
If the population in question changes, for example through immigration, an extension of services, or some other shift, the algorithm needs to be validated against a new dataset.
Something that operates safely in the UK might not operate safely in parts of South America or China. Bias detection will have allowed for validation in your original population, but you can't test an algorithm on day one against every set of demographics where it might eventually be used.
There are so many ethnicities and groups on this planet that this has to be done in stages. So, as you extend the algorithm across new demographics, you need to revalidate. If a service in Merseyside were extended to Manchester, it would need to be tested again.
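As a rough illustration of what staged revalidation might look like in practice, the sketch below checks a model's sensitivity separately for each demographic subgroup in a newly covered population. The metric, the 90% threshold, and the record fields are all hypothetical assumptions for illustration, not a prescribed standard.

```python
# Hypothetical sketch: revalidating a model's performance per demographic
# subgroup before extending a service to a new geography. The metric,
# threshold, and record fields are illustrative assumptions, not a standard.
from collections import defaultdict

def subgroup_sensitivity(records, min_sensitivity=0.90):
    """records: iterable of dicts with 'group', 'label', 'prediction' keys.
    Returns per-group sensitivity and flags groups below the threshold."""
    tp = defaultdict(int)   # true positives per group
    fn = defaultdict(int)   # false negatives per group
    for r in records:
        if r["label"] == 1:
            if r["prediction"] == 1:
                tp[r["group"]] += 1
            else:
                fn[r["group"]] += 1
    report = {}
    for group in tp.keys() | fn.keys():
        sens = tp[group] / (tp[group] + fn[group])
        report[group] = {"sensitivity": round(sens, 3),
                         "pass": sens >= min_sensitivity}
    return report

# Toy example: validation records from a newly covered population.
new_population = [
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "B", "label": 1, "prediction": 0},
    {"group": "B", "label": 1, "prediction": 1},
]
print(subgroup_sensitivity(new_population))
# Group A passes (1.0); group B fails (0.5) and would block rollout pending review.
```

The point of structuring the check this way is that extension to a new geography is gated on every subgroup passing, not just the aggregate figure.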
Explainable un-blackboxing
Sending doctors to complete data science degrees isn't practical, but we don't yet have a standard way, in pictures or in words, of describing what an algorithm is doing.
If you think about a packet of food, you get an ingredient list; we need a similar, standardised approach for AI. We need to work towards 'explainable un-blackboxing' that includes clinical terminology, but also the common performance measures we find across different industries. If you are going to get a CE mark or certification, it could be standard across health, nuclear, aviation and other sectors. The EU is early in its thinking on how this can work, but discussion has started.
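One way to picture the 'ingredient list' idea is a small, machine-readable model card recording what an algorithm was trained on, how it performs, and where it has been validated. The sketch below is purely illustrative: the field names and figures are invented, since no such standard yet exists.

```python
# Illustrative sketch only: a machine-readable "ingredient list" for an
# algorithm, in the spirit of a food label. Every field name and value here
# is invented for illustration; there is no agreed standard yet.
import json

model_card = {
    "name": "chest-xray-triage",          # hypothetical algorithm
    "version": "2.3.1",
    "intended_use": "Prioritise suspected pneumothorax for review",
    "training_data": {
        "source": "UK teaching hospitals (anonymised)",
        "period": "2015-2019",
        "demographics": {"female": 0.52, "male": 0.48},
    },
    "performance": {                       # common cross-industry measures
        "sensitivity": 0.94,
        "specificity": 0.89,
        "validated_populations": ["UK"],   # see demographic validation above
    },
    "limitations": [
        "Not validated for patients under 16",
        "Performance unverified outside validated populations",
    ],
}

print(json.dumps(model_card, indent=2))
```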
Clinical audit
We need a clinical audit capability in algorithms. If a case is taken to a coroner's court after an untoward incident, we will have to show how an algorithm contributed towards care. This is something we already do with human doctors and nurses; we need to do it with algorithms.
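At a minimum, a clinical audit capability implies a durable record of what the algorithm saw and what it said for every case. The sketch below shows one possible shape for such a log, hash-chained so later tampering is detectable; the fields and the chaining approach are assumptions, not an established scheme.

```python
# Hypothetical sketch: an append-only audit log for algorithm decisions,
# hash-chained so later tampering is detectable. Field names are invented.
import hashlib, json
from datetime import datetime, timezone

audit_log = []

def record_decision(case_id, model_version, inputs_digest, output, confidence):
    """Append one auditable entry describing what the algorithm contributed."""
    prev_hash = audit_log[-1]["entry_hash"] if audit_log else "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "model_version": model_version,
        "inputs_digest": inputs_digest,   # hash of the input data, not the data
        "output": output,
        "confidence": confidence,
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    audit_log.append(entry)
    return entry

# Example: the kind of entry a coroner's court might later ask to see.
record_decision("case-0001", "2.3.1", "a9f0...", "flagged", 0.91)
print(audit_log[-1]["entry_hash"])
```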
Pathway performance over time
In areas like radiology there is an opportunity to examine the performance of an algorithm compared with human reporting. This isn’t about AI replacing humans, but it can help healthcare organisations to make decisions about where and how to make best use of the human in the pathway.
For disciplines like radiology this is key, given the significant human resource challenge faced in some countries. We also need to think about this from the perspective of the patient: if algorithms can report much faster than humans, could keeping humans in the loop actually delay diagnosis, particularly where they are used for double-reading, and could that delay affect surgery or treatment? Are there opportunities to change the pathway, or to use AI to free up human resource to focus on diagnosing more complex cases more quickly?
This is about looking at the performance of the whole pathway and measuring outcomes where AI can make a difference. Playing this back to citizens, at a time when trust in algorithms is still fragile, can help to demonstrate how AI is being used to improve healthcare.
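To make pathway measurement concrete, the sketch below compares report turnaround times for a conventional double-read pathway against a hypothetical AI-first triage pathway on toy data. All figures and pathway definitions are invented for illustration.

```python
# Illustrative sketch: comparing report turnaround (hours) for two pathway
# designs on toy data. All figures are invented for illustration.
from statistics import median

# Hours from scan to final report for a sample of cases (hypothetical).
double_read = [30, 42, 55, 48, 36, 60, 44]   # two human readers per case
ai_triage   = [4, 6, 28, 5, 7, 31, 6]        # AI reports routine cases fast;
                                             # humans focus on complex ones

def summarise(name, hours):
    print(f"{name}: median {median(hours)}h, worst {max(hours)}h")

summarise("Human double-reading", double_read)
summarise("AI-first triage", ai_triage)
# A pathway review would weigh the faster median against how safely the
# complex cases (the slower entries) are routed to human readers.
```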
Looking to address matters
Healthcare organisations are looking to AI to help address a significant number of matters, from the ongoing pandemic to long-established challenges. Failing to bring in AI will mean we hit crisis points, especially in areas like radiology where, in some countries, demand continues to grow by around 10% year-on-year while the number of trainees continues to decline.
But the situation is more complex than simply acquiring algorithms. A standard approach to managing the algorithm life cycle could make all the difference to successful adoption at the pace required.