The improvements needed to use AI in healthcare – part one

Artificial intelligence is often touted as a technology that can transform how tasks are carried out across the NHS. Rachel Dunscombe, CEO of the NHS Digital Academy and director for Tektology, and Jane Rendall, UK managing director for Sectra, examine what needs to happen to make sure AI is used safely in healthcare

CREDIT: This is an edited version of an article that originally appeared on Digital Health

When one NHS trust in the north of England started to introduce artificial intelligence several years ago, clinicians needed to sit postgraduate data science courses in order to understand how the algorithms worked.

In common with most healthcare organisations, the trust didn’t have a uniform approach to onboarding algorithms and applying necessary supervision to how they performed. It became a manually intensive operation for clinicians to carry out the necessary clinical safety checks on algorithms, requiring a huge amount of overhead and, as a result, significantly limiting the organisation’s ability to scale the use of AI.

AI needs supervision

AI, in many ways, needs to be managed like a junior member of staff; it needs supervision. Healthcare organisations need to be able to audit its activity – just as they would a junior doctor or nurse – and they need sufficient transparency in how an algorithm works in order to provide necessary oversight and assess if and when intervention is needed to improve its performance and ensure it is safe.

So, how can we do this in a scalable way? Expecting doctors to do a master’s degree in data science isn’t the answer – but developing a standard approach to managing the lifecycle of algorithms could be. In the UK, organisations like NHSX are making progress, but the real opportunity is to develop an internationally accepted approach.

If we are to adopt AI at the pace and scale now needed to improve care, and to address widening workforce and capacity gaps, we need to focus on the current absence of international standards on AI adoption. This could help to inform developers before they start to produce algorithms, and support the safe application of those algorithms to specific populations.

Put simply, this is about what we need to do in order to make sure we adopt AI with similar diligence to that applied to safely adopting new medicines – but without having to wait the years it can take to get important medicines to patients.

Arriving at this international consensus will mean a lot of rapid progress and dialogue – and will, most likely, involve sharing lessons from across different sectors beyond healthcare.

Here are six suggested components that could underpin an international model and help healthcare to safely accelerate adoption.

Clinical safety

We need to embed AI into tools that allow healthcare settings to examine the clinical safety of an algorithm. Healthcare organisations already have clinical safety tools that gather data on the performance of doctors and nurses; interfaces from AI algorithms should feed those same systems.

We should report on AI in the same way as we do on a doctor or nurse. There has been a lot of work from the Royal College of Radiologists on supporting junior colleagues to develop in their careers; similar mechanisms could help to peer review the work done by AI. This is about creating the same feedback cycles that we have for humans, so that we understand where AI may have faltered or misinterpreted, and know where improvement is needed.
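To make the idea of auditing AI like a junior colleague more concrete, here is a minimal Python sketch of the kind of record and metric that could feed an existing clinical safety dashboard. The record fields and the discrepancy_rate helper are illustrative assumptions for this sketch, not a description of any real NHS system interface.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical audit record: one entry per algorithm output, mirroring the kind
# of entry a clinical safety system might already keep for a junior clinician.
@dataclass
class AlgorithmAuditRecord:
    case_id: str
    algorithm_id: str
    algorithm_version: str
    output: str                      # e.g. "no acute abnormality"
    reviewer_id: Optional[str]       # clinician who peer-reviewed the output, if any
    reviewer_agrees: Optional[bool]  # None means not yet reviewed
    timestamp: datetime

def discrepancy_rate(records: list[AlgorithmAuditRecord]) -> float:
    """Share of peer-reviewed outputs where the reviewing clinician disagreed."""
    reviewed = [r for r in records if r.reviewer_agrees is not None]
    if not reviewed:
        return 0.0
    return sum(not r.reviewer_agrees for r in reviewed) / len(reviewed)

# Two reviewed outputs, one disagreement: a 50% discrepancy rate that could be
# surfaced through the same audit and feedback channels used for human staff.
records = [
    AlgorithmAuditRecord("case-001", "chest-xray-triage", "1.2.0", "normal",
                         "reviewer-a", True, datetime.now(timezone.utc)),
    AlgorithmAuditRecord("case-002", "chest-xray-triage", "1.2.0", "normal",
                         "reviewer-b", False, datetime.now(timezone.utc)),
]
print(f"Discrepancy rate: {discrepancy_rate(records):.0%}")
```

The point is not the specific fields but that algorithm outputs land in the same auditable store, and produce the same kind of performance signals, as the human workforce.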

Bias detection

This is about examining demographics based on age, gender, ethnicity and other factors, and determining where bias might exist. Healthcare organisations need to understand whether there are people for whom an algorithm might work differently, or not work as effectively.

It might not be suitable for paediatrics, for example. Skin colour, and a great many other factors, can also potentially be significant. If a bias is detected, two options then exist: training that bias out of the algorithm, or creating a set of acceptable pathways for people for whom it won't work while continuing to use it for groups where a bias isn't present.
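As an illustration of what such a check could look like, the Python sketch below compares the sensitivity of a hypothetical triage algorithm across age bands. The column names and figures are invented for the example, not drawn from any real dataset.

```python
import pandas as pd

# Invented example data: AI flag and ground truth for cases known to be positive,
# broken down by age band. Column names are assumptions for this sketch only.
results = pd.DataFrame({
    "age_band":     ["0-17", "0-17", "18-64", "18-64", "65+", "65+"],
    "ai_flag":      [0, 1, 1, 1, 1, 1],
    "ground_truth": [1, 1, 1, 1, 1, 1],
})

def sensitivity_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """True positive rate of the algorithm within each demographic group."""
    positives = df[df["ground_truth"] == 1]
    return positives.groupby(group_col)["ai_flag"].mean()

# A markedly lower rate in one band (here the paediatric group) is a prompt to
# investigate possible bias, not proof of it.
print(sensitivity_by_group(results, "age_band"))
```

The same breakdown could be run for gender, ethnicity or any other recorded characteristic before deciding which of the two options to pursue.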

This could involve answering some big practical and ethical questions around access and equity. For example, is it appropriate to have a manual pathway for someone if the algorithm doesn't work safely for them, and to use the AI for the remainder of the population? Even getting to those questions, however, requires transparency.

Algorithm developers need to be transparent about the cohorts used to train the algorithm. As a healthcare provider, you can then consider whether this matches your own cohort, or whether there is a mismatch you should be aware of. You can then choose to segment your cohorts, populations or capacity accordingly, or choose a different algorithm.
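Assuming a developer publishes its training-cohort demographics as simple proportions, a trust could compare them against its own population before deployment, as in the short sketch below. The figures and the ten-percentage-point tolerance are purely illustrative assumptions.

```python
# Illustrative proportions only; a real comparison would use the developer's
# published training-cohort breakdown and the trust's own population data.
training_cohort  = {"0-17": 0.02, "18-64": 0.70, "65+": 0.28}
local_population = {"0-17": 0.18, "18-64": 0.55, "65+": 0.27}

MISMATCH_THRESHOLD = 0.10  # assumed tolerance; a real threshold would be set clinically

for band, local_share in local_population.items():
    training_share = training_cohort.get(band, 0.0)
    gap = abs(local_share - training_share)
    if gap > MISMATCH_THRESHOLD:
        print(f"Cohort mismatch in '{band}': local {local_share:.0%} "
              f"vs training {training_share:.0%} - review before relying on the algorithm")
```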

See part two for more suggestions!
