Paging Dr. ChatGPT: The Rise of AI in Healthcare


AI tools for health are poised to influence not only patient knowledge but also clinician behaviour, according to Pritesh Mistry, a fellow in digital technologies at The King’s Fund

CREDIT: This is an edited version of an article that originally appeared in Digital Health

In his article, Mistry highlights the growing role of AI in healthcare and the implications for patients, clinicians and the NHS. Mistry notes that AI’s influence in healthcare is already evident. Applications such as AI scribes are reducing clinicians’ administrative workloads, while dermatology tools are accelerating diagnoses. However, the broader impact of AI will not depend solely on how healthcare systems like the NHS implement these technologies. Increasingly, some AI solutions are being made available directly to the public.

Public Access to GenAI Tools

Recent announcements of health-focused GenAI tools such as ChatGPT and Claude illustrate this trend. While currently available for health use only in the US, these tools are expected to expand to the UK. They can process large volumes of information, combine it with internet sources and their training data, and respond to user queries. Members of the public are increasingly using these tools to access medical information, from explanations of conditions and treatments to questions to ask clinicians.

Even though these tools are not formally promoted as wellbeing resources, Mistry observes that the UK public is already using them to enhance their understanding of health conditions, services and care options.

Potential Benefits and Privacy Considerations

The latest public announcements, though light on detail, mark a positive step forward. The new tools aim to provide a protected environment for health-related queries, without offering formal diagnoses. They also seek to simplify the input of health information and, assuming ongoing adherence to data protection standards, incorporate privacy safeguards by ensuring user data is not used for model training. Currently, however, these healthcare-focused tools are not yet available in the UK. As a result, UK users accessing GenAI tools for health may be doing so without equivalent privacy protections.

Risks and Limitations

Mistry warns that GenAI models can and do make errors. They may provide incorrect or country-specific information or even fabricate responses entirely. This poses a risk that patients could make poorly informed decisions, potentially worsening their health outcomes.

To mitigate these risks, Mistry recommends action in three key areas:

  1. Public Engagement and Education

It is essential to understand how people are using GenAI tools and to build public skills and confidence in navigating them appropriately.

  2. Supporting NHS Staff

Clinicians and healthcare staff must understand how these tools affect patient expectations, service demand and patient-clinician interactions. Resources and capabilities need to be in place to respond effectively and consistently.

  3. Collaboration Between the NHS and AI Developers

Working closely with AI developers can help maximise patient benefits while minimising risks. This includes public education, professional engagement, evidence generation, accessible data and the implementation of agreed safeguards.

As the traditional Google search gives way to ChatGPT conversations, health-focused public GenAI tools are likely to become available in the UK soon. Their use is expected to grow, presenting both opportunities and challenges for healthcare systems.
