Google DeepMind – should we be worried?

Artificial intelligence firm DeepMind has been using NHS data to help tackle ‘failure to rescue’ events – cases in which a hospital clinician fails to treat a patient in time. The intentions are good, but concerns about data security have hit the headlines, leading some to question whether valuable, patient-identifiable information is being given away too easily.

Practice Business investigates the DeepMind story and looks at what safeguards are in place for practices and patients – asking whether we should be worried

In November 2018 DeepMind announced it would transfer Streams – a digital health app it developed which uses identifiable patient data provided by the NHS – to its parent company, global tech giant Google, which acquired the firm in 2014. The transfer raised concerns because it means the app will no longer be scrutinised by the independent review panel appointed to ensure that patient data is managed securely.

The transfer of responsibility, and the disbanding of the panel, dramatically reduces the oversight of patient data – raising questions about just how safe and secure our data is, and just what it might be used for.

Speaking to Wired, Julia Powles – a research fellow at New York University’s School of Law who focuses on law and technology – said she is concerned that Google is “…gaining unprecedented access to the best repository of health information on the planet – that of the NHS – in a way that patients have zero control over.”

The issue has become political, with the government demanding a ‘full explanation’ as to why data has been transferred from DeepMind to Google. It’s a story that’s continuing to unfold and one which is raising significant moral and legal questions.

DeepMind and data

The NHS as a whole doesn’t have an agreement with DeepMind; rather, the firm has individual data-sharing arrangements with some NHS hospital trusts. Legally, responsibility for protecting patient data lies with each NHS body, which must build whatever safeguards it deems appropriate into any contract it enters into with a third party.

The DeepMind website states that the data controller is the hospital trust, which remains in control of the security of the data – DeepMind is merely a processor of that data, using it in strict accordance with the law and the instructions of the controller.

How secure this arrangement is is now being scrutinised – but the most recent stories aren’t the first time DeepMind has hit the headlines. In 2017 the Information Commissioner’s Office (ICO) ruled that the Royal Free NHS Foundation Trust had failed to protect the data of 1.6 million hospital patients when sharing it with DeepMind, raising questions about just how much access to our data the NHS should give to external firms.

National guidance

Nationally, the NHS is committed to ensuring data security in the connected healthcare system, and continues to develop new guidance. We’ve collated some of the most relevant below.


In December 2018, NICE published its Evidence standards framework for digital health technologies which the healthcare standards body hopes will make it easier for innovators and commissioners to understand what ‘good levels of evidence for digital healthcare technologies look like’, assisting them in making decisions on new technology. Led by NHS England, the standards have been developed in partnership with Public Health England, MedCity and DigitalHealth.London.

The Department of Health’s Initial code of conduct for data-driven health and care technology establishes the standards that the health service expects technology companies to meet. Standards should be open and clear, and data use should be limited to ensure patients are protected.

Healthcare apps like DeepMind’s Streams need to pass an assessment designed by NHS England to ensure that they are safe and secure. These Digital Assessment Questions cover both clinical and technical standards and are designed to test the security of apps and ensure that data is protected at all times.

Data controls

One of the key issues raised by the DeepMind crisis is whether patients are being informed about how their data is being used, and whether they are given the chance to opt out of data-sharing should they want to. The NHS has responded by giving patients a choice over how their data is used and shared – including the ability to opt out entirely. In 2018, NHS Digital launched a new national data opt-out for patients who do not want their personal data shared beyond their individual care and outside the NHS.

While patients should be encouraged to take control of their data, giving researchers access to it is important in helping them develop new treatments and understand the efficacy of existing procedures and medicines. The NHS has invested in robust data security systems and the risk of a data breach is low; this is information that should be conveyed to patients who are concerned about the use of their data.

The issue of data security in the NHS is more important than ever. In the past, patients and practices were often in the dark as data decisions were being taken behind closed doors. Today, the NHS is pushing towards greater transparency – and increasing control – of the way our data is shared and used, protecting us all.

Perhaps we shouldn’t be worried about DeepMind, but whenever and wherever our data is being shared, we should all remain vigilant.
