Artificial intelligence firm DeepMind and a London hospital trust, the Royal Free London NHS Foundation Trust, have signed a five-year deal to develop a clinical app called Streams. The deal extends the already controversial partnership between the London-based startup, which was bought by Google in 2014, and the healthcare trust.
The Streams app is for healthcare professionals. According to the Financial Times, it will trigger mobile alerts when a patient’s vital signs or blood results become abnormal so that a doctor can intervene quickly and prevent the problem escalating.
The trust said that Streams has, thus far, been using algorithms to detect acute kidney injury, and added that it would
alert doctors to [a] patient in need “within seconds”, rather than hours [and] free up doctors from paperwork, creating more than half a million hours of extra direct care
The aim is to use Streams as a diagnostic support tool for a far wider range of illnesses, including sepsis and organ failure.
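The alerting described above can be sketched as a simple threshold check. To be clear, this is a hypothetical illustration, not DeepMind's actual code: the creatinine-ratio staging loosely follows the published NHS acute kidney injury (AKI) algorithm, while the function names and the one-hour freshness window are assumptions made up for the example.

```python
from datetime import datetime, timedelta

def aki_stage(latest_creatinine: float, baseline_creatinine: float) -> int:
    """Return an AKI warning stage (0 = no alert) from the ratio of the
    latest serum creatinine result to the patient's baseline."""
    ratio = latest_creatinine / baseline_creatinine
    if ratio >= 3.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5:
        return 1
    return 0

def should_alert(result_time: datetime, now: datetime, stage: int,
                 max_age: timedelta = timedelta(hours=1)) -> bool:
    """Only page a clinician for a non-zero stage, and only if the result
    is fresh enough to act on (an assumed one-hour window)."""
    return stage > 0 and (now - result_time) <= max_age
```

Even this toy version makes one thing obvious: the usefulness of the alert depends entirely on how fresh the underlying result is, which is exactly the point critics pick up on below.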
OK, so that’s the what. Now for the controversial bit: the how…
The app quite obviously relies on access to patient data.
A story in New Scientist earlier this year raised concerns that the partnership had given DeepMind access to “a wide range of healthcare data on the 1.6 million patients … from the last five years”, and noted that the data will be stored in the UK by a third party and that DeepMind is obliged to delete its copy of the data when the agreement expires.
In a follow-up story published this week, New Scientist revealed that the UK’s Information Commissioner’s Office began investigating the data-sharing agreement following its revelations. A statement from the office says that it is “working to ensure that the project complies with the Data Protection Act”.
But is that enough?
Privacy campaigners have raised concerns that medical records are being collected on a massive scale without the explicit consent of patients. Phil Booth, coordinator of medConfidential, queried the value of the app:
Our concern is that Google gets data on every patient who has attended the hospital in the last five years and they’re getting a monthly report of data … [but] because the patient history is up to a month old, [it] makes the entire process unreliable and makes the fog of unhelpful data potentially even worse.
Academics have also raised concerns. Speaking to the Financial Times, Julia Powles, a University of Cambridge lawyer specializing in technology law and policy, highlighted that:
We do not know – and have no power to find out – what Google and DeepMind are really doing with NHS patient data, nor the extent of Royal Free’s meaningful control over what DeepMind is doing.
Give Google a chance?
When Natasha Loder asked whether the objections would be the same if it were any company other than Google, she put her finger on it. The issue is not with what Google is trying to achieve, but the fact that it is Google doing it.
Doing it right
I have no issues with technologies being used to improve patient outcomes … provided the right people are doing it, for the right reasons, and in the right way.
Here we have Google creating an app that needs real-time data to be useful. Data that is not up to the minute could put patients at risk when you're talking about things like organ failure and sepsis: won't the doctor need to know what's been happening with the patient in the last weeks, days, hours and even minutes?
On my second point, Google is not doing the work for profit. Mustafa Suleyman, head of DeepMind Health and DeepMind’s co-founder, told the FT:
We get a modest service fee to supply the software. Ultimately, we could get reimbursed [by the NHS] for improved outcomes.
So you have to ask why. For access to data? To gain a foothold in health analytics? To test possibilities? To build a proof of concept it can sell in the future?
I suspect all of those are near the truth.
Does Google really need to be given this data at all? Wouldn’t it have been a lot safer if the NHS Trust had trialled the app on Google’s behalf, keeping the data safely in-house? After all, if you wanted to test-drive a piece of technology, wouldn’t you ask for the technology to test rather than hand over your data?
Or is this something that can only be accessed as a service, in other words, where data need to sit on the service provider’s machines? If that’s the case, we need to seriously look at how organizations access cloud-based third-party services that require a local copy of data. If we don’t, we risk finding copies of patient, student, citizen and other very personal data here, there and everywhere in the future.