London Lab Advances Use of A.I. in Health Care, but Raises Privacy Concerns

SAN FRANCISCO — Each year, one out of every five patients admitted to a hospital in the United States for serious care develops acute kidney injury.

For a variety of reasons, these patients’ kidneys suddenly stop functioning normally and become unable to properly remove toxins from the bloodstream. The condition can permanently damage the kidneys, cause other illnesses or even lead to death. Acute kidney injury, or A.K.I., contributes to nearly 300,000 deaths in the United States each year, according to a 2016 study.

But if the condition is identified in its early stages and properly treated, it can be stopped or reversed.

In a paper published on Wednesday in the science journal Nature, researchers from DeepMind, a London artificial intelligence lab owned by Google’s parent company, detail a system that can analyze a patient’s health records, including blood tests, vital signs and past medical history, and predict A.K.I. up to 48 hours before onset.

The paper is part of widespread efforts to build technology that can automatically diagnose or predict illness and disease, from diabetic blindness to meningitis to cancer. In academia and industry, particularly at companies like Google and DeepMind, researchers are rapidly improving this new type of automated health care.

But there are many questions regarding the research, especially when it involves big corporate labs. To build and improve their automated systems, such labs must acquire vast amounts of patient data from hospitals and other medical institutions. That has repeatedly raised concerns over patient privacy.

In 2017, a British government watchdog agency ruled that DeepMind had violated patient privacy in acquiring medical records from the country’s National Health Service. In November, after saying that it would not share such data with Google, the London lab said it was transferring the unit that acquired the records to the American technology giant, prompting complaints from privacy advocates in Britain and elsewhere.

With Google, privacy concerns are heightened because the company already controls so much data describing what people do online.

DeepMind’s new research is based on what is called a neural network, a complex mathematical system that can learn tasks by analyzing vast amounts of data. By analyzing thousands of dog photos, for instance, a neural network can learn to recognize a dog.

Tech giants like Google already use such technology to recognize faces in photos, identify spoken words and translate languages on popular internet services and consumer devices. Now, researchers are applying the idea to health care.

In the new paper, DeepMind researchers describe a system that learns to predict acute kidney injury by identifying patterns in over 700,000 patient records from the Department of Veterans Affairs. The system was reasonably accurate with its predictions, but it still missed almost half of the cases of A.K.I.

“This perhaps points at the need to look into other data sources that may paint a more complete picture of the patient’s clinical reality,” said Dr. L. Nelson Sanchez-Pinto, a researcher at Northwestern University who was not involved in the DeepMind paper but is exploring similar technology.

Because the system learns from the medical history of mostly male patients admitted to V.A. hospitals, it is also unclear how well the technology would work when used with patients outside that particular population.

As Dr. Sanchez-Pinto indicated, the system could be improved with more, and more varied, data. But that is where DeepMind and Google are running into problems.

After the ruling that DeepMind had acquired medical data from the British National Health Service illegally, the lab’s use of that and other data has been closely watched. The data was not used in the company’s A.K.I. research, and it is unclear whether it will be transferred to Google.

The transfer of DeepMind’s health unit to Google is still pending as the company negotiates with various partners over how various data sets would be used, said Dominic King, who oversees the unit.

“Partners must give their permission for all that data to move over,” he said. “That is taking some time.”

In the past, DeepMind painted itself as a British operation that was mostly separate from Google’s global ambitions. Its position is now more complicated. And some critics question whether corporate labs like DeepMind are the right organizations to handle the development of technology with such broad implications for the public.

“Other machine-learning researchers can do this same work,” said Julia Powles, a professor of technology law and policy at the University of Western Australia whose research has focused on DeepMind’s use of health care data.
