22.06.2018

Hey, Google! When will I die?

Google’s AI capabilities can predict with greater accuracy than a hospital’s computers when a critically ill patient will die, according to a new study.

In one case study, using a patient’s entire chart, Google’s deep learning model estimated the risk of the patient’s death, 24 hours after admission, at 19.9%, while the hospital’s standard predictive model put it at 9.3%. The patient died 10 days after admission.

Google’s researchers worked with others from UC San Francisco, Stanford Medicine, and The University of Chicago Medicine. Their new algorithms accurately predicted several medical events based on data gleaned from de-identified electronic health records (EHR), says the retrospective study published in npj Digital Medicine.

The algorithms analyzed more than 46 billion data points, including clinical notes, for 216,221 adult patients hospitalized for at least 24 hours in two U.S. academic medical centers. In addition to in-hospital mortality, the AI was able to predict 30-day unplanned readmissions and patients’ final discharge diagnoses better than traditional clinically used predictive models.

Such predictive abilities could prove useful as more public and private health insurers require that health providers demonstrate value — more efficient and effective management of the health of the people they care for.

Unlike other applications of deep learning to EHR data, Google’s new predictive abilities did not rely on hand-selected variables deemed important by an expert, the researchers explained. Rather, the AI was able to interpret raw EHR information, including clinical notes, that traditional models have been unable to use.
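The key idea here is that the whole raw record, free-text notes included, becomes model input rather than a handful of expert-chosen variables. A minimal sketch of that idea (the event fields and function below are hypothetical illustrations, not Google's actual pipeline, which used FHIR-formatted records):

```python
from dataclasses import dataclass

# Hypothetical, simplified event record for illustration only;
# the study consumed FHIR-formatted EHR data end to end.
@dataclass
class EHREvent:
    hours_since_admission: float
    kind: str   # e.g. "lab", "medication", "note_token"
    value: str

def to_token_sequence(events):
    """Flatten a patient's raw chart into one time-ordered token sequence.

    Nothing is hand-selected: every event, including free-text note
    tokens, becomes input for a downstream sequence model.
    """
    ordered = sorted(events, key=lambda e: e.hours_since_admission)
    return [f"{e.kind}:{e.value}" for e in ordered]

chart = [
    EHREvent(2.0, "lab", "lactate=4.1"),
    EHREvent(0.5, "medication", "vancomycin"),
    EHREvent(1.0, "note_token", "malignant"),
]
print(to_token_sequence(chart))
# ['medication:vancomycin', 'note_token:malignant', 'lab:lactate=4.1']
```

Traditional models would instead require an expert to pick and clean a fixed set of variables before any prediction could be made.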

“If a clinical team had to investigate patients predicted to be at high risk of dying, the rate of false alerts at each point in time was roughly halved by our model,” the researchers wrote of their efforts at predicting patient mortality. “Moreover, the deep learning model achieved higher discrimination at every prediction time-point compared to the baseline models.”
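“Discrimination” here is typically measured as the area under the ROC curve (AUROC): the probability that a randomly chosen patient who died is scored as higher-risk than one who survived. A minimal sketch with toy scores (the numbers are illustrative, not the study’s data):

```python
def auroc(scores_pos, scores_neg):
    """AUROC by pairwise comparison: fraction of (positive, negative)
    pairs where the positive case gets the higher risk score.
    Ties count as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy risk scores for patients who died vs. survived
died     = [0.9, 0.8, 0.6]
survived = [0.7, 0.3, 0.2, 0.1]
print(round(auroc(died, survived), 3))
# 0.917
```

A model with higher discrimination can flag the same truly at-risk patients while triggering fewer false alerts at a given threshold, which is what the quoted halving of false alerts reflects.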

Because previous neural network models have been too opaque to gain clinicians’ trust, the researchers showed which data their model looked at for each patient. In the case study, the model identified elements of the patient’s history and radiology findings, critical data points that a clinician would also use.

Predictions based on such data analysis may someday help clinicians decide how to care for patients, the researchers wrote. Google published a blog post about the research.

Source: Medical Design & Outsourcing