Google I/O 2018 highlights AI health projects

By Dave Muoio
04:21 pm

Google has unleashed a wave of product and feature updates at its ongoing Google I/O developers’ conference, and it’s no surprise that the intersection of artificial intelligence and healthcare was a recurring theme among them. Through keynote speeches and simultaneously published blog posts, the company highlighted a handful of tech-driven healthcare efforts that appear to be bearing fruit.

“Last year at Google I/O we announced Google AI, a collection of our teams and efforts to bring the benefits of AI to everyone,” Google CEO Sundar Pichai said during the event’s keynote. “… Healthcare is one of the most important fields AI is going to transform.”

Pichai made his case by recapping the company’s work on interpreting retinal images to detect diabetic retinopathy. Beyond flagging relevant signs that human reviewers sometimes miss, he said the company found it could use the same eye scans to predict a patient’s five-year risk of adverse cardiovascular events.

“Last year, we announced our work on diabetic retinopathy, a leading cause of blindness, and we used deep learning to help doctors diagnose it earlier,” he said, “and we’ve been running field trials since then at … hospitals in India, and the field trials are going really well. We are bringing expert diagnosis to places where trained doctors are scarce.”
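The retinopathy work Pichai referenced is, at its core, a deep convolutional image classifier trained on graded retinal fundus photographs. As a rough sketch only — not Google’s actual model — a transfer-learning classifier of that kind could be set up along these lines, with the image directory, the five-grade label scheme, and the training settings all invented for illustration:

```python
# Minimal sketch (not the published model): a transfer-learning image
# classifier for graded retinal fundus photographs, in the spirit of the
# diabetic retinopathy work. Paths and label scheme are hypothetical.
import tensorflow as tf

IMG_SIZE = (299, 299)  # Inception-v3 input size

# Hypothetical directory of fundus images sorted into per-grade folders.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "fundus_images/train", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False  # train only the classification head at first

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.Dense(5, activation="softmax"),  # e.g. five retinopathy grades
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

In practice this kind of system depends far more on the volume and quality of physician-graded images than on the model code itself, which is part of why the field trials in India matter.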

Pichai’s next example offered another way AI could support doctors: by scanning EHRs and calculating a patient’s risk of future medical events.

“If you go and analyze over 100,000 data points per patient, more than any single doctor could analyze, we can actually quantitatively predict the chance of readmission 24 to 48 hours earlier than traditional methods. It gives doctors time to act.”

Dr. Alvin Rajkomar, research scientist at Google AI, and Eyal Oren, product manager at Google AI, offered a deeper dive into the technology on the company’s blog and in a paper recently published in Nature Partner Journals: Digital Medicine.

Their deep learning approach — a collaboration with UC San Francisco, Stanford Medicine, and The University of Chicago Medicine — reviewed a total of 46,864,534,945 retrospective EHR data points collected from 216,221 adult patients hospitalized for at least 24 hours at two US academic medical centers. From these data, the team’s deep learning models were able to predict upcoming in-hospital mortality, 30-day unplanned readmission, prolonged length of stay, and all of a patient’s final discharge diagnoses with an accuracy that outperformed traditional predictive models across the board.
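The published approach treats each hospitalization as a time-ordered sequence of EHR events. Purely as an illustration of that framing — and not the authors’ actual architecture — a toy recurrent classifier for a single binary outcome such as 30-day unplanned readmission might look like the following, with every name, shape, and data point invented for the sketch:

```python
# Toy illustration only, not the published model: a recurrent classifier
# over integer-encoded, time-ordered EHR event codes predicting a single
# binary outcome such as 30-day unplanned readmission. Vocabulary size,
# sequence length, and data are all hypothetical.
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 10_000   # hypothetical number of distinct EHR event codes
MAX_EVENTS = 500      # events kept per hospitalization, padded/truncated

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 64, mask_zero=True),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # estimated P(readmission)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auroc")])

# Fake data standing in for padded event-code sequences and outcome labels.
x = np.random.randint(1, VOCAB_SIZE, size=(256, MAX_EVENTS))
y = np.random.randint(0, 2, size=(256, 1))
model.fit(x, y, epochs=2, batch_size=32)
```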

Rajkomar and Oren noted in the blog post that these results are still early, but that the project’s heavy emphasis on accuracy and scalability (and, with it, interoperability) makes it a promising roadmap for machine learning applications in healthcare.

“Doctors are already inundated with alerts and demands on their attention — could models help physicians with tedious, administrative tasks so they can better focus on the patient in front of them or ones that need extra attention? Can we help patients get high-quality care no matter where they seek it? We look forward to collaborating with doctors and patients to figure out the answers to these questions and more,” they wrote in the post.

A separate machine learning application described on Google’s blog likely won’t become a mainstay in physicians’ offices, but it could offer a resource for people with visual impairments. Coming to the Play Store in the US this year, the Lookout app uses the smartphone camera to survey a user’s surroundings and alert them with audio cues when objects, text, or people are nearby.

“After opening the app and selecting a mode, Lookout processes items of importance in your environment and shares information it believes to be relevant — text from a recipe book, or the location of a bathroom, an exit sign, a chair or a person nearby,” Patrick Clary, a product manager with Google’s Central Accessibility Team, wrote in the post. “Lookout delivers spoken notifications, designed to be used with minimal interaction allowing people to stay engaged with their activity.”

The app offers separate modes, such as “Home” or “Work & Play,” to better recognize its environment and deliver more relevant notifications. In addition, Clary wrote that as more people use the app, its machine learning capabilities will help it learn which notifications users find most valuable and surface those results more often.

“The core experience is processed on the device, which means the app can be used without an internet connection. Accessibility will be an ongoing priority for us, and Lookout is one step in helping blind or visually impaired people gain more independence by understanding their physical surroundings,” the post concluded.
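Lookout’s implementation isn’t public, but the general pattern the post describes — on-device detection feeding spoken notifications, with no internet connection required — can be sketched briefly. In the sketch below, the detection model file, label list, score threshold, and speech engine are all stand-ins, not anything Google has disclosed:

```python
# Sketch of the general pattern Lookout describes -- on-device object
# detection driving spoken notifications -- not Google's implementation.
# The .tflite model, label map, and score threshold are hypothetical.
import numpy as np
import tensorflow as tf
import pyttsx3  # offline text-to-speech, so no internet connection is needed

interpreter = tf.lite.Interpreter(model_path="detector.tflite")  # hypothetical model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

LABELS = ["person", "chair", "exit sign", "door"]  # hypothetical label map
tts = pyttsx3.init()

def announce(frame: np.ndarray) -> None:
    """Run one camera frame through the detector and speak what was found.

    The frame's shape and dtype must match the model's expected input.
    """
    interpreter.set_tensor(inp["index"], frame[np.newaxis, ...])
    interpreter.invoke()
    # Typical SSD-style TFLite detectors emit boxes, class ids, and scores;
    # the exact output layout depends on the model used.
    classes = interpreter.get_tensor(outs[1]["index"])[0]
    scores = interpreter.get_tensor(outs[2]["index"])[0]
    for cls, score in zip(classes, scores):
        if score > 0.6:
            tts.say(f"{LABELS[int(cls)]} nearby")
    tts.runAndWait()
```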
