Verily’s India project reveals the work of applying AI to real-world clinical care

A new machine learning eye disease screening tool wasn't conceived by an eye doctor, or any physician for that matter, but by a research scientist at Google with an interest in artificial intelligence. 

That scientist, Varun Gulshan, was looking for a medical or science project that might ideally help people in his home country of India.

But no one on the Google team had experience creating medical devices, so turning the AI model into one meant handing the project over to Verily, Alphabet's healthcare and life sciences research arm.

Three years later, eye doctors at Aravind Eye Hospital in Madurai, India, are using that machine learning algorithm to screen patients for diabetic retinopathy, with the aim of using technology to fill gaps in patient access to eye doctors. Integrating the algorithm into the screening process gives hospital physicians more time to work closely with patients on treatment while increasing the number of screenings that can be performed.

The real-world clinical use of the machine learning-enabled screening tool—essentially an algorithm trained to quickly interpret retinal scans for signs of disease—marks an important step in taking AI from research to the patient bedside. And it's an example of Verily's quietly expanding footprint in the world of healthcare and its ambitions to lead the industry in overcoming challenges that have so far stymied the broader use of AI in clinical care.
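
To make that description concrete, here is a toy sketch in Python (using PyTorch) of the general shape of such an image-classification model. The architecture, layer sizes and severity grades are invented for illustration; this is not Google's or Verily's actual system.

import torch
import torch.nn as nn

class RetinalScreeningNet(nn.Module):
    """Minimal CNN mapping a fundus photograph to scores over severity grades."""
    def __init__(self, num_grades: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_grades)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

scan = torch.randn(1, 3, 224, 224)    # stand-in for one preprocessed retinal scan
scores = RetinalScreeningNet()(scan)  # one score per invented severity grade
print(scores.shape)                   # torch.Size([1, 5])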

"I think our unique role here is taking and understanding what’s currently going on with health conditions, but then saying, what new tools could be used? I think the diabetic retinopathy program is a great example of that," Jessica Mega, M.D., chief medical and scientific officer at Verily, told FierceHealthcare. 

In her work as a cardiologist at Brigham and Women's Hospital and as a medical researcher, Mega explored novel cardiovascular therapies and the use of genetics to tailor treatment to patients. Leading Verily's clinical efforts builds on her interest in using technology to solve complex health problems, she said. Diabetes-related eye disease presented a good clinical use case for machine learning because it is a significant healthcare problem: diabetic retinopathy is the leading cause of preventable blindness in adults, Mega said.

RELATED: Google, Verily use machine learning to detect diabetic eye disease

“If you think about AI and image recognition and all of the advances, even outside of healthcare, you can take those tools and apply it to something like reading an eye scan," Mega said. "It’s the perfect marriage of a medical situation that is high need and technology that has matured in the last few years with the end goal of reducing blindness.”

Proof of concept

Google and Verily—both under the Alphabet umbrella—have been conducting a clinical research program with a focus on India, where studies involving Aravind as well as Sankara Nethralaya, an eye hospital in Chennai, India, demonstrated that the algorithm performed on par with general ophthalmologists and retinal specialists assessing the images for disease. 

According to Verily officials, the two companies started the clinical research in India because of a growing need to expand access to screenings there. India has a shortage of more than 100,000 eye doctors, and only 6 million of the country's 72 million people with diabetes (roughly 8%) are screened for the eye disease, the companies said. Hospital officials are optimistic that technology can help fill that gap.

Verily, originally known as Google Life Sciences and once part of Google X, was spun out in 2015 to lead Alphabet's healthcare and life sciences research.

Mega joined the research organization before the spinout and said early work on the algorithm for the retinal imaging program began within Google. “Over time, we realized it was best to co-develop the product going forward because it’s one thing to have an algorithm and it’s another thing to wrap it around with what it really means to have an end-to-end solution that clinicians can start using. On the Verily side, we bring some of the clinical expertise, the regulatory expertise, and the quality systems to bring this clinical product to market.”

At the same time, Mega acknowledged that integrating machine learning into care delivery is a significant challenge. “You’re going to see this across the board, with people coming up with an interesting idea, but how do you actually scale this? How do you get the right tools, in this case, getting the right image of the eye with the algorithm that’s working in real time, and then getting a clinical diagnosis? We’ve been working hard on making this a reality.”

RELATED: More clinical evidence needed to accelerate adoption of AI-enabled decision support: report

Currently, one limitation to scaling the technology is getting it to work in different clinical environments. “There are tools that may work in certain urban centers, but if you want to have a global impact, how do you create tools that work in rural environments?” Mega said.

As part of the research program in India, physicians and screening technicians at Aravind vision centers in remote locations continue to test the screening tool. One challenge they are trying to tackle is the quality of the incoming images, since the remote clinics use different technology. “We’re using this as a testing environment to see what is going to work and what we can scale,” Mega said.

Mega sees use cases for AI in other clinical applications including imaging, pathology, radiology, and clinical decision support. “It’s a tool to help us solve clinical problems, and I think we’re going to see this tool used more and more,” she said.

Verily’s work at the intersection of healthcare and technology

The retinal imaging program is one of more than a dozen Verily healthcare partnerships and projects. Onduo, a Verily-Sanofi collaboration launched two years ago, is a digital diabetes management program that provides patients with sensors and other tools to manage their condition, along with access to a virtual clinic.

“We’re bringing in continuous glucose data to try to get a deeper understanding of the biology and wrap that around with support to improve outcomes with diabetes,” she said. “We’re starting to see some early positive signals.”

The Project Baseline study is much broader in scope and aims to collect patient data from myriad sources—sensors, imaging, EHRs—to study the transition from health to illness. The company is partnering with Duke University and Stanford Medicine to collect, organize and analyze broad phenotypic health data from approximately 10,000 people over the course of four years.

“We have programs and projects that are addressing today’s health situations, but we want to think broadly about what health looks like in the next 10 to 20 years,” Mega said.

As clinicians and researchers look to integrate AI into medical practice, the technology raises some ethical questions: how do clinicians understand what comes out of a technology “black box” and how do they avoid bias in the algorithms? Many medical groups, including radiologists, are tackling these issues.

Addressing the issue of bias, Mega said many of the same principles used in designing clinical trials apply to the development of AI systems.

“Think about a clinical trial, if you only have a very limited sample, or you only recruit people with a certain background, then the question is, do the results that you see generalize to the broader population? We just need to have that level of rigor when it comes to AI,” she said. “We need to make sure that when we create, train and then validate algorithms that we’re dealing with a diversity of input and make sure that this is going to be representative of the population.”
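
As an illustration of that point, the short Python sketch below checks a screening model's sensitivity and specificity separately for each patient subgroup, so a model that only performs well in one group or care setting is caught before deployment. It is not Verily's actual validation code; the records and subgroup names are invented.

from collections import defaultdict

# Each record: (subgroup, true_label, predicted_label); 1 = referable retinopathy.
predictions = [
    ("urban_clinic", 1, 1), ("urban_clinic", 0, 0), ("urban_clinic", 1, 1),
    ("urban_clinic", 0, 1), ("rural_clinic", 1, 0), ("rural_clinic", 1, 1),
    ("rural_clinic", 0, 0), ("rural_clinic", 0, 0), ("rural_clinic", 1, 1),
]

def sensitivity_specificity(records):
    """Compute sensitivity (true-positive rate) and specificity (true-negative rate)."""
    tp = sum(1 for _, y, p in records if y == 1 and p == 1)
    fn = sum(1 for _, y, p in records if y == 1 and p == 0)
    tn = sum(1 for _, y, p in records if y == 0 and p == 0)
    fp = sum(1 for _, y, p in records if y == 0 and p == 1)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

# Group predictions by subgroup and report per-group metrics rather than a
# single pooled number, mirroring subgroup analysis in a clinical trial.
by_group = defaultdict(list)
for record in predictions:
    by_group[record[0]].append(record)

for group, records in by_group.items():
    sens, spec = sensitivity_specificity(records)
    print(f"{group}: sensitivity={sens:.2f}, specificity={spec:.2f} (n={len(records)})")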