Google reveals new offerings, advancements in LLMs/AI during Check Up event

The tech giant announced a new AI-enabled chest X-ray offering, an all-inclusive dermatology dataset, expanded partnerships and personal health LLMs for Fitbit.
By Jessica Hagen

Photo: Jane Sobel Klonsky/Getty Images

Google made several announcements during its Check Up event on Tuesday, including the release of its new MedLM for Chest X-ray, advancements in personalization at Fitbit Labs and the release of its Skin Condition Image Network (SCIN). 

Dr. Karen DeSalvo, Google Health's chief health officer, kicked off The Check Up event by highlighting the increased use of AI in healthcare.

"We are at an inflection point in AI where we can see its potential to transform health on a planetary scale," DeSalvo said.

Last year, Google launched MedLM, its family of medically tuned LLMs: two foundation models built on Med-PaLM 2, designed to answer medical questions, generate insights from unstructured data and summarize medical information.

"It seems clear that in the future, AI won't replace doctors, but doctors who use AI will replace those who don't," DeSalvo said. "We must remember that AI is just a tool, and at the end of the day, health is human."

MedLM for Chest X-ray

Yossi Matias, Google's vice president of engineering and research, said language is just one dimension of health data, which is inherently multimodal, so systems must be built to seamlessly analyze many data types.

"We're expanding our MedLM family of models to include multimodal modalities, starting with MedLM for Chest X-ray, available in an experimental preview on Google Cloud," Matias said. 

MedLM for Chest X-ray will enable classification of findings, semantic search and more, with the goal of improving the efficiency of radiologists' workflows.
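
Google has not published the preview's request format, but since the model is served through Google Cloud, a query would presumably look like a standard Vertex AI endpoint call. A minimal sketch, assuming a deployed endpoint; the endpoint ID, instance fields and task name below are hypothetical placeholders, not the documented contract:

```python
# Sketch of querying an assumed MedLM for Chest X-ray deployment on Vertex AI.
# Endpoint ID and request schema are hypothetical; consult the experimental
# preview documentation for the actual contract.
import base64

from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Hypothetical endpoint where the preview model is assumed to be deployed.
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890"
)

with open("chest_xray.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Hypothetical instance payload: an encoded image plus a classification task hint.
response = endpoint.predict(
    instances=[{
        "image": {"bytesBase64Encoded": image_b64},  # assumed field names
        "task": "classify_findings",                 # assumed task hint
    }]
)
print(response.predictions)
```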

Clinical workflows

On the clinical end, Aashima Gupta, director of healthcare solutions for Google Cloud, and Dr. Michael Schlosser, SVP of care transformation and innovation at HCA Healthcare, announced a collaboration to align Google's AI with HCA's clinical expertise.

HCA is using the technology to engage patients and improve administrative workflows, such as documentation and medical record summarization. 

Greg Corrado, distinguished scientist and senior research director at Google, said the company has also been evaluating and fine-tuning Gemini models for healthcare, specifically regarding advanced reasoning; long-context window tasks, such as sifting through text to extract relevant information to answer questions about a patient's history; and multimodality, the ability to absorb and reason across multiple data types such as images, audio and text.

Medicine is inherently multimodal, Corrado said, and healthcare professionals regularly interpret signals across a plethora of sources, including medical images, clinical notes, electronic health records and lab tests. 

"As a proof of principle, we built a model that can generate radiology reports based on a set of open access deidentified chest x-rays. We found that the majority of the reports generated by our model were considered to be comparable in quality to radiologists' reports," he said. 

On significantly more complex tasks, such as report generation for 3D brain CTs, the model-generated reports were judged by independent clinicians to be on par with or better than manually created reports.

Still, he noted that while the technology shows promise, the AI is not yet ready to be trusted to generate radiology reports on its own. The results do, however, suggest it is time to consider how AI could assist radiologists in report generation.

AI and health equity

Rigorously evaluating AI outcomes is essential, and centering health equity is crucial to ensuring AI models don't do more harm than good.

"We've been exploring whether AI can help people better understand their dermatology issues or concerns," said Dr. Ivor Horn, director of health equity and product inclusion at Google. 

"Along the way, we realized that many existing dermatology datasets include primarily skin cancers and other several conditions but lack common concerns like allergic reactions. Plus, images are often captured in a clinical setting and may not reflect a diversity of images, including different parts of the body, different skin tones and more."

In response, Google built a dataset, dubbed the Skin Condition Image Network (SCIN), that includes images from a diverse group of people, spanning a wide range of skin tones, conditions and severities.

The dataset was developed in conjunction with Stanford Medicine and will be made available to everyone. 

"Thousands of people contributed photos to help build an open-access dataset with over 10,000 images of skin, nails and hair," Horn said. "Dermatologists then labeled the deidentified images with a possible diagnosis. Then, they rated them based on two skin tone scales to make sure the dataset includes an expansive collection of conditions and skin types."

Human genome sequencing

Additionally, the company's research teams have been working to improve genome sequencing to identify variants in a person's DNA, such as markers that indicate an elevated risk of developing breast cancer.

The company partnered with Stanford to study the use of DeepVariant to identify disease-causing variants in critical NICU cases in less time than standard care.

DeepVariant is an analysis pipeline that uses a neural network and image classification to identify genetic variants from DNA sequencing data.
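
DeepVariant is open source and is typically run from its published Docker image. A minimal sketch of invoking its documented run_deepvariant entrypoint from Python; the file paths and image tag are placeholders:

```python
# Sketch of running the open-source DeepVariant pipeline via its Docker
# image. Paths and the image tag are placeholders; the flags follow
# DeepVariant's documented run_deepvariant entrypoint.
import subprocess

cmd = [
    "docker", "run",
    "-v", "/data:/data",                  # mount inputs and outputs
    "google/deepvariant:latest",          # placeholder image tag
    "/opt/deepvariant/bin/run_deepvariant",
    "--model_type=WGS",                   # whole-genome sequencing model
    "--ref=/data/reference.fasta",        # reference genome
    "--reads=/data/sample.bam",           # aligned reads
    "--output_vcf=/data/sample.vcf.gz",   # called variants
]
subprocess.run(cmd, check=True)
```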

"DeepConsensus and DeepVariant, our open-source analysis tools, have been important contributions to an effort called the Human Pangenome Project. This has created a new human reference genome that contains the sequences of multiple individuals with diverse ancestors," said Shravya Shetty, Google's engineering director.

This provides a broader view of the human genome and makes scientific discovery more inclusive of people from all backgrounds.

Personal health LLM

Regarding its wearables, Google announced it is building personal AI into its product portfolio to help premium users understand their personal health metrics, including launching experimental AI features in Fitbit Labs.

Fitbit Labs will bring together users' multimodal, time-series health and wellness data. Users will be able to generate charts for the data points they wish to visualize.

Users will be able to interact with the insights in a free-form chat space to understand how different aspects of their health correlate or interact.
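
The article doesn't detail how these charts are produced, but the underlying idea, plotting correlated time-series metrics side by side, is straightforward to illustrate. A minimal sketch assuming a hypothetical CSV export of daily metrics; the file and column names are invented for illustration:

```python
# Illustrative only: charting the kind of time-series health data Fitbit
# Labs would visualize. "fitbit_daily.csv" and its columns are hypothetical.
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical export with columns: date, resting_heart_rate, sleep_hours.
df = pd.read_csv("fitbit_daily.csv", parse_dates=["date"])

fig, ax1 = plt.subplots()
ax1.plot(df["date"], df["resting_heart_rate"], color="tab:blue")
ax1.set_ylabel("Resting heart rate (bpm)")

# Second axis so sleep duration can be read against heart rate trends.
ax2 = ax1.twinx()
ax2.plot(df["date"], df["sleep_hours"], color="tab:orange")
ax2.set_ylabel("Sleep (hours)")

ax1.set_title("Resting heart rate vs. sleep duration")
fig.autofmt_xdate()
plt.show()
```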

The features will be available later this year to a limited number of users enrolled in the Fitbit Labs program in the mobile app.

"We want to deliver even more personalized health experiences with AI, so we're partnering with Google research, health and wellness expert doctors and certified coaches to create personal health large language models that can reason about health and fitness data and provide tailored recommendations similar to how a personal coach would," Tang said.

"This personal coach LLM will be a fine-tuned version of our Gemini model. This model, fine-tuned using high-quality research case studies based on deidentified diverse health signals from Fitbit, will help users receive more tailored insights based on patterns in sleep schedule, exercise intensity, changes in heart rate variability, resting heart rate and more."

Tang said the personal health LLM will power future AI features across the tech giant's portfolio, bringing more personalized health experiences to users.
