Q&A: Google's chief clinical officer on AI regulation in healthcare

Dr. Michael Howell, chief clinical officer at Google, discusses Med-PaLM's evolution in 2023 and recommendations for regulators developing rules for AI use in healthcare.
By Jessica Hagen

Dr. Michael Howell, chief clinical officer at Google. Photo: HIMSS TV

Dr. Michael Howell, chief clinical officer at Google, sat down with MobiHealthNews to discuss noteworthy events in 2023, the evolution of the company's LLM for healthcare, called Med-PaLM, and recommendations for regulators in constructing rules around the use of artificial intelligence in the sector. 

MobiHealthNews: What are some of your big takeaways from 2023?

Dr. Michael Howell: For us, there are three things I'll highlight. So, the first is a global focus on health. One of the things about Google is that we have a number of products that more than two billion people use every month, and that forces us to think truly globally. And you really saw that come out this year. 

At the beginning of the year, we signed a formal collaboration agreement with the World Health Organization, which we have worked with for a number of years. It's focused on global health information quality and on using tools like Android's Open Health Stack to bridge the digital divide worldwide. We also saw it in things like Android Health Connect, which had a number of partnerships in Japan, and in Google Cloud's partnerships with Apollo Hospitals in India and with the government of El Salvador, all really focused on health. And so, number one is a truly global focus for us.

The second piece is that we focused a huge amount this year on improving health information quality and on reducing and fighting misinformation. We've done that in partnership with groups like the National Academy of Medicine and medical specialty societies. We saw that really pay dividends this year, especially on YouTube, where the billions of people who look at health videos every year can now see, in a very transparent way, the reasons that sources - doctors, nurses or licensed mental health professionals - are credible. In addition, we have products that lift up the best quality information.

And then the third - I mean, no 2023 list can be complete without AI. It's hard to believe it was less than a year ago that we published the first Med-PaLM paper, our medically tuned LLM. And maybe I'll just say that the big takeaway from 2023 is the pace here.

On the consumer side, we look at things like Google Bard or the Search Generative Experience. Neither product had launched at the beginning of 2023, and each is now live in more than 100 countries.

MHN: It's amazing that Med-PaLM launched less than a year ago. When it was first released, it had around 60% accuracy. A couple of months later, it went up to more than 85% accuracy. Last reported, it was at 92.6% accuracy. Where do you anticipate Med-PaLM and AI making waves in healthcare in 2024?

Dr. Howell: Yeah, the unanswered question as we went into 2023 was: would AI be a science project, or would people use it? And what we've seen is that people are using it. We've seen HCA [HCA Healthcare], Hackensack [Hackensack Meridian Health] and other really important partners begin to actually use it in their work.

And the thing you brought out, about how fast things are getting better, has been part of that story. Med-PaLM is a great example. People have been working on that question set for many years and getting better three, four or five percent at a time. Med-PaLM was quickly 67 and then 86 [percent accurate].

And then, the other thing we announced in August was the addition of multimodal AI. So, things like how do you have a conversation with a chest X-ray? I don't even know ... that's on a different dimension, right? And so I think we'll continue to see those kinds of advances.

MHN: How do you have a conversation with a chest X-ray?

Dr. Howell: So, in practice, I'm a pulmonary and critical care doc. I practiced for many years. In the real world, what you do is you call your radiologist, and you're like, "Hey, does this chest X-ray look like pulmonary edema to you?" And they're like, "Yeah." "Is it bilateral or unilateral?" "Both sides." "How bad?" "Not that bad." What the teams did was they were able to take two different kinds of AI models and figure out how to weld them together in a way that brings all the language capabilities into these things that are very specific to healthcare. 

And so, in practice, we know that healthcare is a team sport. It turns out AI is a team sport also. Imagine looking at a chest X-ray, having a chat interface to it, and asking it questions - "Is there a pneumothorax here?" "Yeah." "Where is it?" - and it gives you answers. (Pneumothorax is the word for a collapsed lung.) It's quite a remarkable technical achievement. Our teams have done a lot of research, especially around pathology. It turns out that teams of clinicians and AI do better than clinicians alone and better than AI alone, because each is strong at different things. We have good science on that.
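To make the "conversation with a chest X-ray" idea concrete, here is a minimal, hypothetical Python sketch of the general pattern Howell describes: a vision encoder turns the image into an embedding, and a language-model-style component answers questions grounded in it. Every name and the toy "fusion" logic below are illustrative assumptions, not Google's Med-PaLM implementation.

    from dataclasses import dataclass
    from typing import List


    @dataclass
    class ImageEmbedding:
        """Stand-in for the vector a vision encoder would produce."""
        vector: List[float]


    def encode_xray(pixels: List[List[float]]) -> ImageEmbedding:
        # A real system would run a medical-imaging model here; this toy
        # version just averages each row of the "image".
        return ImageEmbedding(vector=[sum(row) / len(row) for row in pixels])


    def answer_question(image: ImageEmbedding, question: str) -> str:
        # A real system would fuse the image embedding with the question
        # tokens inside one multimodal transformer; this toy version fakes
        # that fused model's output with a lookup table.
        canned_answers = {
            "is there a pneumothorax here?": "Yes.",
            "where is it?": "Left apex.",
        }
        return canned_answers.get(question.lower().strip(),
                                  "Not sure; ask a radiologist.")


    xray = encode_xray([[0.1, 0.2], [0.4, 0.3]])  # toy 2x2 "image"
    for q in ("Is there a pneumothorax here?", "Where is it?"):
        print(q, "->", answer_question(xray, q))

The design point is the welding Howell mentions: the image model and the language model are trained so that one shared interface (here, faked by a dictionary) can take both an image representation and free-text questions.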

MHN: What were some of the biggest surprises or most noteworthy events from 2023?

Dr. Howell: There are two things in AI that have been remarkable in 2023. Number one is the speed at which it has gotten better. I have never seen anything like this in my career, and I think most of my colleagues haven't either.

Number two is that the level of interest from clinicians and from health systems has been really strong. They've been moving very quickly. One of the most important things with a brand new, potentially transformational technology is to get real experience with it, because, until you have held it in your hands and poked at it, you don't understand it. And so the biggest pleasant surprise for me in 2023 has been how rapidly that has happened with real health systems getting their hands on it, working on it. 

Our teams have had to work with incredible velocity to make sure that we can do this safely and responsibly. We've done that work. That and the early pilot projects and the early work that's happened in 2023 will set the stage for 2024.

MHN: Many committees are starting to form to create regulations around AI. What advice or suggestions would you give regulators who are crafting those rules?

Dr. Howell: First, we think AI is too important not to regulate, and regulate well. It may be counterintuitive, but we think that regulation done well here will speed up innovation, not set it back.

There are some risks, though. If we end up with a patchwork of regulations that differ state by state or country by country in meaningful ways, that's likely to set innovation back. And so, when we think about the regulatory approach in the U.S. - I'm not an expert in regulatory design, but I've talked to a bunch of people on our teams who are, and what they say really makes sense to me - we need to think about a hub-and-spoke model.

And what I mean by that is that groups like NIST [National Institute of Standards and Technology] set the overall approaches for trustworthy AI - what the standards for development are - and then those are adapted in domain-specific areas, with, say, HHS [Department of Health and Human Services] or the FDA [U.S. Food and Drug Administration] adapting them for health.

The reason that makes sense to me is that we don't live our lives in only one sector, as consumers or as people. All the time, we see that health and retail are part of the same thing, or health and transportation. We know that the social determinants of health drive the majority of our health outcomes, so if we have different regulatory frameworks across those sectors, that will impede innovation. But for companies like us, who really want to color inside the lines, regulation will help.

And the last thing I'll say on that is that we've been active and engaged in the conversation with groups like the National Academy of Medicine, which has a number of committees working on a code of conduct for AI in healthcare, and we're thankful to be part of that conversation as it goes forward.

MHN: Do you believe there's a need for transparency regarding how the AI is developed? Should regulators have a say in what goes into the LLMs that make up an AI offering?

Dr. Howell: There are a couple of important principles here. Healthcare is already a deeply regulated area, and one of the things we think is that you don't need to start from scratch.

Things like HIPAA have, in many ways, really stood the test of time. Taking the frameworks that already exist, that we know how to operate in, and that have protected Americans in the case of HIPAA, makes a ton of sense, rather than starting again from scratch in places where we already know what works.

We think it's really important to be transparent about what AI can do - the places where it's strong and the places where it's weak. There are a lot of technical complexities, and transparency can mean many different things, but one thing we know is that understanding whether an AI system operates fairly and whether it promotes health equity is really important. It's an area we invest in deeply and that we've been thinking about for a number of years.

I'll give you two examples, two proof points of that. In 2018, more than five years ago, Google published its AI Principles, with Sundar [Sundar Pichai, Google's CEO] on the byline. And I've got to be honest: in 2018, we got a lot of people saying, "Why are you doing that?" It was because the transformer architecture was invented at Google, and we could see what was coming, so we needed to be grounded deeply in principles.

We also, in 2018, took the unusual step for a big tech company of publishing, in an important peer-reviewed journal, a paper about machine learning and its chance to promote health equity. We've continued to invest in that by recruiting folks like Ivor Horn, who now leads Google's health equity efforts specifically. So we think these are really important areas going forward.

MHN: One of the biggest worries for many people is the chance of AI making health equity worse.

Dr. Howell: Yes. There are many different ways that can happen, and that is one of the things we focus on. There are really important things to do to mitigate bias in data. There's also a chance for AI to improve equity. We know that the delivery of care today is not filled with equity; it's filled with disparity. We know that's true in the United States. It's true globally. And the ability to improve access to expertise, and to democratize expertise, is one of the things that we're really focused on.
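One concrete way to look for the kind of disparity Howell mentions is to compare a model's error rates across demographic groups. The sketch below is a hypothetical, minimal example of that check - an "equal opportunity" gap in true-positive rates; the data, group labels and function name are illustrative and do not come from any specific Google tooling.

    from collections import defaultdict


    def tpr_by_group(records):
        """records: iterable of (group, true_label, predicted_label) tuples."""
        positives = defaultdict(int)  # count of actual positives per group
        hits = defaultdict(int)       # correctly flagged positives per group
        for group, truth, predicted in records:
            if truth == 1:
                positives[group] += 1
                if predicted == 1:
                    hits[group] += 1
        return {g: hits[g] / positives[g] for g in positives}


    # Toy predictions for two demographic groups, "A" and "B".
    data = [
        ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
        ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 0, 1),
    ]
    print(tpr_by_group(data))  # roughly {'A': 0.67, 'B': 0.33} - a gap worth investigating

A large gap like this one would not by itself prove bias, but it flags exactly the kind of fairness question regulators and developers would want surfaced transparently.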

The HIMSS AI in Healthcare Forum is taking place December 14-15, 2023, in San Diego, California.
