Q&A: The potential implications of AI for healthcare disparities

Dr. Jay Bhatt, managing director at Deloitte, warns that, without high-quality algorithms, technology has the potential to increase health inequities and drive mistrust.
By Jessica Hagen

Photo courtesy of Deloitte

The COVID-19 pandemic has highlighted disparities in healthcare throughout the U.S. over the past several years. Now, with the rise of AI, experts are warning developers to remain cautious when implementing models, to ensure those inequities are not exacerbated.

Dr. Jay Bhatt, practicing geriatrician and managing director of the Center for Health Solutions and Health Equity Institute at Deloitte, sat down with MobiHealthNews to provide his insight into AI's possible advantages and harmful effects on healthcare.

MobiHealthNews: What are your thoughts around AI use by companies trying to address health inequity?

Jay Bhatt: I think the inequities we're trying to address are significant. They're persistent. I often say that health inequities are America's chronic condition. We've tried to address it by putting Band-Aids on it or in other ways, but not really going upstream enough.

We have to think about the structural, systemic issues that are impacting healthcare delivery and lead to health inequities – racism and bias. And machine learning researchers have detected some of those preexisting biases in the health system.

They also, as you allude to, have to address weaknesses in algorithms. And there are questions that arise at all stages, from ideation, to what the technology is trying to solve, to deployment in the real world.

I think about the issue in a number of buckets. One is limited race and ethnicity data, which challenges us. The other is inequitable infrastructure – lack of access to the kinds of tools, broadband and the digital divide, but also gaps in digital literacy and engagement.

So, digital literacy gaps are high among populations already facing especially poor health outcomes, such as disparately affected ethnic groups, low-income individuals and older adults. And then there are challenges with patient engagement related to cultural, language and trust barriers. So technology and analytics have the potential to really be helpful and be enablers to address health equity.

But technology and analytics also have the potential to exacerbate inequities and discrimination if they're not designed with that lens in mind. So we see this bias embedded within AI for speech and facial recognition, and in the choice of data proxies for healthcare. Prediction algorithms can produce inaccurate predictions that impact outcomes.
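One illustration of how a proxy choice alone can skew an algorithm: in a widely cited 2019 Science study, Obermeyer and colleagues found that a population-health algorithm using healthcare cost as a proxy for healthcare need systematically under-prioritized Black patients, who generated less cost at the same level of need. The sketch below is a hypothetical, simplified simulation in that spirit; the groups, coefficients and thresholds are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two equally sized groups with identical underlying health need.
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B
need = rng.gamma(2.0, 1.0, n)              # true (unobserved) need

# Access barrier: group B generates less healthcare cost per unit of
# need, so observed spending is a biased proxy for need.
access = np.where(group == 0, 1.0, 0.6)
cost = need * access * rng.lognormal(0.0, 0.25, n)

# "Algorithm": enroll the top 10% by cost into a care-management program.
enrolled = cost >= np.quantile(cost, 0.90)

for g, name in [(0, "group A"), (1, "group B")]:
    mask = group == g
    print(f"{name}: mean need {need[mask].mean():.2f}, "
          f"enrolled {enrolled[mask].mean():.1%}")
# Despite equal need, group B is enrolled far less often. The bias
# comes from the proxy label, not from the model architecture.
```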

MHN: How do you think that AI can positively and negatively impact health equity?

Bhatt: So, one of the positive ways is that AI can help us identify where to prioritize action and where to invest resources to address health inequity. It can surface perspectives that we may not otherwise be able to see.

I think the other is the issue of algorithms having a positive impact on how hospitals allocate resources to patients, but potentially a negative impact as well. You know, we see race-based clinical algorithms, particularly around kidney disease and kidney transplantation. That's one of a number of examples that have surfaced where there's bias in clinical algorithms.
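For context on the kidney example: the 2009 CKD-EPI equation for estimated GFR multiplied its estimate by 1.159 for patients identified as Black, making kidney function look better on paper and potentially delaying specialist referral or transplant listing; the coefficient was removed in the 2021 refit of the equation. The sketch below implements the 2009 formula purely to show how much a single race term moves the number – it is illustrative, not clinical, code.

```python
def egfr_ckd_epi_2009(scr_mg_dl, age, female, black):
    """2009 CKD-EPI creatinine eGFR (mL/min/1.73 m^2).

    Shown only to illustrate how the race coefficient shifts the
    estimate; the 2021 refit removed the race term entirely.
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159   # the race coefficient at issue
    return egfr

# Same labs, same age and sex – only the race flag differs.
base = egfr_ckd_epi_2009(1.4, 60, female=False, black=False)
with_coeff = egfr_ckd_epi_2009(1.4, 60, female=False, black=True)
print(f"{base:.1f} vs {with_coeff:.1f}")  # ~15% apart, enough to cross
                                          # a referral threshold
```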

So, we put out a piece on this that has really been interesting, that shows some of the places this happens and what organizations can do to address it. First, there's bias in a statistical sense – maybe the model being tested doesn't work for the research question you're trying to answer.

The other is variance, where you do not have a large enough sample size to get really good output. And then the last thing is noise: something has happened during the data collection process, way before the model gets developed and tested, that impacts the model and its outcomes.
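These three failure modes can be checked for directly. The sketch below – a hypothetical example with invented data, not Deloitte's methodology – audits a simple model's accuracy per subgroup (statistical bias), reports subgroup sample sizes (variance from small samples) and counts near-duplicate records with conflicting labels (a crude probe for noise introduced upstream).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5_000

# Toy cohort: two features, a subgroup flag and a binary outcome.
X = rng.normal(size=(n, 2))
group = rng.choice(["A", "B"], size=n, p=[0.9, 0.1])  # B is under-sampled
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)

# 1) Bias check: does performance differ by subgroup?
# 2) Variance check: is any subgroup's sample too small to trust?
for g in ["A", "B"]:
    mask = g_te == g
    acc = model.score(X_te[mask], y_te[mask])
    print(f"group {g}: n={mask.sum()}, accuracy={acc:.3f}")

# 3) Noise check (crude): near-identical feature rows with conflicting
# labels hint at problems in data collection, before any modeling.
key = np.round(X, 1)
_, inv = np.unique(key, axis=0, return_inverse=True)
conflicts = sum(len(set(y[inv == i])) > 1 for i in np.unique(inv))
print(f"near-duplicate rows with conflicting labels: {conflicts}")
```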

I think we have to create more diverse data. The high-quality algorithms we're trying to train require the right data, and then systematic, thorough, up-front thinking and decisions when choosing what datasets and algorithms to use. And then we have to invest in talent that is diverse in both background and experience.
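As one concrete form of that up-front thinking, a team might compare its training cohort's demographics against the population the model is meant to serve before any training happens. The sketch below uses a chi-square goodness-of-fit test; the categories and counts are invented for illustration.

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical cohort counts by self-reported race/ethnicity versus a
# reference population distribution. All numbers are invented.
cohort = np.array([7200, 900, 1100, 800])        # observed counts
reference = np.array([0.60, 0.13, 0.19, 0.08])   # target population shares

expected = reference * cohort.sum()
stat, p = chisquare(cohort, expected)
print(f"chi-square={stat:.1f}, p={p:.3g}")
# A tiny p-value flags that the training cohort does not look like the
# population the model will serve – a cue to re-sample or re-weight
# before training, rather than after deployment.
```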

MHN: As AI progresses, what fears do you have if companies don't make these necessary changes to their offerings?

Bhatt: I think one would be that organizations and individuals are making decisions based on data that may be inaccurate, not interrogated enough and not thought through for potential bias.

The other is the fear of how it further drives mistrust and misinformation in a world that's really struggling with that. We often say that health equity can be impacted by the speed at which you build trust but also, more importantly, how you sustain trust. When we don't think through and test the output, and it turns out to cause an unintended consequence, we still have to be accountable for that. And so we want to minimize those issues.

The other is that we're still very much in the early stages of trying to understand how generative AI works, right? Generative AI has really come to the forefront now, and the question will be how various AI tools talk to each other, and then what's our relationship with AI?

And what's the relationship various AI tools have with each other? Because certain AI tools may be better in certain circumstances – one for science versus resource allocation, versus providing interactive feedback. 

But, you know, generative AI tools can raise thorny issues, yet they can also be helpful. For example, if someone is seeking support, as we do on telehealth for mental health, and gets messages that may have been drafted by AI, those messages may not incorporate empathy and understanding. That may cause an unintended consequence and worsen the condition someone has, or impact their willingness to engage with care settings.

I think trustworthy AI and ethical tech are paramount – one of the key issues that the healthcare system and life sciences companies are going to have to grapple with and build a strategy for. AI has an exponential growth pattern, right? It's changing so quickly.

So, I think it's going to be really important for organizations to understand their approach, to learn quickly and have agility in addressing some of their strategic and operational approaches to AI, and then helping provide literacy, and helping clinicians and care teams use it effectively.
