Q&A: How to properly implement data to ensure effective generative AI use

Anita Mahon, executive vice president and global head of healthcare at EXL, discusses how the analytics firm helps payers and providers utilize data within generative AI.
By Jessica Hagen


Varying use cases for generative AI in healthcare have emerged, from assisting providers with clinical documentation to helping researchers determine novel experimental designs. 

Anita Mahon, executive vice president and global head of healthcare at EXL, sat down with MobiHealthNews to discuss how the global analytics and digital solutions company helps payers and providers determine which data to implement into their LLMs to ensure best practices in their businesses and offerings. 

MobiHealthNews: Can you tell me about EXL?

Anita Mahon: EXL works with most of the largest national health plans in the U.S., as well as a broad range of regional and mid-market plans. We also work with PBMs, health systems, provider groups and life sciences companies, so we get a pretty broad perspective on the market. We've been focused on data analytics solutions and services and digital operations and solutions for many years.

MHN: How will generative AI affect payers and providers, and how will they remain competitive within healthcare?

Mahon: It really comes down to the uniqueness and variation that will already be resident in their data before they start putting it into models and creating generative AI solutions from it. 

We think if you've seen one health plan or one provider, you've only seen one health plan or one provider. Everyone has their own nuanced differences. They're all operating with different portfolios and different parts of their member or patient population in different programs, with different mixes of Medicaid, Medicare, exchange and commercial business. Even within those programs there is wide variety across their product designs, and local and regional market and practice variations all come into play.

And every one of these healthcare organizations has aligned itself, and designed its products and internal operations, to best support that segment of the population it serves. 

And they have different data they're relying on today in different operations. So, as they bring their own unique datasets together, married with the uniqueness in their business (their strategy, their operations, the market segmentation that they've done), what they're going to be doing, I think, is really fine-tuning their own business model. 

MHN: How do you ensure that the data provided to companies is unbiased and will not create more significant health inequities than already exist?

Mahon: So, that's part of what we do in our generative AI solution platform. We're really a services company. We're working in tight partnership with our clients, and even something like a bias mitigation strategy is something we would develop together. The kinds of things we would work on with them would be things like prioritizing their use cases and their road map development, doing blueprinting around generative AI, and then potentially setting up a center of excellence. And part of what you would define in that center of excellence would be things like standards for the data that you're going to be using in your AI models, standards for testing against bias and a whole QA process around that. 

And then we are also offering data management, security and privacy in the development of these AI solutions and a platform that, if you build upon, has some of that bias monitoring and detection tools kind of built-in. So, it can help you with early detection, especially in your early piloting of these generative AI solutions.

MHN: Can you talk a bit about the bias monitoring EXL has?

Mahon: I know certainly that when we're working with our clients, the last thing we want to do is allow preexisting biases in healthcare delivery to come through and be exacerbated and perpetuated by the generative AI tools. So we apply statistical methods to identify potential biases, ones related not to clinical factors but to other factors, and highlight them if that's what we're seeing as we test the generative AI.
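As an illustration only (this is not EXL's actual tooling, and the group labels, outcome definition and flag threshold are all assumptions), a statistical bias check of the kind Mahon describes might compare a model's positive-outcome rates across demographic groups and flag large gaps during pilot testing:

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """Largest gap in positive-outcome rates across demographic groups.

    records: iterable of (group, outcome) pairs, where outcome is 0 or 1
    (e.g., whether the model recommended follow-up care).
    Returns (gap, rates_by_group).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return gap, rates

# Hypothetical pilot data: (group, did the model recommend follow-up care?)
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

gap, rates = demographic_parity_gap(data)
if gap > 0.2:  # flag threshold is an assumption, set per use case
    print(f"Potential bias flagged: rates={rates}, gap={gap:.2f}")
```

In practice a QA process like the one Mahon outlines would also control for clinically relevant factors before flagging a gap as bias, which this simple rate comparison does not do.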

MHN: What are some of the negatives that you've seen as far as using AI in healthcare?

Mahon: You've highlighted one of them, and that's why we always start with the data. You don't want the unintended consequences of carrying forward something from data that isn't really right, and we all talk about the hallucinations the public LLMs can produce. So, there's value to an LLM because it's already several steps ahead in terms of its ability to interact in natural English. But it's really critical that you understand that you've got data that represents what you want the model to be generating, and then, even after you've trained your model, that you continue to test and assess it to ensure it's generating the kind of outcome you want. The risk in healthcare is that you may miss something in that process.

I think most healthcare clients will be very careful and circumspect about what they're doing and will gravitate first toward certain use cases. Instead of offering up that dream, fully personalized patient experience, the first step might be to create a system that lets the individuals currently interacting with patients and members do so with much better information in front of them.
