
Experts share tips on turning datasets into useful machine learning algorithms

To take full advantage of AI, organizations need the right team, clear metrics and goals, and the right expectations about real-world accuracy.
By Jonah Comstock

Artificial intelligence and machine learning can help health systems in many ways, from improving backend efficiencies to monitoring patients to streamlining imaging analysis. But even when a health system has good datasets, turning those into useful and accurate algorithms is no easy feat.

On a panel at the HIMSS Machine Learning & AI for Healthcare conference in Boston today, moderated by HITN Managing Editor Mike Milliard, representatives from Indiana University Health, Advocate Aurora Health, Beth Israel Deaconess Medical Center and Jvion laid out some of the best practices and potential pitfalls of rolling out AI systems in the enterprise.

“There’s no AI without data, and the more data you have, the more opportunity AI presents,” said Tina Esposito, Advocate Aurora’s chief health information officer. “So to jump in without creating an infrastructure is kind of naive. But I think an organization has to accept that data is a core asset, data has to be managed that way, there’s opportunities, and I think that you need to put the resources down to support those opportunities to answer some really big questions.”

Understanding the data

“Understanding where your data is coming from is very important,” BIDMC’s Steven Horng, who leads informatics for the hospital’s emergency division, said when explaining that a learning algorithm can be easily thrown off by a small change in how data is entered or coded.

“Having an organizational structure that can really bring out those nuances in the data is really important,” he said. “So we really try to vertically orient our platform so we have analysts in each department that also interface with the bigger analytical team. We’ve also started to retool a lot of our folks within our organization to learn machine learning. Whether or not they do the analytics themselves, to understand it is vitally important, so when they do analytics on the front end, they collect data that will be useful on the backend.”

But even with a strong team, AI by its nature works with more data than humans can review, which makes it a losing proposition for humans to spot-check the algorithm by hand. So a key piece of good data governance is figuring out how to automate that oversight as well.

“If yesterday we measured pain on a score of zero to 10 and today we use a scale of zero to 100 and we don’t remember to update all of our models, then what?” he said. “Now we have all of these predictions which are way off, and building the automated surveillance methodology to catch those outliers is an ongoing challenge.”
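Horng’s pain-scale example maps onto a concrete monitoring pattern. Below is a minimal sketch of what such automated surveillance might look like, assuming incoming feature values arrive in batches; the feature profile, thresholds and alert handling are hypothetical illustrations, not details from the panel.

```python
import statistics

# Hypothetical training-time profile for a pain-score feature:
# the model was built when pain was charted on a 0-10 scale.
TRAINED_RANGE = (0, 10)
TRAINED_MEAN = 4.2  # illustrative value, not a figure from the article

def check_feature_drift(values, trained_range=TRAINED_RANGE,
                        trained_mean=TRAINED_MEAN, tolerance=2.0):
    """Flag a batch of incoming values whose range or mean no longer
    matches the distribution the model was trained on."""
    lo, hi = trained_range
    alerts = []
    out_of_range = [v for v in values if not lo <= v <= hi]
    if out_of_range:
        alerts.append(f"{len(out_of_range)} values outside [{lo}, {hi}]")
    batch_mean = statistics.mean(values)
    if abs(batch_mean - trained_mean) > tolerance:
        alerts.append(f"batch mean {batch_mean:.1f} far from trained mean {trained_mean}")
    return alerts

# Yesterday pain was charted 0-10; today someone switches to 0-100.
todays_scores = [35, 70, 10, 85, 40]
for alert in check_feature_drift(todays_scores):
    print("DRIFT ALERT:", alert)
```

In production, a check like this would run continuously against every model input, with alerts routed to the analytics team rather than printed to a console.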

Don’t take metrics or goals for granted

Everyone agrees that algorithms need to meet certain standards to be used in clinical care. But figuring out where to set that bar, or even what metrics to set it against, can be much more involved than expected.

“Essential building blocks are committed resources, … governance, metrics and visualization. If you can’t see what your metrics show, if you don’t have metrics defined, if you don’t have agreement in what you’re looking at [you can’t validate your data],” John Showalter, chief product officer at Jvion, said. “It took us six months to get an agreed upon — signed off by all the CIOs — definition of readmissions.”

Michael Schwarz, IU Health’s director of decision support and analytics, said his organization took a year to define metrics like readmissions and length of stay.

Clearly defining metrics is important, Esposito said, and so is clearly defining your goals before beginning a project.

“When you have a hammer, everything looks like a nail,” she said. “We get sucked into the buzzwords, but I think [it’s important to be] cautious and careful and understand which tools exist and what’s the right tool for the problem. This is a means to an end. What’s that end? Clearly define it and make it your North Star. And ensure that the tools you’re employing to help answer those questions make sense.”

Real-world drift and looking to the future

One thing panelists cautioned against was equating an algorithm’s performance in a controlled data warehouse environment with its real-world performance.

“You’re in research and you get this amazing accuracy, and a ton of excitement around it,” Schwarz said. “Then you take all that data and you throw it in a blender because it’s now in a transactional [EHR] system and some of the data gets entered late or inaccurately. People say ‘Wait a second, you said we had this accuracy and now we’re running it and it’s not there, did you pull the wool over my eyes?’”

Because of this gap, published studies about AI accuracy often don’t tell the whole story.

“It’s not just test and train. It’s test, train, and real-world validate, and it’s the real-world validation that matters,” Showalter said. “It’s a three-step process and most of what’s published is a two-step process.”
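To make the distinction concrete, here is a minimal sketch of that three-step process using scikit-learn; the synthetic dataset, and the noise added to stand in for late or inaccurate charting in a live EHR, are illustrative assumptions rather than anything shown at the panel.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Steps 1-2: train and test on clean, warehouse-style data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Step 3: real-world validation on data from the live transactional
# system, simulated here by corrupting the inputs the way late or
# inaccurate data entry would. This is the number that actually matters.
rng = np.random.default_rng(0)
X_live = X_test + rng.normal(0, 1.5, size=X_test.shape)
print("real-world AUC:", roc_auc_score(y_test, model.predict_proba(X_live)[:, 1]))
```

The second number will typically come in well below the first, which is exactly the gap Schwarz describes between research accuracy and production accuracy.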

But despite all the work that remains to be done, Showalter said AI is poised to become the new normal in medicine… eventually.

“Probably the best parallel we have in medicine to this type of prediction is going from the microscope analysis of blood more toward flow cytometers. … Now we look back, it’s only been like 35 years, but we would never imagine taking a clinician’s time to look at a microscope, do a white blood cell count and count with the clicker, except in extraordinary circumstances. The bread and butter is it’s done with a machine, and if you pull a doctor or nurse off the floor, probably two in a hundred can tell you how it actually works. That’s where prediction is going.”
