Optum research head describes AI's 'black box' problem and its impact on healthcare

By Dave Muoio
04:47 pm

To all of the people who work with machine learning and neural network healthcare technologies but can't describe exactly how the gears are turning: don't worry, you're not alone.

In a sold-out discussion hosted at Optum’s Boston office and organized by the Design Museum Foundation, Sanji Fernando, vice president and head of OptumLabs’ Center for Applied Data Science, explained how the ongoing difficulties researchers face in explaining the decision-making process of these networks are limiting their role in healthcare. Along with describing the potential therapeutic benefits of an easily interpreted AI, he noted the ways in which the technology is best being implemented in the meantime by his company and others, and what the next steps might be.

“There’s some amazing breakthroughs in AI that we expect will transform health and healthcare across the world,” Fernando said during his talk on Friday. “And while some of those breakthroughs may radically change the way you receive care, there’s a long road and journey before we accomplish that — along the way, though, there’s really valuable impact that AI can have in how we receive healthcare today.”

Fernando’s talk largely centered on neural networks: computing systems that learn to perform specific tasks after being fed thousands or millions of binary data points. From these, the systems identify the specific variables or characteristics needed to determine (using Fernando’s examples) whether an online photo contains a cat, or whether diabetic retinopathy is present in a patient’s imaging.
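
For readers unfamiliar with the mechanics, a toy sketch of the kind of system Fernando described might look like the following. The synthetic data, layer sizes, and choice of library here are illustrative assumptions, not details from his talk.

```python
# Illustrative sketch only: a tiny neural network trained on labeled examples,
# standing in for the "cat photo" or retinopathy classifiers Fernando described.
# The synthetic data and layer sizes below are assumptions for demonstration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Pretend each row is a flattened image and each label says whether the
# finding of interest (a cat, a retinopathy lesion) is present.
X = rng.normal(size=(5000, 64))              # 5,000 "images", 64 features each
y = (X[:, :8].sum(axis=1) > 0).astype(int)   # hidden rule the network must learn

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X, y)

# The trained network gives a confident answer for a new example...
new_example = rng.normal(size=(1, 64))
print(model.predict_proba(new_example))
# ...but nothing in the output explains why it decided that, which is the
# "black box" problem the rest of the talk turns on.
```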

The issue is that researchers only have a clear view of the beginning and end of this complex process, Fernando explained.

“As of today, … it’s very hard to understand exactly why the neural network decided that you had a reticular hemorrhage. It takes in lots of information but it doesn’t give us a lot of interpretability, and when we think about diagnosing someone — complex condition, it could be a cancer diagnosis — it is challenging for a physician or a clinician to really be confident [in relying] on this prediction or inference without a deeper understanding why it came to that decision,” he said. “That’s why we think there’s some amazing work happening in academia, in academic institutions, in large companies, and within the federal government, to safely approve this kind of decision making, but there’s a lot of work we have ahead of us to be able to understand what’s happening inside these deep learning [systems].”

This “black box” problem is a large reason for the healthcare tech industry’s hesitancy surrounding machine learning-based clinical decision support tools, he continued.

“Explainable AI is going to be extremely important for us in healthcare in actually bridging this gap from understanding what might be possible and what might be going on with your health, and actually giving clinicians tools so that they can really be comfortable and understand how to use [it] in a clinical decision support setting,” he said. “And I think that’s possible — there’s lots of great work happening at MIT, Stanford, and many other institutions, and while we’re not there right now today, we might solve this tomorrow, or Monday, or six months from now. We think this will be solved, and that’s going to be really exciting.”

A fully explainable AI may not yet be a reality, but Fernando noted that enough strides have been made to at least give people a rough sense of which factors are influencing a neural network. Even with these advances, he stressed the importance of educating users on how best to handle the tools' inherent specificity and sensitivity shortcomings, and of building neural network-based tools so that they naturally complement the people providing care.
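
Those shortcomings can be made concrete with the standard definitions of sensitivity and specificity. The confusion-matrix counts below are invented for illustration and are not figures from Optum or the talk.

```python
# Illustrative arithmetic only: sensitivity and specificity from a confusion
# matrix. The counts are made up for demonstration purposes.
true_positives = 90    # sick patients the model correctly flags
false_negatives = 10   # sick patients the model misses
true_negatives = 850   # healthy patients correctly cleared
false_positives = 50   # healthy patients incorrectly flagged

sensitivity = true_positives / (true_positives + false_negatives)   # 0.90
specificity = true_negatives / (true_negatives + false_positives)   # ~0.94

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
# A clinician using such a tool needs to know, for example, that roughly 1 in 10
# true cases would still be missed: the kind of user education Fernando called for.
```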

“It’s an important design element for us to interact with AI so it doesn’t feel like a complete black box; so that we have some understanding [and] can marry the AI with the human process,” he said.

In the meantime, Fernando said that Optum and others are frequently looking to AI and neural networks to solve non-clinical tasks that are “a little boring, but extremely important in delivering healthcare today.”

“We’ve recognized that while there is a black box problem, there’s lots of processes and steps in delivering healthcare today in the US that require a certain amount of manual work: administrative processing of claims, determining if providers are properly reimbursed,” he said. “Those questions are capabilities and processes that we may not need to know exactly why, because we’re not cutting anyone out of the loop in decision making, but we’re optimizing and increasing the productivity of everyone in the healthcare system.”

Other long-term goals for AI researchers highlighted by Fernando included “one-shot learning,” or training an AI using just a few points of data, which could be useful for addressing new or emerging diseases, and “generalizable AI,” a system capable of numerous tasks ranging from medical diagnoses to parallel parking a car. Fernando didn’t see either of these goals being achieved anytime soon, but he was at least somewhat encouraged by the progress of Amazon’s Alexa and similar technologies toward the latter.
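
To make the “one-shot learning” idea concrete, one common research approach classifies a new case by comparing it against a single stored example per class rather than training on thousands of examples. The sketch below is a generic illustration of that idea; the labels and feature vectors are invented, and it does not describe any Optum system.

```python
# Generic illustration of one-shot classification: assign a new case to the
# class whose single stored example is closest. Feature vectors are invented.
import numpy as np

# One stored example ("prototype") per condition.
prototypes = {
    "condition_A": np.array([0.9, 0.1, 0.3]),
    "condition_B": np.array([0.2, 0.8, 0.7]),
}

def one_shot_classify(features: np.ndarray) -> str:
    """Return the label whose single stored example is closest to the input."""
    return min(prototypes, key=lambda label: np.linalg.norm(features - prototypes[label]))

print(one_shot_classify(np.array([0.85, 0.2, 0.25])))  # -> "condition_A"
```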

Regardless of the form it takes, Fernando was fairly certain that AI is nowhere close to fully replacing trained humans in determining care. Much like autopilot technology in the aviation industry, he said that it will first and foremost remain a support tool for medical experts.

“We’re not talking about replacing people, we’re really talking about helping people focus on the most complex parts of the process,” he said.
