Trustworthy AI can transform healthcare

Experts at CES 2021 discuss how to overcome the barriers to trusting something that is hard to understand and visualize.
By Mallory Hackett

By now, healthcare organizations deploy artificial intelligence and machine learning across almost every aspect of the care journey. From clinical decision support to revenue-cycle automation, AI has become an integral tool in healthcare.

The problem that remains is how clinicians, staff and patients can trust something they can't see and often don't understand. Health experts addressed this issue during a CES 2021 virtual panel discussion on Tuesday.

WHERE THE TRUST ISSUES COME FROM

For many machine learning algorithms, users can't see what's going on at the data-input level, which can make it difficult to comprehend how the software arrives at its recommendations, according to Christina Silcox, policy fellow for digital health at the Duke-Margolis Center for Health Policy.

“Communication really is key,” she said during the panel. “You can’t just set up the software in front of somebody and say ‘Trust me,’ particularly if they need to make decisions based on that information that is really going to be critical to the patient.”

At the regulatory level, AI is sometimes left out of important approval processes that build consumer trust.

“Not all of these products are considered medical devices,” Silcox said. “Therefore they’re not under FDA’s authority and they don’t necessarily get that FDA stamp of approval and that third-party trusted reviewer vetting.”

Silcox also pointed out that software isn't always patentable, which can lead manufacturers to rely on trade secrecy instead.

“That means they might be more reluctant to share details that they otherwise might if they had that patent[ed] software,” she said.

When information about a product is concealed, the public misses out on details that could build trust in it.

According to Silcox, this trust-building information could include: independent performance data; the population the software is intended for and how it should be used; explanations of why and how the product makes its decisions; details about the data the algorithm was trained on; the input requirements; and how the software will be evaluated and updated over time.
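As one illustration of how such disclosures might be packaged, the hypothetical sketch below expresses them as a machine-readable "model card" in Python. Every field name and value here is invented for the example; it is not drawn from any FDA or industry template.

```python
# Hypothetical example only: a machine-readable "model card" capturing the
# kinds of disclosures Silcox describes. All fields and values are
# illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    intended_population: str          # who the software is intended for
    intended_use: str                 # how it should be used
    training_data_summary: str        # details about the training data
    input_requirements: list[str]     # what inputs the algorithm expects
    decision_explanation: str         # why and how the product decides
    independent_performance: dict[str, float] = field(default_factory=dict)
    update_policy: str = "unspecified"  # how it is evaluated and updated

card = ModelCard(
    intended_population="Adults 18+ presenting to the emergency department",
    intended_use="Decision support only; not a replacement for clinical judgment",
    training_data_summary="De-identified records from three hospital systems, 2015-2019",
    input_requirements=["vital signs", "lab results", "medication list"],
    decision_explanation="Gradient-boosted model; top features reported per prediction",
    independent_performance={"AUROC, external validation": 0.81},
    update_policy="Re-validated annually against a held-out prospective cohort",
)
print(card.intended_population)
```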

HOW TO BUILD UP TRUST IN AI

Confidence in AI must be built at three levels, according to Pat Baird, senior regulatory specialist at Philips: it needs to achieve technical trust, regulatory trust and human trust.

The first level asks whether the algorithm does what it was designed to do.

“So, just from a purely intellectual standpoint, is this a solid application or not?” Baird said.

On the regulatory side, the software must be able to stand up to different agencies' expectations and requirements, Baird said.

But at the end of the day, the product must stand up to user scrutiny. 

“We’re also talking about interacting with human beings and sometimes they’re not necessarily purely going to follow the technical trust and regulatory trust. They’re going to have some questions,” Baird said. “If you have a bad user interface of your product, people aren’t going to like it and they might not trust it.”

Part of that human trust comes from taking into account the differences among user populations.

“Depending on who the stakeholder is, who that user is, we’re going to have to customize it for that particular application,” Baird said. “Those different considerations are very, very important at really [knowing] who your customer is, who your end user is.”

Clinicians are one user population whose preferences are especially important to understand, according to Jesse Ehrenfeld, chair of the American Medical Association's board of trustees.

The AMA's surveys of clinicians across the country show that they want to know whether a product will work for them and their patients, whether they will get paid for using it, and whether they are liable for the product's decisions, Ehrenfeld said.

Another way to build trust in AI is to make the applications themselves better, which often means improving data.

“As a nation, we really need to strengthen our healthcare data infrastructure and put a focus on improving digital health data,” Silcox said.

Most importantly, according to Silcox, data needs to be interoperable and linkable.

“That’s really how we’re going to improve AI and make sure that it’s as useful as possible,” she said.
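What "interoperable and linkable" means in practice can be made concrete with a small sketch: the toy Python example below joins two hypothetical record sets on a shared patient identifier, the kind of linkage Silcox describes. All names and values are invented for illustration.

```python
# Hypothetical illustration of "linkable" health data: two record sets from
# different sources joined on a shared patient identifier. Fields and values
# are invented for the example.
ehr_records = [
    {"patient_id": "P001", "diagnosis": "type 2 diabetes"},
    {"patient_id": "P002", "diagnosis": "hypertension"},
]
lab_results = [
    {"patient_id": "P001", "hba1c": 7.9},
    {"patient_id": "P002", "hba1c": 5.4},
]

# Index labs by patient, then link the sources. Without a common identifier
# (or a reliable way to match records), these datasets could not be combined
# to train or evaluate an AI model.
labs_by_patient = {row["patient_id"]: row for row in lab_results}
linked = [
    {**ehr, **labs_by_patient.get(ehr["patient_id"], {})}
    for ehr in ehr_records
]
print(linked[0])  # {'patient_id': 'P001', 'diagnosis': 'type 2 diabetes', 'hba1c': 7.9}
```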

Creating trust in AI has the power to transform healthcare, according to Baird.

“Technology has already dehumanized healthcare,” he said. “I’m hoping this will help re-humanize healthcare by freeing up the caregivers, letting them give care, and some of the things that we can let the computers do, we can let the computers do now.”
