FDA takes first steps toward new guidance for adaptive medical AI

An exploratory whitepaper released yesterday lays out a framework the FDA hopes to refine into a future draft guidance.
By Jonah Comstock

FDA Commissioner Dr. Scott Gottlieb may be stepping down, but he isn’t slowing down on his way out the door. Yesterday the agency released a 20-page exploratory whitepaper on how it could regulate artificial intelligence and machine learning algorithms in medical devices.

“As algorithms evolve, the FDA must also modernize our approach to regulating these products,” Gottlieb wrote in a statement. “We must ensure that we can continue to provide a gold standard of safety and effectiveness. We believe that guidance from the agency will help advance the development of these innovative products.”

The problem with the FDA’s existing methods, as they pertain to AI, is that they are ill-equipped to handle algorithms that constantly learn and self-improve: the agency generally requires manufacturers to submit a new application when major modifications are made to cleared software.

As such, the proposed new framework would allow companies to include, in their premarket submissions for AI products, their plans for anticipated modifications. These plans would take the form of what the agency has dubbed “SaMD Pre-Specifications” (SPS) and an “Algorithm Change Protocol” (ACP).

The SPS lays out the sorts of changes the company anticipates making, while the ACP details the processes it will follow to make those changes.

Certain changes, like improvements in performance and changes in data inputs, would not require a new submission, but others, like significant changes to intended use, would.

The FDA gives the example of a physician-facing mobile app for examining skin lesions, which collects data from users and uses it to improve the algorithm. The company would not need to seek new clearance for the gradual improvement of the algorithm, nor to make the app compatible with additional smartphone cameras. But if it used the algorithm in a patient-facing app, that would require a new clearance.

The whitepaper is not a draft guidance, and the FDA is seeking comments on it before moving further.

WHY IT MATTERS

The industry has been asking for this kind of guidance for a long time. Epstein Becker Green partner and CDS Coalition chairman Bradley Merrill Thompson penned a piece on the topic for MobiHealthNews nearly two years ago, raising some of these same concerns.

“On the whole, I'm very excited to see FDA so enthusiastic about the future of AI, and so willing to look at new innovative approaches to regulating it,” Thompson told MobiHealthNews in an email. “I just hope that the agency can carry through with some of these new initiatives all the way to completion. As between the two programs — precertification and this new AI initiative — speaking personally I think I'm more excited about the new AI initiative.”

Dr. Eric Topol, who recently published “Deep Medicine,” a book about AI in healthcare, was also supportive of the agency’s move.

“It's really good that the framework reflect the need to go past ‘locking’ of AI algorithms that are autodidactic,” he wrote in an email. “By freezing them at the time of approval, that loses their potential to be even better with respect to performance accuracy. That FDA is looking into ways to dynamically adapt to prevent algorithmic locking is a positive sign, represents progress.”

WHAT’S THE TREND

The FDA has been approving AI algorithms for a while now (we rounded up 12 examples last fall). But so far these algorithms have been limited in their ability to learn, as Gottlieb explained in his statement.

“The artificial intelligence technologies granted marketing authorization and cleared by the agency so far are generally called ‘locked’ algorithms that don’t continually adapt or learn every time the algorithm is used,” he wrote. “These locked algorithms are modified by the manufacturer at intervals, which includes ‘training’ of the algorithm using new data, followed by manual verification and validation of the updated algorithm.”

The proposed framework could open the door for another kind of algorithm.

“These machine learning algorithms that continually evolve, often called ‘adaptive’ or ‘continuously learning’ algorithms, don’t need manual modification to incorporate learning or updates,” Gottlieb wrote. “Adaptive algorithms can learn from new user data presented to the algorithm through real-world use. For example, an algorithm that detects breast cancer lesions on mammograms could learn to improve the confidence with which it identifies lesions as cancerous or may learn to identify specific subtypes of breast cancer by continually learning from real-world use and feedback.”
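
To make that distinction concrete, here is a minimal sketch (our illustration, not code from the FDA whitepaper) using scikit-learn’s SGDClassifier. The “locked” model is trained once and frozen, as with the algorithms cleared to date; the “adaptive” one keeps updating itself via partial_fit as new real-world data arrives. The features and feedback labels are entirely hypothetical stand-ins.

```python
# A minimal sketch contrasting a "locked" algorithm with an "adaptive"
# (continuously learning) one. All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))          # stand-in training features
y_train = (X_train[:, 0] > 0).astype(int)    # stand-in labels

# Locked: trained once, then frozen. Any retraining is a discrete,
# manufacturer-controlled update followed by re-verification and validation.
locked_model = SGDClassifier(random_state=0)
locked_model.fit(X_train, y_train)

# Adaptive: keeps learning from new data seen in real-world use,
# with no manual modification step between updates.
adaptive_model = SGDClassifier(random_state=0)
adaptive_model.partial_fit(X_train, y_train, classes=[0, 1])

for _ in range(10):                          # simulated post-deployment stream
    X_new = rng.normal(size=(20, 5))
    y_new = (X_new[:, 0] > 0).astype(int)    # feedback from real-world use
    adaptive_model.partial_fit(X_new, y_new) # model updates itself in the field
```

Under the proposed framework, the scope of those in-the-field updates would be bounded in advance by the SPS and carried out according to the ACP.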

This isn’t coming out of left field — Gottlieb alluded to this guidance last year at Health Datapalooza in Washington, D.C.
