Recommendations to regulators configuring rules on AI use in healthcare

Digital health leaders say frameworks already exist that can be adopted for AI regulation, but going too far with rules could stifle innovation.
By Jessica Hagen


Federal agencies like the FDA and HHS have established programs to ensure the safe and trustworthy use of artificial intelligence across sectors, including healthcare. 

Digital health executives shared their recommendations with MobiHealthNews for regulators configuring rules around AI use in healthcare, including flagging AI-generated content and building on existing regulatory frameworks. 

Ann Bilyew, SVP, health and group general manager, WebMD Ignite

"Don’t overdo it. Many of the protections we need are already there in preexisting regulations like HIPAA in the U.S. and GDPR in Europe. Some specific regulations may need to be tweaked or revised, but the frameworks are there. 


Sam Glassenberg, CEO and founder of Level Ex

"AI-generated content should be held to the same standards as any other content in healthcare (peer review, transparent data/citations, etc.), with one major caveat: AI-generated content must always be flagged as such. It is crucial that, in any content review process, if content is AI-generated, it must be flagged as AI-generated to all reviewers. 

"Human writers might make copy errors or might misunderstand a concept, and so on, but if they lack understanding of a concept, they will likely avoid writing in too much depth or detail. GenAI is the opposite: It will produce completely incorrect medical information in incredible detail. If references or data don’t exist, it will make them up  expanding on incorrect information that makes its content more believable at the expense of accuracy. It is crucial that in any content review process if content is AI-generated, it must be flagged as AI-generated to all reviewers."


Kevin McRaith, president and CEO of Welldoc

"Firstly, regulators will need to agree on the required controls to safely and effectively integrate AI into the many facets of healthcare, taking risk and good manufacturing practices into account.

"Secondly, regulators must go beyond the controls to provide the industry with guidelines that make it viable and feasible for companies to test and implement in real-world settings. This will help to support innovation, discovery and the necessary evolution of AI."


Amit Khanna, senior vice president and general manager of health at Salesforce

"We need regulators to define and set clear boundaries for data and privacy, while at the same time allowing technology to transform the industry. Regulators need to ensure regulations do not create walled gardens/silos in healthcare but instead, minimize the risk while allowing AI to reduce the cost of detection, delivery of care, and research and development."


Dr. Peter Bonis, chief medical officer at Wolters Kluwer Health

"The executive order on the safe, secure and trustworthy development and use of artificial intelligence has layered a set of directives to various federal agencies to establish AI regulations. These directives must be considered in the context of an existing regulatory framework that affects a variety of healthcare applications. Clarity and navigability will be crucial to achieve a balance that creates a constructive set of regulatory guidance that does not stifle innovation. Federal agencies developing such policies should do so transparently, with involvement of the public and other stakeholders and, critically, with rich interagency collaboration."


Eran Orr, CEO of XRHealth

"Patients need to know from the beginning when something is AI-based and not an actual clinician. There needs to be full disclosure to patients when that is the case. The industry needs to bridge the gap from where we are today in terms of AI tools; however, healthcare doesn't have room for errors  it needs to be fully reliable from the beginning."
