Future Proofing Healthcare Automation

The following is a guest article by Niall O’Connor, Chief Technology Officer at Cohere Health

Constructing a Responsible AI Framework for Prior Authorization

The American healthcare system is on the precipice of a critically necessary digital transformation. Anyone who has experience with healthcare, even just as a consumer, knows technology and change are needed. A recent report found that 89% of healthcare organizations are still using fax machines and 39% are using pagers. It’s time to future-proof healthcare automation, and I’m not the only one suggesting that now is the time for a digital transformation in healthcare.

Gartner projects expenditure on enterprise IT within the healthcare and life sciences sector to expand by 9.1% during 2023, reaching $240.5 billion. Healthcare leaders are allocating additional funds towards IT services and software, with anticipated yearly growth rates of 11.9% and 13.1%, respectively. Most recently, health industry investors have focused more on artificial intelligence (AI).

As industry leaders have eased into adopting AI solutions and machine learning techniques to expedite the provision of high-quality care and optimize administrative workflows, an enormous opportunity to improve prior authorization processes has surfaced. The process of obtaining prior authorization to evaluate the medical necessity of treatments has plagued healthcare professionals and patients for years. In short, because the process is so complex, inappropriate prior authorization denials contribute to delays in patient care and increase administrative burdens for physicians and their staff. Healthcare leaders have thus set their sights on AI to address the deficiencies of a system characterized by outdated and overly complex prior authorization approval procedures.

While cutting-edge AI tools offer significant potential for enhancing operational efficiency, streamlining processes, and improving patient outcomes, it is crucial that AI-driven technology be used responsibly and regulated well. To gain insight into effective regulation, one must first understand the key ways to establish guardrails around AI's role in the prior authorization approval process, and what details and actions to consider when leveraging automation.

Creating the Foundation for Responsible AI 

The efficacy of AI hinges on the quality of its input data, so responsible use starts with acknowledging its inherent limitations. Addressing those limitations and adhering to four critical considerations for responsible AI transformation can help deliver high-quality, value-based care and improve patient outcomes.

  • Accountability: Responsible use of AI entails a robust collaboration between clinical experts and software engineers, ensuring that the creation, assessment, and refinement of AI models are guided by deep, evidence-based clinical practices.
  • Inclusiveness & Equity: AI models aligned with specific health plan policies uphold consistent standards, prevent erroneous care denials, and maintain equity for vulnerable patient populations whose care would otherwise be affected by social determinants of health.
  • Privacy & Security: Safeguarding sensitive patient information requires clinical oversight and technology designed to support it. AI models for prior authorization requests should exclude patient identifiers, relying solely on essential treatment data such as treatment type, date of care, and diagnosis (a simple illustration follows this list).
  • Transparency: Decisions driven by AI must be rooted in clinical data, and transparent practices are vital in minimizing the risk of AI models recommending unjustified prior authorization denials.
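
As a purely illustrative sketch, and not a description of any vendor's actual system, the snippet below shows one way a prior authorization request might be stripped of patient identifiers before it reaches an AI model, keeping only the treatment type, date of care, and diagnosis. The field names and payload structure are hypothetical assumptions.

    # Hypothetical illustration only: field names and payload structure are
    # assumptions, not any vendor's actual schema.

    # Clinical fields an AI model needs to evaluate a prior authorization request.
    ALLOWED_FIELDS = {"treatment_type", "date_of_care", "diagnosis_code"}

    def deidentify_request(request: dict) -> dict:
        """Drop patient identifiers, keeping only essential treatment data."""
        return {key: value for key, value in request.items() if key in ALLOWED_FIELDS}

    raw_request = {
        "patient_name": "Jane Doe",        # identifier: excluded from model input
        "member_id": "ABC123456",          # identifier: excluded from model input
        "treatment_type": "MRI, lumbar spine",
        "date_of_care": "2023-09-14",
        "diagnosis_code": "M54.5",         # ICD-10 code for low back pain
    }

    model_input = deidentify_request(raw_request)
    print(model_input)
    # {'treatment_type': 'MRI, lumbar spine', 'date_of_care': '2023-09-14', 'diagnosis_code': 'M54.5'}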

Building the Case for AI Accountability 

One in three doctors report adverse patient events stemming from prior authorization issues and delays. Failing to proactively address the potential for negative outcomes, such as algorithmic bias, is unwise and could exacerbate today’s legacy prior authorization concerns. 

In response, the American Medical Association (AMA) has been advocating for increased regulatory supervision of artificial intelligence’s application in assessing prior authorization requests. This oversight includes evaluating whether health plans adhere to a comprehensive and equitable process and mandating human examination of patient records before deciding to deny care. This is one of the major steps to mitigate the risks associated with leaning too heavily on technology rather than utilizing it as a support tool.

One misconception about leveraging AI in the prior authorization process is that the technology makes the final decision. In reality, AI does not and should never be used to automatically deny authorization requests, or to delay patients' access to the care they need. Accountability is the foundation of responsible AI: clinical expertise must not only govern prior authorization processes in real time but also be integral to AI model development from the outset.

One example of this overreliance on AI was uncovered in 2022 during an investigation into the use of AI-based technology in the prior authorization review process. Over a period of two months, investigators found that doctors denied more than 300,000 claims while relying on AI-generated denial recommendations they did not adequately review.

Findings like these have prompted growing apprehension about excessive reliance on AI and inadequate clinical oversight. Again, AI's primary role should be to expedite positive health outcomes and guide providers toward the best treatment options. Physicians cannot rely solely on AI to make final decisions on prior authorization requests without actively scrutinizing the data. We must ensure that intelligent decision-making is conducted responsibly and accurately. By championing responsible AI alongside advanced clinical innovation and oversight, the healthcare industry is charting a course toward a more patient-centric, precise, and equitable healthcare system.

About Niall O’Connor

Currently, Niall serves as the Chief Technology Officer for Cohere Health. Prior to joining Cohere, Niall acted as Chief Technology Officer for Genospace, a cloud-based software company focused on delivering personalized medicine. Niall was a member of the founding team at Genospace, which was acquired by Sarah Cannon, the Cancer Institute of HCA. Niall also held software engineering positions at Dana-Farber Cancer Institute and Fidelity Investments. 

   
