Health IT’s Role in Fixing Healthcare’s Biggest AI Misconceptions

The following is a guest article by Dave DeCaprio, Co-Founder and Chief Technology Officer at ClosedLoop

No matter where you look – from TikTok filters to the latest ChatGPT release – artificial intelligence (AI) has a profound and growing impact on our daily lives. The market for AI is expected to show strong growth in the coming decade; its current value of nearly $100 billion is expected to increase twentyfold by 2030, reaching nearly $2 trillion.

AI has the potential to transform the ever-growing healthcare industry. However, many healthcare organizations don’t know where to start. Leaders struggle to distinguish AI solutions capable of delivering real clinical value from the marketing buzz that rides the hype cycle, and many may not even know what to look for in an AI solution. Those with a better understanding of AI also worry about the potential effects of bias.

Now is the time for healthcare IT leaders to educate the market, guiding healthcare organizations to make more informed decisions about adopting and implementing AI, while also helping them see beyond the hype. This includes dispelling misconceptions and explaining relevant use cases, such as using large language models (LLMs) for note-taking and computer vision for digital pathology. It’s a tall order. Broad education is no easy task, and this industry is notoriously slow to adopt cutting-edge technology.

To help healthcare organizations fully capitalize on the potential of AI, let’s separate fact from fiction and dig into three common misconceptions about healthcare AI that I often see perpetuated from my perspective as co-founder of a healthcare AI company. 

Misconception: AI Technology Isn’t Ready to Be Used in Practice Today

Many people still view AI as a mysterious, futuristic technology thanks to its complexity and pop culture depictions. In reality, AI has existed for decades, with the term being coined by scientists in 1956. Today, AI and machine learning are readily used to more efficiently detect cancer, develop drugs, manage administrative tasks, and much more. Forward-thinking organizations are using AI to predict who is most likely to experience potentially preventable negative health outcomes and events, such as unplanned hospital admissions or chronic disease progression. These insights enable clinicians to take targeted, informed actions that use limited care resources optimally and produce better health and financial outcomes. Beyond the most common use cases for AI, healthcare organizations are also tapping into the generative AI craze, as seen in the Mayo Clinic’s partnership with Google Cloud, among other examples.

Overcoming this misconception and using AI to increase efficiency and curb costs is paramount as the industry reels from the biggest clinician staffing shortages ever, staggering burnout, and unsustainable spending. In addition to predicting health risks and enhancing care delivery processes, AI is also widely used to automate time-consuming administrative tasks, freeing up clinicians and other personnel to focus on more valuable and enjoyable work.

Misconception: AI Will Replace Clinicians

One of the biggest misconceptions I hear is that AI will replace teams of trained physicians, nurses, and other clinical staff. Put simply, AI is not a replacement for humans but rather an augmentation of human capabilities. With AI, medical professionals are still making the decisions and delivering care, but their decisions are also informed by AI-driven insights that use clinically relevant data to surface the right people and make accurate predictions about their future health outcomes.

Poring over massive amounts of unstructured data from EHRs, claims, and other sources simply isn’t feasible for humans, and in no scenario is a doctor sifting through their entire patient database to determine who merits proactive intervention. However, AI is perfect for this task, and it enables healthcare organizations to achieve better outcomes by empowering clinicians with insights, not by replacing them.

Misconception: Algorithmic Bias Is Inevitable in Healthcare AI

Across industries, one of the major concerns about AI is its propensity to perpetuate biases based on how algorithms are trained. In healthcare, where AI-generated predictions influence life-altering decisions, this is a valid concern. It’s true that algorithmic bias is pervasive across the algorithms healthcare organizations use for risk stratification.

A seminal 2019 paper by Obermeyer et al. on algorithmic bias in healthcare found evidence of racial bias in an Optum algorithm that covered 70 million lives. The algorithm inappropriately used health costs as a proxy for health needs; because less money is spent on Black patients than on white patients with the same level of need, it incorrectly “learned” that Black members are healthier than equally sick white members. As a result, white members were prioritized for care and special programs despite being less sick on average, systematically disadvantaging Black members and contributing to worse outcomes.

The truth is that algorithmic bias isn’t inevitable in healthcare. Algorithms are not inherently biased, but if an AI/ML model isn’t trained, managed, and audited correctly, it can worsen health disparities across gender, race, and socioeconomic status, as the Optum case shows. Organizations harnessing the power of predictive analytics must also assume responsibility for auditing algorithms for bias prior to deployment and monitoring them frequently as they’re used in practice; otherwise, well-intentioned efforts can inadvertently backfire.
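One pre-deployment audit of this kind can be sketched in a few lines. The sketch below is a hypothetical illustration only (the data, field names, and threshold are invented, not from any real system or vendor tool): at a fixed risk-score threshold, it compares the actual health needs, here approximated by chronic-condition counts, of the members the model would select from each group. A large gap suggests the model’s training target is acting as a biased proxy for need, the same pattern the 2019 study documented.

```python
def mean(xs):
    return sum(xs) / len(xs)

def audit_selected_need(records, threshold):
    """records: list of dicts with 'group', 'risk_score', 'num_conditions'.

    Returns the mean chronic-condition count among members whose risk score
    meets the threshold, broken out by group. Comparable means across groups
    are expected from an unbiased score; a large gap warrants investigation.
    """
    by_group = {}
    for r in records:
        if r["risk_score"] >= threshold:
            by_group.setdefault(r["group"], []).append(r["num_conditions"])
    return {g: mean(counts) for g, counts in by_group.items()}

# Hypothetical members: groups A and B receive equal risk scores,
# but group B's selected members carry a heavier disease burden.
members = [
    {"group": "A", "risk_score": 0.9, "num_conditions": 4},
    {"group": "A", "risk_score": 0.8, "num_conditions": 3},
    {"group": "B", "risk_score": 0.9, "num_conditions": 6},
    {"group": "B", "risk_score": 0.8, "num_conditions": 5},
    {"group": "B", "risk_score": 0.2, "num_conditions": 1},
]

need = audit_selected_need(members, threshold=0.75)
# need["B"] exceeds need["A"]: equally scored members of group B are
# sicker, so the score is understating group B's true health needs.
```

In practice, the same comparison would be run on the model’s real target and outcome data, repeated across demographic dimensions, and re-run regularly after deployment rather than only once.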

Moving Past Misconceptions: What You Should Look for in an AI Solution

While the uptick in AI adoption is promising, healthcare organizations that adopt AI still need to know exactly how and where each application of AI can make a significant impact. It’s not enough to settle for uninterpretable “black box” solutions and assume they’re delivering value. Healthcare data scientists and clinicians need visibility into the inner workings of algorithms to ensure that AI-driven recommendations are helping, not hurting, their populations. My advice: look for an AI solution that is explainable and intuitive, allows you to audit for bias, and can be tailored to your organization’s unique circumstances and specific needs. Once you find the right one, the potential is enormous.
