Problems With Epic’s Sepsis Prediction Model Underscore Larger Issues With Algorithmic Prediction Models

Recently, a small blaze of negative publicity erupted when newly published research suggested that Epic’s proprietary model designed to predict the onset of sepsis performs far worse than the vendor claims.

Researchers behind the study, which appears in JAMA Internal Medicine, examined a cohort of 27,697 patients undergoing 38,455 hospitalizations, finding that sepsis occurred in 7% of the hospitalizations. The proprietary Epic sepsis model achieved a hospitalization-level area under the receiver operating characteristic curve of 0.63, substantially worse than the performance Epic itself reports.
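For readers unfamiliar with the metric, the area under the receiver operating characteristic curve (AUROC) summarizes how well a model’s risk scores separate patients who developed sepsis from those who did not: 1.0 is perfect discrimination and 0.5 is no better than a coin flip, which is why 0.63 is so underwhelming. The short Python sketch below uses entirely made-up labels and scores, not Epic or study data, just to show how the figure is typically computed with scikit-learn.

```python
# Illustrative only: synthetic labels and risk scores, not Epic or study data.
from sklearn.metrics import roc_auc_score

# 1 = hospitalization in which sepsis occurred, 0 = no sepsis (~7% prevalence here)
labels = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]

# Hypothetical model risk scores for the same hospitalizations
scores = [0.12, 0.40, 0.05, 0.33, 0.21, 0.55, 0.18, 0.09,
          0.47, 0.26, 0.61, 0.14, 0.38, 0.52]

# AUROC is the probability that a randomly chosen sepsis case
# receives a higher risk score than a randomly chosen non-case.
print(roc_auc_score(labels, scores))
```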

Under these circumstances, clinicians would have needed to evaluate 109 Epic-flagged patients to find one who needed an intervention to head off sepsis.

Not only that, the Epic technology did a poor job of managing alert volume, generating alerts for 18% of all of the hospitalized patients studied. This double failure, inaccurate sepsis prediction combined with a flood of needless alerts, is a matter of concern, the authors note.
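To make the alert burden concrete, here is a rough back-of-the-envelope sketch. The cohort size and 18% alert rate come from the study as summarized above, but the fraction of alerts treated as genuinely actionable is a hypothetical value chosen purely for illustration; the point is only to show how quickly the number of patients clinicians must evaluate per useful alert balloons, not to re-derive the researchers’ figure of 109.

```python
# Back-of-the-envelope sketch with assumed inputs, not the study's actual counts.
hospitalizations = 38_455   # cohort size reported in the study
alert_rate = 0.18           # share of hospitalized patients who triggered an alert

alerts = hospitalizations * alert_rate

# Assumption (hypothetical, for illustration only): a small fraction of alerts
# flag sepsis cases clinicians had not already recognized and could still act on.
actionable_fraction = 0.0092

actionable_alerts = alerts * actionable_fraction
patients_per_useful_alert = alerts / actionable_alerts  # = 1 / actionable_fraction

print(f"Alerts generated:                      {alerts:,.0f}")
print(f"Actionable alerts (assumed):           {actionable_alerts:,.0f}")
print(f"Patients to evaluate per useful alert: {patients_per_useful_alert:,.0f}")
```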

None of this is good. But arguably, the real problem is not the potentially major flaw in the Epic tool itself (which the vendor will surely work out quickly, given its otherwise close relationships with customers) but what these findings say about how hospitals are using predictive models like the sepsis model.

In their write-up, the study’s authors note that while hundreds of hospitals use the model, Epic hasn’t told the world much about how the technology functions or performs.  In colorful language also cited by our friends at HISTalk, the researchers suggest that we could be looking at “an underbelly of confidential, non-peer-reviewed model performance documents that may not accurately reflect real-world model performance.”

Moreover, they conclude, the fact that Epic’s model has been widely adopted despite poor performance raises fundamental concerns about sepsis management on a national level.

Unfortunately, this tracks with a larger trend identified in a survey conducted by MedCity News, which looked at how hospitals used algorithmic tools to manage their COVID-19 caseloads. To conduct the study, the publication contacted 80 hospitals to learn about the decision-support systems they implemented during the pandemic.

The model most commonly used by these hospitals was Epic’s deterioration index, though many hospitals also developed their own algorithms. Over time, these models took on tasks such as predicting which patients were most likely to test positive, determining who should be contacted first for monoclonal antibody treatment and deciding who should be enrolled in at-home monitoring programs.

This turns out to be a somewhat risky strategy. An ongoing review of more than 200 COVID-19 risk-prediction models found that the majority carried a high risk of bias, including problems with racial disparities in care and results that prioritized patients seen remotely by physicians for the administration of COVID-19 treatment.

None of this is to suggest that healthcare AI, predictive algorithms and decision-support tools shouldn’t play a part in hospital decision-making. However, it does point to a mountain of work the industry will need to do if it wants to use these tools safely.

About the author

Anne Zieger

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.
