ChatGPT May Not Be Ready to Revolutionize the Healthcare Industry Quite Yet, But There’s Promise for the Future

The following is a guest article by Heather Lane, Senior Architect of Data Science at athenahealth  

Artificial intelligence (AI) and machine learning (ML) have transformed numerous industries in recent years, and healthcare is no exception. AI-powered chatbots, such as ChatGPT, have emerged as tools with the potential to drastically transform healthcare and the delivery of personalized care. However, while these chatbots hold promise for revolutionizing healthcare, there are several reasons why these technologies need more time before widespread adoption. While there is no denying that ChatGPT is appealing and brings a human-level “charm,” arguments can be made that it currently lacks “sincerity,” as it has no understanding of “truth” or “correctness.” The human touch is still needed to help AI achieve its full potential.

Current Concerns: Why ChatGPT Isn’t Ready to Substitute for Humans

In healthcare, accuracy is paramount, and a wrong diagnosis or piece of advice from ChatGPT could have severe consequences for a patient. The potential for errors remains in ChatGPT’s responses: it has been trained on vast amounts of data, much of which reflects racial, gender, ethnic, and other stereotypes, using machine learning algorithms that can make mistakes. This can result in biased or non-factual responses for patients and providers.

Although this type of technology can be beneficial, and the chatbot can help facilitate interaction between patients and providers, there are concerns about the accuracy of its answers and whether it can account for each case’s uniqueness when used, for example, as a summarization tool. There is no guarantee that ChatGPT’s answers will not be too generic and leave out relevant information that might be key to the patient and their diagnosis or critical to their course of care. Equally dangerous in a healthcare context, ChatGPT and related AI systems are known to “hallucinate” false statements.

The Future Use of ChatGPT

While we have talked about some of the barriers associated with ChatGPT, it is also important to think about some of the incredible ways it could be used as it continues to evolve and improve. In many ways, ChatGPT could prove to be the holy grail for providers and patients.

For example, there is enormous potential to use it for communicating with patients beyond the clinical setting and for overcoming communication barriers, even closing care gaps. A large study of Spanish speakers living in the U.S., a group of roughly 25 million people, found that they reported receiving about a third less healthcare than other Americans. The study also found that Spanish speakers had 36% fewer outpatient visits than non-Hispanic adults. This clearly shows the need for technology to help bridge language barriers. ChatGPT, or other AI-based language translation systems, can serve as a resource for multilingual interaction and simultaneous translation, helping to communicate a message in a patient’s first language, reducing language-based gaps in healthcare and improving the patient experience. That said, this technology lacks the emotional intelligence and empathy often required for dealing with health-related issues.
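To make the idea concrete, here is a minimal sketch of what patient-message translation through a large language model might look like, assuming the OpenAI Python SDK’s chat-completions interface. The model name, prompt wording, and helper function are illustrative assumptions rather than any particular vendor’s design, and any clinical deployment would still need human review of the output and appropriate privacy safeguards.

```python
# Minimal sketch: translating a provider's message into a patient's first language.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY in the
# environment. The model name and prompt wording are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()

def translate_for_patient(message: str, target_language: str) -> str:
    """Return a plain-language translation of a provider message.

    Hypothetical helper for illustration; real clinical use would require human
    review of translations and careful handling of protected health information.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You translate healthcare messages into the requested language "
                    "using clear, patient-friendly wording. Do not add or omit "
                    "medical details."
                ),
            },
            {
                "role": "user",
                "content": f"Translate into {target_language}:\n\n{message}",
            },
        ],
        temperature=0,  # favor consistent phrasing over creative variation
    )
    return response.choices[0].message.content

# Hypothetical usage:
# print(translate_for_patient("Please take the medication twice daily with food.", "Spanish"))
```

Even in a sketch this small, the article’s caveats apply: the translation still needs a bilingual human in the loop before it reaches a patient.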

Another area where ChatGPT has potential within healthcare is chart summarization. When patients receive a diagnosis, doctors often hand them a very dense packet (digital or paper) containing everything they need to know about their condition. ChatGPT has the potential to help by summarizing and simplifying that extensive document into a few sentences of relevant, digestible information.

An AI-powered chatbot that can go through a patient chart and pull relevant information targeted to the provider’s specialty and appointment type would enable providers to spend more time with patients and ensure that they have the right information at their fingertips. For example, the information providers need for a brand-new oncology patient is quite different from what they need for an annual physical or a procedural follow-up. Imagine if ChatGPT could provide relevant information for each provider safely and effectively without missing key elements: delivering information in a more timely and accurate fashion could not only improve the patient experience and lead to better outcomes, but also increase job satisfaction for healthcare providers.
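As a rough illustration of what specialty-targeted summarization could look like, the sketch below parameterizes the prompt by specialty and appointment type, again assuming the OpenAI chat-completions interface. The function, model name, and prompt are hypothetical; nothing here guarantees that key elements are not missed, which is exactly why the validation discussed next matters.

```python
# Minimal sketch: asking an LLM to summarize chart excerpts for a given specialty
# and appointment type. The prompt structure and model name are illustrative
# assumptions; a real system would need safeguards against omissions and hallucinations.
from openai import OpenAI

client = OpenAI()

def summarize_chart(chart_text: str, specialty: str, appointment_type: str) -> str:
    """Return a short, specialty-targeted summary of chart text (illustration only)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You summarize patient chart excerpts for clinicians. Keep only "
                    "information relevant to the stated specialty and appointment type, "
                    "and say 'not documented' rather than guessing when data is missing."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Specialty: {specialty}\n"
                    f"Appointment type: {appointment_type}\n\n"
                    f"Chart excerpts:\n{chart_text}"
                ),
            },
        ],
        temperature=0,  # reduce variation between runs of the same chart
    )
    return response.choices[0].message.content
```

A sketch like this makes the open questions visible: the summary must be checked against the source chart, and prompt instructions alone cannot guarantee that nothing important is dropped.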

As we mentioned earlier, however, we first need to be confident that ChatGPT can do this safely, effectively, and fairly, guaranteeing that relevant information critical to patient care is not missed. In the interim, human testing to ensure accuracy will be critical.

ChatGPT’s Potential Today: How to Think About It

As the healthcare industry considers adopting and implementing ChatGPT for its potential benefits, it is crucial to understand the valid concerns about its readiness. Because the accuracy risks are real, the challenge for technologists and partnering clinicians will be to evaluate each AI-powered capability and define its trust, safety, and value measures.

As AI technology continues to advance, it is essential to carefully consider the risks and benefits of implementing AI-powered solutions in healthcare, ensuring that patient safety and well-being remain a top priority. Capabilities can move along the maturity curve quickly, and rapid innovation and technology diversification can change perspectives just as fast. In practice, clinicians need to explore and learn about the not-yet-ready technologies, as solutions can quickly move from “not ready” to “can’t live without.” That education will empower clinicians to evaluate each capability and make informed decisions based on value and readiness, ensuring they neither get too far out over their technology skis nor get left behind.

About Heather Lane

Heather Lane is the Senior Architect of the Data Science team and of the Data subdivision of Platform & Data Services at athenahealth. She has technical oversight of the Machine Learning, Artificial Intelligence, and Natural Language Processing activities at athenahealth. Dr. Lane received her PhD in Machine Learning and Computer Security from Purdue University in 2000, spent two years as a postdoc in the CSAIL lab at MIT studying Reinforcement Learning and Decision Theory, and then became a professor of Computer Science at the University of New Mexico in 2002. She spent ten years at UNM, researching Machine Learning with applications to Biosciences and Neuroscience, and earning tenure there. In 2012, she moved to industry, working for Google for five years before moving to athenahealth to head the development of its Data Science team. Since joining athenahealth, she has overseen the development of over a dozen ML projects that collectively provide tens of millions of dollars of annual cost savings to athenahealth and its customers. Outside work, she is a wife, mother, SF geek, gamer, biker, hiker, sailor, and camper.
