Why Generative AI in Healthcare Requires a Focus on Caregivers and Patients

The following is a guest article by Tina Manoharan, Global Lead of Data Science & AI at Philips

“Don’t overwhelm me with more data. Help me get more value out of the data I already have.” 

This feedback from a physician has stuck with me ever since I gave one of my first public presentations on the opportunities of AI and big data in healthcare, many years ago. Today, with the groundswell of interest in generative AI, it’s a message that seems more pertinent than ever. 

Much has already been written about how generative AI can boost healthcare innovation, in times when healthcare professionals are under more stress and pressure than ever before. But how do we ensure that generative AI truly supports doctors and nurses in providing better patient care, rather than adding even more complexity to their work?

For generative AI in healthcare to deliver on its promise, the focus should be on the caregiver and the patient first and foremost – using technology as a way of enhancing the human care experience.

Taking a People-Centered Approach to Generative AI in Healthcare

My ongoing conversations with healthcare leaders and professionals have revealed three key elements of a people-centered approach to generative AI, one that delivers better care experiences and, ultimately, better patient outcomes:

Start with Needs, Not with Technology

The most beneficial innovations using generative AI, like any other innovation, will be need-driven rather than technology-driven. According to a recent survey by Bain & Company, healthcare leaders see the biggest short-term opportunities of generative AI in reducing the administrative burden on their staff and enhancing operational efficiencies. 

We are already seeing many promising developments in this direction. These include, for example, automated notetaking of physician-patient encounters, as well as conversational interfaces that support quick and intuitive exploration of all available patient data to help physicians understand the patient history and context, without them having to look up each piece of data manually. In the future, such interfaces may also provide predictive recommendations for diagnosis and treatment, based on an analysis of similar patients. 

Encouragingly, healthcare professionals are receptive to these developments. In fact, the Philips Future Health Index 2023 report shows that being at the forefront of AI in healthcare is now the number one consideration for younger healthcare professionals in choosing where to work (cited by 49%). This is a far cry from the initial trepidation we saw among healthcare professionals just a few years back when the rise of AI was shrouded by fears of job losses. By and large, healthcare professionals have come to appreciate that AI will assist and empower them, rather than replace them.

Embed Generative AI Deeply into the Clinical Workflow

If there’s anything we have learned in healthcare over the past decade, it’s that well-intended digital solutions meant to simplify the lives of physicians and nurses can also have unintended consequences. What looks like a promising innovation can inadvertently create more complexity in clinical practice. Take radiology, for example: radiologists work under immense time pressure in highly complex environments, running different applications in parallel on multiple screens. One study found that introducing AI into the radiologist’s workflow actually increased their workload. That’s because it added another thing for them to monitor, often requiring them to open a separate application. 

Generative AI therefore needs to be integrated deeply into the clinical workflow, assisting caregivers at the point of decision-making. For example, rather than asking radiologists to open a separate software application to run an AI-based image analysis, it should be automatically overlaid on the image they are already inspecting. Similarly, AI can make it easier for nurses in the ICU to keep a caring eye on patients by automatically integrating real-time vital signs measurements with other data to alert them when a patient’s health is about to deteriorate. In both cases, AI supports the workflow of the caregiver, rather than complicating it.

Build Trust by Using Generative AI in a Safe and Responsible Way

Next to the challenge of workflow integration, generative AI also amplifies other existing challenges around the use of AI in healthcare, such as trust, lack of transparency, and the risk of bias. The tendency of generative AI to occasionally generate incorrect but plausible outputs has sparked concerns that an overreliance on AI could lead to erroneous decisions that put patients in harm’s way. Earlier this year, the World Health Organization rightly called for caution – stressing the importance of safe, effective, and ethical use of generative AI.

A ‘human in the loop’ will be required to ensure that any suggestions and recommendations provided by AI truly benefit patients. As anyone who has used ChatGPT or similar tools can attest, their output can feel like magic. But in healthcare, where patient lives are at stake, physicians will only trust AI if they can easily understand and explain how it arrives at its conclusions. Conversational interfaces could help physicians understand why AI makes certain suggestions and recommendations, and therefore, how those outputs will impact patient care.

Fair and unbiased use of AI also becomes even more critical with the rise of generative AI in healthcare. If a large foundation model is fed with biased data, it can perpetuate or amplify existing healthcare disparities – at a scale that has even greater potential for harm than traditional AI models. To safeguard fair and unbiased decision-making supported by generative AI, we must fine-tune foundation models with diverse data sets and rigorously validate their performance for specific use cases and patient cohorts before they are deployed in a clinical setting.
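In practice, validating performance per patient cohort means never relying on a single aggregate score. A minimal sketch of that idea follows; the cohort names, records, and the `per_cohort_accuracy` helper are all hypothetical and stand in for whatever subgroup definitions and metrics a real clinical validation would use:

```python
from collections import defaultdict

def per_cohort_accuracy(records):
    """Group predictions by patient cohort and report accuracy per cohort,
    so a weak subgroup is not hidden behind a strong overall average."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for cohort, prediction, actual in records:
        totals[cohort] += 1
        if prediction == actual:
            hits[cohort] += 1
    return {cohort: hits[cohort] / totals[cohort] for cohort in totals}

# Hypothetical validation set: (cohort, model prediction, ground truth).
records = [
    ("cohort_a", "positive", "positive"),
    ("cohort_a", "negative", "negative"),
    ("cohort_b", "positive", "negative"),
    ("cohort_b", "negative", "negative"),
]
print(per_cohort_accuracy(records))  # {'cohort_a': 1.0, 'cohort_b': 0.5}
```

Here the overall accuracy is 75%, yet one cohort sits at 50% – exactly the kind of disparity that per-cohort validation is meant to surface before clinical deployment.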

Close collaboration across the healthcare industry – including with regulatory bodies – will be essential to address these challenges head-on, without letting them slow down the pace of innovation. In the next 10 years, technology will only become more powerful, and the appetite for AI among healthcare professionals is already there. Now is the time to start learning together, always keeping the caregiver and the patient top of mind – because in an age of rapidly advancing machine intelligence, it’s ultimately the human care experience that should benefit.

About Tina Manoharan

As the Global Lead of Data Science & AI at Philips, Tina focuses on advancing data- and AI-powered propositions that deliver actionable clinical, operational, and consumer health insights. Her team leverages data science and AI to support Philips businesses and functions in creating AI-enabled smart connected devices, services, and solutions. Prior to this role, she held various positions as an innovation and business leader. She holds a PhD in computer science from Heriot-Watt University, Edinburgh (UK).
