Helping Healthcare Professionals Evaluate AI Offerings

Steve Weber
AJ Grotto

The following is a guest article by Steve Weber, Professor of the Graduate School at UC Berkeley, and AJ Grotto, Director of the Program on Geopolitics, Technology, and Governance at Stanford University.

Generative artificial intelligence technologies promise to improve healthcare in large part by enabling healthcare professionals to utilize massive troves of health-relevant data. They will fuel improvements across the healthcare value chain, from a better understanding of physiology and disease to efficiencies in disease management, treatment protocols, and even billing and reimbursement. The technology companies facilitating this push into AI-powered health services know that strengthening their footholds in the health sector could mean a significant windfall. But what matters most is what’s good for patients, and that means caution is warranted.

Putting patients first means that health systems, their government partners, and regulators need to look at companies’ security and interoperability track records to ensure the technologies being adopted keep patients’ private records safe and usable across providers. Competition, then, is an important aspect of this focus on patients. Regardless of arguments about natural monopolies and standards, it is far too early in the game for any single solution to be permitted to dominate the market.

To its credit, the healthcare industry has digitized at a remarkable rate: 96% of hospitals in the United States use electronic health records (EHR). Yet the vast majority of EHR data remains unstructured and under-utilized. Health systems’ piecemeal approach to digitization is a far cry from the well-engineered data lakes that other industries, such as financial services, have built with large-scale distributed file systems to support faster, easier, and more productive data access and usage.

AI technologies could help accelerate improvements in healthcare for patients, payors, and providers, but they have to win over skeptics in order to preserve caregiver and patient trust. A recent survey found that more than half of healthcare professionals don’t believe AI is ready for medical use. This was reflected in a BMJ Global Health article published this year, which attributed professionals’ apprehension to concerns regarding AI errors that cause patient harm, data privacy and security, and incomplete or biased data.

Health systems tend to favor an incremental approach that focuses, as it should, on patient outcomes rather than simply on enhancing data use or improving day-to-day business processes. As technology companies launch bespoke healthcare products, ink strategic partnerships, and tout their expansive visions at industry events, we don’t want the bright shiny object of generative AI to get ahead of protecting patients’ privacy and confidentiality. Americans are already, and appropriately, uncomfortable with the idea of AI being used in their own healthcare, in part because of how major technology companies already exploit their health data and the leaks those companies experience.

Along with government partners, healthcare professionals should be at the forefront of evaluating and adopting AI technology. The recent expansion of Microsoft’s partnership with Epic presents an important opportunity to do so. Together, the companies promise to help solve major challenges in healthcare through AI in the Microsoft Cloud. This collaboration makes sense conceptually: Healthcare practitioners would benefit from seamless conversational interfaces and would appreciate reducing manual, labor-intensive processes like entering clinical notes into Epic. These benefits will be overshadowed, though, if data in Microsoft’s purview is insecure or non-interoperable.

That is why it is important to remember Microsoft’s poor cybersecurity track record. Its products were at the core of two of the largest hacks of government and critical infrastructure in recent memory, and just last September a conversational AI developer owned by Microsoft, Nuance Communications, disclosed that it was the victim of an attack in which hackers gained unauthorized access to 1.2 million patients’ data.

Microsoft + Epic is a powerful combination that would naturally aim to dominate the healthcare market, but it would at the same time create a single attack surface for malicious actors; just one vulnerability could open the door for intruders to what might be the world’s most valuable vault. And Microsoft is already under serious scrutiny for how it has tried to manipulate markets in other industries, with legislators increasingly wary of its monopolistic dominance in government technology.

Microsoft appears to favor walled gardens, and while that may be attractive to the company and its shareholders, the risk of locking healthcare professionals into Microsoft’s ecosystem, and thereby making the company the gatekeeper of patients’ data, is not acceptable: it would directly clash with the potential of generative AI and what it could offer patients.

Any technology provider that wants to add a new layer to healthcare systems — be it generative AI innovations, cloud collaboration tools, or otherwise — must demonstrate credibility and performance across security and interoperability first and foremost. Patients come first, and privacy is paramount in healthcare. Advancements in AI and the cloud have too much potential to be shortchanged by anti-competitive corporate practices and strategies. Assessing these technologies and business models now on security and interoperability criteria will help ensure healthcare professionals adopt the right patient-focused solutions that can improve health outcomes over the long term. 
