Embracing Automation to Increase Connection

The following is a guest article by Travis Bias, DO, MPH, FAAFP, Family Medicine Physician and Chief Medical Officer of the Clinician Solutions Team at 3M’s Health Information Systems Division

The release of ChatGPT has ignited dueling flames of excitement and concern regarding the potential of artificial intelligence (AI). Will AI end humanity, or will it save the world and usher in a prosperous boom that recreates our ’90s glory?

The success of AI in health care will fully depend on how clinicians embrace these tools, using them to automate repetitive tasks while channeling augmented outputs through human clinicians’ relationships with patients.

AI uses in health care already abound. AI has helped match diseases to cures that already exist and are cheap; AI is automating elements of the cumbersome prior authorization process; AI is generating draft in-basket notes that consume loads of brain power and time; AI is predicting falls in elderly patients; and, finally, AI is generating clinical documents from simply listening to the patient-physician conversation. These are not wishes that might come to reality in a decade; these are all in use today. The horse is already out of the barn.

Before even seeing the patient, clinicians spend an incredible amount of time sorting through voluminous patient-specific information or the latest evidence regarding a disease state. AI in the form of natural language processing, mining both unstructured narratives and discrete information fields, can organize and sort relevant information within the patient context. Or it can be used to surface the relevant evidence on a particular topic in a physician’s natural workflow, without excessive clicking, scrolling, or remembering multiple passwords en route to finding such information. While these may seem like small inconveniences, these are all steps that add up to command a ton of physician time and attention, very little of which we take joy in completing.

Generative AI is absolutely going to augment the way physicians work. Right now, for every hour I spend with patients, I spend two hours clicking boxes, writing paragraphs, and searching for information in the electronic health record (EHR). All of this is to ensure accurate, compliant, and complete documentation. Much of that seemingly simple documentation could be handled safely by a machine that has been trained to hear “no, doctor, I don’t smoke,” and to click the “non-smoker” box. The technology is already there to listen to a conversation between patient and physician, transcribe that encounter, and then generate a clinical note that is organized and narrated as if a physician wrote it. That draft still needs to be reviewed by a physician, with their name electronically signed at the bottom in approval of the information captured; this is how the information documented is vetted today. This is just one example of how AI can alleviate administrative burdens for a physician so they can focus on more complex functions, such as synthesizing information or identifying clinical patterns, and prioritizing face time with their patients.

The above documentation use case makes clinicians nervous. Will the auto-generated note, even with a couple of minor errors, be better than me starting the document – with all the existing templates – from scratch? Will AI hallucinate or will it omit key bits of evidence? Perhaps. But when a clinician cannot complete a chart until several hours – or days – after seeing the patient, they may fumble pieces of the documentation too.

Other uses include drafting in-basket messages to patients, detecting potential falls, or adjudicating the steps of a prior authorization process. None of these are tasks physicians particularly love doing manually, nor does the health system in its current state excel at addressing them accurately.

As these solutions proliferate and combine to complement each other, AI is going to increasingly offload clerical tasks and free up time for clinicians. I would have paid a lot of money for that 12 years ago when I first went into practice.

So, why the worry among physician leaders? In no sector other than security is there greater concern regarding the effects of AI than in health care.

First, there are misconceptions among well-meaning innovators who simply don’t know what they don’t know about medical decision-making or the practical daily tasks of many health workers.

A couple of months ago at STAT’s Breakthrough Summit, I heard Vinod Khosla confidently predict that “within five to six years, the FDA will approve a primary care app qualified to practice medicine.” With a nod of respect to the primary care folks, he opined that the specialist tasks and decision-making would be automated even sooner. Count me in the group that does not fully believe the bots are coming for my job. However, I do believe AI will have an enormous impact on the delivery of health care in the next decade. Those two beliefs are not necessarily mutually exclusive.

Additionally, physicians are concerned about its impact on the quality of our work; patients are worried about its effects on safety. These concerns absolutely require attention, but it is not like we are already rocking it in health care.

On the contrary, AI may be what we need to bolster how our health care system delivers for patients.

The way forward will involve multiple AI applications, layered on top of and next to each other, ideally stitched together in a patchwork of technological (and, in this case, clinical informatics) power. These tools will auto-click through processes to satisfy administrative needs and alleviate unnecessary stressors clinicians face, allowing human physicians to focus on, well, the human part of our job: interacting with the patient in front of us.

In my first year of residency, I remember being overwhelmed by simultaneously learning the medicine and also where in the EHR to find certain information, which International Classification of Diseases (ICD) or Current Procedural Terminology (CPT®) code to choose, and, oh by the way, how to look up from my laptop to connect with my patient with sincere empathy.

Clinicians are right to be wary of technology’s output. We should vet every tool for its underlying methodology and its accuracy. But we cannot – we should not – stick our heads in the sand. It is through the proactive embrace of AI that we will decrease the mouse clicks and increase our job satisfaction by focusing on the complex tasks we were actually trained to perform. And it will allow us to connect more deeply with the patients who seek our help.

About Dr. Travis Bias 

Travis Bias, DO, MPH, DTM&H, FAAFP, is a Family Medicine Physician and Chief Medical Officer of the Clinician Solutions Team at 3M’s Health Information Systems Division. He has practice experience in employer-based and private (both small and large multi-specialty group) settings. Travis previously taught medicine in Kenya and Uganda and lectured at the Milken Institute School of Public Health on Comparative Global Health Systems and Global Health Diplomacy. These experiences inform his advocacy for stronger health systems through his local medical society, as a former Trustee of the Texas Medical Association, and as a member of the San Francisco Committee of Human Rights Watch.
