
HIStalk Interviews Adam Wright, PhD, Director, Vanderbilt Clinical Informatics Center

February 19, 2020

Adam Wright, PhD is professor of biomedical informatics and director of the Vanderbilt Clinical Informatics Center at Vanderbilt University Medical Center in Nashville, TN.


Tell me about yourself and your new job.

I’m a professor of biomedical informatics at Vanderbilt University Medical Center. I also direct the Vanderbilt Clinical Informatics Center, or VCLIC. As a professor, the main part of my job is research. I get grants, write papers, and teach. I teach a lot of the students in our biomedical informatics and medical school courses. Then I also do some service. I help direct the decision support activities here at Vanderbilt, trying to make sure that we have good alerts and other decision support tools and that we’re not unnecessarily burdening our users.

What are the best practices in getting clinician feedback when developing and monitoring CDS alerts?

You need to involve clinicians when you are developing any alert that will affect them. There’s this tendency for orthopedic surgery to say, “We should ask anesthesia to respond to this alert. We should really tell those guys what to do.” That’s almost never the right answer. It almost always works better when users are involved in the development of an alert.

I’m also a huge fan of using data. We have enough data in our data warehouse to forecast ahead of time when an alert will fire, who it will fire to, and which patients it will fire on. Looking at the data is often really illuminating. We’ve just been dealing with some alerts here at Vanderbilt that were firing in the operating room and suggesting giving a flu shot to a patient who’s in the middle of surgery. That’s just not a timely moment to give a flu shot.

You can figure that out after it’s live, but you are better off looking at some data first and anticipating what’s going to happen, then making sure that you’ve added all the proper exclusions and tailoring so that it fires only for the right people. That’s the most important thing in building an alert and designing it to not frustrate people.
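
As a rough illustration of the kind of pre-go-live analysis described above, a minimal sketch in Python with pandas might look like the following. The extract file, column names, and alert criteria here are hypothetical stand-ins, not Vanderbilt’s actual logic.

```python
import pandas as pd

# Hypothetical extract from the data warehouse: one row per encounter,
# with the fields the proposed alert logic would evaluate.
encounters = pd.read_csv("encounter_extract.csv")

def would_fire(row):
    """Re-implement the proposed alert criteria against historical data."""
    due_for_flu_shot = not row["flu_vaccine_this_season"]
    # Exclusion suggested by the operating-room example above.
    in_operating_room = row["department_type"] == "Operating Room"
    return due_for_flu_shot and not in_operating_room

encounters["alert_fires"] = encounters.apply(would_fire, axis=1)

# Forecast volume and targets before the alert ever goes live.
print("Projected firings per month:",
      encounters["alert_fires"].sum() / encounters["month"].nunique())
print(encounters[encounters["alert_fires"]]
      .groupby(["provider_role", "department_type"])
      .size()
      .sort_values(ascending=False)
      .head(10))
```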

Once it’s live, you need to look at your alerts on almost a daily basis, see how often they are firing and who they are firing for, and try to figure out if some users are particularly likely to accept an alert or particularly unlikely to accept an alert. There’s this classic problem where alerts fire for patients who might be on comfort measures only. That may not be appropriate for a lot of alerts. Or there’s a particular user type, like a medical assistant, who may not be empowered to act on an alert, but is receiving it anyway. We have found that by looking at the data, we can add additional restrictions and exclusions to the logic until we get the alerts to the right person at the right time.

We have a goal of between 30% and 50% acceptance for our alerts. We don’t always get there, but we see in the literature a lot of places that are at 1% or 2% acceptance for alerts. That is almost certainly a problem, because then people get fatigued and start tuning the alerts out.
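
A similar sketch can cover the post-go-live monitoring described above: compute acceptance rates by alert and user role from a daily export and flag combinations far below the 30-50% target. The file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical daily export: one row per alert firing, with the alert id,
# the recipient's role, and whether the suggested action was accepted.
firings = pd.read_csv("alert_firings.csv")

summary = (firings
           .groupby(["alert_id", "user_role"])
           .agg(firings=("accepted", "size"),
                acceptance_rate=("accepted", "mean"))
           .reset_index())

# Alert/role combinations with high volume and very low acceptance are
# candidates for new exclusions, retargeting, or retirement.
low_value = summary[(summary["firings"] >= 50) &
                    (summary["acceptance_rate"] < 0.05)]
print(low_value.sort_values("firings", ascending=False))
```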

Are hospitals comfortable including a “did you find this alert useful” feedback mechanism, knowing that they are then obligated to take action accordingly? Or to allow clinicians who don’t find an alert useful, such as a nephrologist who is annoyed at drug-renal function warnings, to turn them off?

We have a policy here that we are trying to build feedback buttons into all of the alerts. When you see an alert, there’s a little set of smiley faces in the corner. You can vote whether you like the alert a lot, not too much, or not at all. You can click to vote and you can type in a comment. I try to respond to all of those quickly and try to understand the person’s thinking, their rationale.

We were worried that people would use the feedback comments to grumble about alerts or about how they don’t like the EHR. In fact, people tend to give thoughtful comments about why the alert didn’t apply to a patient or didn’t fit well in their workflow.

We got another one about a week ago about influenza vaccinations. Some clinics don’t stock the flu shot. They don’t have it in their refrigerator, so they can’t give it. We had some conversations with our leadership about whether we should start stocking and administering the flu shots in those clinics, but decided that wasn’t going to be practical. We were able to then edit the alert so that it doesn’t fire for those people.

I agree that some alerts that might make sense for a primary care doctor or a hospitalist wouldn’t necessarily make sense for a specialist who really knows that area. It’s futile to show an alert to somebody who says they don’t want it and whose data suggests that they are unlikely to accept it. We have to target our alerts to people who are likely to be willing to accept them.

Keeping an alert like that gives us almost a false sense of security. If we are really worried about renal dosing for medicines and we know that we have an alert that doesn’t work, we shouldn’t just congratulate ourselves for having a renal dosing alert. We should consider more carefully what workflows we have and what additional protections we could put in place to make sure that patients with impaired renal function get the proper medicines.

Default ordering values are important, as emphasized again in a recent study that demonstrated reduced opioid use when default prescribing quantities were lowered. Do you account for this by assuming that physicians aren’t paying attention and will most often accept whatever comes up by default, or is more complex psychology involved?

We had an admission order set that had cardiac telemetry checked by default. We saw that people were ordering telemetry on almost all of the internal medicine patients when they used that order set. We were getting feedback that in many cases, it wasn’t appropriate. As an experiment, we kept it in the order set, but switched it from being checked by default to being unchecked by default. We saw a huge reduction in the number of patients who were ordered cardiac telemetry.

We worried about the risk of that. We did some analysis to see whether patients were having more bad cardiac events, or even just whether people were then ordering cardiac telemetry the next day or later in the visit, as if they had somehow missed it at admission. What we saw was that there was no increase in cardiac problems. There was no pattern of people ordering delayed telemetry.

You have to be thoughtful about this. You have to get clinical feedback from users. You have to understand what the risks are. I am a huge fan of measurement. We made this change and we measured it the next day. If we had seen that there was a problem, we would have felt confident that we could quickly roll the change back and analyze it. We felt safer knowing that we would be able to monitor it.
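
For the telemetry example, the next-day measurement could be as simple as a before/after comparison of ordering rates, along the lines of this hypothetical sketch. The extract, column names, and change date are illustrative, not the actual analysis.

```python
import pandas as pd

# Hypothetical admissions extract spanning the order set change, with flags
# for telemetry ordered at admission versus added later in the stay.
admits = pd.read_csv("admissions.csv", parse_dates=["admit_date"])
change_date = pd.Timestamp("2019-01-15")  # illustrative go-live date

admits["period"] = (admits["admit_date"] >= change_date).map(
    {False: "default checked", True: "default unchecked"})

comparison = admits.groupby("period").agg(
    admissions=("admit_date", "size"),
    telemetry_at_admission=("telemetry_at_admission", "mean"),
    telemetry_added_later=("telemetry_added_later", "mean"))
print(comparison)
```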

In terms of the psychology, some of it is just being on autopilot. You’re admitting a lot of patients, and the computer in some ways seems to almost speak for the institution. The computer is telling you, “We generally recommend that you order cardiac telemetry for patients like this.” That may not be what the builder of that order set intended when they checked it off, but that’s the message that is getting communicated to the intern or PA. They’re likely to trust that that’s the standard of care, that’s the practice here. I’ve seen that again and again. People are willing to trust defaults.

I don’t think it’s laziness. I don’t think it’s that they don’t read it. A lot of things in medicine are soft calls. You might just want to do what people usually do. Seeing something checked or not checked in an order set is an easy way to think that you’re getting a read of the organization’s standard practice.

Your two most recent jobs have been with huge health systems that were among the last to switch from a homegrown EHR to a commercial product in Epic, and both institutions were known for programming their self-developed systems to give clinicians extensive, documented guidance for making decisions upfront rather than punishing them with warnings when they did something wrong. Does Epic give you enough configuration capability to provide similar order guidance?

Both organizations had for decades developed and used their own electronic health record and CPOE system and then switched to Epic in the last few years. I had a lot of anxiety about that switch. We were used to having the total control that comes with having developed your own software. We could literally pull up the source code of the order entry screen and change it to do whatever we wanted.

I would say that I’ve been pleasantly surprised by the number of levers we have and customizations that we have available to us in Epic. They have thought through most of the common use cases and built some hooks so that we can even go so far as to write custom MUMPS code that changes the way things work.

We have generally been able to find ways to implement things. They might happen at a slightly different point in the workflow or they might look a little bit different than the user expected, but I would say that it’s rare that we come up with a piece of clinical logic that we are not able to faithfully implement in Epic. I was pleasantly surprised. I was actually quite nervous about this and it went better than I thought it was going to.

How do you approach EHR configuration knowing that changes may take more clinician time or increase their level of burnout?

The EHR gets a lot of blame for burnout, and some intrinsic properties of the EHR contribute to burnout. But I also think there’s a lot of regulatory, quality, and safety programs that are implemented through the EHR. The EHR gets blamed for having to enter all this information or to sign the order in a certain way, but some of that is triggered by external forces, like how we get paid for healthcare or how we report quality.

I generally don’t like it when I am asked to implement decision support purely for an external reason, such as because some regulator or somebody else wants us to do it. I would rather partner with the clinicians who are likely to have to actually do the work, asking them if there are alternative workflows that we didn’t think of that could achieve the same regulatory goal and meet our obligation to our payers and regulators without burdening people with point-of-care, interruptive pop-up alerts.

As we move toward value-based payment, where we’re paid to take care of a patient over the course of a year, we have more opportunities to use things like registries and dashboards. We can have a care manager or a navigator do some of the work, or send some messaging directly to the patient, instead of popping up a message at the beginning of the primary care doctor visit and forcing them to answer a question right then.

One of the things that I’ve tried to do everywhere I’ve worked is to look at requests such as, “Please build a new interruptive pop-up that affects user X.” We go one step backwards and say, what’s going on that makes you think we need to do that? Have we considered all the options before we do this last-ditch effort of interrupting somebody in the middle of their visit?

What are the most pressing informatics priorities at Vanderbilt?

Physician burnout is certainly one of them. We are hearing increasingly from our users that they are spending a lot of time outside the clinic writing notes and finishing their documentation. We are also adapting the EHR to new care models, like value-based payment and telemedicine. We’ve been working on some new approaches for patients to get care either at home or at satellite sites outside downtown Nashville that might be more convenient for them. There’s been a lot of work on getting the EHR to do that.

I also have a big interest in academic informatics. Eighty percent of my job is working as a professor. We started this new VCLIC, the Vanderbilt Clinical Informatics Center. One of the goals of that is to help us navigate this transition from a self-built EHR to Epic. There’s a lot of things that we used to know how to do. How do we get data out of our system? If we have a new idea for a medication prescribing workflow, how can we pilot it in the EHR? Some of that knowledge went away when we made the transition to Epic.

The goal of VCLIC is to make people at Vanderbilt say that it’s easy to interface with EStar, which is what we call Epic here, whether that means getting data out of the system or putting a new intervention into it. I want people in the informatics department, in clinical departments, or in the pharmacy to know how to get the data and know how to do stuff.

We call it paving the road. Getting access to the data warehouse might be based on bumping into the right person or getting a favor. We want to figure out, what are the requirements to get access? What training do you need to have? What do you need to do or sign to acknowledge the privacy issues? How do you protect the data? Then make it clear to people how they can interact with this new commercial EHR in the ways that they were used to in interacting with our self-developed EHR for the last couple of decades.

Do you have any final thoughts?

This is an exciting time in the field of informatics. We got through this hump of adoption of EHRs. Most doctors and most hospitals are using EHRs. There’s a growing sense that we are not getting everything we expected or hoped out of that investment.

The good news is that achieving adoption was one of the hardest parts. Now we need to be thoughtful about using data, engaging with users, getting feedback, and making smart decisions about how we can improve the EHR so that we get the value out of it in terms of improved patient outcomes and reduced costs that we were hoping would appear.

Some people are in a moment of despair about EHRs. I’m actually in a moment of real excitement. We have everything lined up to be able to give value. We just need to be smarter about how we do that.




Currently there is "1 comment" on this Article:

  1. Very encouraging perspective. The entire clinical informatics community depends on places like Vanderbilt to advance the field. To see how they have restructured and refocused after their Epic implementation is exciting. We all benefit when an institution embraces their system and sets aside resources for clinical informatics research on it.






