HIPAA and Machine Learning at Loyal

Deep learning traditionally involves funneling huge data sets into layers of software to produce actionable insights. But when strict privacy requirements control what can appear in the output, machine learning methods have to change.

I talked to Abhi Sharma, chief product officer at Loyal, about how the company does machine learning. Sharma told me that Loyal created one of the first HIPAA-compliant chatbots. The company continues to refine its chatbots and extend AI into other areas of the patient experience, combining common machine learning techniques with newer ones inspired by Google’s BERT algorithm and large language models.

Figure 1 shows a typical screen from one of Loyal’s chatbots, offering several options plus a box for free-form text.

The chatbot displays "How Can We Help You" and offers a half-dozen options along with a text box for typing a message.
Figure 1: Initiating a conversation in a chatbot.

Loyal’s partners include more than 400 hospitals of various sizes and locations. Loyal’s core business, chat, has carried more than 30 million messages, and the company also offers services such as physician search.

Loyal creates customized chatbots for each client, asking them what their needs are. For instance, one clinical facility might want to concentrate on patients asking for information on particular conditions, whereas another is concerned with directing a patient to the proper person to handle their reported symptoms.

For the desired services, Loyal creates a standard tree structure of questions and expected answers, using natural language processing to parse answers. Loyal also tracks patient responses and suggests new topics to the clinician.
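A tree of questions with NLP-parsed answers can be sketched roughly as follows. Loyal has not published its implementation, so the node structure, the keyword-based intent matching, and all prompts below are my own illustrative assumptions standing in for real natural language processing.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One step in the question tree: a prompt plus possible follow-ups."""
    prompt: str
    children: dict = field(default_factory=dict)  # intent keyword -> child node

def match_intent(node, answer):
    """Return the child whose keyword appears in the patient's free-form answer."""
    text = answer.lower()
    for keyword, child in node.children.items():
        if keyword in text:
            return child
    return None  # no match: fall back to a human or a generic handler

# A tiny tree: the root question branches on the patient's stated need.
billing = Node("Let me connect you with our billing office.")
symptoms = Node("Which symptoms are you experiencing?")
root = Node("How can we help you?",
            {"billing": billing, "symptom": symptoms})

next_node = match_intent(root, "I have a question about my billing statement")
print(next_node.prompt)  # → "Let me connect you with our billing office."
```

A real system would replace the keyword lookup with a trained intent classifier, but the tree of expected answers stays the same shape.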

Loyal complies with HIPAA through two major design choices. First, it maintains a separate database for each client, isolated from all other clients’ data. This contrasts with typical cloud services, which are multi-tenant and store many customers’ data in shared infrastructure.
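The single-tenant idea can be illustrated with a small routing sketch: every client gets its own database, so a query for one client can never touch another client’s rows. The client names, the schema, and the use of SQLite (in-memory, as a stand-in for physically separate database servers) are my assumptions, not Loyal’s actual stack.

```python
import sqlite3

# One separate database per client; a multi-tenant design would instead put
# all clients' rows in shared tables with a client_id column.
_connections = {}

def connection_for(client_id):
    """Return the connection to this client's own, isolated database."""
    if client_id not in _connections:
        conn = sqlite3.connect(":memory:")  # each :memory: connect is a distinct DB
        conn.execute("CREATE TABLE messages (text TEXT)")
        _connections[client_id] = conn
    return _connections[client_id]

for client in ("hospital_a", "hospital_b"):
    conn = connection_for(client)
    conn.execute("INSERT INTO messages VALUES (?)", (f"hello from {client}",))
    conn.commit()

# Querying hospital_a cannot see hospital_b's rows: they live in a
# different database entirely, not behind a WHERE clause.
rows = connection_for("hospital_a").execute("SELECT text FROM messages").fetchall()
# → [("hello from hospital_a",)]
```

The practical difference for HIPAA is that isolation is structural rather than enforced by query filters, so a bug in one client’s queries cannot leak another client’s data.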

Loyal also masks data and subjects it to human review before it’s added to the training set. The company is also considering the use of synthetic data and additional ways to reduce the use of real data in its models, for privacy and security reasons.
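Masking before human review might look something like the following sketch, which redacts a few obvious identifier patterns with regular expressions. This is a minimal illustration of the idea, not Loyal’s method: real HIPAA de-identification covers many more identifier classes (names, addresses, record numbers, and so on) and typically uses trained models rather than a handful of regexes.

```python
import re

# Hypothetical masking pass: each pattern is replaced by a placeholder token
# before the text is shown to a reviewer or added to a training set.
PATTERNS = [
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
]

def mask(text):
    """Replace recognizable identifiers with neutral placeholder tokens."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

masked = mask("Call me at 617-555-0134 or jane@example.com before 3/14/2024")
# → "Call me at [PHONE] or [EMAIL] before [DATE]"
```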

Loyal also keeps the field names on all data items, instead of throwing all text indiscriminately into machine learning. Thus, it can distinguish whether the word “park” refers to parking a car, a hospital named South Park, or a doctor named Park.
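Keeping field names attached to text can be sketched as emitting (field, token) pairs instead of a single undifferentiated bag of words; the field names and record below are hypothetical, chosen to mirror the “park” example.

```python
def label_tokens(record):
    """Yield (field, token) pairs so downstream models keep the field context."""
    for fld, value in record.items():
        for token in value.lower().split():
            yield (fld, token)

record = {
    "facility": "South Park Medical Center",
    "physician": "Dr. Park",
    "message": "where can I park my car",
}

pairs = list(label_tokens(record))
# ("facility", "park"), ("physician", "park"), and ("message", "park") are
# now three distinct features, where a plain bag of words would collapse
# them into one ambiguous token "park".
```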

I find Loyal an interesting case of how to adapt common machine learning and data analysis to unique conditions. The company uses techniques similar to ChatGPT’s not to create boring essays or bad poetry, but to generate natural-sounding responses to patient inquiries.

About the author

Andy Oram

Andy is a writer and editor in the computer field. His editorial projects have ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. A correspondent for Healthcare IT Today, Andy also writes often on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM (Brussels), DebConf, and LibrePlanet. Andy participates in the Association for Computing Machinery's policy organization, named USTPC, and is on the editorial board of the Linux Professional Institute.
