How Healthcare Communication Platforms Can Harness Generative AI in a HIPAA-Compliant Way

The following is a guest article by Nate MacLeitch, Founder and CEO at QuickBlox

The demand for AI-powered Communications Platform as a Service (CPaaS) in healthcare is evident: 75% of executives believe that generative AI has reached a stage where it is poised to reshape the healthcare industry.

With the sector set to face a shortage of between 37,800 and 124,000 physicians by 2034, generative AI could alleviate administrative work, such as summarizing patient calls and managing electronic health records (EHRs), allowing medical staff to focus on improving patient care. Yet, privacy laws and a fragmented market have kept the healthcare industry from reaping generative AI’s full potential — until now.

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a federal law that safeguards sensitive protected health information (PHI) from being disclosed. Up to now, HIPAA-compliant data security and generative AI haven’t gone hand in hand, as AI models are typically trained on large, centralized datasets, often by third parties. However, de-identification and federated training of deep-learning models are now helping industries leverage AI while staying protected.

Healthcare professionals must explore the benefits of generative AI for communications platforms to provide the best service to their patients. More importantly, they must know how to implement these tools in a HIPAA-compliant way to keep patients and their data safe.

Benefits of Generative AI for Healthcare Communication Platforms

Natural language processing (NLP) — the technology that underpins generative AI — is already applied in healthcare. It can extract information from medical research, EHRs, audio files, and chatbot conversations. Pair this with large language models (LLMs) such as ChatGPT, and health professionals can respond faster when offering medical guidance to patients virtually.

For example, nurses and doctors can use voice transcription to record clinical details, speeding up the EHR workflow. Or they can create virtual assistants that generate outputs to support nurses with treatment plans and help them answer patients promptly.

In addition, generative AI tools powered with multilingual language translation and sentiment analysis can enhance communication between healthcare professionals and patients from diverse backgrounds. NLP can assess a patient’s tone through their choice of language and identify trigger words or markers in conversation.

These triggers can prompt healthcare professionals to ask more targeted questions to understand if there are underlying issues, such as depression or avoidance of taking medication — leading to more investigation, another referral, or updated care.
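As a minimal sketch of the trigger-marker idea described above: the phrase lists and category names below are illustrative placeholders, not a clinical vocabulary, and a production system would rely on a trained NLP model rather than string matching.

```python
# Sketch of keyword-based trigger detection in a patient transcript.
# TRIGGERS is an invented, illustrative mapping; real systems would use
# clinically validated markers and a trained sentiment/NLP model.

TRIGGERS = {
    "possible_low_mood": ["can't sleep", "no energy", "hopeless"],
    "medication_avoidance": ["stopped taking", "skip my pills", "side effects"],
}

def flag_triggers(transcript: str) -> list[str]:
    """Return the trigger categories whose marker phrases appear in the transcript."""
    text = transcript.lower()
    return [
        category
        for category, phrases in TRIGGERS.items()
        if any(phrase in text for phrase in phrases)
    ]

flags = flag_triggers("Honestly I feel hopeless and I stopped taking the tablets.")
print(flags)  # ['possible_low_mood', 'medication_avoidance']
```

Flags like these would surface in the clinician's dashboard as prompts for follow-up questions, not as diagnoses.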

Generative AI can be immensely valuable in healthcare CPaaS, helping engage patients through chatbots, advance AI-driven telehealth support, and improve administrative efficiency, reducing the burden on healthcare professionals.

What It Means To Be HIPAA Compliant

While the HIPAA Privacy Rule applies to all forms of PHI (physical, spoken, and electronic), the HIPAA Security Rule focuses on safeguarding the confidentiality, integrity, and availability of patients’ electronic protected health information (ePHI).

The overriding principles for HIPAA-compliant use of CPaaS are obtaining patient consent and maintaining administrative, physical, and technical safeguards. This includes appointing security personnel responsible for implementing policies and training all workforce members on them.

Policies will include strict access controls and authentication for nurses and patients, encryption of data in transit and at rest, and measures to confirm that ePHI has not been improperly altered. Healthcare providers offering CPaaS must also follow general CPaaS security standards and ensure continuous auditing to protect sensitive health information.
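One of the technical safeguards named above, confirming that ePHI has not been improperly altered, can be sketched with a keyed hash (HMAC) from Python's standard library. The key handling here is deliberately simplified for illustration; a real deployment would keep the key in a managed key store, not in source code.

```python
import hashlib
import hmac

# Integrity check for an ePHI record using HMAC-SHA256.
# The key below is a placeholder for illustration only; in production it
# would come from a managed key store or HSM, never from source code.
SECRET_KEY = b"replace-with-key-from-a-secure-key-store"

def sign(record: bytes) -> str:
    """Compute a keyed digest to store alongside the record."""
    return hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()

def verify(record: bytes, digest: str) -> bool:
    """Return True only if the record matches the stored digest."""
    return hmac.compare_digest(sign(record), digest)

record = b'{"patient_id": "A-1001", "note": "BP 120/80, stable"}'
tag = sign(record)
print(verify(record, tag))                 # True: record unaltered
print(verify(record + b" x", tag))         # False: alteration detected
```

Verifying the digest on every read gives an auditable signal that a record was tampered with between writes.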

Harnessing AI in a HIPAA-Compliant Way

Here are three best practices to ensure your CPaaS AI is HIPAA compliant.

Obtain Patient Consent

It’s essential to obtain informed consent from patients when using their data for healthcare CPaaS applications. Health professionals must communicate how patient data will be used, how long the data will be stored, the purpose of AI interventions, and the security measures in place. Make sure you provide access to consent forms in advance and complete the telehealth consent teach-back documentation to confirm that all information is understood.

For instance, hospitals are required to regularly delete data that is no longer needed for analysis or treatment. However, retention periods vary by state: In Arkansas, adults’ hospital medical records must be retained for ten years after discharge, but in Florida, this period is reduced to five years after the last patient contact.

Although it is a legal requirement in healthcare to communicate transparently how AI is used in patients’ care and how their privacy is protected, doing so also builds patient trust. Strengthening confidence in patient confidentiality can lead to more open conversations with patients and better diagnoses as a result.

Choose AI Vendors That Are HIPAA Compliant

It’s important to note that a HIPAA compliance certification for software only confirms the solution was compliant at the moment the certificate was issued.

To determine whether a CPaaS or NLP provider is HIPAA compliant, it’s imperative to continuously audit them for security and compliance, along with all in-house systems that interact with patient data. Work closely with your legal team to ensure that the Business Associate Agreement (BAA) details on data protection and the handling of PHI, data retention policies, and auditing processes are HIPAA compliant. This checklist can help give you a head start.

One way to protect patients’ data when operating with third parties is to use techniques like data anonymization and de-identification to remove personal and identifiable PHI while preserving data utility for research and analysis. Data masking and encryption are pseudonymization techniques that help protect PHI, though under HIPAA, data counts as de-identified only when it meets the Safe Harbor or Expert Determination standards.
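A minimal sketch of field-level de-identification, assuming records arrive as dictionaries: the identifier list below is a small, illustrative subset of HIPAA's 18 Safe Harbor identifiers, and the field names are assumptions about the record schema, not a real EHR format.

```python
# Sketch of Safe Harbor-style de-identification on a dictionary record.
# DIRECT_IDENTIFIERS is an illustrative subset of the 18 Safe Harbor
# identifiers; the field names are assumed for this example.

DIRECT_IDENTIFIERS = {"name", "phone", "email", "address", "mrn"}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and coarsen age, keeping clinical fields."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Safe Harbor requires ages over 89 to be aggregated (e.g. "90+").
    if clean.get("age", 0) > 89:
        clean["age"] = "90+"
    return clean

record = {
    "name": "Jane Doe",
    "mrn": "123456",
    "age": 93,
    "diagnosis": "hypertension",
}
print(deidentify(record))  # {'age': '90+', 'diagnosis': 'hypertension'}
```

The de-identified output retains its analytic value (age band, diagnosis) while the fields that could identify the patient never leave the organization.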

Other key components to outline are breach notification procedures and disaster recovery plans.

Secure Generative AI Model Training With Federated Learning

Aggregating medical data — everything from X-rays to EHRs to conversation transcripts — and analyzing them at scale could lead to new ways of detecting and treating illnesses. In the future, AI could even help spot patterns between patients’ verbal behavior or spoken symptoms and data captured in scans.

Federated learning enables companies to collectively train a distributed model without the need to disclose sensitive medical records.

With a federated approach, data remains on private servers and is processed at the source. Developers within the healthcare organization download the deep-learning model from a data center in the cloud, train it on their private data, and then summarize and encrypt the model’s updated parameters before sending only those updates back for aggregation.
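The cycle just described (download the model, train locally, send back only a summary of the update) can be sketched as a toy federated averaging round. The one-parameter model, client datasets, and learning rate below are invented for illustration, and the encryption step is omitted.

```python
# Toy federated averaging (FedAvg) round on a 1-D linear model y = w * x.
# Client datasets and hyperparameters are invented for illustration;
# real systems would also encrypt updates before sending them back.

def local_update(w: float, data: list[tuple[float, float]], lr: float = 0.1) -> float:
    """One gradient-descent step on the client's private data; raw data never leaves."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fedavg(w: float, client_datasets: list[list[tuple[float, float]]]) -> float:
    """Each client trains locally; the server averages only the model weights."""
    updates = [local_update(w, data) for data in client_datasets]
    return sum(updates) / len(updates)

clients = [
    [(1.0, 2.0), (2.0, 4.0)],   # client A: data consistent with w = 2
    [(1.0, 2.2), (3.0, 5.8)],   # client B: noisy data near w = 2
]
w = 0.0
for _ in range(50):
    w = fedavg(w, clients)
print(round(w, 2))  # converges near 2.0
```

The central server only ever sees per-client weight updates, never the underlying records, which is what lets training scale across organizations without disclosing PHI.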

Still, internal and external developers must comply with ethical considerations such as receiving informed consent to use data, algorithmic fairness, and data privacy. Developers can mitigate biases by ensuring diverse training data and continuously auditing algorithms to prioritize patient welfare and transparency in decision-making.

Generative AI-powered CPaaS offers immense efficiencies in the EHR workflow, easing the strain on healthcare workers. Well-defined BAAs, continuous auditing, and close collaboration with legal and IT teams will help healthcare organizations harness its potential while safeguarding patients’ data in a HIPAA-compliant way.

About Nate MacLeitch

Nate MacLeitch is a highly experienced business professional with a diverse background in industries such as telecom, media, software, and technology. He began his career as a Trade Representative for the State of California in London and has since held key leadership positions, including Head of Sales at WIN Plc (now Cisco) and COO at Twistbox Entertainment (now Digital Turbine). Currently, he serves as the CEO of QuickBlox, a leading communication platform. Beyond his work experience, Nate is actively involved as an advisor and investor in startups like Whisk.com, Firstday Healthcare, and TechStars. He holds degrees from UC Davis and The London School of Economics and Political Science (LSE).
