Using AI Responsibly and Ethically in Healthcare

The following is a guest article by Michael Armstrong, Chief Technology Officer at Authenticx.

From disease detection and treatment management to patient communications, AI plays an ever-increasing role in the healthcare industry. While AI will increase effectiveness and efficiency, some worry about the cost: will healthcare organizations sacrifice data privacy as they implement and rely more heavily on AI?

Consider the numbers. In 2021, healthcare data breaches impacted 45 million people. In September 2022 alone, the Department of Health and Human Services’ (HHS) Office for Civil Rights received reports of 63 breaches of 500 or more records; 30 of those — hacking and IT incidents — each exposed 10,000 or more patient records. In one case, a clinic’s database and system configuration files were deleted, affecting more than 3.6 million patients. The first six months of 2022 saw 337 security breaches affecting 19 million records, at an average cost of $10.1 million per incident — a 9.4% increase from 2021.

In a rapidly expanding era of AI, conversational analytics and data, healthcare organizations must build a foundation of compliance, transparency and trust to keep patient information and communications secure.

AI and Data

The healthcare industry generates nearly 30% of the world’s data volume. This high volume requires AI and machine learning (ML) to process, analyze and manage. AI helps geographically dispersed medical teams (and personnel working in the same building or campus) access patient medical data.

When used correctly — with the proper safety precautions in place — AI becomes more than a tool: it augments and supports human judgment to enhance the patient experience within healthcare.

  1. AI improves operational efficiency with workflow automation in different departments, allowing healthcare providers to focus on their patients. It can facilitate predictive equipment maintenance, triggering alerts to prevent disruptions and downtime. It can forecast and manage patient flows organization-wide, from admission through discharge.
  2. AI supplements healthcare providers’ expertise and supports decision-making. It can segment and quantify medical images to increase consistency and diagnostic confidence.
  3. AI empowers people to own responsibility for their health and well-being by monitoring behaviors and providing personalized insights and actionable recommendations to develop — and maintain — healthy habits.

AI offers numerous use cases and opportunities throughout the healthcare industry — in patient care, R&D and other settings. It enables more efficient workflows and facilitates more informed, strategic decision-making. It can increase and enhance data analysis to improve patient experiences and outcomes. 

For example, unstructured conversational data — recorded conversations occurring thousands of times daily across healthcare organizations worldwide — provides a wealth of information and insights. AI enables analysis of these conversations and the compilation of large datasets to inform better decision-making, identify personnel training needs and pinpoint pain points patients encounter along their journey.
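As a sketch of the idea — the transcripts, tracked terms and function names below are hypothetical, not any vendor’s actual pipeline — a first pass at surfacing patient pain points from conversational data can be as simple as tallying tracked terms across call transcripts:

```python
from collections import Counter
import re

# Hypothetical call-center transcript snippets; a real pipeline would
# ingest thousands of recorded conversations per day.
transcripts = [
    "I was on hold for an hour before anyone explained my bill",
    "The billing portal rejected my insurance information again",
    "Nobody could explain why my claim was denied",
]

# Illustrative pain-point terms an analytics team might track.
PAIN_TERMS = {"hold", "billing", "bill", "denied", "rejected"}

def pain_point_counts(calls):
    """Tally how often tracked pain-point terms appear across calls."""
    counts = Counter()
    for call in calls:
        for word in re.findall(r"[a-z]+", call.lower()):
            if word in PAIN_TERMS:
                counts[word] += 1
    return counts

print(pain_point_counts(transcripts))
```

Production conversational analytics layers sentiment models, topic clustering and entity recognition on top of this kind of aggregation, but the principle is the same: turn unstructured dialogue into countable, comparable signals.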

Protecting Health Data

As the healthcare industry’s reliance on AI increases, organizations must establish clear guidelines and guardrails to govern its use.

AI and Ethics: Cognitive and Algorithmic Biases

AI is only as good as the validity of the datasets from which it pulls information. But because it requires massive amounts of data, eliminating bias entirely is impossible. Algorithmic biases occur when data scientists use incomplete data lacking full representation of a specific patient population.

Biases can lead to differential treatment and negative patient outcomes. Bias introduced into training datasets transfers to the algorithms trained on them, which then reflect that bias — so data scientists are currently focused on minimizing bias in training data.

Because AI can unintentionally exacerbate existing health inequalities, it’s paramount for datasets to be as objective as possible. One emerging approach to building more objective datasets is deep learning: a modeling approach that builds comprehensive, layered meaning through logical associations across multiple sources — much as human brains build networks of understanding — rather than relying on a single source of knowledge. Drawing on a broad network of connections and associations limits the impact of any individual bias on algorithmic results.

AI and Patient Privacy

Healthcare organizations must be able to protect individual patients’ privacy without fear that data may be stolen. Identifying and mitigating data privacy and security risks remains a national and global focus. The Health Insurance Portability and Accountability Act (HIPAA), along with state privacy laws and regulations, governs healthcare data, with HIPAA’s Privacy and Security Rules protecting individuals’ identifiable health information.

Before accessing and using patient data, healthcare organizations de-identify it. HIPAA defines de-identified data as data from which identifiers have been removed, including:

  • Biometric identifiers like voices and fingerprints
  • Birth/death/admission/discharge dates
  • Full-face photographs and other comparable images
  • Names, ages, emails and mailing addresses
  • Other numerical identifiers, including account, certificate/license, device, fax, health plan beneficiary, medical record, serial, social security, telephone and vehicle numbers
  • Web universal resource locators (URLs)

Tools already exist to mitigate the potential privacy and safety risks inherent when dealing with such vast data: aggregated trends, anonymization, redaction, and voice obfuscation. 
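A minimal sketch of the redaction idea, assuming hypothetical patterns for just three identifier types — real de-identification tooling covers the full HIPAA list and typically combines pattern rules with ML-based entity recognition:

```python
import re

# Illustrative patterns only; production redaction handles names, dates,
# device IDs and the rest of the HIPAA identifier categories.
PATTERNS = {
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text):
    """Replace matched identifiers with placeholder tokens."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

note = "Call 555-867-5309 or email jane.doe@example.com re: SSN 123-45-6789"
print(redact(note))
# → Call [PHONE] or email [EMAIL] re: SSN [SSN]
```

Aggregation, anonymization and voice obfuscation work at other layers — on datasets and audio rather than text — but all share the same goal: keeping insights usable while making individuals unidentifiable.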

Ongoing Evaluation and Trust of AI Tools

The industry needs human oversight to evaluate AI-generated recommendations. To prevent intentional (or accidental) misuse of AI systems, teams must monitor the performance of AI-enabled tools.

Privacy laws must remain consistent — and flexible — to accommodate ongoing AI innovation, especially as its use within the healthcare industry is not yet standardized. But organizations can take steps to protect themselves and their patients by:

  • Championing full transparency about how algorithms are trained and validated
  • Obtaining patient consent
  • Thoroughly vetting and assessing third-party vendors
  • Implementing additional controls, including multi-factor authentication (MFA)
  • Incorporating endpoint security and anomaly detection into their security architecture
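To illustrate the anomaly-detection idea in the last point, a toy z-score check over hypothetical hourly counts of patient-record accesses — real systems use far richer signals and models:

```python
import statistics

# Hypothetical hourly counts of record accesses by one user;
# the last value is a deliberate outlier.
hourly_accesses = [12, 9, 11, 10, 13, 11, 10, 95]

def flag_anomalies(counts, threshold=2.0):
    """Flag counts more than `threshold` standard deviations above the mean."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    return [c for c in counts if stdev and (c - mean) / stdev > threshold]

print(flag_anomalies(hourly_accesses))
# → [95]
```

A spike like this might indicate a compromised account bulk-exporting records — exactly the kind of event monitoring teams want surfaced before it becomes a reportable breach.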

AI has the potential to transform the healthcare industry, revolutionizing treatments and diagnostics and improving patient experiences and outcomes. But when evaluating and implementing AI-assisted tools, leadership shares responsibility for assessing and addressing risks by putting controls in place to protect patient data and ensure its accuracy.

About Michael Armstrong

Michael Armstrong is the Chief Technology Officer at Authenticx and a foundational leader in building the company’s solution infrastructure. In this role, he leads a team of data engineers and scientists who translate big, visionary ideas into practical, actionable software. Michael has extensive experience in engineering, data architecture, product development, and business intelligence.
