Industry Voices—Next-generation artificial intelligence needs transparency of process to build trust and acceptance

Artificial intelligence is already changing medicine across many specialties, and in gastroenterology, new AI developments are coming online at breakneck speed.

AI protocols and devices are being created and refined to identify abnormalities during colonoscopy, diagnose disease, predict outcomes, and assist with treatment. With some already in use and others likely to enter practice in the next one to five years, the opportunities for AI to contribute to better, more efficient patient care are vast.

Research has shown that after being “trained” through machine learning with thousands of photos and videos from actual colonoscopies, a computer-assisted diagnosis system can accurately spot and diagnose abnormalities during colonoscopy. When refined and adopted, this technology could increase a skilled endoscopist’s speed and effectiveness.

In addition, AI-based robots that can detect colorectal polyps in real time are already assisting endoscopists in training and making experienced endoscopists more precise by reducing the miss rate of polyp detection seen with existing procedural technology. Computers with AI will increasingly be able to carry out time-consuming, error-prone repetitive tasks tirelessly, such as calculating quality metrics, reviewing charts, and tracking follow-up, leaving physicians more time for direct patient care and education.

AI needs to build human trust

To get widespread acceptance in this rapidly evolving field, parallel work needs to be done to validate and build trust in AI.

Clear protocols are needed for the development and validation of algorithms, and each candidate algorithm should undergo standardized testing in a controlled environment by a reputable third party to provide evidence for its performance.

The results of such testing must be transparent and available to physicians and scientists. Also, ethical standards for use of the technology, including issues around ownership of data that it generates, need to be developed and accepted. And since everything must be paid for, our payer systems need to catch up with the changing times. The utility and necessity of such systems must be quantified with a detailed risk-benefit analysis, and the implementation of such systems in clinical practice must be reimbursed accordingly.

But, most critically, AI needs to build trust with clinicians and patients.

In its current state, AI operates as a “black box” from the user’s perspective, offering no clear insight into the logic behind the decisions and results the algorithm generates and forcing the end user to assume blindly that the conclusions are accurate. But that may not be a wise leap of faith.

In multiple cases, AI systems have developed and demonstrated bias as a result of a lack of diversity in their training data. During machine learning, the computer is shown hundreds of thousands of labeled examples, and the algorithm “learns” to categorize new real-world cases based on the samples it was trained on.
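For readers who want to see the basic idea in code, the following is a minimal, purely illustrative Python sketch using scikit-learn and synthetic data (not real colonoscopy images or any clinical system): a model is fit to labeled training examples and then asked to categorize cases it has never seen. An aggregate accuracy score like the one printed here says nothing about performance on subgroups that are underrepresented in the training data, which is exactly how the kind of bias described above can go unnoticed.

```python
# Toy illustration only: a generic classifier "learning" from labeled examples,
# then categorizing cases it has never seen. Uses synthetic data, not images.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labeled training data (think of features labeled
# "abnormal" vs. "normal"; the labels here are purely hypothetical).
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)          # "training": learn patterns from labeled examples

predictions = model.predict(X_test)  # categorize new, unseen cases
print("held-out accuracy:", accuracy_score(y_test, predictions))

# Note: an aggregate score like this can hide poor performance on groups
# that were underrepresented in the training data.
```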

Facial recognition technology has in the past infamously shown embarrassing bias and errors, especially in the identification and detection of nonwhite faces. It turned out that this was primarily due to the training data predominantly containing white faces. Even today, multiple facial recognition systems are less accurate in identifying people of color. These sorts of errors can lead to mistrust and skepticism about AI.

This problem should be approached in a two-pronged manner. First, gastroenterologists and other medical personnel should be educated about the logic behind classification and prediction algorithms, including the strengths and pitfalls of each, so they know when to use and when to discard an algorithm’s insight. Given the coming ubiquity of such algorithms, this education must become an integral part of medical school, residency, and fellowship training. Second, the underlying mechanisms of each algorithm must be made more transparent and understandable.

Significant interest in this domain is coming from a surprising place: the world of national security. The Defense Advanced Research Projects Agency is working on a new approach to artificial intelligence technology called Explainable Artificial Intelligence, or XAI, to create algorithms that explain the logic and decision-making underpinning their predictions. XAI systems will provide transparency by showing the user the evidence used in reaching a conclusion. They will also continually self-evaluate and present their areas of weakness, allowing the end user to make informed decisions.
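As a rough, hypothetical illustration of what “showing the evidence” might look like (this sketch is not DARPA’s XAI program, and it uses synthetic data and made-up feature names), a simple linear model in Python can report, for each individual prediction, which inputs pushed it toward its conclusion. Real explainability methods are considerably more sophisticated, but the principle is the same.

```python
# Hypothetical sketch of the XAI idea: report, alongside each prediction,
# which inputs pushed the model toward its conclusion. Synthetic data and
# made-up feature names; real explainability methods are more sophisticated.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

feature_names = [f"feature_{i}" for i in range(10)]  # illustrative names only
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X, y)

case = X[0]                                           # one individual case to explain
probability = model.predict_proba(case.reshape(1, -1))[0, 1]

# For a linear model, coefficient * feature value approximates how much each
# input contributed to the decision for this particular case.
contributions = model.coef_[0] * case
top = np.argsort(np.abs(contributions))[::-1][:3]

print(f"predicted probability of the positive class: {probability:.2f}")
for i in top:
    print(f"  {feature_names[i]}: contribution {contributions[i]:+.2f}")
```

In a clinical tool, the same principle would more likely surface as highlighted regions of an image or a ranked list of findings that the endoscopist can check against what is actually on the screen.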

Thus, improved education in conjunction with XAI will help build trust and confidence among both physicians and patients in the use of algorithms in medical practice and decision-making.

While AI will never replace a skilled gastroenterologist, it has the potential to become an indispensable companion, contributing to better care and better outcomes for patients in the long run.

Sushovan Guha, M.D., Ph.D., is physician executive director of Banner – University Medicine Digestive Institute.