Contributed: The power of AI in surgery

Artificial intelligence's potential role in preoperative and intraoperative planning – and surgical robotics – is significant.
By Dr. Liz Kwo

Artificial intelligence (AI), defined as algorithms that enable machines to perform cognitive functions such as problem-solving and decision-making, has been changing the face of healthcare for some time now through machine learning (ML) and natural language processing (NLP).

Its adoption in surgery, however, has lagged behind other medical specialties, largely because of limited understanding of how computational methods could be implemented in practical surgery. Thanks to rapid recent developments, AI is now perceived as a supplement to, not a replacement for, the skill of a human surgeon.

And although the potential of the surgeon-patient-computer relationship is a long way from being fully explored, the use of AI in surgery is already driving significant changes for doctors and patients alike.

For example, surgical planning and navigation have steadily improved through computed tomography (CT), ultrasound and magnetic resonance imaging (MRI), while minimally invasive surgery (MIS), combined with robotic assistance, has reduced surgical trauma and improved patient recovery.

How AI is shaping preoperative planning

Preoperative planning is the stage in which surgeons plan the surgical intervention based on the patient's medical records and imaging. This stage, which traditionally relies on general image-analysis techniques and classical machine learning for classification, is being boosted by deep learning, which has been applied to anatomical classification, detection, segmentation and image registration.

Deep-learning algorithms have been able to identify abnormalities such as calvarial fractures, intracranial hemorrhage and midline shift from CT scans. This capability enables faster emergency care for these conditions and represents a potential key to the future automation of triage.
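
To make the idea concrete, here is a minimal sketch of how such a detector could be structured, assuming a PyTorch-style multi-label convolutional network. The architecture, layer sizes and the three example findings are illustrative assumptions, not the models used in the published studies.

```python
import torch
import torch.nn as nn

class HeadCTClassifier(nn.Module):
    """Toy multi-label classifier for CT slices (illustrative only)."""

    def __init__(self, n_findings: int = 3):  # e.g. fracture, hemorrhage, midline shift
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_findings)

    def forward(self, x):
        # x: (batch, 1, H, W) grayscale CT slice
        z = self.features(x).flatten(1)
        return self.head(z)  # one raw logit per finding

model = HeadCTClassifier()
scan = torch.randn(4, 1, 128, 128)   # fake batch of CT slices
probs = torch.sigmoid(model(scan))   # independent per-finding probabilities
```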

Deep-learning recurrent neural networks (RNNs), which have been used to predict renal failure in real time, as well as mortality and postoperative bleeding after cardiac surgery, have outperformed standard clinical reference tools. These results, achieved solely from routinely collected clinical data without manual processing, can improve critical care by directing more attention to the patients most at risk of developing such complications.
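
The pattern behind these models can be sketched as a recurrent network over a patient's time series. The snippet below assumes a GRU over hourly vitals and labs; the feature count, window length and single risk output are hypothetical choices for illustration, not the published architectures.

```python
import torch
import torch.nn as nn

class RiskRNN(nn.Module):
    """Toy GRU that maps a stream of vitals/labs to a complication risk."""

    def __init__(self, n_features: int = 12, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, time_steps, n_features), e.g. hourly measurements
        _, h = self.rnn(x)
        return torch.sigmoid(self.out(h[-1]))  # risk at the latest time step

model = RiskRNN()
vitals = torch.randn(2, 48, 12)  # two patients, 48 hours, 12 signals
risk = model(vitals)             # per-patient probability of a complication
```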

AI's role in intraoperative guidance

Computer-assisted intraoperative guidance has always been regarded as a foundation of MIS.

AI learning strategies have been implemented in several areas of MIS, such as tissue tracking.

Accurate tracking of tissue deformation is vital for intraoperative guidance and navigation in MIS. Because tissue deformation cannot be modeled accurately with ad hoc representations, researchers have developed online learning frameworks whose algorithms identify the most appropriate tracking method for in vivo use.
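
One simple way to frame such online selection, purely as an illustration, is an exponential-weights update over a pool of candidate trackers. The tracker interface and the surrogate error signal below are assumptions; the published frameworks use their own formulations.

```python
import numpy as np

class OnlineTrackerSelector:
    """Toy exponential-weights scheme for picking among candidate trackers."""

    def __init__(self, trackers, eta=0.5):
        self.trackers = trackers   # callables: frame -> predicted landmark position
        self.weights = np.ones(len(trackers)) / len(trackers)
        self.eta = eta

    def step(self, frame, error_fn):
        # error_fn is a surrogate error signal (e.g. a template-matching
        # residual), since true tissue positions are unknown in vivo.
        preds = [t(frame) for t in self.trackers]
        losses = np.array([error_fn(frame, p) for p in preds])
        self.weights *= np.exp(-self.eta * losses)  # down-weight poor trackers
        self.weights /= self.weights.sum()
        return preds[int(np.argmax(self.weights))]  # follow the best-trusted tracker
```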

AI assistance through surgical robotics

Designed to assist with the manipulation and positioning of surgical instruments during operations, AI-driven surgical robots are computer-controlled devices that allow surgeons to focus on the complex aspects of a surgery.

Their use reduces variability during surgery and helps surgeons improve their skills and perform better during interventions, thereby producing superior patient outcomes and decreasing overall healthcare expenditures.

With the help of ML techniques, surgical robots can identify critical insights and state-of-the-art practices by analyzing millions of data sets. Asensus Surgical, for example, offers a performance-guided laparoscopic AI robot that feeds information back to surgeons, such as the size of tissue, rather than requiring a physical measuring tape. At the same time, human skills are used to program these robots by demonstration and to teach them by imitating operations conducted by surgeons.

Learning from demonstration (LfD) is used to "train" robots to perform new tasks independently, based on accumulated information. In the first stage, LfD splits a complex surgical task into several subtasks and basic gestures. In the second stage, surgical robots recognize, model and execute the subtasks sequentially, giving human surgeons a break from repetitive tasks.
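
As a toy illustration of that two-stage idea, the sketch below segments a labeled demonstration into gestures and then replays them in order. The gesture labels and the robot.move_to call are hypothetical placeholders, not a real surgical-robot API.

```python
from dataclasses import dataclass, field

@dataclass
class Gesture:
    name: str                       # e.g. "position_needle" (hypothetical label)
    trajectory: list = field(default_factory=list)  # recorded tool poses

def segment_demonstration(poses, labels):
    """Stage 1: group a demonstrated trajectory into gestures by label."""
    gestures, current = [], None
    for pose, label in zip(poses, labels):
        if current is None or current.name != label:
            current = Gesture(name=label)
            gestures.append(current)
        current.trajectory.append(pose)
    return gestures

def execute(gestures, robot):
    """Stage 2: replay the recognized subtasks sequentially on the robot."""
    for g in gestures:
        for pose in g.trajectory:
            robot.move_to(pose)     # hypothetical robot API
```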

Broadening the use of autonomous robots in surgery, and the range of tasks they can perform, especially in MIS, is a difficult endeavor. The JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), the first public benchmark surgical activity dataset, features kinematic data and synchronized video for three standard surgical tasks performed by Johns Hopkins University surgeons with different levels of surgical skill.

The subtasks analyzed were suturing, needle passing and knot tying. The gestures (the smallest meaningful segments of a surgery) performed during the execution of each subtask were recognized with an accuracy of around 80%. The result, although promising, indicated there is room for improvement, especially in predicting the gesture activities of different surgeons.
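
For a sense of what that recognition task looks like computationally, here is a sketch that classifies fixed-length windows of kinematic features into gesture labels. The random data below stands in for JIGSAWS, and the feature and class counts are illustrative assumptions; on real windows, published baselines reach roughly the 80% figure cited above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Fake stand-in for JIGSAWS-style data: windows of kinematic features
# (tool positions, velocities, gripper angles) with per-window gesture labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 76))     # 500 windows, 76 kinematic features (toy)
y = rng.integers(0, 10, size=500)  # 10 gesture classes (toy labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean gesture-recognition accuracy: {scores.mean():.2f}")
```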

Reinforcement learning (RL) is an often-used machine-learning paradigm for solving surgical subtasks, such as tube insertion and soft-tissue manipulation, for which precise analytical models are difficult to derive. RL algorithms can be initialized with policies learned from demonstrations instead of learning from scratch, reducing the time the learning process requires.
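
A minimal sketch of that warm-start pattern, assuming a small PyTorch policy: stage one clones demonstrated actions, stage two nudges the policy with a reward signal. The state and action sizes, the placeholder reward and the simplified update are illustrative; a real setup would use a simulator and an algorithm such as PPO.

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stage 1: behavior cloning on demonstrated (state, action) pairs.
demo_states = torch.randn(256, 8)   # stand-in for recorded surgeon demonstrations
demo_actions = torch.randn(256, 2)
for _ in range(200):
    loss = nn.functional.mse_loss(policy(demo_states), demo_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Stage 2: refine the cloned policy against a reward signal instead of
# learning from scratch. The quadratic penalty here is a placeholder reward.
def reward(states, actions):
    return -(actions ** 2).sum(dim=-1)

states = torch.randn(64, 8)
actions = policy(states)
rl_loss = -reward(states, actions).mean()  # ascend the (differentiable) reward
optimizer.zero_grad()
rl_loss.backward()
optimizer.step()
```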

Examples of AI-assisted surgery

Human-robot interaction is an area that enables surgeons to operate surgical robots through touchless manipulation. This manipulation is possible through head or hand movements, through speech and voice recognition, or via the surgeon's gaze.

Surgeons' head movements have been used to remotely control robotic laparoscopes. "FAce MOUSe," a human-robot interface, monitors the surgeon's facial motions in real time without requiring any body-contact devices. The motion of the laparoscope is controlled simply and accurately by the surgeon's facial gestures, providing noninvasive, nonverbal cooperation between human and robot for various surgical procedures.

In 2017, Maastricht University Medical Center in the Netherlands used an AI-driven robot in a microsurgery intervention. The surgical robot was used to suture blood vessels between 0.3 and 0.8 millimeters in diameter in a patient affected by lymphedema, a chronic condition, often a side effect of breast cancer treatment, that causes swelling as a result of built-up fluid.

The robot used in the procedure, created by Microsure, was controlled by a human surgeon, whose hand movements were scaled down into smaller, more precise movements performed by "robot hands." The surgical robot also stabilized the tremors in the surgeon's movements, ensuring the procedure was conducted properly.

Robotic hair restoration enables surgical robots to harvest hair follicles and graft them into precise areas of the scalp with the help of AI algorithms. The robot performs this minimally invasive procedure without requiring surgical removal of a donor area, eliminating the need for a hair-transplant surgeon to manually extract one follicle at a time in a procedure lasting several hours.

Da Vinci cardiac surgery is robotic heart surgery conducted through very small incisions in the chest, using robot-controlled tools and miniature instruments. Robotic cardiac surgery has been used for various heart-related procedures, such as coronary artery bypass, valve surgery, cardiac tissue ablation, tumor removal and heart-defect repair.

Gestonurse is a robotic scrub nurse designed to hand surgical instruments to surgeons in the operating room. The objective is to reduce errors that could have negative consequences for the outcome of a surgery.

Its efficiency and safety were demonstrated during a mock surgical procedure at Purdue University, where Gestonurse used fingertip recognition and gesture recognition to handle the needed instruments.

Conclusion 

Surgeons are forming partnerships with scientists to capture, process and classify data across each phase of care and to provide useful clinical context. Artificial intelligence has the potential to transform the way surgery is taught and practiced.

For surgical robots, surgeon-robot collaboration will have to address regulatory and legal questions, such as the point at which an autonomous robot ceases to be a simple AI-driven device, or the limited experience of regulatory bodies in approving and validating this new type of machinery. The field of AI in surgery is expanding rapidly, and it is exciting to see where it will take us.


About the Author
 
Dr. Liz Kwo is a serial healthcare entrepreneur, physician and Harvard Medical School faculty lecturer. She received an MD from Harvard Medical School, an MBA from Harvard Business School and an MPH from the Harvard T.H. Chan School of Public Health.