Deep Learning for Medical Imaging: Use Cases and Network Types

The following is a guest article by Mariia Kovalova, Healthcare Technology Researcher at Itransition

Medical imaging is crucial to modern healthcare, serving both diagnostic and treatment purposes. According to Grand View Research, the medical image analysis software market is projected to grow steadily at a CAGR of 7.7% from 2023 to 2030. However, the current approach to reading medical images has its drawbacks – it is labor-intensive, time-consuming, prone to diagnostic errors, and expensive. 

Fortunately, deep learning solutions for analyzing medical images can alleviate this task. But what are these tools, how do they work, and how can you employ them in your practice? Let’s delve deeper into the topic.

Medical Imaging: Why Go for Deep Learning

Deep learning is a branch of machine learning that emerged in the mid-2000s. By utilizing artificial neural networks, this method mimics how the human brain processes data and extracts insights to make informed decisions. As a result, deep learning can efficiently emulate aspects of human intelligence.

The technology offers two benefits for medical image analysis:

  • Deep learning models can work with smaller datasets – a critical feature for healthcare providers, whose labeled data is often limited
  • AI-driven algorithms can detect correlations and dependencies invisible to the human eye

These characteristics make deep learning networks valuable for automated medical image analysis. But how are these networks built?

Deep Learning: Architectures and Use Cases

The architecture of a neural network defines its essential parameters, such as the number, size, and type of layers used. These parameters directly impact how the network processes and interprets data, ultimately influencing its ability to learn and draw conclusions. With meticulous architectural planning, it is possible to fine-tune neural networks to achieve peak performance across a wide range of tasks and applications. Three types of deep learning architecture work well for medical image analysis. We’ll consider them along with use cases.

Convolutional Neural Networks (CNNs)

CNNs are a powerful class of neural networks, typically trained with supervision, designed to process pixel data. Able to take images as input, they use learned filters to capture essential features and effectively differentiate between various data points. A convolutional network typically consists of three types of layers (a minimal code sketch follows the list):

  1. Convolutional Layer – a layer for extracting essential features from the input data
  2. Pooling Layer(s) – layer(s) used for downsampling, i.e., reducing the dimensionality of the extracted feature maps without losing critical information
  3. Fully Connected Layer – a layer for performing classification based on the features obtained by the two previous layers
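
The following minimal PyTorch sketch shows how these three layer types fit together. The framework choice, the 128×128 single-channel input, and the binary “tumor vs. no tumor” task are illustrative assumptions, not details from any study cited in this article.

```python
# Minimal sketch of the conv -> pool -> fully connected structure described above.
# Input size (128x128 grayscale) and the binary task are illustrative assumptions.
import torch
import torch.nn as nn

class MinimalCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # 1. Convolutional layer: extracts local features from pixel values
        self.conv = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, padding=1)
        # 2. Pooling layer: downsamples feature maps without discarding key information
        self.pool = nn.MaxPool2d(kernel_size=2)
        # 3. Fully connected layer: classifies based on the extracted features
        self.fc = nn.Linear(16 * 64 * 64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.conv(x))   # (N, 16, 128, 128)
        x = self.pool(x)               # (N, 16, 64, 64)
        x = x.flatten(start_dim=1)     # (N, 16*64*64)
        return self.fc(x)              # (N, num_classes) class scores

# Usage on a dummy batch of four 128x128 grayscale slices
model = MinimalCNN()
scores = model(torch.randn(4, 1, 128, 128))
print(scores.shape)  # torch.Size([4, 2])
```

Real medical imaging models stack many such convolution-and-pooling blocks, but the division of labor stays the same: convolution and pooling extract features, the fully connected layer makes the call.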

Though deep learning networks have reached high accuracy in feature extraction, they often require substantial computing resources, which can make running them troublesome for healthcare providers. Fortunately, compact CNN architectures can help. Researchers from the University of Belgrade (Serbia) developed a simple CNN algorithm comprising two convolutional blocks for analyzing MRI scans for brain tumors. The model reached an accuracy of 96% without excessive computing power.

CNNs also played a vital role during the coronavirus pandemic. Scholars from the University of Wuhan developed a CNN model for analyzing lung CT scans. When the researchers compared the system’s performance to that of radiologists, they discovered that the tool could detect COVID-19 infection in 68% of patients whose CT scans were deemed normal by medical experts. Hence, the model successfully identified complex patterns that could go unnoticed by the human eye.

Recurrent Neural Networks (RNNs)

An RNN, with its recurrent connections, possesses the ability to remember patterns from previous inputs. This is beneficial in medical imaging, where the Region of Interest (ROI) is often spread across adjacent slices, as in CT or MRI scans, so there is a strong correlation between consecutive slices. 

RNNs can extract meaningful information from sequential data by capturing inter-slice context, which enhances their effectiveness in processing input slices and allows for a deeper understanding of the data. The architecture comprises two fundamental components: intra-slice and inter-slice information extraction. For the former, any CNN model can be employed; for the latter, an RNN aggregates insights across multiple slices, facilitating a comprehensive understanding of data patterns.
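
A minimal PyTorch sketch of this two-stage design could look like the following. The tiny slice encoder, the LSTM size, and the per-scan binary output are illustrative assumptions, not taken from any specific study.

```python
# Intra-slice CNN encoder + inter-slice LSTM, as described above.
# All layer sizes and the binary output are illustrative assumptions.
import torch
import torch.nn as nn

class SliceSequenceModel(nn.Module):
    def __init__(self, feature_dim: int = 64, num_classes: int = 2):
        super().__init__()
        # Intra-slice: any CNN can be used; here a tiny convolutional encoder
        self.slice_encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),        # (N, 8, 4, 4)
            nn.Flatten(),                   # (N, 128)
            nn.Linear(8 * 4 * 4, feature_dim),
        )
        # Inter-slice: an LSTM carries context between consecutive slices
        self.rnn = nn.LSTM(input_size=feature_dim, hidden_size=32, batch_first=True)
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (batch, num_slices, 1, height, width)
        b, s, c, h, w = volume.shape
        feats = self.slice_encoder(volume.reshape(b * s, c, h, w))  # encode each slice
        feats = feats.reshape(b, s, -1)                             # regroup into sequences
        _, (hidden, _) = self.rnn(feats)                            # aggregate across slices
        return self.classifier(hidden[-1])                          # per-scan prediction

# Usage: a batch of two dummy "scans", each with 20 slices of 96x96 pixels
model = SliceSequenceModel()
print(model(torch.randn(2, 20, 1, 96, 96)).shape)  # torch.Size([2, 2])
```

The key idea is that the CNN sees one slice at a time, while the recurrent layer carries context from slice to slice across the whole scan.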

RNNs are used for predicting the likelihood of a disease and its progression by analyzing medical images of the brain. For example, researchers at the Computational Neuroscience Laboratory (CNS) in the US developed CNN and RNN models to analyze brain MRI scans. To enhance the performance of their CNN+RNN architecture, they incorporated several mechanisms, including a longitudinal layer and consistency regularization. These enhancements enable the model to accurately capture the progression of diseases over time.

Generative Adversarial Networks (GANs)

A GAN is a powerful type of neural network designed for unsupervised learning. The network consists of two competing models: a generator and a discriminator. The generator creates new data samples that closely resemble the training data, while the discriminator’s role is to distinguish between real training data and the generated output. This unique interplay between the two models allows GANs to generate high-quality synthetic data that can be invaluable in various applications.
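
To make the interplay concrete, here is a minimal PyTorch sketch of one GAN training step for tiny 28×28 single-channel images. The network sizes, learning rates, and dummy data are illustrative assumptions, not the setup of any system mentioned in this article.

```python
# Minimal generator/discriminator pair and one adversarial training step.
# Sizes, learning rates, and dummy data are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim = 64

# Generator: maps random noise to a synthetic (flattened) image
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)

# Discriminator: outputs the probability that an image is real
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator to separate real from generated images
    fake_images = generator(torch.randn(batch, latent_dim))
    d_loss = criterion(discriminator(real_images), real_labels) + \
             criterion(discriminator(fake_images.detach()), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator to fool the discriminator
    g_loss = criterion(discriminator(fake_images), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Usage with a dummy batch of sixteen flattened 28x28 "scans"
train_step(torch.randn(16, 28 * 28))
```

Each step pushes the two models in opposite directions: the discriminator gets better at spotting fakes, and the generator gets better at producing samples the discriminator cannot tell apart from real data.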

For example, GANs can be used for anomaly detection. An international team of researchers developed MADGAN, an unsupervised GAN for detecting anomalies in brain MRI scans. Trained on MRI scans of healthy people, the system can identify the accumulation of subtle anatomical abnormalities and hyper-intense enhancing lesions, indicating late-stage Alzheimer’s disease and brain metastases on multi-sequence MRI scans.

GANs are also valuable for medical researchers. The networks can synthesize realistic-looking medical images for research purposes, as real medical data is often scarce. In addition, GANs can improve medical image quality by denoising or reducing image graininess.

Conclusion

Medical image analysis solutions powered by deep learning technologies can reduce the risk of diagnostic errors and ensure timely interventions. But how can you implement such a tool successfully? The key to successful deployment is cooperation with an experienced machine learning solutions provider. Professional technical experts will help you choose an optimal architecture for your clinic’s specific goals and ensure effective model training.

About Mariia Kovalova

Mariia Kovalova is a Healthcare Technology Researcher at Itransition, a custom software development company headquartered in Denver, CO. Having working experience in both the healthcare and IT industries, she is constantly on the lookout for technologies that will help providers optimize their processes, enhance patient experiences, and build up more resilience in the face of a rapidly changing world.

   
