AI holds great promise for visual fields like dermatology, but faces many challenges

Finding robust data sources, correcting for variations in images and avoiding bias are all hurdles to overcome.
By Jonah Comstock

Computer vision has great promise for helping to democratize fields like wound care, dermatology and more. However, as companies explore this potential, they’re also discovering a number of challenges to overcome.

The data problem

“Getting the data is really the biggest challenge, not the AI,” Karen Panetta, an IEEE fellow and dean of graduate engineering at Tufts University who studies AI use cases in healthcare, told MobiHealthNews in an interview earlier this year. “We’ve already got the models, we just need more training data to validate this expertise. And then, again, getting doctors to also validate, to get random things from a cellphone, and you want multiple doctors to do it because they have to agree.”

There are existing clinical datasets, but the pictures they contain are clinical images taken in controlled conditions, often with a dermatoscope, a specialized medical instrument for photographing the skin.

Furthermore, getting access to medical datasets is difficult: patients must have consented to having their data used in research, and most have not. Even when they have, researchers must secure institutional review board (IRB) approval to access the images.

Mary Sun is a medical student at the Icahn School of Medicine at Mount Sinai in New York, but she’s also working remotely with teledermatology company First Derm on improving the company’s AI algorithms. She says a traditional clinical research approach doesn’t come anywhere near the scale needed for machine learning.

“The issue in general is that a lot of the data people are pulling in from healthcare studies, the enrollment process for clinical trials is generally very slow, and very manual. Not that this is specific to any one institution, but if you have sample sizes in the low hundreds or not even 100 — and there are plenty of studies that are 30, 50, 70 participants, just because it’s so difficult to get a willing cohort that will show up for all the testing you need, and generally you do recruitment locally so it’s just the patient population that’s available to you and so on and so forth — and that’s problematic, right? … The whole point of a classifier is to be generalizable across a population, and you’re limited to a small amount of data, that already is an issue from a statistical standpoint.”
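To put rough numbers on the statistical point Sun raises: the uncertainty on an accuracy estimate shrinks only with the square root of the cohort size, so a classifier validated on a few dozen patients carries a very wide error bar. A back-of-the-envelope sketch in Python (the 80 percent figure and the cohort sizes are illustrative, not drawn from any particular study):

```python
import math

def accuracy_ci_halfwidth(accuracy: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% confidence half-width for an accuracy estimate,
    using the normal approximation to a binomial proportion."""
    return z * math.sqrt(accuracy * (1.0 - accuracy) / n)

# A hypothetical classifier that measures 80% accuracy on cohorts of different sizes.
for n in (50, 500, 5000):
    hw = accuracy_ci_halfwidth(0.80, n)
    print(f"n={n:5d}: 80% accuracy, give or take {100 * hw:.1f} percentage points")
```

At 50 participants that 80 percent really means somewhere between roughly 69 and 91 percent, and the interval says nothing about whether a locally recruited cohort resembles the population the classifier will eventually see.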

This is leading a number of companies like First Derm to create direct-to-consumer teledermatology or dermatology triage tools, in which patients consent to sharing their data in exchange for free access.

Even companies like VisualDx, which have a robust dataset from years in the clinical decision support (CDS) space, have to weigh patient privacy considerations.

“In our professional tool, when a doctor takes a picture of a patient the image is analyzed on the phone and the image is dumped,” CEO Art Papier told MobiHealthNews. “So we never see the image because of confidentiality. So I think on the consumer side there’s an opt-in where the user can click a box where they say they’re willing to train data and can create a feedback loop and start training the system. We’ll be getting going with that this year.”
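Papier is describing a pattern rather than a published API, but the shape of it is simple: run inference on the device, keep only the result, and retain the image solely when the user has opted in. A minimal sketch, with hypothetical names (`predict`, `training_queue`) standing in for whatever VisualDx actually uses internally:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Result:
    differential: List[str]  # ranked candidate diagnoses
    confidence: float

# Hypothetical in-memory queue standing in for an opt-in training pipeline.
training_queue: List[bytes] = []

def handle_photo(image_bytes: bytes,
                 predict: Callable[[bytes], Result],
                 user_opted_in: bool) -> Result:
    """Analyze the photo locally; keep the image only if the user opted in."""
    result = predict(image_bytes)
    if user_opted_in:
        training_queue.append(image_bytes)  # the feedback loop Papier mentions
    # Without opt-in, nothing but the derived result outlives this call;
    # the image itself is simply dropped.
    return result
```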

Cleaning up messy images

What most researchers and developers want is an AI that can interpret pictures taken with a smartphone in a home environment. But smartphone pictures can look very different from one another, even when they show the same thing: differences in angle, lighting and focus all need to be accounted for.

“The Mars rover ... if you think about what it’s doing, it’s actually imaging the planet surface using multiple cameras and creating a reconstruction,” Carlo Perez, CEO of AI wound care company Swift Medical, said. “But if you think about the surface of Mars and you put it on my body, it actually looks like a wound. Except that a wound is far more complex to interpret than the surface of Mars. It’s actually a harder problem because you have to account for variations in skin color, different types of tissue, different levels of moisture. It’s a very hard vision problem that can be augmented and solved with AI.”

For dermatology, Sun and the team at First Derm have been working on ways of computationally processing images to improve accuracy.

“There’s a couple things we can do,” she said. “First, there’s general lighting guidance that we provide users. But also, generally, people take a larger picture than you need, like they’re not just going to take a picture of the boil. So you can zero it against the best base reference that you have which is another portion of their skin in the photo. And there’s a couple of imaging methods that we have been experimenting with.”
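Sun doesn’t spell out the method, but “zeroing against a base reference” is recognizably a form of color normalization: treat a patch of unaffected skin in the same photo as the baseline, so lighting differences between photos matter less. A hedged sketch of that idea in NumPy (the patch coordinates and the simple ratio formulation are assumptions, not First Derm’s actual pipeline):

```python
import numpy as np

def zero_against_skin_reference(image: np.ndarray, ref_box: tuple) -> np.ndarray:
    """Express each pixel relative to a patch of presumably unaffected skin in
    the same photo, so lighting and the person's baseline tone are factored out.

    image:   float32 RGB array in [0, 1], shape (H, W, 3)
    ref_box: (top, left, height, width) of the reference skin patch
    """
    top, left, h, w = ref_box
    ref_mean = image[top:top + h, left:left + w].reshape(-1, 3).mean(axis=0)
    # Ratios near 1.0 look like the reference skin; deviations highlight the lesion.
    return image / np.clip(ref_mean, 1e-6, None)
```

Because the baseline is the person’s own skin rather than a fixed standard, the same transform behaves sensibly across skin tones, which connects to the bias concern that comes next.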

Dermatologists today are especially aware of the risk of creating algorithms that only work well for people of certain races or skin tones, an easy and dangerous training bias to introduce.

“It’s something that we’re trying to be really cognizant of,” Sun said. “This is something that gets left out of the AI conversation sometimes. So obviously we’re not going to pretend we can classify someone’s race or cultural background from an online photo, but we are doing our best to make sure that we are using concepts like the reference zeroing — making sure we’re not using one standard as the standard for everything and also kind of cross-validating what we’re doing in different populations, so it’s not just ‘We’re going to try to average out these effects.'”
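One concrete way to do the cross-validation across populations that Sun describes is to report accuracy per subgroup rather than as a single average, so a model that only performs well on lighter skin cannot hide behind a good overall number. A sketch, assuming the validation set carries subgroup labels such as Fitzpatrick skin type (the toy data below is made up):

```python
import numpy as np

def per_group_accuracy(y_true: np.ndarray,
                       y_pred: np.ndarray,
                       groups: np.ndarray) -> dict:
    """Accuracy broken out by subgroup, so disparities stay visible
    instead of being averaged away."""
    return {str(g): float((y_true[groups == g] == y_pred[groups == g]).mean())
            for g in np.unique(groups)}

# Toy example: invented labels, predictions and Fitzpatrick groupings.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
groups = np.array(["I-II", "I-II", "V-VI", "V-VI", "I-II", "I-II", "V-VI", "V-VI"])
print(per_group_accuracy(y_true, y_pred, groups))  # {'I-II': 1.0, 'V-VI': 0.5}
```

A gap like the one in the toy output is exactly the kind of bias such a report is meant to catch before deployment.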

“One of the things we’re really proud of at VisualDx is for the last 18 years we’ve been cataloguing the spectrum of disease in people of all colors,” Papier said. “So the goal of Aysa is to give a really unique experience to someone, if they have light skin or dark skin. We give them a very custom experience.”

The limits of AI accuracy

Even once a computer can make an apples-to-apples comparison between lesions photographed in different lighting with different smartphone cameras, most researchers don’t think it will be able to make a perfect diagnosis.

One reason for that is that even human dermatologists disagree sometimes, especially in the absence of additional information.

“There are circumstances where the rash looks identical and the machine learning will not know the difference between a rash from a drug reaction or a rash from a viral infection from traveling around the world,” Papier said. “Those rashes look the same, you have a patient who has a viral infection from a rare mosquito-borne disease in the Caribbean here and you have one who has a drug reaction to penicillin, the rashes look near identical. So the machine learning can put you in the neighborhood, but you need the history.”

Ultimately, what researchers are hoping to create is a largely accurate algorithm that can help give peace of mind to people concerned about their skin — and possibly too embarrassed to seek other help.

“The end goal is first to develop a pretty robust image classifier,” Sun said. “So there is an accessibility point with this. Especially with something like pictures of your genitals, it can be an embarrassing topic, or it can be something you put off seeing a doctor for or you don’t really share with anybody or you just hope for it to go away. … So we’re hoping to build something that you don’t have to pay tons of money for, that will give you a pretty good idea of what’s going on and that can be a kind of intermediary to see if you need to pursue more aggressive treatment. So we’re targeting accuracy goals of around 80 percent for that, across a couple common categories of dermatological issues.”
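That 80 percent goal translates naturally into an evaluation gate: check performance within each common category rather than only overall. A brief sketch (the category names and the per-category recall metric are illustrative choices, not First Derm’s published protocol):

```python
import numpy as np

CATEGORIES = ["acne", "eczema", "psoriasis", "tinea", "other"]  # illustrative only
TARGET = 0.80                                                   # stated accuracy goal

def per_category_report(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """For each category, the fraction of true cases the model identifies
    correctly, and whether that clears the roughly 80 percent target."""
    report = {}
    for c in CATEGORIES:
        mask = y_true == c
        if mask.any():
            recall = float((y_pred[mask] == c).mean())
            report[c] = {"recall": round(recall, 3), "meets_target": recall >= TARGET}
    return report
```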

While there is a risk that false negatives could endanger patients, there is also no realistic alternative: patients who are concerned about their health, but not concerned enough to go to a doctor, are going to look somewhere.

“The fact is that most people, when they or a family member is sick, or in the case of dermatology they have a rash, they go to Google,” Papier said. “They look right away on Saturday or Sunday and the question is can we give them reliable, helpful information that is safe for them that’s better than what they’re currently doing?”
