Machine learning app scans faces and listens to speech to quickly spot strokes

Researchers say their tool detected cases with 79% accuracy, and did so within minutes.
By Dave Muoio
03:14 pm

A patient at Houston Methodist Hospital participates in a smartphone screening test to analyze stroke-like symptoms she's experiencing. Photo credit: Houston Methodist Hospital.

Researchers from Penn State University and Houston Methodist Hospital recently outlined their work on a machine learning tool that uses a smartphone camera to quickly gauge facial movements for signs of a stroke.

The tool – which was presented as a virtual poster at this month's International Conference on Medical Image Computing and Computer Assisted Intervention – relies on computational facial motion analysis and natural language processing to spot sagging muscles, slurred speech or other stroke-like symptoms.
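The poster does not publish the model or feature set, but the facial-motion side of such an approach can be illustrated with a minimal sketch: scoring left/right mouth-corner asymmetry from facial landmarks across a short video clip. The landmark extraction step (for example, with a library such as MediaPipe Face Mesh) is assumed, and the landmark indices and threshold below are hypothetical rather than the researchers' actual features or cutoffs.

```python
# Illustrative only: scores facial droop by comparing left/right mouth-corner
# height across video frames. Landmark coordinates are assumed to come from an
# off-the-shelf detector (e.g., MediaPipe Face Mesh); the indices and threshold
# are placeholders, not the researchers' actual features or cutoffs.
import numpy as np

def mouth_asymmetry(landmarks_per_frame, left_idx=61, right_idx=291):
    """landmarks_per_frame: list of (N, 2) arrays of (x, y) facial landmarks,
    one array per video frame, in image coordinates (y grows downward)."""
    diffs = []
    for lm in landmarks_per_frame:
        left_y = lm[left_idx, 1]
        right_y = lm[right_idx, 1]
        face_height = lm[:, 1].max() - lm[:, 1].min()  # normalize by face size
        diffs.append(abs(left_y - right_y) / face_height)
    return float(np.mean(diffs))

def flag_possible_droop(landmarks_per_frame, threshold=0.05):
    # A sustained vertical offset between the mouth corners can indicate
    # one-sided facial weakness; a real system would combine this signal
    # with speech features before making any call.
    return mouth_asymmetry(landmarks_per_frame) > threshold
```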

To build and train it, the researchers used an iPhone to record 80 Houston Methodist patients who were experiencing stroke symptoms as they performed a speech test. According to a Penn State release, the machine learning model performed with 79% accuracy when tested against that dataset, which the researchers said is roughly on par with emergency room diagnoses using CT scans.
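The release reports only the headline figure, not the validation protocol. As a rough sketch of how an accuracy number might be estimated on a cohort of roughly 80 recordings, the snippet below runs scikit-learn cross-validation over precomputed per-patient features; the feature matrix, labels and classifier choice are placeholders, not the team's actual pipeline.

```python
# Illustrative only: estimating classification accuracy on a small cohort with
# cross-validation. X would hold per-patient facial-motion and speech features;
# y would hold stroke / non-stroke labels from the clinical record. None of
# this reflects the researchers' actual features, model, or validation split.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 12))     # placeholder features for 80 patients
y = rng.integers(0, 2, size=80)   # placeholder labels

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```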

“Currently, physicians have to use their past training and experience to determine at what stage a patient should be sent for a CT scan,” James Wang, professor of information sciences and technology at Penn State, said in a release from the university. “We are trying to simulate or emulate this process by using our machine learning approach.”

WHY IT MATTERS

Should the tool's performance fall in line with standard diagnostics, the researchers said its roughly four-minute turnaround time would provide a clinical advantage to emergency room teams racing the clock. A delayed diagnosis means more lost neurons and worse outcomes for the patient.

“In severe strokes it is obvious to our providers from the moment the patient enters the emergency department, but studies suggest that in the majority of strokes, which have mild to moderate symptoms, that a diagnosis can be delayed by hours, and by then a patient may not be eligible for the best possible treatments,” John Volpi, codirector of the Eddy Scurlock Stroke Center at Houston Methodist Hospital and a coauthor of the research, said in a statement.

“If we can improve diagnostics at the front end, then we can better expose the right patients to the right risks and not miss patients who would potentially benefit,” he said.

While the researchers said the dataset and tool could be applied within a clinical setting, they also floated the possibility of deploying it as a resource for caregivers or patients to help them know when to seek care.

THE LARGER TREND

Deep learning and mobile devices have been tapped over the years to support stroke detection. As far back as 2011, researchers detailed an iPhone app that scanned medical imaging to provide clinical decision support.

Fast forward to 2018, and Viz.ai's Contact app received a De Novo clearance from the FDA for highlighting potential evidence of a stroke among CT results for clinicians. Last year also saw news of an app from Cheil Hong Kong and the Hong Kong Stroke Association that, similar to the Penn State and Houston Methodist tool, uses facial recognition technology to spot stroke symptoms.
