Meaningful digital health engagement is more complicated than how long or often users log in

A new study conducted among SilverCloud Health program users employed machine learning to identify relationships between app-use frequency and clinical outcomes.
By Dave Muoio

Sustained engagement is a key metric for judging whether a digital health intervention is finding its mark, but the industry has been anything but uniform in defining objective measures of user behavior and their relationship to clinical outcomes.

A study published Friday in JAMA Network Open takes another approach and employs machine learning techniques to better describe engagement among 54,604 patients provided a digital health intervention – in this case, an internet-based cognitive behavioral therapy (iCBT) tool for depression and anxiety symptoms developed by SilverCloud Health.

"We used machine learning to build a probabilistic graphical modeling framework to understand longitudinal patterns of engagement with iCBT," the study's authors, which included researchers from Microsoft Research Cambridge and SilverCloud Health, wrote. "We hypothesized that these patterns would allow us to infer distinct, heterogeneous patient behavior subtypes. We further hypothesized that these subtypes are associated with the intervention’s success of improving mental health and that different subtypes of engagement are associated with differences in clinical outcomes."

The researchers' cohort comprised de-identified patient data collected between January 31, 2015, and March 31, 2019, from SilverCloud's Space From Depression and Anxiety treatment program. This intervention included digital journals, quizzes, CBT exercises and live human guidance.

The researchers defined and measured two types of engagement: use of the iCBT program within a given week, and use of one of the program's 14 sections within a given week. These were reviewed alongside objective screens of depression and anxiety – the Patient Health Questionnaire-9 (PHQ-9) for the former and the Generalized Anxiety Disorder-7 (GAD-7) for the latter.
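As a concrete, purely hypothetical illustration of what such weekly indicators might look like when derived from raw usage logs, the short pandas sketch below builds both measures for a toy dataset; the column names and schema are assumptions, not the study's actual data model.

```python
import pandas as pd

# Toy usage log; the column names and layout are illustrative assumptions.
usage = pd.DataFrame({
    "patient_id": [1, 1, 1, 2],
    "timestamp": pd.to_datetime(
        ["2018-01-02", "2018-01-03", "2018-01-10", "2018-01-04"]),
    "section": ["mood_tracking", "relaxation", "mood_tracking", "goals"],
})
usage["week"] = usage["timestamp"].dt.to_period("W")

# Indicator 1: whether the patient used the iCBT program at all in a given week.
used_program = pd.crosstab(usage["patient_id"], usage["week"]).gt(0)

# Indicator 2: whether the patient used a specific section in a given week.
used_section = pd.crosstab(
    [usage["patient_id"], usage["week"]], usage["section"]).gt(0)

# These weekly indicators would then be analyzed alongside PHQ-9 / GAD-7 scores.
print(used_program)
print(used_section)
```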

Across the cohort, patients spent a mean of 111 minutes using the iCBT program and improved their clinical scores for both depression and anxiety symptoms.

By applying the modeling framework to the cohort, the researchers identified five heterogeneous subtypes of engagement: low engagers, late engagers, high engagers with rapid disengagement, high engagers with moderate decrease and very high engagers.

These groups had varying changes in their depression and anxiety symptom scores after the 14-week program. For example, mean depression score decreases were greatest among high engagers with rapid disengagement, while mean scores actually increased among the late engager group.

WHY IT MATTERS

The researchers highlighted subtype trends suggesting that clinical outcomes were not uniformly proportional to the amount of time patients spent with the intervention, but instead reflected different types of engagement with the content. For example, the high engagers with moderate decrease group more often used the core modules of the tool, such as mood tracking and goal-based activities, whereas the very high engager group was more likely to use relaxation and mindfulness tools.

"These insights may facilitate tailoring of interventions for specific subtypes of engagement," they wrote. "For example, we may be able to front-load specific recommendations of content associated with improved therapy engagement and clinical outcomes for patients within particular subtypes. Such patterns may elucidate different modalities of engagement that can help us to better triage patients for different therapy modules or activities."

Still, the researchers' metrics and approach may not be telling the full story of meaningful digital health engagement.

In an accompanying commentary, Dr. John Torous of the Beth Israel Deaconess Medical Center and Harvard Medical School, and Erin Michalak and Heather O'Brien, both from the University of British Columbia, Vancouver, wrote that it was "encouraging" to see companies objectively evaluating their products and sharing those data publicly. That being said, both the study and digital health at large tend to focus on "blunt" behavioral measures that do little to describe users' affective or cognitive investment in the tool.

"Although it is essential to take a longitudinal perspective with engagement and examine use over time, duration of use itself is not a reliable indicator of engagement," they wrote. "Studies in human-computer interaction have shown that it is difficult to disambiguate negative, frustrating experiences with technology from positive, absorbing ones based on this measure, and that a person’s willingness to return is more telling of engagement. Thus, we might ask whether the user is continuing to come back to the app rather than focusing on their session length and degree of active engagement with different tools."

WHAT'S THE HISTORY?

Poor engagement among users who are provided a health app or other digital health intervention is a long-standing concern, and it can often hamper investigations into whether the tool is delivering a clinical benefit. Mental health apps in particular have become something of a Wild West, with many flooding the market without supporting evidence of their efficacy.
