April 08, 2019

Standards needed to assess user engagement with mental health apps


John Torous

Although each of the 40 studies evaluating mental health apps reported positive findings for usability, satisfaction, acceptability or feasibility, no two studies used the same combination of criteria or the same thresholds to evaluate their app, according to findings from a systematic review.


This lack of consensus makes it hard for researchers to compare results across studies, to understand what makes apps engaging for different users and to determine their real-world use, according to the review published in Psychiatric Services.

“Understanding which, if any, apps to use as part of clinical care can be challenging. Knowing the risks and benefits of these apps is the first step towards making an informed decision,” John Torous, MD, MBI, director of the digital psychiatry division at Beth Israel Deaconess Medical Center, Harvard Medical School, told Healio Psychiatry. “Some apps note research evidence that they may be easy to use and have been tested by patients. This is important to know, but what does it really mean?”

Usability, user satisfaction, acceptability or feasibility — known as “user engagement indicators” (UEIs) — represent the ability of an app to engage and sustain user interactions, according to the researchers. They conducted a systematic review of multiple clinical databases to examine how studies have measured and reported on these UEIs for mental health apps for depression, bipolar disorder, schizophrenia and anxiety.

The researchers looked at subjective (ie, satisfaction questionnaires and interviews regarding usability) and objective (ie, usage frequency, response to prompts and trial retention) criteria used to evaluate UEIs in each study along with data on factors that might have influenced usability (ie, whether patients were involved in the app design process or were given incentives).

All 40 studies eligible for review reported positive results for the usability, satisfaction, acceptability or feasibility of their mental health app; however, most (n = 36) relied on 371 distinct subjective criteria evaluated with surveys, interviews or both, and more than half (n = 23) used custom subjective scales rather than standardized assessment tools.

“While every app study we examined reported positive outcomes, none reported outcomes in the same way,” Torous said. “In essence, while each study notes the app was very engaging, that result is not based on any standard or metric that can be used to compare across apps or studies.”

Overall, 25 studies (63%) used objective criteria, encompassing 71 distinct measures, and no two studies used the same combination of subjective or objective criteria to examine the UEIs of their mental health apps, according to the results.

“The clinical implication is that when considering an app, one needs to be careful to examine what the claims about ease of use, feasibility and engagement really mean and if they will generalize to the use cases or patients you are considering,” Torous told Healio Psychiatry. – by Savannah Demko

Disclosure: The authors report no relevant financial disclosures.