
September 25, 2024
2 min read

Algorithm for myasthenia gravis testing via telehealth feasible

Fact checked by Shenaz Bagha

Key takeaways:

  • Researchers produced an algorithm to standardize virtual myasthenia gravis assessments.
  • More effort is needed to overcome barriers such as motion capture and patient adherence to video protocols.

ORLANDO, Fla. — An algorithm for standardizing myasthenia gravis diagnosis during telehealth is feasible, but it requires additional testing and validation, according to researchers.

“Patients were sometimes not able to come to the clinic and so that brought the need to develop a new core examination just tailored to myasthenia gravis,” Gulsen Oztosun, MD, clinical research associate in the department of neurology and rehabilitation medicine at the George Washington University School of Medicine and Health Sciences, told Healio at the American Neurological Association annual meeting.

New research from George Washington University found that an algorithm for myasthenia gravis testing by telehealth is feasible with additional testing and validation. Image: Adobe Stock

Oztosun and colleagues sought to assess and digitize the existing Myasthenia Gravis Core-Exam (MG-CE) for use during telehealth evaluations. They also developed an automated approach to acquiring and analyzing data from these evaluations to generate a report intended to assist clinicians with decision-making and diagnosis of the condition.

Their study involved capturing Zoom videos of 52 individuals with myasthenia gravis (median age, 63.3 years; 50% women) who underwent the MG-CE within the ADAPT teleMG study, along with videos of 15 healthy controls (median age, 55.5 years; 60% women). All videos were taken on two separate occasions within a week, except for one participant whose videos were collected 39 days apart.

The researchers then created an algorithm to detect individual MG-related eye, facial and limb exercises in each video, combining deep learning with image and language processing. The result, Oztosun and colleagues hypothesized, would be objective, reproducible and quantitative reports for each patient.
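
The researchers' code is not described in detail here, so the sketch below is only a rough illustration of the general approach they outline: extracting facial landmarks from each video frame and converting them into a per-frame measurement that could feed a quantitative report. It assumes off-the-shelf tools (OpenCV and MediaPipe Face Mesh); the landmark indices, the eyelid-aperture metric, the file name and the eyelid_apertures function are hypothetical choices for illustration, not the team's actual algorithm.

```python
# Illustrative sketch only -- NOT the study's algorithm.
# Assumes a local MG-CE video recording ("exam.mp4"), MediaPipe Face Mesh
# for facial landmarks, and eyelid aperture as a proxy measure for ptosis.
import cv2
import mediapipe as mp

# MediaPipe Face Mesh landmark indices commonly used for the eyelids
# (right eye upper/lower, left eye upper/lower).
RIGHT_UPPER, RIGHT_LOWER = 159, 145
LEFT_UPPER, LEFT_LOWER = 386, 374

def eyelid_apertures(video_path: str) -> list[tuple[float, float, float]]:
    """Return (time_sec, right_aperture, left_aperture) for each frame,
    with apertures expressed as a fraction of frame height."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    samples = []
    with mp.solutions.face_mesh.FaceMesh(
        static_image_mode=False, refine_landmarks=True
    ) as face_mesh:
        frame_idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            detection = face_mesh.process(rgb)
            if detection.multi_face_landmarks:
                lm = detection.multi_face_landmarks[0].landmark
                right = abs(lm[RIGHT_LOWER].y - lm[RIGHT_UPPER].y)
                left = abs(lm[LEFT_LOWER].y - lm[LEFT_UPPER].y)
                samples.append((frame_idx / fps, right, left))
            frame_idx += 1
    cap.release()
    return samples
```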

According to the researchers, the MG-CE was successfully digitized and, once implemented, offered quantifiable metrics for assessing disease symptoms, including eye tracking and muscle weakness measurements with sub-millimeter accuracy.

Oztosun and colleagues also integrated machine learning and image processing to enhance analyses of eye measurements and other areas of interest.
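
As a further hypothetical illustration, rather than the researchers' actual report format, per-frame measurements like those in the sketch above could be reduced to a few summary numbers a clinician might review, such as a baseline eyelid aperture and its drift over a timed task. The summarize_apertures helper and the drift-slope metric below are assumptions, not part of the MG-CE.

```python
# Illustrative follow-on to the sketch above -- not the study's report format.
# Summarizes per-frame eyelid apertures into simple metrics, e.g. a baseline
# aperture and its drift (a crude proxy for fatigue) over a timed exercise.
import statistics

def summarize_apertures(samples: list[tuple[float, float, float]]) -> dict:
    """samples: (time_sec, right_aperture, left_aperture) per frame."""
    if not samples:
        return {}
    times = [t for t, _, _ in samples]
    right = [r for _, r, _ in samples]
    left = [l for _, _, l in samples]

    def slope(xs, ys):
        # Least-squares slope: change in aperture per second.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        denom = sum((x - mx) ** 2 for x in xs)
        return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom if denom else 0.0

    return {
        "right_baseline": statistics.median(right[: max(1, len(right) // 10)]),
        "left_baseline": statistics.median(left[: max(1, len(left) // 10)]),
        "right_drift_per_sec": slope(times, right),  # negative = progressive droop
        "left_drift_per_sec": slope(times, left),
        "duration_sec": times[-1] - times[0],
    }

# Example usage (hypothetical file name):
# report = summarize_apertures(eyelid_apertures("exam.mp4"))
# print(report)
```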

They additionally confirmed that the MG-CE could be quantified and that the hybrid deep learning and computerized algorithm successfully analyzed the video recordings.

However, further validation is necessary to overcome challenges inherent in the video process, such as capturing certain facial motions.

“There is still a long way to go to make this fully automatic,” Oztosun said. “There are so many limitations with telemedicine such as difficulties in patients following commands, difficulties with the network.”