Q&A: AI identifies most patients with social determinants of health in electronic records
Key takeaways:
- AI identified 93.8% of patients with adverse social determinants of health, whereas ICD-10 codes identified only 2%.
- Further research is needed to evaluate AI-assisted SDoH screenings in real clinical settings.
Artificial intelligence models effectively identified patients with adverse social determinants of health within electronic health records, according to a study published in npj Digital Medicine.
“Our goal is to identify patients who could benefit from resource and social work support and draw attention to the underdocumented impact of social factors in health outcomes,” Danielle Bitterman, MD, a faculty member in the Artificial Intelligence in Medicine Program at Mass General Brigham, said in a press release. “Algorithms that can pass major medical exams have received a lot of attention, but this is not what doctors need in the clinic to help take better care of patients each day. Algorithms that can notice things that doctors may miss in the ever-increasing volume of medical records will be more clinically relevant and therefore more powerful for improving health.”
In the study, Bitterman and colleagues trained large language models to identify any mentions of six social determinants of health (SDoH) — employment, transportation, social support, housing, relationships and parental status — in EHRs of patients with cancer.
The researchers reported that their fine-tuned artificial intelligence (AI) models identified 93.8% of patients with adverse SDoH. In comparison, ICD-10 codes identified just 2%.
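The screening task described above can be sketched as a simple pipeline: scan free-text clinic notes and flag any of the six SDoH categories mentioned. In the study this step was performed by fine-tuned large language models; the keyword matcher below is only an illustrative placeholder for that classifier, and all keywords and category names are invented for this example.

```python
# Placeholder SDoH screen over free-text clinic notes. A fine-tuned language
# model fills this role in the study; keyword matching stands in here so the
# workflow is runnable without a trained model.
SDOH_KEYWORDS = {
    "employment": ["unemployed", "lost his job", "lost her job"],
    "transportation": ["no transportation", "missed the bus"],
    "social_support": ["lives alone", "no family nearby"],
    "housing": ["homeless", "unstable housing", "eviction"],
    "relationships": ["divorced", "widowed"],
    "parental_status": ["single parent", "cares for children"],
}

def screen_note(note: str) -> set[str]:
    """Return the SDoH categories whose (placeholder) keywords appear in the note."""
    text = note.lower()
    return {category for category, keywords in SDOH_KEYWORDS.items()
            if any(kw in text for kw in keywords)}

note = "Patient lives alone and reports unstable housing since the eviction."
print(sorted(screen_note(note)))  # → ['housing', 'social_support']
```

A real system would replace `screen_note` with model inference over each note, then surface flagged patients to the care team, which is the chart-review use case Bitterman describes below.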
Healio spoke with Bitterman about the implications for primary care providers, the next steps in research on SDoH and AI, and more.
Healio: Was there anything that particularly stood out to you in the findings?
Bitterman: First, our findings highlight just how sparsely SDoH are documented in the EHRs. Although language models were able to sift through clinic notes to find this information when it is present, I suspect that they remain very underdocumented. We need better methods to collect SDoH so that we can be sure that we can address them equitably for all patients. Second, the extent to which the presence of gender and race/ethnicity in a sentence altered language models’ determinations about the SDoH was striking. Algorithmic bias is a known challenge with language models, and our results highlight how this bias might directly impact health outcomes as we begin to integrate such models into clinical workflows.
Healio: What are the implications for PCPs?
Bitterman: PCPs see many patients with diverse health issues each day, and often have limited time to address each patient’s most pressing concerns. If clinically validated, technologies such as the one we developed could assist PCPs in reviewing patients’ records and proactively alert them to patients who have adverse SDoH that should be discussed and considered for multidisciplinary care planning. By simultaneously making chart review less time-consuming and more comprehensive, this could also help address clinician burnout and create more time for face-to-face patient care.
Healio: Where does research go from here?
Bitterman: The next step in this research is to carry out clinical studies to understand whether and how large language model-assisted SDoH screening improves patient care and outcomes in a real clinical setting. We are also researching how large language models learn bias, and how that bias may be addressed at a more fundamental level, which I believe is essential for safe and equitable implementation of large language models across clinical applications.
Healio: Anything else to add?
Bitterman: In this research, we not only used large language models to detect SDoH but also to generate synthetic clinical data for fine-tuning better performing, smaller models. We are in early days, but the potential of using generative AI to create synthetic clinical data in this way is a promising avenue for research that could help preserve patient privacy.
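The synthetic-data idea Bitterman describes, generating artificial labeled clinic-note text to fine-tune a smaller model without exposing real patient records, can be sketched as follows. The study used large language models as the generator; the template sampler below is only an illustrative stand-in, and every template, slot value, and label is invented for this example.

```python
# Stand-in for an LLM generator: sample synthetic (sentence, SDoH label)
# pairs from templates. The resulting pairs could serve as fine-tuning data
# for a smaller classifier; no real patient data is involved.
import random

TEMPLATES = {
    "housing": "Patient reports {detail} and is currently {status}.",
    "transportation": "Patient {detail} because of {status}.",
}
FILLERS = {
    "housing": {"detail": ["a recent eviction", "mold in the apartment"],
                "status": ["staying in a shelter", "couch-surfing"]},
    "transportation": {"detail": ["missed two visits", "arrived late"],
                       "status": ["no access to a car", "unreliable bus service"]},
}

def synth_example(label: str, rng: random.Random) -> tuple[str, str]:
    """Generate one synthetic (sentence, label) training pair."""
    slots = {slot: rng.choice(values) for slot, values in FILLERS[label].items()}
    return TEMPLATES[label].format(**slots), label

rng = random.Random(0)  # seeded for reproducibility
dataset = [synth_example(label, rng) for label in ("housing", "transportation")]
for sentence, label in dataset:
    print(label, "->", sentence)
```

The privacy appeal is that only generated text, never source notes, needs to leave the secure environment when training the smaller model.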
References:
- Generative artificial intelligence models effectively highlight social determinants of health in doctors’ notes. https://www.massgeneralbrigham.org/en/about/newsroom/articles/generative-artificial-intelligence-models-effectively-highlight-social-determinants-of-health-in-doctors-notes. Published Jan. 11, 2023. Accessed Jan. 8, 2023.
- Guevara M, et al. npj Digit Med. 2024;doi:10.1038/s41746-023-00970-0.