Empathic AI responses to patient inquiries could be the answer for ‘overburdened’ clinicians
Key takeaways:
- Chatbots generated responses similar to those of health care providers in content quality and draft usability.
- AI-driven responses may cause communication issues because of their length and use of complex language.
Chatbots' use of more positive and subjective language produced more empathetic responses to patient questions submitted through EHR prompts compared with those of primary care physicians, according to results of a randomized study.
Despite AI-driven responses comparing favorably to PCPs, the findings — published in JAMA Network Open — showed chatbots generated longer and more complex responses.
“The findings reveal that, under the right circumstances, these AI-generated drafts could be useful for overburdened providers answering their patients' private messages,” William R. Small, MD, MBA, a clinical assistant professor in the department of medicine at NYU Grossman School of Medicine, told Healio. “However, our findings did not take into account the perspectives of patients and other nonphysician health care workers, which will be critical to understand.”
In the single-center study, PCPs reviewed the communication style — that is, understandability, tone and verbosity — content quality and empathy of 175 ChatGPT-4-generated draft responses to patient in-basket questions and compared them with those of 169 health care provider (HCP) responses.
“It was imperative that we understood how our PCPs perceived the drafts and, further, how their content compared to human responses, so that we could make data-driven decisions when engineering our prompts to the language model,” Small explained.
The AI responses were generated with standard electronic health record prompts.
The 16 PCPs graded the outcomes using 5-point Likert scale questions — with 1 indicating “strongly disagree” and 5 indicating “strongly agree” — and additionally answered whether they would use the draft or start a new one.
The PCPs were blinded to which responses were human-written and which were AI-generated.
Compared with HCP responses, generative AI responses were graded higher in communication style (mean, 3.7 vs. 3.38) but were comparable in content quality (mean, 3.53 vs. 3.41) and the proportion of usable drafts (mean, 0.69 vs. 0.65).
The researchers found that generative AI responses were 125.5% more empathetic, 61.5% more positive and 74.2% more subjective than HCP responses.
However, Small pointed out that the AI responses “tended to use wordier, more complex language at a higher reading level than their human counterparts.”
He explained that the results “hopefully mean we are one step closer to reducing the burden that in-basket messages have on our outpatient health care professionals without negative impacts on patients or the patient-doctor relationship.”
“By recognizing the early pitfalls we observed in the standard prompts we were issued (without editing them to our liking), prompt engineers who implement this technology into Epic's EHR will have somewhere to start,” he said.
The general applicability of the study findings is limited by the single-center design and small sample size, the researchers wrote.
Small noted that there are several directions future research could go, “and at the heart of it is defining what good communication looks like with our patients.”
“With time, I'm hopeful the research community will ensure that AI draft responses are equitable to all groups, utilized in a way that saves health care professionals time, and aligns with the goals of all humans involved, especially patients,” he said.
References:
- AI tool successfully responds to patient questions in electronic health record. Available at: https://www.eurekalert.org/news-releases/1051422. Published July 16, 2024. Accessed July 19, 2024.
- Small W, et al. JAMA Netw Open. 2024;doi:10.1001/jamanetworkopen.2024.22399.