October 29, 2024

ChatGPT may help physicians answer questions from parents

Key takeaways:

  • ChatGPT can provide patient-specific answers to questions parents may have about their children.
  • More research is needed before pediatricians can begin directing parents to LLMs.

ChatGPT and other large language models could help physicians field questions from patients and their caregivers, although researchers remain concerned about accuracy and HIPAA compliance, according to findings published in Pediatrics.

“When a child is admitted to the pediatric ICU, it is often an incredibly stressful and frightening experience for parents, compounded by an overwhelmingly complex informational landscape with lots of technology and terms that are unfamiliar,” R. Brandon Hunter, MD, FAAP, assistant professor of pediatrics in the division of critical care medicine at Texas Children's Hospital and Baylor College of Medicine, told Healio about the new study.

“Large language models (LLMs) like ChatGPT hold great promise to deliver information in simple, jargon-free and patient-specific ways, which is incredibly exciting,” Hunter said.

Hunter and colleagues created assessments and plans for three hypothetical patients: one with respiratory failure, one with septic shock and one with status epilepticus. They developed eight questions that parents may ask in each scenario.

The researchers entered the patient profiles and questions into ChatGPT-4 and asked it to respond to the questions at a sixth-grade reading level. They asked six PICU physicians who were not involved in developing the scenarios or questions to evaluate the responses based on accuracy, completeness, empathy and understandability.
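
The paper does not publish the team's actual prompts or code. As a rough sketch only, a query of this shape could be sent to a chat-style LLM through the OpenAI Python SDK; the model string, prompt wording and clinical text below are illustrative placeholders, not the study's materials:

```python
# Illustrative sketch only -- not the study's actual prompts or code.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the clinical details are invented placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A condensed, hypothetical assessment and plan standing in for the
# patient profiles the researchers supplied to the model.
patient_profile = (
    "Assessment: 3-year-old admitted to the PICU with septic shock. "
    "Plan: broad-spectrum antibiotics, fluid resuscitation, "
    "norepinephrine infusion, continuous monitoring."
)
parent_question = "Why does my child need a medicine to raise her blood pressure?"

response = client.chat.completions.create(
    model="gpt-4",  # the study used ChatGPT-4; this API model name is an assumption
    messages=[
        {
            # System message: instructions plus the patient-specific context.
            "role": "system",
            "content": (
                "You are helping a PICU physician answer a parent's question. "
                "Use the patient information provided. Respond at a sixth-grade "
                "reading level, with empathy and without medical jargon.\n\n"
                + patient_profile
            ),
        },
        # User message: the parent's question itself.
        {"role": "user", "content": parent_question},
    ],
)

print(response.choices[0].message.content)
```

Placing the clinical context and reading-level instruction in the system message while keeping the parent's question as the user message mirrors how chat-style APIs separate standing instructions from individual queries.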

The language model generated 24 responses, all of which included at least one sentence with patient-specific information. The researchers found that a median of 31% (interquartile range, 26%-39%) of the sentences in each response were patient-specific. Most sentences (59%) explained the reasoning for clinical decisions, such as medication use, drawing on information that was not explicitly included in the prompt, according to the researchers.

The reviewers graded the responses’ accuracy from one to six, with six being the most accurate. Most responses earned high scores — only four had a median score lower than five. Four individual reviews scored responses as a three, meaning they were more inaccurate than accurate, but the reviewers felt the responses would not cause harm to the patient or family, the researchers wrote.

“Reliance on information provided by LLMs can be perilous because they are prone to hallucinations, where they respond with information that is incredibly cogent and believable, but false,” Hunter said. “As such, the best-use cases right now are those where a medical decision would not be immediately made by the model.”

Hunter said the research is still in the early stages, so pediatricians should not yet direct parents to LLMs for information. His team is currently enrolling parents in a clinical pilot study that will use a HIPAA-compliant LLM to answer questions similar to those posed in this study.

“While these tools show promise, it is crucial to view them as potential aids to enhance communication and education, not as replacements for direct physician-patient interactions,” Hunter told Healio. “Our overall goal in this is not to replace doctors and nurses or the human element in health care; it is to promote greater family engagement and active participation in their child’s health care journey.”

For more information:

R. Brandon Hunter, MD, FAAP, can be reached at rxhunter@texaschildrens.org.