The potential of everyday AI use in primary care
ORLANDO — AI has the potential to supplement the work of primary care providers in a variety of ways, but should be used carefully, according to a presenter at the AIMed24 Annual Meeting.
Sabrina Braham, MD, FAAP, a clinical assistant professor of pediatrics at the Stanford University School of Medicine, told Healio that AI is “one of the best opportunities that we have” to revamp a system that is currently not working for providers or patients.
“The goal in giving my talk was to inspire providers and decision-makers to be thinking creatively about how AI can be used, and then to be aware of risks and challenges that we’ll encounter as we roll out these solutions,” Braham said.
PCPs will mostly encounter AI in medicine through scribes and systems that message patients, Braham said. She also described examples of how AI can help PCPs with everyday tasks beyond what they might typically see in the electronic health record (EHR).
For example, before generative AI, the needs of a 9-year-old patient with ADHD and autism might have overwhelmed her, Braham said. Now she has tools to address the family’s nonclinical issues, such as the child not taking his medication because he dislikes its flavor or texture.
“I used ChatGPT to answer questions like ‘give me a list of stimulant medicines that come in powder or liquid form that can be mixed in food’ that I can use to help this child tolerate their medicine,” she said. “Then I asked ChatGPT to organize them based on duration of action, because that’s how we think about them when we prescribe.”
The AI was also able to indicate which medications were available in generic form and covered by the family’s insurance plan. With that information in hand, Braham could present it to the patient and his parents at their literacy level.
However, Braham also warned of some of the dangers of using AI, such as accuracy errors and gray areas around legal liability, and emphasized the importance of clinicians exercising their own judgment.
“It requires a lot of cognitive energy for us to understand what the [EHR] is recommending and then to decide ‘is this appropriate in our patient’s case?’” Braham said. “If a physician is accepting the recommendations of the model 100% of the time, the model should flag that and let the physician know ‘hey, hang on, you’re not using your judgment.’”