
November 09, 2023

Use of chatbot AI ‘hazardous’ without ID consultant input, experts say


Key takeaways:

  • The performance of ChatGPT-4 was satisfactory compared with ID consultations for positive blood cultures.
  • The artificial intelligence’s output was not optimal or directly translatable to clinical practice.

Although artificial intelligence could be helpful for some aspects of patient care, relying on the technology without the aid of an infectious disease consultation could be hazardous for patients, a recent study showed.

“Since the release of ChatGPT, there has been tremendous interest and debate about the changes that generative artificial intelligence (AI) will bring to our society and our daily lives. We can bet that generative AI will produce a huge change in the way we practice clinical medicine,” Alexis Maillard, MSc, a member of the infectious diseases stewardship team at the Paris Centre University Hospital, told Healio.

Using Chatbot AI technology without an infectious disease physician consult could be hazardous to patients, although experts said the technology could be useful as an assistive tool for reports. Image: Adobe Stock.

“As we, like everyone, were impressed by ChatGPT's performance on many topics, we wondered if ChatGPT could replace us as infectious disease (ID) consultants,” Maillard said.

To assess the performance of a chatbot AI platform in managing patients with positive blood cultures in a real-life setting, Maillard and colleagues prospectively provided data from consecutive ID consultations for a first positive blood culture to ChatGPT-4 over a 4-week period.

ChatGPT-4 used these data to generate a comprehensive management plan, and the researchers then compared the plan suggested by ChatGPT-4 with the plan suggested by ID consultants based on literature and guidelines.

In total, 44 cases with a first episode of positive blood culture were included in the study. Overall, the researchers found that ChatGPT-4 provided “detailed and well-written responses” in all cases and that the AI’s diagnoses were identical to those of the consultant in 59% of the cases.

Suggested diagnostic workups were “satisfactory” in 80% of cases — meaning no important diagnostic tests were missing, according to the study — and empirical antimicrobial therapies were “adequate” in 64% of cases and harmful in 2%, whereas source control plans were “inadequate” in 9% of cases.

The study also showed that definitive antibiotic therapies were “optimal” in 36% of patients and harmful in 5%. Researchers added that management plans were considered “optimal” for only one patient, “satisfactory” for 17 and “harmful” for seven.

Based on these findings, the researchers said that the use of ChatGPT-4 without consultant input remains “hazardous” when seeking expert medical advice.

Maillard said that the performance of any advisor, including chatbot AI platforms but also humans, “depends critically on the information provided” and that a chatbot AI without a human is “not yet able to collect clinical information properly.” Maillard added, though, that if experts learn how to use it, the technology has the potential to improve knowledge and practice.

“So far, do not use ChatGPT for infectious disease advice: ask a specialist,” Maillard said. “However, given its ability to write clear and informative medical reports, why not use generative AI as an assistive tool to improve the formal quality of our reports in the future?”