ChatGPT may provide moderately accurate overview of orthopedic conditions
Key takeaways:
- ChatGPT offered a moderately accurate overview of orthopedic conditions compared with information from a professional organization.
- However, the chatbot lacked descriptive assessments of treatment options and risk factors.
When prompted, ChatGPT offered a moderately accurate overview of 40 orthopedic conditions. However, the chatbot often lacked descriptive assessments of treatment options and risk factors, according to published results.
Chandler A. Sparks, MD, MS, from the Hackensack Meridian School of Medicine in New Jersey, and colleagues assessed the quantity and accuracy of ChatGPT-3.5 outputs for treatment options, risk factors and symptoms for 40 orthopedic conditions after prompting the chatbot with general patient-focused inquiries. They then compared responses with the American Academy of Orthopaedic Surgeons (AAOS) OrthoInfo website for quantity and accuracy.
Compared with the AAOS OrthoInfo website, Sparks and colleagues found ChatGPT provided a similar quantity of symptoms for each condition. However, they noted the chatbot provided significantly fewer treatment options (mean difference [MD] = –2.5) and risk factors (MD = –1.1) per condition. They also found ChatGPT provided nondescript treatment options for 50% of conditions.
“ChatGPT provides at least moderately accurate outputs for general inquiries regarding orthopedic conditions but lacks in the quantity of information it provides for risk factors and treatment options,” Sparks and colleagues wrote in the study.
In addition, an attending orthopedic surgeon determined ChatGPT was mostly accurate for 65% of conditions and moderately accurate for 35% of conditions, according to the study.
Sparks and colleagues concluded that clinical guidance from professional organizations remains the preferred source of musculoskeletal information over AI chatbots.