
April 20, 2023

Q&A: AI offers medical advice to clinicians like a colleague

Key takeaways:

  • Researchers developed an AI system that communicates biomedical research to help physicians make decisions.
  • Clinicians said the technology is “intuitive and easy to understand,” a developer told Healio.

Artificial intelligence has offered exciting innovations for clinicians, but the next step may be AI that communicates like a clinician, too.

Qian Yang, PhD, an assistant professor of information science at the Cornell Ann S. Bowers College of Computing and Information Science and co-founder of the digital and AI literacy initiative, and colleagues developed a system to help validate AI suggestions based on evidence from journals and clinical trials.

They then conducted a study that found that, if AI tools can communicate with the physician like a colleague — identifying and presenting relevant biomedical research that supports the decision — then physicians can better weigh the merits of the recommendation.

Yang and colleagues will present their study at the Association for Computing Machinery CHI Conference on Human Factors in Computing Systems later this month. Healio spoke with Yang to learn more about the system, its benefits and drawbacks, and whether physicians prefer the new approach.

Healio: Will you describe the system you built?

Yang: We built a system that helps explain to doctors whether an AI diagnostic or treatment suggestion is trustworthy or not, much as one doctor would explain it to another — by referencing relevant clinical research literature.

Hospitals have begun using “decision-support tools” powered by artificial intelligence that can diagnose disease, suggest treatment or predict a surgery’s outcome. But no algorithm is correct all the time, so how do doctors know when to trust the AI’s recommendation? We thought: if AI tools can counsel the doctor like a colleague — pointing out relevant biomedical research that supports the decision — then doctors can better weigh the merits of the recommendation. So, we did exactly that using GPT-3, a pre-trained large language model, to find and summarize relevant research to calibrate clinician trust.
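The pipeline Yang describes — retrieve relevant literature for a decision-support suggestion, then have a large language model summarize that evidence for the clinician — could be sketched roughly as below. This is an illustrative sketch only, not the study's implementation: the use of Biopython's Entrez client for PubMed, OpenAI's legacy GPT-3 completions endpoint, and the example query and prompt wording are all assumptions.

```python
# Illustrative sketch: fetch PubMed abstracts related to an AI suggestion,
# then ask GPT-3 to summarize how the evidence bears on that suggestion.
# Not the authors' code; query, prompt, and model choice are placeholders.
from Bio import Entrez   # pip install biopython
import openai            # pip install "openai<1.0" (legacy Completion API)

Entrez.email = "you@example.org"        # required by NCBI Entrez
openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder

def fetch_abstracts(query, n=3):
    """Return up to n PubMed abstracts matching the query (rough parsing)."""
    record = Entrez.read(Entrez.esearch(db="pubmed", term=query, retmax=n))
    ids = record["IdList"]
    if not ids:
        return []
    text = Entrez.efetch(db="pubmed", id=",".join(ids),
                         rettype="abstract", retmode="text").read()
    return [a.strip() for a in text.split("\n\n\n") if a.strip()]

def summarize_evidence(suggestion, abstracts):
    """Ask GPT-3 to explain how the retrieved evidence relates to the suggestion."""
    prompt = (
        f"An AI decision-support tool suggests: {suggestion}\n\n"
        "Relevant abstracts:\n" + "\n\n".join(abstracts) +
        "\n\nSummarize, for a clinician, how this evidence supports or "
        "contradicts the suggestion."
    )
    resp = openai.Completion.create(model="text-davinci-003",
                                    prompt=prompt, max_tokens=300)
    return resp.choices[0].text.strip()

if __name__ == "__main__":
    suggestion = "Start levetiracetam for new-onset focal epilepsy."  # example only
    abstracts = fetch_abstracts("levetiracetam focal epilepsy efficacy")
    print(summarize_evidence(suggestion, abstracts))
```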

Healio: What are some of the benefits and drawbacks you’ve seen?

Yang: We developed prototypes of this system for decision-support tools across three specialties — neurology, psychiatry, and palliative care. Clinicians we interviewed who practice in these specialties said they appreciated the clinical evidence, finding it intuitive and easy to understand, and preferred it to an explanation of the AI’s inner workings. Moreover, this is a highly generalizable method. This type of approach could work for all medical specialties and other applications where scientific evidence is needed, such as Q&A platforms that answer patient questions or even automated fact-checking of health-related news stories.

While generalizable, this approach will be mainly useful in less time-sensitive clinical settings. Physicians in emergency care, for example, are less likely to have the time to digest literature. Those physicians said they would like AI to succinctly summarize all of the literature evidence, which can be tricky, since such a summary cannot afford to miss critical nuances.

Healio: Do you think that physicians are able to better weigh the merits of the recommendation with this kind of system?

Yang: I think so! Previously, most AI researchers have tried to help doctors evaluate suggestions from decision-support tools by explaining how the underlying algorithm works, or what data were used to train the AI. But education in how AI makes its predictions wasn’t sufficient, because every AI system makes errors in some (even if rare) cases. It’s the doctors’ job to tell in which patient cases the AI is making an error, and that can be very challenging even for an AI expert with an abundance of time. In contrast, doctors routinely read and critically examine results from clinical trials. Evidence from the literature or trial reports offers a much more intuitive way of understanding AI recommendations.

Healio: Do physicians prefer this presentation of AI — a system that mimics the interpersonal communication that colleagues offer each other?

Yang: Yes. Physicians shared that such tools can be very useful, especially for the doctors or nurse practitioners who work in low-resource hospitals. Large research hospitals sometimes have full-time clinical librarians to help physicians look up literature evidence. This system can potentially bring analogous benefits to those working in less well-resourced hospitals.

Healio: Why did you decide to develop this system? Why is AI important in health care?

Yang: The idea of leveraging machine intelligence in health care in the form of decision-support tools has fascinated health care and AI researchers for decades. These tools promise improved health care quality through complementary insights on patient diagnosis, treatment options, and likely prognosis. In recent years, the adoption of electronic medical records along with advances in big data and machine learning technologies has created the perfect environment for AI to impact clinical practice.
