May 10, 2023

Chatbot accurate but vague in answering questions about cancer misinformation


Although an artificial intelligence chatbot provided accurate information about cancer in most cases, some of its responses could be misinterpreted, according to researchers at Huntsman Cancer Institute at the University of Utah.

“Fortunately, on blinded review, it appeared that ChatGPT was an accurate source of information regarding common cancer misconceptions, with some important caveats,” researcher Skyler B. Johnson, MD, physician-scientist at Huntsman Cancer Institute and assistant professor in the department of radiation oncology at the University of Utah, told Healio. “We felt that ChatGPT’s answers were a bit more vague than what people would like to see. There was concern that patients might interpret that in a way that could lead to confusion, poor decision-making and decreased survival.”


In the study published in JNCI Cancer Spectrum, Johnson and colleagues evaluated the accuracy and dependability of ChatGPT’s information about cancer. They assessed the chatbot’s answers to cancer-related questions based on the NCI’s common myths and misconceptions resource.

Johnson spoke with Healio about his study’s findings and their potential implications.

Healio: Can you explain your rationale for this study?

Johnson: We recently conducted a study of social media misinformation, predominantly on Facebook but also on Twitter, Reddit and Pinterest, that showed about one in every three articles about cancer contains misinformation. Those articles had the potential to harm people. We’re now trying to evaluate new methods by which people might seek cancer information and determine whether those are accurate sources for patients. In this study, we took the NCI’s common myths and misconceptions list and fed each of those questions into ChatGPT to determine whether the outputs were accurate and reliable, as sketched below. We found that 97% of the answers were correct.
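The study does not publish code, and the researchers describe querying ChatGPT itself rather than any particular tool. Purely as an illustration of the general approach, the sketch below feeds a few myth-style questions to a chat model through the OpenAI Python client (v1.x); the question list, model name and prompt format are assumptions for the example, not the study’s actual setup.

```python
# Illustrative sketch only: loop a list of myth-style questions through a
# chat model and print each answer for later expert review. The questions
# and model here are placeholders, not those used in the study.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical examples in the style of NCI's myths-and-misconceptions list.
questions = [
    "Is cancer a death sentence?",
    "Will eating sugar make my cancer worse?",
    "Do artificial sweeteners cause cancer?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    print(question)
    print(response.choices[0].message.content)
    print("-" * 40)
```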

However, when we compared it with NCI’s answers to those same questions, NCI used more definitive terms. If you were to speak with a communications expert or public health expert, they would tell you that, depending on the patient’s health literacy, it is important for this information to be clear and direct.

When we posed a question to ChatGPT, it eventually provided an accurate answer, but it might start off by saying the data are mixed, or that there is no conclusive evidence one way or the other. This might lead a patient with cancer to assume that they should go ahead and try whatever treatment is in question. Yet most physicians know that the recommendations we make are based on available evidence. You need evidence to make recommendations.

Healio: What are the implications of this study?

Johnson: It’s challenging, because we were hoping to determine whether we could tell patients interested in using ChatGPT that they could, or should, use it as a resource. Although it was accurate for the most part, it left many of the reviewers and co-investigators on this study wondering what the right recommendation would be, given the vague and unclear answers. The other thing that gave us pause is that these artificial intelligence chatbots work by drawing on available information, and the questions we used were very common, so there was a good amount of information to draw on. We’re concerned about patients who might seek less common information.

The other challenging issue is that we’re now on the fourth version of ChatGPT, and it’s continuing to evolve and change. The main thing I expect to come from this study is that we should continue to monitor these sources for the risk of producing inaccurate information about cancer.

Healio: What is next in your research on this topic?

Johnson: The plan is to determine whether patients are actually seeking out ChatGPT as a source of information and whether they perceive its answers to be accurate and reliable enough to trust those recommendations. At the end of the day, if patients aren’t going there, or if they don’t trust it, then it’s less of a concern. However, if patients who are nervous about raising this type of thing with their physicians are looking to things like ChatGPT for answers instead, that’s something we need to keep an eye on. We know from past data that patients who use unproven cancer treatments often don’t report it to their physicians out of concern that they will be judged. So, that’s a major concern.

For more information:

Skyler B. Johnson, MD, can be reached at Huntsman Cancer Institute, 1950 Circle of Hope, Salt Lake City, UT 84112; email: skyler.johnson@hci.utah.edu.