
October 10, 2023

AI chatbots should be supplementary — not primary — source of medical information


AI-powered chatbots generally provided accurate information in response to search queries about the most common cancer types, but their responses scored poorly on actionability, according to study results published in JAMA Oncology.

The poor actionability scores for queries about skin, lung, breast, colorectal and prostate cancers can be attributed, at least in part, to responses written at a college reading level, using medical terminology unlikely to be familiar or helpful to most people, researchers said.

Researchers found that chatbots provided generally accurate information but did so at a college-based reading level. Source: Adobe Stock.

“A couple studies have shown that the average patient reading level could be somewhere around sixth grade, so when what should be useful information from these chatbots is coming back at a college reading level, it’s very difficult for a lot of patients to understand,” Abdo E. Kabarriti, MD, assistant professor in the department of urology at SUNY Downstate Health Sciences University and chief of urology at Coney Island Hospital, told Healio.
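The gap Kabarriti describes is quantified with readability formulas. As a minimal sketch, the widely used Flesch-Kincaid grade-level formula estimates the U.S. school grade needed to understand a passage from average sentence length and average syllables per word. The syllable counter below is a rough vowel-group heuristic, and this is an illustration only, not the instrument the study used:

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count contiguous vowel groups; at least 1 per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid grade level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

# Hypothetical examples: plain-language vs. clinical phrasing of similar advice.
plain = "Skin cancer can often be cured. See a doctor if you find a new spot."
clinical = ("Cutaneous malignancies frequently demonstrate favorable prognoses "
            "when diagnosed expeditiously via dermatological evaluation.")
print(round(flesch_kincaid_grade(plain), 1))     # roughly elementary-school level
print(round(flesch_kincaid_grade(clinical), 1))  # well above college level
```

The clinical phrasing scores far above the plain version, illustrating why medical terminology pushes chatbot answers beyond the roughly sixth-grade level of the average patient.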

Kabarriti spoke with Healio about how useful current AI-powered chatbot services can be for patients searching online for information about a recent cancer diagnosis or symptoms that may warrant a doctor visit, and about what the future may hold as these AI tools continue to improve.

Healio: What motivated you to conduct this study?

Kabarriti: There’s a lot of research out there that looks at the different things patients use to get their information. In a separate study, we looked at the quality of the content on YouTube, because often patients are going to look on Google, YouTube, X (formerly known as Twitter) and TikTok.

In the last year or so, these chatbots have taken on a life of their own and I anticipate will become ubiquitous — sort of the new search engines. That could lead patients to rely on these chatbots to get a lot of their medical information, so we wanted to look at the quality of the information that’s on them presently to see whether it’s accurate for patients or if it’s something we need to potentially worry about as clinicians.

Healio: Can you briefly describe the findings? Did anything surprise you?

Kabarriti: Happily, there was not much misinformation on the topics that we looked at, which were the top five cancer types. We tried to replicate what patients were looking up, so we used Google Trends to find out the most searched topics and found the chatbots we used had reputable websites that they pulled information from for their answers.

We also found that the output was written at a very high reading level, way above that of the average consumer or the average patient.

Furthermore, we found that there was not much actionability in these outputs, so they didn’t really tell patients what to do. At most, they would say to consult your physician. In some ways, I found it comforting that there wasn’t too much misinformation at this time, and that these tools could ultimately work to supplement or complement the clinician-patient interaction rather than potentially replace it.

Healio: What do you consider the most important take-home message of these findings?

Kabarriti: I believe chatbots are reliable but are also of questionable utility — at this point — when used on their own. Their usefulness is limited if the majority of people who are reading them can’t understand them, and they don’t direct patients very well in terms of what to do.

Healio: Can you describe the next steps in research and key questions that need to be answered?

Kabarriti: There’s a lot of room for these chatbots to grow. Number one, I would say that these are obviously the early versions of these chatbots, so I’m sure they’re only going to continue to improve. Number two, as clinicians, we must figure out how to utilize these and incorporate them into our practice because — inevitably — patients will use them, and we would be wise to leverage them in a helpful way.

For example, one of the things you can do with these chatbots is specifically request that they simplify a medical concept to a specific reading level. Sometimes, when we’re delivering information as physicians, we could use these tools to make sure we’re communicating effectively with our patients. And there are plenty of additional ways chatbots can be deployed; we are continuing to do research and look at different angles for how they can be utilized.

Healio: Is there anything additional you would like to mention?

Kabarriti: I just want to reiterate that these chatbots have plenty of utility and I believe they will become a part of our lives in medicine in the future. I think it’s important to embrace them and learn their strengths and weaknesses as early as possible.

Maybe part of getting these chatbots to be most useful in our specific community would include working with the companies that are developing the technology. We can help play a role in correcting errors and minimizing the potential that misinformation makes it to patients. It would be a win-win for everybody.

There’s a lot of excitement on my part about where these chatbots are and where they can go from here. I was a computer science major in college — these things really interest me, and I’m very curious to see how they become a part of our life.

For more information:

Abdo E. Kabarriti, MD, can be reached at abdo.kabarriti@downstate.edu.
