AI in hematology: ‘The good, the bad and the ugly’
SAN DIEGO — The increased use of AI in health care has generated tremendous excitement for its potential to guide treatment decision-making, improve outcomes and increase efficiency.
However, that enthusiasm is tempered by concerns, such as the risk for depersonalized care or privacy breaches.
As clinicians and patients move together into uncharted territory, providers have the responsibility to explain and clarify the benefits and risks of AI to their patients, according to Gwen L. Nichols, MD, chief medical officer of The Leukemia & Lymphoma Society.
Nichols delivered a talk during an ASH Annual Meeting and Exposition special interest session titled “The role of artificial intelligence in hematology practice: the good, the bad and the ugly.”
Nichols discussed how hematologists and oncologists can best address patients’ concerns as AI’s role in medicine inevitably expands.
She framed her talk in the context of a quote from Stephen Hawking, a theoretical physicist and cosmologist, who said: “The rise of powerful AI will be either the best — or the worst — thing to ever happen to humanity. We do not yet know which.”
“We have to meet patients where they are, because they’re going through the same questions we are — except with the added stress of having cancer, in many cases,” Nichols told Healio. “More than ever, these patients need to be considered and treated as individuals.”
‘It’s a trust question’
Nichols discussed the many dichotomous messages around AI that patients receive from popular media and other sources.
AI signals progress, she noted, but it also poses the threat of job loss. It provides increased efficiency, she added, but also the possibility of increased discrimination.
“Patients see it as a way to create knowledge, or as a way for bad actors to use knowledge against [them],” Nichols told Healio. “They see it as incredibly smart but also very depersonalizing.”
As AI’s role expands, some patients fear the possibility of losing face-to-face interaction with their health care providers. This concern may be justified, given that in-person communication already is dwindling.
“We already know that doctors are spending more time on the computer doing notes and less time sitting face-to-face with the patient and having a conversation,” Nichols said. “Providers have less time to see patients, and I think patients are worried — rightly so — that they are going to lose even more face-to-face contact with their doctors.”
AI-associated privacy breaches also are a major concern — particularly for people with cancer who may not want their diagnosis to be public knowledge.
Perhaps even more upsetting to patients is the concern about who truly is determining the best path forward in their cancer care.
“It’s a trust question,” Nichols said. “They want to know whether the computer is actually making the decisions about how they are going to be treated. That’s a worry. They need to be reassured that they have not become part of an algorithm, but rather that they are being looked at as an individual.”
Discussing the benefits
Along with reinforcing the idea that AI will not involve sacrificing personalized care, clinicians should explain to patients the many ways in which revolutionary technology can enhance their care, Nichols said.
One area in which AI likely will improve care is through more efficient tracking of a patient’s medications. Clinicians and pharmacies can utilize AI to monitor a patient’s different medications and detect potentially harmful drug-drug interactions, Nichols said.
“There’s a lot of polypharmacy going on in our world, and it is difficult for a primary care doctor, for instance, to know if a patient’s new blood pressure medicine is going to interact with the medicine prescribed by their oncologist,” she said. “AI can make polypharmacy more convenient and can protect them. I think that is something that would resonate with patients.”
AI also likely will improve research that requires analysis of large datasets, Nichols said.
“Increasingly, cancer is affecting people who have grown up with computers,” Nichols told Healio. “I think people will begin to see how well AI can work for big sets of data and for research into rare diseases.”
Nichols cited research that showed patients are willing to accept AI reviewing their X-rays, conducting initial screenings for skin cancer, and even providing second opinions on their diagnosis. In such cases, Nichols emphasized that patients do want to be certain that the AI is functioning in a complementary or supplemental capacity.
“They need to be assured that the doctor is the final decision-maker,” she said.
At a practical level, AI also can streamline logistical tasks, Nichols said. One example is managing workflow in outpatient chemotherapy infusion rooms. She mentioned an AI app being tested by graduate students that would oversee this process and save time for providers and patients.
“Normally, you have a nurse who spends their entire day figuring out who can move into which chair. When someone has a complication and it takes longer and the next person gets bumped, it’s a disaster,” she said. “People end up spending hours in the waiting room waiting for a chair to open. They’re developing a way that AI can notify the patient, tell them how long the delay will be, and help providers adjust the schedule so it runs more smoothly.”
‘You’re still in there’
Nichols referred to an image she generated using AI that depicts a robot performing surgery. She said the image captures a prevailing fear among patients and the general public regarding AI.
“Patients worry about a world where health care is making one-size-fits-all decisions, where the robots are taking over,” she said. “It is our job to explain how AI is being used in an individual practice.”
Nichols suggested framing discussions of AI around its potential risks and benefits, which is a type of conversation clinicians are accustomed to having. She discussed a study in which patients received the same health care information — half were told AI generated the information and the other half were told their health care team generated it.
“It was exactly the same information, but trust in the physician- or health care team-generated information was much higher,” Nichols said. “It’s important they know you’re still in there, regardless of where you get the additional information to help in their care.”
Reference:
- Nichols G. The role of artificial intelligence in hematology practice: the good, the bad, and the ugly. Presented at: ASH Annual Meeting and Exposition; Dec. 7-10, 2024; San Diego.
For more information:
Gwen L. Nichols, MD, can be reached at gwen.nichols@lls.org.