ASCO principles designed to guide development, ‘responsible use’ of AI in oncology
Key takeaways:
- ASCO’s principles to guide AI in oncology care include transparency, equity and accountability.
- Training and communication can help clinicians integrate AI tools into care while minimizing potential harm.
During a shareholder presentation in front of more than 500 physicians in April, Debra A. Patt, MD, PhD, MBA, FASCO, asked how many in attendance used large language models like ChatGPT and Claude.
Fewer than five raised their hands, and those who did seemed to use the AI software like a search engine, without an understanding of prompt optimization.
“There is a large opportunity to educate how physicians can best use AI tools to improve the care of the patients we serve,” Patt, executive vice president of Texas Oncology and chair of ASCO’s AI task force, told Healio.
“When you and I started driving, we used directions that we printed out or maps. It was difficult to learn how to do it differently,” Patt added. “The greatest fear I have is that health care organizations will be late adopters of digital health care tools that can improve care delivery.”
ASCO’s AI task force — composed of about a dozen individuals, including board members, ASCO member physicians, patients and policy experts — issued “Principles for the responsible use of artificial intelligence in oncology” to guide the implementation of AI and ensure its use benefits patients and clinicians.
The six guiding principles are:
- Transparency: “AI tools and applications should be transparent throughout their lifecycle.”
- Informed stakeholders: “Patients and clinicians should be aware when AI is used in clinical decision-making and patient care.”
- Equity and fairness: “Developers and users of AI should protect against bias in AI model design and use, and ensure access to AI tools in application.”
- Accountability: “AI systems must comply with legal, regulatory and ethical requirements that govern use of data. AI developers should assume responsibility for their AI systems, their decisions, and their adherence to legal, regulatory and ethical standards.”
- Oversight and privacy: “Decision-makers should establish institutional compliance policies that govern the use of AI, including protections that guard clinician and patient autonomy in clinical decision-making and privacy of personal health information.”
- Human-centered application of AI: “Human interaction is a fundamental element of health care delivery; AI does not eliminate the need for human interaction and should not be used as a substitute for sensitive interactions that require it.”
AI can aid cancer care in many ways, according to task force members. Examples include diagnosis assistance, outcome predictions, treatment recommendations, identifying clinical trial opportunities, reducing the workload burden of physicians, and health care optimization.
However, AI use also carries risks. These include AI hallucinations, in which models generate misleading or inaccurate results. AI also can exacerbate disparities or biases, erode trust, and change clinicians’ roles, which could affect the quality of patient-centered care.
“Science can be used for good and evil,” Patt said. “We want patients to benefit from this tremendous innovation, and it’s a very exciting time. We have to be cognizant and responsible about the fact that these tools might have bias and errors. We need responsible guiding principles to see us through how we optimally benefit from these important technologic advances.”
Complexities, future of AI use
AI tools can be helpful and problematic at the same time.
Patt shared an example of software that helps with administrative tasks, such as appointment rescheduling. It reduces staff burden but can contribute to care disparities.
“If you have someone who has canceled their appointments a lot, that may deprioritize their reschedule,” Patt said. “It may be that they had to cancel their appointments because they have socioeconomic burdens to health care, transportation insecurity [or] a job they can’t manage when they have cancer. Deprioritizing them with subsequent appointments would provide inequity in care delivery because you are disenfranchising an already vulnerable part of the patient population, just as an example.”
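To make the concern concrete, here is a minimal, hypothetical sketch of how a naive rescheduling heuristic could encode this bias. The scoring rule and fields are invented for illustration and are not drawn from any actual scheduling product.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    prior_cancellations: int  # count of past canceled appointments
    days_since_request: int   # how long the patient has waited for a new slot

def naive_priority(p: Patient) -> float:
    """Hypothetical heuristic: reward waiting time, penalize cancellations.

    The penalty term is where bias creeps in: patients who cancel because of
    transportation insecurity or inflexible jobs get pushed down the queue.
    """
    return p.days_since_request - 5 * p.prior_cancellations

queue = [
    Patient("A", prior_cancellations=0, days_since_request=3),
    Patient("B", prior_cancellations=4, days_since_request=10),  # vulnerable patient
]

# Patient B has waited longer but is ranked last because of the cancellation penalty.
for p in sorted(queue, key=naive_priority, reverse=True):
    print(p.name, naive_priority(p))
```

In this toy queue, the patient who has waited longest is ranked last solely because of past cancellations, which is exactly the inequity Patt describes.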
Clinicians must learn to integrate AI tools into their care without causing harm, Patt said. This requires training and communication with patients.
Patt pointed to an AI tool NIH developed, the logistic regression-based immunotherapy-response score (LORIS), which NIH hopes can help physicians predict whether a patient will respond to immune checkpoint inhibitors.
“If I told a patient that I was going to use an AI tool to try to decide their likelihood of responding to immunotherapy, they need to understand that I’m using a mathematical model to make that determination because there might be things about that model that make it more or less likely for them to benefit,” Patt said. “It could be influenced by all sorts of things.”
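To illustrate what a “mathematical model” means in this context, the sketch below shows the general shape of a logistic regression-based response score: a weighted sum of patient features passed through a sigmoid. The features, weights and intercept here are invented for illustration; the published LORIS model was trained and validated by NIH researchers on real clinical data.

```python
import math

def logistic_response_score(features: dict[str, float],
                            weights: dict[str, float],
                            intercept: float) -> float:
    """Generic logistic regression score: sigmoid of a weighted sum.

    Returns a value in (0, 1) interpreted as an estimated probability
    of response. Everything below is illustrative, not LORIS itself.
    """
    z = intercept + sum(weights[k] * features[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical inputs: the kinds of routine clinical variables such models
# use (e.g., blood markers, age). Values and weights are made up.
patient = {"albumin": 4.1, "nlr": 3.2, "age": 62.0}
weights = {"albumin": 0.8, "nlr": -0.3, "age": -0.01}

print(f"Estimated response probability: "
      f"{logistic_response_score(patient, weights, -1.5):.2f}")
```

The point of the sketch is Patt’s caveat: the output depends entirely on the features and weights the model was trained with, so factors absent from the training data can make the prediction more or less reliable for a given patient.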
These risks may cause clinicians to avoid using AI, but Patt thinks that would be a mistake.
“These issues of bias, error, privacy and accountability are real,” she said. “We need to give appropriate, heightened awareness of those concerns, but I do feel like the greatest challenge we have to overcome in medicine is to paint the picture of ‘the possible’ for the oncology community [and] to educate the masses of clinicians on how to use these models to do what they do better.”
Patt recalled opening her browser to National Comprehensive Cancer Network guidelines on her computer 20 years ago when she finished her fellowship. Now, fellows can use large language models to access NCCN and ASCO guidelines, making it easier to access relevant content.
“They can do that with every patient pretty specifically,” she said.
AI tools have become more important due to increased cancer incidence, improved survival, the aging population, treatment innovations and staffing issues, including an oncology nursing shortage.
“These challenges and opportunities are only going to grow in the next decade,” Patt said.
Clinicians may not feel at ease about AI, but if development and implementation adhere to the task force’s principles, Patt said, the possibilities are “endless.”
“We had to go through the technical challenges of early smart devices like PalmPilot before we had other iterations of the smartphone that were much better,” Patt said. “AI application and use is probably the most important development of our time. It has an amazing potential to help patients, doctors, all medical professionals and, most importantly, the patients we serve together. This is in direct alignment with ASCO’s mission of sharing knowledge, improving care and access, and advancing research.”
For more information:
Debra A. Patt, MD, PhD, MBA, FASCO, can be reached at debra.patt@usoncology.com.
References:
- AI tool predicts response to cancer therapy. Available at: https://www.nih.gov/news-events/nih-research-matters/ai-tool-predicts-response-cancer-therapy. Published June 25, 2024. Accessed Aug. 6, 2024.
- American Society of Clinical Oncology principles for the responsible use of artificial intelligence in oncology. Available at: https://society.asco.org/sites/new-www.asco.org/files/ASCO-AI-Principles-2024.pdf. Published May 31, 2024. Accessed Aug. 6, 2024.