March 28, 2024
3 min read

Physicians express concerns about ‘ethical deployment’ of AI in clinical practice

Key takeaways:

  • Most oncologists agreed they need to be able to explain AI to use it in their practices.
  • Most oncologists believed they should protect patients from biased AI but are not confident they can.

The results from a nationwide survey, published in JAMA Network Open, highlight the complex issues oncologists are grappling with in integrating artificial intelligence.

Most oncologists believed they should be able to explain how artificial intelligence (AI) models work and must protect patients from biased AI, and that patients should consent to AI use before it is implemented in their practices.

[Infographic: A survey of US-based oncologists. Data derived from Hantel A, et al. JAMA Network Open. 2024;doi:10.1001/jamanetworkopen.2024.4077.]

“Ethical deployment of AI in oncology must prioritize the development of infrastructure that supports oncologist training as well as transparency, consent, accountability and equity,” Andrew Hantel, MD, faculty member in the divisions of leukemia and population sciences at Dana-Farber Cancer Institute and the Harvard Medical School Center for Bioethics, told Healio. “It means that infrastructure needs to be developed around cancer AI to ensure its ethical deployment.”

Background, methodology

The FDA has recently approved AI models for oncology, but clinicians have reported concerns regarding AI bias; the ability of AI to detail its decision-making process; who bears responsibility for errors or misuse; and whose treatment recommendation — the doctor’s or AI’s — takes precedence, according to background information provided by researchers.

“We have all seen the rapid progress of AI, which has many implications for health care, and its blend of opportunities and challenges,” Hantel said. “As AI begins to impact cancer care delivery, understanding the ethical implications from those who will be asked to implement it — oncologists — is crucial. Our intent was to present the views of practicing oncologists so that AI is deployed in an ethical way that meets the needs of oncologists and patients while addressing potential ethical dilemmas.”

Researchers sent out a 24-question, cross-sectional survey to nearly 400 oncologists throughout the country from Nov. 15, 2022, to July 31, 2023.

Survey results included responses from 204 oncologists from 37 different states (63.7% men; 62.7% non-Hispanic white). More than one-quarter (29.4%) of respondents came from academic practices and more than half (53.4%) had no AI training.

Results

Most oncologists agreed they need to be able to explain how an AI model reaches its decisions in order to use it in practice (84.8%), but only 23% of respondents felt patients needed that same ability.

Most oncologists reported that patients should have to give their consent to use AI treatment recommendations (81.4%), but that number decreased significantly for diagnostic decisions (56.4%).

Researchers asked oncologists what should happen if an AI model presented a different treatment option than the treating physician, and 36.8% responded the patient should be given both choices and decide.

“This finding highlights that many physicians are unsure about how to act in relation to an AI and counsel patients about such situations,” Hantel said.

Conflicting responses emerged regarding biased AI, which occurs when models are trained on medical data that reflect existing inequities.

Most oncologists answered they need to protect their patients from biased AI (76.5%), but only 27.9% felt “confident in their ability to do so,” researchers wrote.

“The alignment on these points underscores the urgent need for structured AI education and ethical guidelines within oncology,” Hantel said.

Almost all oncologists believed AI developers bear responsibility for any medical or legal problems that arise with the technology (90.7%), whereas fewer than half felt clinicians (47.1%) or hospitals (43.1%) share that responsibility.

“The FDA and regulatory agencies need to clearly define and delineate the responsibilities of all stakeholders involved in AI’s development and clinical application,” Hantel said. “This includes establishing standards for transparency, explainability and ethical oversight. Without this guidance, there will be no consensus and it poses risks for oncologists when an AI makes the wrong recommendation and they follow it, or when an AI tool is standard of care, and they go against its recommendation.”

Next steps

AI will be part of the future of oncology. How remains to be seen.

“It has the potential to improve research and discovery, diagnosis, prognosis, treatment decisions, communication and ancillary tasks that make health care frustrating for oncologists, like billing and documentation,” Hantel said. “But to make this future one that is beneficial for patients and oncologists, we need to develop AI using ethical process frameworks.”

Hantel repeatedly emphasized training and education as ways to improve oncologists’ understanding of AI, both in its use and in the ethical dilemmas it may raise.

He said oncologists also need to understand patients’ perspectives. “Especially historically marginalized and underrepresented groups on these same issues,” Hantel said. “Then, we need to develop and test the effectiveness of ethics infrastructure for developing and deploying AI that maximizes benefit and minimizes harms and these other ethical issues — and educate clinicians about AI models and the ethics of their use.”

Hantel said accomplishing these goals is paramount to the success of AI in oncology. “The ethical deployment of AI in cancer care is a shared responsibility, and the ethics of its development and deployment needs to be integrated from inception, otherwise we will be trying to fix something after it’s harmed people rather than avoiding that harm altogether,” he said.

For more information:

Andrew Hantel, MD, can be reached at andrew_hantel@dfci.harvard.edu.