AMA agrees to develop recommendations for artificial intelligence
Key takeaways:
- Artificial intelligence carries several risks, including bias, incorrect information and liability.
- The AMA encouraged physicians to educate patients on AI’s limitations and benefits.
The AMA announced during its House of Delegates meeting that it will develop recommendations and principles on the potential benefits and risks of relying on artificial intelligence-created medical advice.
Previous research has shown that artificial intelligence (AI) may help streamline health care services, triage patient care, relieve physician burnout and reduce documentation time. However, the technology has also demonstrated notable pitfalls, particularly surrounding the potential for misdiagnoses and risk to private data.

“AI holds the promise of transforming medicine,” Alexander Ding, MD, MS, MBA, an AMA trustee and assistant professor at the University of Louisville School of Medicine, said in a press release. “We don’t want to be chasing technology. Rather, as scientists, we want to use our expertise to structure guidelines, and guardrails to prevent unintended consequences, such as baking in bias and widening disparities, dissemination of incorrect medical advice, or spread of misinformation or disinformation.”
In addition to developing guidelines, the AMA voted to work with the federal government, policymakers and other organizations to protect patients from incorrect and misleading AI-generated medical advice, and to encourage physicians to educate patients about the benefits and risks of AI.
“We’re trying to look around the corner for our patients to understand the promise and limitations of AI,” Ding said. “There is a lot of uncertainty about the direction and regulatory framework for this use of AI that has found its way into the day-to-day practice of medicine.”
The AMA highlighted several limitations of generative AI and large language models (LLMs) — platforms such as ChatGPT that can recognize, predict and generate text based on patterns learned from large datasets — that providers should be aware of:
- risk for incorrect or falsified responses;
- training dataset limitations;
- a lack of knowledge-based reasoning;
- LLMs not being regulated by health care agencies;
- risk to patient privacy and cybersecurity;
- risk for the promotion of bias, discrimination and stereotypes; and
- potential for liability.
The AMA also pointed out that, as LLMs are embedded within electronic health record (EHR) systems, physicians are responsible for understanding how their EHR systems use AI tools and for being prepared to answer patient questions about them.
Ultimately, unregulated AI algorithms and resources “should be used with appropriate caution at this time” because of possible burdens to both physicians and patients, the organization noted.
“Moving toward creation of consensus principles, standards, and regulatory requirements will help ensure safe, effective, unbiased, and ethical AI technologies, including [LLMs] and generative pre-trained transformers [are developed] to increase access to health information and scale doctors' reach to patients and communities,” Ding said.
References:
- AMA to develop recommendations for augmented intelligence. https://www.ama-assn.org/press-center/press-releases/ama-develop-recommendations-augmented-intelligence. Published June 13, 2023. Accessed June 13, 2023.
- ChatGPT and generative AI: What physicians should consider. https://www.ama-assn.org/system/files/chatgpt-what-physicians-should-consider.pdf. Accessed June 13, 2023.