Physician enthusiasm for AI on the rise; patients remain skeptical about responsible use
Key takeaways:
- A growing number of physicians are enthusiastic about the use of AI in health care.
- Many Americans have concerns about whether AI will be used responsibly in health care.
Physician enthusiasm about the role of AI in health care continues to increase, according to an AMA survey.
However, a majority of Americans do not trust health systems to use AI responsibly, a separate study showed.

‘Quickly evolving technology’
AMA conducted a survey in 2023 to obtain baseline data about physicians’ opinions about AI in health care.
A follow-up survey conducted in fall 2024 assessed changes in physician sentiment, with the goal of capturing physicians’ views on the increasing use of AI in health care and the needs it may help address.

“This is such a quickly evolving technology. We’re thinking a lot about all of the existing technologies today in the clinical spaces and how is AI going to be integrated,” Margaret Lozovatsky, MD, FAMIA, AMA’s vice president of digital health innovations, told Healio. “Additionally, what is the potential of AI to drive care delivery to address some of the burdens that our physicians are experiencing in the clinical environment when it comes to patient care — particularly administrative burdens?”
The follow-up survey included responses from 1,183 physicians and specialists.
Results showed the percentage of physicians whose enthusiasm for health AI exceeded their concerns increased from 30% in 2023 to 35% in 2024.
The percentage of physicians whose concerns about AI exceeded their enthusiasm declined from 29% in 2023 to 25% in 2024.
Among other key findings:
- A majority (66%) of physicians indicated they used AI in their practice in 2024, up from 38% in 2023.
- A majority of physicians indicated they see “definite or some advantage” to AI tools, though that percentage declined slightly from 2023 to 2024 (68% vs. 65%).
- More than half (57%) of physicians indicated “addressing administrative burden through automation” is the biggest area of opportunity for AI.
The findings suggest health care practitioners are utilizing the technology but are still learning where and how it may best improve practice, Lozovatsky said.
“It’s moving at an astronomically fast pace in a way that I’ve never seen before,” Lozovatsky told Healio. “The numbers truly reflect what we’ve been hearing anecdotally across the industry.”
The survey highlights that physicians are particularly interested in how AI can enhance diagnostic accuracy, personalize treatment and reduce administrative burdens, according to AMA Immediate Past President Jesse M. Ehrenfeld, MD, MPH.
“But there remain unresolved physician concerns with the design of health AI and the potential of flawed AI-enabled tools to put privacy at risk, integrate poorly with EHR systems, offer incorrect conclusions or recommendations, and introduce new liability concerns,” Ehrenfeld said in a press release. “Increased oversight ranked as the top regulatory action needed to increase physician confidence and adoption of AI.”
In 2023, the needs respondents most commonly cited as required to advance physician adoption of AI included privacy assurances (87%), not being held liable for AI model errors (87%) and medical liability coverage (86%).
In 2024, the most commonly cited needs included a designated feedback channel (88%), assurances about data privacy (87%) and EHR integration (84%).
Despite those concerns, the overall response from members of the health care community toward AI adoption has been “pretty enthusiastic,” Lozovatsky added.
“We often see a lot of technology in our personal lives, and yet health care can be a late adopter of these technologies,” she said. “Seeing the enthusiasm in the interest in this topic to me is a really important first step to the successful implementation of these tools.”
However, the increasing adoption of this technology requires a commitment to responsible use and assurances that its use provides benefits without causing unnecessary harm.
“The most important question that comes up — and that we will continue to think about — is trust,” Lozovatsky said. “How do we develop, design and integrate these tools into care delivery in a way that we can trust what they’re telling us, because we’ll be basing clinical decisions on some of these outputs?”
Americans less optimistic
Although physician enthusiasm for AI integration into health care has increased, the general public’s trust in health systems to use AI responsibly remains low, according to results of a survey published in JAMA Network Open.
“Patients aren’t systematically represented in AI governance and policy discussions, and we need empirical evidence to meaningfully facilitate their inclusion,” Paige Nong, PhD, assistant professor in the division of health policy and management at University of Minnesota School of Public Health, told Healio. “Especially as AI becomes more common, it’s critically important to understand patient perspectives so health systems can make good decisions about when and how to engage patients in AI use and governance practices.”
Nong and Jodyn Platt, PhD, associate professor of learning health sciences at University of Michigan Medical School, conducted a national survey of U.S. adults. The survey assessed public trust in health systems to use AI responsibly, as well as their belief in health systems to protect patients from AI-related harms.
Researchers received responses from 2,039 individuals (51.2% women; 63.1% white, 17.4% Hispanic, 12.1% Black).
Respondents answered questions on a four-point Likert scale, with 1 equaling “not true” and 4 equaling “very true.”
A majority of respondents reported low trust in health care systems to use AI responsibly (65.8%) and low trust in health care systems’ efforts to ensure an AI tool would not harm patients (57.7%).
Multivariable logistic regression analyses showed respondents with higher general trust in health care systems more often believed those systems would protect them from AI harm (OR = 3.97; 95% CI, 3.06-5.16) and use AI responsibly (OR = 4.29; 95% CI, 3.25-5.67).
Women expressed less trust in health care systems to use AI responsibly than men.
“We know that general trust in health care, both in clinicians and hospitals, has been declining for a while,” Nong told Healio. “It wasn’t a huge surprise to us that trust in health systems to use AI responsibly was low. It is very low, though, which might be surprising to some of the health care stakeholders who are especially excited about AI.”
As more health systems adopt and integrate AI, concern about patient trust is “well founded,” Nong said.
“Almost 66% of adults don’t trust their health care system to use AI responsibly and 58% don’t trust that their system will protect them against AI harms. These percentages are high, and our findings signal a real opportunity for systems to earn trust among their patients,” she said. “We know, for example, that patients consistently desire notification related to the use of their data and AI. Notification is low-hanging fruit in terms of how systems can work to preserve or earn the trust of their patients.”
References:
- AMA Augmented Intelligence Research. Available at: https://www.ama-assn.org/system/files/physician-ai-sentiment-report.pdf. Accessed March 3, 2025.
- AMA: Physician enthusiasm grows for health care AI (press release). Available at: https://www.ama-assn.org/press-center/press-releases/ama-physician-enthusiasm-grows-health-care-ai. Published Feb. 12, 2025. Accessed March 3, 2025.
- Nong P, et al. JAMA Netw Open. 2025;doi:10.1001/jamanetworkopen.2024.60628.
For more information:
Margaret Lozovatsky, MD, FAMIA, can be reached at margaret.lozovatsky@ama-assn.org. Paige Nong, PhD, can be reached at nong0016@umn.edu.