VIDEO: New ACP guidance says AI should support physician logic, not supplant it
In this video, Nadia Daneshvar, JD, MPH, a health IT policy associate at ACP, discussed the organization’s recent recommendations on the development and use of AI in health care.
“One of the primary reasons that physicians are in need of these recommendations has to do with patient safety,” Daneshvar said. “The fact that these technologies are currently being incorporated into various health care systems and tools that are used by physicians and other clinicians without their knowledge creates potential risks for patient safety.”
The ACP recommends that:
- AI should complement physician logic and decision-making, not supplant it;
- AI development, testing and use should align with principles of medical ethics and decision-making;
- there should be transparency in the development, testing and use of AI for patient care;
- AI developers, researchers and implementers should prioritize privacy and confidentiality of patient and clinician data;
- clinical safety and effectiveness and health equity should be a top priority for AI developers, regulators and implementers;
- AI should reduce health care disparities rather than exacerbate them;
- AI developers should be held accountable for their models;
- reducing clinician burden should be a design goal at all stages of AI tool development;
- training on AI use should be provided in all levels of medical education; and
- the environmental impacts of AI and their mitigation should be studied and considered.
Daneshvar noted that ACP's recommendations differ from those of other organizations in several respects. For example, “we do discuss the significance of not using anthropomorphic language when it comes to AI technologies and the impact that can have,” she explained.