Fact checked by Richard Smith


November 06, 2024
6 min read

AI in cardiology: A call for robust validation, regulatory labeling and security of data


Key takeaways:

  • Implementation science remains the largest challenge to widespread use of AI in medicine.
  • Considerations for data security, patient preference for data sharing and regulatory labeling are key.

Editor's Note: This is part two of a three-part Healio Exclusive series on the development and use of AI to improve clinical outcomes in cardiovascular medicine and considerations for regulatory labeling and patient privacy. Part one can be read here. Part three can be read here.

As the development of AI tools for use in clinical practice hastens, a successful framework for their useful, safe and equitable implementation remains paramount.


The American Heart Association councils in February issued a scientific statement on the use of AI to improve cardiovascular (CV) outcomes. The scientific statement, published in Circulation, provided best practices for the development of new AI tools as well as a framework for successful implementation in clinical practice.

In addition, the American College of Cardiology conducted a state-of-the-art review of contemporary use of AI in CV clinical practice. The review, published in the Journal of the American College of Cardiology in July, highlights AI developments in CV across clinical practice, and the authors provide an overview of future uses of AI in clinical practice as well as highlighting areas of caution.

“The field of AI is growing at a rapid pace, with technological innovation that spans all aspects of the discovery of new mechanisms of disease and their treatments all the way to how we deliver care,” Rohan Khera, MD, MS, assistant professor in the section of cardiovascular medicine at Yale School of Medicine, cardiologist at Yale New Haven Hospital and co-author of the ACC review, told Healio. “Our review focuses on providing a comprehensive overview of these innovations. The article is just the first among many that forecasts a new future in CV care. We are hopeful that many of the technologies will continue to evolve and transform care across all care settings.”

The implementation framework, as laid out in the AHA scientific statement, encompasses important topics such as ensuring clinical utility and seamless integration of AI in patient care; understanding the technology at a level comparable to any other tool used to aid in clinical decision-making; equitable distribution; and avoiding societal biases.

Validation in large, diverse datasets

Paul A. Friedman

“One of the biggest challenges with AI tools is implementation science. It is important that we use these tools to address meaningful clinical questions,” Paul A. Friedman, MD, FHRS, cardiac electrophysiologist and chair of cardiovascular medicine at Mayo Clinic in Rochester, Minnesota, and co-author of the AHA scientific statement, told Healio. “We want it to solve problems that impact human health, not a niche research interest.”

Algorithm triangulation using large, diverse datasets is essential; however, recognition of differences between centers in their data gathering practices should be taken into consideration when training and validating AI tools, Friedman and colleagues wrote.

Differences in the accuracy and frequency of data collection and clinical actions across centers may inhibit the utility of certain datasets for training AI algorithms effectively. Failing to acknowledge these differences could exacerbate current health care disparities, according to the statement.

“Robust clinical validation in large, diverse populations that minimizes bias is essential to address uncertainties, such as various forms of bias, vulnerability to adversarial attacks and overfitting, which reduce clinical acceptance and adoption,” Antonis A. Armoundas, PhD, associate professor of medicine at Harvard Medical School, affiliate member of Broad Institute at Massachusetts Institute of Technology and assistant in biology at Massachusetts General Hospital Cardiovascular Research Center and co-author of the AHA statement, told Healio. “Furthermore, a major challenge to current AI-based algorithms is the lack of rigorous prospective evaluation. With respect to genetics, although AI algorithms have made significant progress in enhancing variant interpretation, their use as a definitive classification tool still requires caution.

“It should be recognized that deterioration of algorithm performance may occur as a consequence of natural evolution of clinical environments resulting from changes in the demographics of the treated patients, or updated clinical practice evidence and outcomes,” he said.

To preserve AI algorithm performance over time, updates should be made to labeling within training datasets as new patient populations and demographics are studied, according to the AHA statement.

Standardized protocols needed

The authors of the AHA statement posited that complete clinician understanding of the architecture behind some of the more complex algorithms may not be necessary for their robust use, as long as there is proper regulatory labeling.

As an example, Armoundas and colleagues noted that complete understanding of a drug’s mechanism of action is not a prerequisite of its use, so long as its use is in accordance with its regulatory labeling predicated on robust clinical evidence.

Therefore, the authors wrote, AI and machine learning algorithms for use in clinical medicine should be FDA-labeled with a precise description of the patient populations in which they were validated and the intended clinical scenarios for which they were developed.

Dipti Itchhaporia

“I don’t think the FDA is going to come up with all the standards. This is a role for professional organizations because we’re a trusted source. This is where we can play a role to help create standardized protocols, for example,” Healio | Cardiology Today Editorial Board Member Dipti Itchhaporia, MD, MACC, FESC, the Eric & Sheila Samson Endowed Chair in Cardiovascular Health and director of disease management for Jeffrey M. Carlton Heart & Vascular Institute at Hoag Memorial Hospital in Newport Beach, California, clinical professor of medicine at University of California, Irvine, and past president of the ACC, said in an interview. “We are in an area where cooperation is going to be important. The engineers [building these tools] have no idea about the clinical needs.”

In their state-of-the-art review published in JACC, Khera and colleagues stated the path to a future of AI-driven improvement in clinical outcomes relies on equitable and regulated adoption of these new tools.

The authors hypothesize that AI’s greatest value may come in the form of identifying phenotypic variations and risk within complex signals, and not in the assessment of tabular data.

“It is essential that the barrier to deployment is modest, by using simpler modalities, like ECG images instead of signals, and tools that can easily scale up to low-resource settings,” Khera told Healio.

Protecting privacy, keeping data secure

Because machine learning enables prognostication using large volumes of unstructured data, such as 12-lead ECGs, chest X-rays and echocardiograms, AI — especially generative AI models — needs access to sensitive patient data, whose collection and processing may pose security and privacy risks.

“We address them via regulation and through appropriate controls,” Khera told Healio. “I believe while each stakeholder is responsible for ensuring these aspects, most important is for regulatory agencies and payers to play a key role in ensuring this is implemented. Moreover, legislation that prioritizes privacy and security in the era of AI will need to be developed and refined.”

In addition, considerations for patients’ preferences for how their personal and health data are used will play a significant role in future use of AI tools.

“Transparency is also going to require that I tell you that your care is going to involve AI and that I’m going to do data collection. Then as the patient, you want to know what that means,” Itchhaporia told Healio.

Armoundas and colleagues noted that patients:

  • will have differing views on how their data will be used;
  • may not be comfortable with their data being sold to third parties without notice or consent; and
  • want to be informed of its use or sale, regardless of whether the data are deidentified.

The AHA statement also addressed regulation of AI and data handling. The statement recommended a focus on transparency by providing detailed descriptions of preprocessing. This included an outline of which features are extracted by the algorithm and how they are extracted, which features are excluded by it, and how predictions are made.

“As with any other technology, AI algorithms may pose risk if not used properly,” Armoundas said. “As new patient groups are studied, thereby reducing the sample bias, such descriptions should be entered into the AI algorithm label. This is also a critical issue with respect to physicians’ professional liability in case of an incorrect decision and a potentially harmful outcome; however, as with any other medical product, the as-labeled use of the AI algorithms narrows the responsibility and minimizes liability concerns.”

Editor's Note: Part three of this Healio Exclusive series will delve into the responsibility of clinicians to be active participants in clinical AI tool development, from ideation to implementation.

We want to hear from you:

What are your thoughts on the rising use of AI tools in clinical practice and any related ethical concerns? Share your thoughts with Healio by emailing the author at sbuzby@healio.com or tagging @CardiologyToday on X (Twitter). We will contact you if we wish to publish any part of your story.

For more information:

Antonis A. Armoundas, PhD, can be reached at armoundas.antonis@mgh.harvard.edu.

Paul A. Friedman, MD, can be reached at friedman.paul@mayo.edu; X (Twitter): @drpaulfriedman.

Dipti N. Itchhaporia, MD, can be reached at drdipti@yahoo.com; X (Twitter): @ditchhaporia.

Rohan Khera, MD, MS, can be reached at rohan.khera@yale.edu.
