The American Heart Association (AHA) has released new guidance for health care systems on the responsible use of artificial intelligence (AI) in patient care. The advisory highlights the need for a structured approach to ensure that AI tools deliver meaningful clinical benefits while minimizing risks to patients.
The advisory, titled “Pragmatic Approaches to the Evaluation and Monitoring of Artificial Intelligence in Health Care,” was published in the AHA’s flagship journal, Circulation, and outlines a risk-based framework specifically for cardiovascular and stroke care. It comes at a time when the U.S. Food and Drug Administration (FDA) has cleared hundreds of AI tools, yet only a fraction undergo thorough evaluation of their clinical impact, fairness, or bias.
Why this matters: With AI rapidly transforming health care, the AHA’s new principles aim to help systems navigate the complexities of AI deployment. The advisory cites concerning statistics: only 61% of hospitals using predictive AI tools validated them on local data before deployment, and fewer than half tested these tools for bias. This inconsistency poses significant risks, particularly for smaller and rural hospitals that may lack the resources for rigorous validation.
Dr. Sneha S. Jain, a key figure in the advisory and a clinical assistant professor of medicine at Stanford Health Care, stated, “AI is transforming health care faster than traditional evaluation frameworks can keep up.” The AHA aims to guide health systems in adopting AI responsibly, ensuring that innovations genuinely improve patient care.
The advisory outlines four guiding principles for health systems to consider when deploying clinical AI: strategic alignment, ethical evaluation, usefulness and effectiveness, and financial performance. These principles are essential for ensuring that AI tools not only enhance diagnostic accuracy but also maintain patient safety.
What’s next: The AHA has committed over $12 million in research funding in 2025 to test novel health care AI delivery strategies, focusing on their safety and efficacy. The advisory also emphasizes that monitoring of AI tools must continue after deployment. As clinical practices evolve, health systems need to integrate AI governance into existing quality assurance programs and set clear thresholds for retraining or retiring ineffective tools.
In a powerful call to action, Dr. Lee H. Schwamm, a volunteer member of the AHA committee on AI and Technology Innovation, remarked, “Responsible AI use is not optional; it’s essential.” This guidance provides practical steps for health systems to evaluate and monitor AI tools, ensuring they support equitable, high-quality care and improve patient outcomes.
As the health care landscape continues to change, the AHA’s advisory is a crucial step toward safer, more effective use of AI technologies. Stakeholders across the industry are urged to stay informed and implement these guidelines to protect patient welfare.
For more details, refer to the full advisory published in Circulation: Sneha S. Jain et al., “Pragmatic Approaches to the Evaluation and Monitoring of Artificial Intelligence in Health Care,” DOI: 10.1161/CIR.0000000000001400.
