New Guidance for Ensuring AI Safety in Clinical Care Published in JAMA

As artificial intelligence (AI) becomes more prevalent in health care, organizations and clinicians must take steps to ensure its safe implementation and use in real-world clinical settings, according to an article co-written by Dean Sittig, PhD, professor with McWilliams School of Biomedical Informatics at UTHealth Houston, and Hardeep Singh, MD, MPH, professor at Baylor College of Medicine.

The guidance was published Nov. 27, 2024, in the Journal of the American Medical Association (JAMA).

"We often hear about the need for AI to be built safely, but not about how to use it safely in health care settings," Sittig said. "It is a tool that has the potential to revolutionize medical care, but without safeguards in place, AI could generate false or misleading outputs that could potentially harm patients if left unchecked."

Drawing from expert opinion, literature reviews, and experiences with health IT use and safety assessment, Sittig and Singh developed a pragmatic approach for health care organizations and clinicians to monitor and manage AI systems.

"Health care delivery organizations will need to implement robust governance systems and testing processes locally to ensure safe AI and safe use of AI so that ultimately AI can be used to improve the safety of health care and patient outcomes," Singh said. "All health care delivery organizations should check out these recommendations and start proactively preparing for AI now."

Some of the recommended actions for health care organizations are listed below:

  • Review guidance published in high-quality, peer-reviewed journals and conduct rigorous real-world testing to confirm AI’s safety and effectiveness.
  • Establish dedicated committees with multidisciplinary experts to oversee AI system deployment and ensure adherence to safety protocols. Committee members should meet regularly to review requests for new AI applications, consider their safety and effectiveness before implementing them, and develop processes to monitor their performance.
  • Formally train clinicians on AI usage and risk, but also be transparent with patients when AI is part of their care decisions. This transparency is key to building trust and confidence in AI's role in health care.
  • Maintain a detailed inventory of AI systems and regularly evaluate them to identify and mitigate any risks.
  • Develop procedures to turn off AI systems should they malfunction, ensuring smooth transitions back to manual processes; a minimal illustrative sketch of how an inventory and such a "kill switch" might be implemented follows this list.
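
To make the inventory, monitoring, and shut-off recommendations more concrete, here is a minimal sketch in Python. It is not taken from the JAMA article or any vendor product; the class name, fields, and the precision check are hypothetical and only illustrate how an organization might record each AI application, attach routine safety checks, and disable the system when a check fails.

```python
# Illustrative sketch only: AIApplication, its fields, and the checks are
# hypothetical, not part of the JAMA recommendations or any real EHR/vendor API.
from dataclasses import dataclass, field
from datetime import date
from typing import Callable, List


@dataclass
class AIApplication:
    """One entry in a hypothetical inventory of deployed AI systems."""
    name: str
    clinical_use: str
    owner: str                 # accountable committee or clinical lead
    last_reviewed: date
    enabled: bool = True
    checks: List[Callable[[], bool]] = field(default_factory=list)

    def register_check(self, check: Callable[[], bool]) -> None:
        """Attach a routine performance/safety check (e.g., an error-rate audit)."""
        self.checks.append(check)

    def run_checks(self) -> bool:
        """Run all checks; disable the application if any check fails."""
        for check in self.checks:
            if not check():
                self.disable()
                return False
        return True

    def disable(self) -> None:
        """'Kill switch': turn the system off so care reverts to manual processes."""
        self.enabled = False


if __name__ == "__main__":
    # Hypothetical inventory entry for a sepsis early-warning alert.
    sepsis_alert = AIApplication(
        name="Sepsis risk alert",
        clinical_use="Early-warning scores surfaced in the EHR",
        owner="AI governance committee",
        last_reviewed=date(2024, 11, 1),
    )
    # Hypothetical check: require measured alert precision above an agreed threshold.
    measured_precision, threshold = 0.42, 0.30
    sepsis_alert.register_check(lambda: measured_precision >= threshold)

    if not sepsis_alert.run_checks():
        print(f"{sepsis_alert.name} disabled; reverting to manual workflow.")
```

In practice the inventory, thresholds, and disable procedures would live in the organization's governance and change-management tooling rather than in a standalone script; the point of the sketch is simply that each deployed AI system has an owner, a review date, measurable checks, and a predefined path back to manual processes.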

"Implementing AI into clinical settings should be a shared responsibility among health care providers, AI developers, and electronic health record vendors to protect patients," Sittig said. "By working together, we can build trust and promote the safe adoption of AI in health care."

Sittig DF, Singh H. Recommendations to Ensure Safety of AI in Real-World Clinical Care. JAMA. Published online November 27, 2024. doi:10.1001/jama.2024.24598
