New Guidance for Ensuring AI Safety in Clinical Care Published in JAMA

As artificial intelligence (AI) becomes more prevalent in health care, organizations and clinicians must take steps to ensure its safe implementation and use in real-world clinical settings, according to an article co-written by Dean Sittig, PhD, professor at McWilliams School of Biomedical Informatics at UTHealth Houston, and Hardeep Singh, MD, MPH, professor at Baylor College of Medicine.

The guidance was published Nov. 27, 2024, in the Journal of the American Medical Association (JAMA).

"We often hear about the need for AI to be built safely, but not about how to use it safely in health care settings," Sittig said. "It is a tool that has the potential to revolutionize medical care, but without safeguards in place, AI could generate false or misleading outputs that could potentially harm patients if left unchecked."

Drawing from expert opinion, literature reviews, and experiences with health IT use and safety assessment, Sittig and Singh developed a pragmatic approach for health care organizations and clinicians to monitor and manage AI systems.

"Health care delivery organizations will need to implement robust governance systems and testing processes locally to ensure safe AI and safe use of AI so that ultimately AI can be used to improve the safety of health care and patient outcomes," Singh said. "All health care delivery organizations should check out these recommendations and start proactively preparing for AI now."

Recommended actions for health care organizations include:

  • Review guidance published in high-quality, peer-reviewed journals and conduct rigorous real-world testing to confirm AI’s safety and effectiveness.
  • Establish dedicated committees with multidisciplinary experts to oversee AI system deployment and ensure adherence to safety protocols. Committee members should meet regularly to review requests for new AI applications, consider their safety and effectiveness before implementing them, and develop processes to monitor their performance.
  • Formally train clinicians on AI usage and risk, but also be transparent with patients when AI is part of their care decisions. This transparency is key to building trust and confidence in AI's role in health care.
  • Maintain a detailed inventory of AI systems and regularly evaluate them to identify and mitigate any risks.
  • Develop procedures to turn off AI systems should they malfunction, ensuring smooth transitions back to manual processes.

"Implementing AI into clinical settings should be a shared responsibility among health care providers, AI developers, and electronic health record vendors to protect patients," Sittig said. "By working together, we can build trust and promote the safe adoption of AI in health care."

Sittig DF, Singh H. Recommendations to Ensure Safety of AI in Real-World Clinical Care. JAMA. 2024 Nov 27. doi:10.1001/jama.2024.24598
