AI Tool Successfully Responds to Patient Questions in Electronic Health Record

As part of a nationwide trend, many more of NYU Langone Health's patients began using electronic health record tools during the pandemic to ask their doctors questions, refill prescriptions, and review test results. Many of these digital inquiries arrived via a communications tool called In Basket, which is built into NYU Langone's electronic health record (EHR) system, Epic.

While physicians have always dedicated time to managing EHR messages, the number of messages they receive daily has risen by more than 30% annually in recent years, according to an article by Paul A. Testa, MD, chief medical information officer at NYU Langone. Testa wrote that it is not uncommon for physicians to receive more than 150 In Basket messages per day. Because health systems were not designed to handle this kind of traffic, physicians ended up filling the gap, spending long hours after work sifting through messages. This burden is cited as one reason that half of physicians report burnout.

Now a new study, led by researchers at NYU Grossman School of Medicine, shows that an AI tool can draft responses to patients’ EHR queries as accurately as their human healthcare professionals, and with greater perceived "empathy." The findings highlight these tools’ potential to dramatically reduce physicians’ In Basket burden while improving their communication with patients, as long as human providers review AI drafts before they are sent.

NYU Langone Health has been testing the capabilities of generative artificial intelligence (genAI), in which computer algorithms develop likely options for the next word in any sentence based on how people have used words in context on the internet. A result of this next-word prediction is that genAI "chatbots" can reply to questions in convincingly human-like language. In 2023, NYU Langone licensed "a private instance" of GPT-4, the latest relative of the famous ChatGPT chatbot, which let physicians experiment with real patient data while still adhering to data privacy rules.
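As a loose illustration of that next-word idea, the toy sketch below simply picks the word that most often followed the current word in a tiny sample text. This is an analogy only: systems like GPT-4 use neural networks trained on vastly larger corpora, and the sample sentence here is invented for the example.

```python
# Toy next-word prediction: choose the word that most often followed
# the current word in a small sample text (illustrative only, not GPT-4).
from collections import Counter, defaultdict

training_text = "the doctor will review the message and the doctor will reply"
words = training_text.split()

# Count which word follows which in the sample text.
next_counts = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    next_counts[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = next_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))     # 'doctor'
print(predict_next("doctor"))  # 'will'
```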

Published online July 16 in JAMA Network Open, the new study examined GPT-4-generated draft responses to patient In Basket queries and had primary care physicians compare them with the actual human responses to those messages.

"Our results suggest that chatbots could reduce the workload of care providers by enabling efficient and empathetic responses to patients' concerns," said lead study author William Small, MD, a clinical assistant professor in Department of Medicine at NYU Grossman School of Medicine. "We found that EHR-integrated AI chatbots that use patient-specific data can draft messages similar in quality to human providers."

For the study, sixteen primary care physicians rated 344 randomly assigned pairs of AI and human responses to patient messages on accuracy, relevance, completeness, and tone, and indicated whether they would use the AI response as a first draft or would have to start from scratch in writing the patient message. The physicians were blinded to whether the responses they were reviewing were generated by humans or by the AI tool.

The research team found that the accuracy, completeness, and relevance of generative AI and human providers' responses did not differ statistically. Generative AI responses outperformed human providers in understandability and tone by 9.5%. Further, the AI responses were more than twice as likely (125% more likely) to be considered empathetic and 62% more likely to use language that conveyed positivity (potentially related to hopefulness) and affiliation ("we are in this together").

On the other hand, AI responses were also 38% longer and 31% more likely to use complex language, so further training of the tool is needed, the researchers say. While human providers wrote their responses at a 6th-grade reading level, the AI wrote at an 8th-grade level, according to a standard measure of readability called the Flesch-Kincaid score.
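For reference, one common form of the Flesch-Kincaid grade-level formula combines average sentence length and average syllables per word. The short sketch below uses invented example counts, not the study's data, to show how longer sentences and longer words push the grade level up.

```python
def flesch_kincaid_grade(total_words, total_sentences, total_syllables):
    """Flesch-Kincaid grade level from word, sentence, and syllable counts."""
    words_per_sentence = total_words / total_sentences
    syllables_per_word = total_syllables / total_words
    return 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59

# Example: a 70-word reply in 5 sentences with 96 syllables scores
# roughly grade 6; longer sentences and more syllables raise the grade.
print(round(flesch_kincaid_grade(70, 5, 96), 1))  # 6.1
```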

The researchers argued that use of private patient information by chatbots, rather than general internet information, better approximates how this technology would be used in the real world. Future studies will be needed to confirm whether private data specifically improved AI tool performance.

"This work demonstrates that the AI tool can build high-quality draft responses to patient requests," said corresponding author Devin Mann, MD, senior director of Informatics Innovation in NYU Langone Medical Center Information Technology (MCIT). "With this physician approval in place, GenAI message quality will be equal in the near future in quality, communication style, and usability, to responses generated by humans," added Mann, also a professor in the Departments of Population Health and Medicine.

Along with Drs. Small and Mann, study authors from NYU Langone Health were Beatrix Brandfield-Harvey, Zoe Jonassen, Soumik Mandal, Elizabeth Stevens, Vincent Major, Erin Lostraglio, Adam Szerencsy, Simon Jones, Yindalon Aphinyanaphongs, and Stephen Johnson. Additional authors were Oded Nov of the NYU Tandon School of Engineering and Batia Wiesenfeld of the NYU Stern School of Business.

The study was funded by National Science Foundation grants 1928614 and 2129076, and by Swiss National Science Foundation grants P500PS_202955 and P5R5PS_217714.

Small WR, Wiesenfeld B, Brandfield-Harvey B, Jonassen Z, Mandal S, Stevens ER, Major VJ, Lostraglio E, Szerencsy A, Jones S, Aphinyanaphongs Y, Johnson SB, Nov O, Mann D.
Large Language Model-Based Responses to Patients' In-Basket Messages.
JAMA Netw Open. 2024 Jul 1;7(7):e2422399. doi: 10.1001/jamanetworkopen.2024.22399
