Proposed Framework for Integrating Chatbots into Health Care

While the technology for developing artificial intelligence-powered chatbots has existed for some time, a new viewpoint piece in JAMA lays out the clinical, ethical, and legal considerations that must be weighed before applying them in health care. Although the emergence of COVID-19, and the social distancing that accompanies it, has prompted more health systems to explore and deploy automated chatbots, the authors still urge caution and thoughtfulness before proceeding.

"We need to recognize that this is relatively new technology and even for the older systems that were in place, the data are limited," said the viewpoint's lead author, John D. McGreevey III, MD, an associate professor of Medicine in the Perelman School of Medicine at the University of Pennsylvania. "Any efforts also need to realize that much of the data we have comes from research, not widespread clinical implementation. Knowing that, evaluation of these systems must be robust when they enter the clinical space, and those operating them should be nimble enough to adapt quickly to feedback."

McGreevey, joined by C. William Hanson III, MD, chief medical information officer at Penn Medicine, and Ross Koppel, PhD, FACMI, a senior fellow at the Leonard Davis Institute of Healthcare Economics at Penn and professor of Medical Informatics, wrote "Clinical, Legal, and Ethical Aspects of AI-Assisted Conversational Agents." In it, the authors lay out 12 different focus areas that should be considered when planning to implement a chatbot, or, more formally, "conversational agent," in clinical care.

Chatbots are a tool used to communicate with patients via text message or voice. Many chatbots are powered by artificial intelligence (AI). This paper specifically discusses chatbots that use natural language processing, an AI process that seeks to "understand" language used in conversations and draws threads and connections from them to provide meaningful and useful answers.

In health care, those messages, and people's reactions to them, are extremely important and carry tangible consequences.

"We are increasingly in direct communication with our patients through electronic medical records, giving them direct access to their test results, diagnoses and doctors' notes," Hanson said. "Chatbots have the ability to enhance the value of those communications on the one hand, or cause confusion or even harm, on the other."

For instance, how a chatbot handles someone telling it something as serious as "I want to hurt myself" has many different implications.

In the self-harm example, several of the focus areas laid out by the authors apply. It touches first and foremost on the "Patient Safety" category: Who monitors the chatbot, and how often do they do it? It also touches on "Trust and Transparency": Would this patient actually take a response from a known chatbot seriously? It also, unfortunately, raises questions in the paper's "Legal & Licensing" category: Who is accountable if the chatbot fails in its task? Moreover, a question under the "Scope" category may apply here, too: Is this a task best suited to a chatbot, or is it something that should remain entirely human-operated?
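The "Patient Safety" concern above implies that some messages should never be answered by the chatbot alone. A minimal sketch of such an escalation rule, under the assumption that safety-critical phrases are matched against a curated list (the phrase list, function name, and routing labels here are all illustrative, not from the paper):

```python
# Hypothetical safety-escalation rule for a clinical chatbot: messages
# suggesting self-harm are never handled automatically; they are routed
# to a human reviewer, supporting the "Patient Safety" focus area.

SELF_HARM_PHRASES = ("hurt myself", "kill myself", "end my life")

def route_message(text: str) -> str:
    """Return 'escalate_to_human' for safety-critical messages,
    'chatbot' for everything else."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in SELF_HARM_PHRASES):
        return "escalate_to_human"
    return "chatbot"
```

A real deployment would pair a rule like this with the monitoring the authors call for: logging every escalation and auditing how quickly a human responded.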

Within their viewpoint, the team believes they have laid out key considerations that can inform a framework for decision-making when it comes to implementing chatbots in health care. Their considerations should apply even when rapid implementation is required to respond to events like the spread of COVID-19.

"To what extent should chatbots be extending the capabilities of clinicians, which we'd call augmented intelligence, or replacing them through totally artificial intelligence?" Koppel said. "Likewise, we need to determine the limits of chatbot authority in different clinical scenarios. When a patient indicates that they have a cough, should the chatbot respond only by letting a nurse know, or by digging in further: 'Can you tell me more about your cough?'"
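Koppel's cough example can be read as a configurable "authority level" for the agent. A minimal sketch of that idea, assuming two illustrative levels (the enum, function, and wording below are hypothetical, not from the viewpoint):

```python
# Hypothetical sketch of scoped chatbot authority: at the NOTIFY level
# the agent only relays the symptom to staff; at the TRIAGE level it may
# ask one follow-up question before handing off to a clinician.

from enum import Enum

class Authority(Enum):
    NOTIFY = 1   # chatbot only alerts a nurse
    TRIAGE = 2   # chatbot may gather more detail first

def handle_symptom(symptom: str, level: Authority) -> str:
    """Produce the chatbot's reply for a reported symptom."""
    if level is Authority.NOTIFY:
        return f"I am letting a nurse know that you reported a {symptom}."
    return f"Can you tell me more about your {symptom}?"
```

Making the level an explicit, auditable setting, rather than something buried in the model's behavior, is one way a health system could answer the "Scope" questions the authors raise.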

Chatbots have the opportunity to significantly improve health outcomes and lower health systems' operating costs, but evaluation and research will be key to that: both to ensure smooth operation and to keep the trust of both patients and health care workers.

"It's our belief that the work is not done when the conversational agent is deployed," McGreevey said. "These are going to be increasingly impactful technologies that deserve to be monitored not just before they are launched, but continuously throughout the life cycle of their work with patients."

