ChatGPT Shows Promise in Answering Patients' Questions to Urologists

The groundbreaking ChatGPT chatbot shows potential as a time-saving tool for responding to patient questions sent to the urologist's office, suggests a study in the September issue of Urology Practice®, an Official Journal of the American Urological Association (AUA). The journal is published in the Lippincott portfolio by Wolters Kluwer.

The artificial intelligence (AI) tool generated "acceptable" responses to nearly half of a sample of real-life patient questions, according to new research led by Michael Scott, MD, a urologist at Stanford University School of Medicine. "Generative AI technologies may play a valuable role in providing prompt, accurate responses to routine patient questions - potentially alleviating patients' concerns while freeing up clinic time and resources to address other complex tasks," Dr. Scott comments.

Can ChatGPT accurately answer questions from urology patients?

ChatGPT is an innovative large language model (LLM) that has sparked interest across a wide range of settings, including health and medicine. In some recent studies, ChatGPT has performed well in responding to various types of medical questions, although its performance in urology is less well-established.

Modern electronic health record (EHR) systems enable patients to send medical questions directly to their doctors. "This shift has been associated with an increased time burden of EHR use for physicians with a large portion of this attributed to patient in-basket messages," the researchers write. One study estimates that each patient message in a physician's in-basket adds more than two minutes of time spent in the EHR.

Dr. Scott and colleagues collected 100 electronic patient messages requesting medical advice from a urologist at a men's health clinic. The messages were categorized by type of content and difficulty, then entered into ChatGPT. Five experienced urologists graded each AI-generated response in terms of accuracy, completeness, helpfulness, and intelligibility. Raters also indicated whether they would send each response to a patient.
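For readers curious how such a workflow might look in practice, the sketch below shows one way to draft a reply to a portal message with an LLM and set up a record for reviewer ratings. This is a minimal illustration only: it assumes the OpenAI Python client rather than the ChatGPT interface the study describes, and the model name, example message, and rating fields are hypothetical placeholders, not the authors' actual protocol.

```python
# Minimal sketch (not the study's actual pipeline): draft a reply to a
# de-identified patient in-basket message with an LLM, then store the
# reviewers' five-point ratings alongside it for later analysis.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def draft_reply(patient_message: str) -> str:
    """Ask the model to draft a reply to a urology patient's portal message."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the study used ChatGPT itself
        messages=[
            {"role": "system",
             "content": "Draft a clear, courteous reply to a urology patient's "
                        "portal message. Recommend contacting the clinic for "
                        "anything urgent."},
            {"role": "user", "content": patient_message},
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    reply = draft_reply("Is mild bruising normal two days after a vasectomy?")
    # Reviewers would then grade the draft on accuracy, completeness,
    # helpfulness, and intelligibility (1-5) and flag whether it is
    # acceptable to send to the patient.
    record = {
        "response": reply,
        "ratings": {"accuracy": None, "completeness": None,
                    "helpfulness": None, "intelligibility": None},
        "acceptable_to_send": None,
    }
    print(record["response"])
```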

Findings support 'generative AI technology to improve clinical efficiency'

The ChatGPT-generated responses were judged accurate (average score 4.0 on a five-point scale) and intelligible (average score 4.7). Ratings of completeness and helpfulness were lower, although the responses carried little or no potential for harm. Scores were comparable across question types (symptoms, postoperative concerns, etc.).

"Overall, 47% of responses were deemed acceptable to send to patients," the researchers write. Questions rated as "easy" had a higher rate of acceptable responses: 56%, compared to 34% for "difficult" questions.

"These results show promise for the utilization of generative AI technology to help improve clinical efficiency," Dr. Scott and coauthors write. The findings "suggest the feasibility of integrating this new technology into clinical care to improve efficiency while maintaining quality of patient communication."

The researchers note some potential drawbacks of ChatGPT-generated responses to patient questions: "ChatGPT's model is trained on information from the Internet in general, as opposed to validated medical sources," with a "risk of generating inaccurate or misleading responses." The authors also highlight the need for safeguards to ensure patient privacy.

"While our study provides an interesting starting point, more research will be needed to validate the use of LLMs to respond to patient questions, in urology as well as other specialties," Dr. Scott comments. "This will be a potentially valuable healthcare application, particularly with continued advances in AI technology."

Scott M, Muncey W, Seranio N, Belladelli F, Del Giudice F, Li S, Ha A, Glover F, Zhang CA, Eisenberg ML.
Assessing Artificial Intelligence-Generated Responses to Urology Patient In-Basket Messages.
Urol Pract. 2024 Sep;11(5):793-798. doi: 10.1097/UPJ.0000000000000637
