ChatGPT Outperformed Trainee Doctors in Assessing Complex Respiratory Illness in Children

The chatbot ChatGPT performed better than trainee doctors in assessing complex cases of respiratory disease in areas such as cystic fibrosis, asthma and chest infections in a study presented at the European Respiratory Society (ERS) Congress in Vienna, Austria.

The study also showed that Google’s chatbot Bard performed better than trainees in some aspects and Microsoft’s Bing chatbot performed as well as trainees.

The research suggests that these large language models (LLMs) could be used to support trainee doctors, nurses and general practitioners in triaging patients more quickly, easing pressure on health services.

The study was presented by Dr Manjith Narayanan, a consultant in paediatric pulmonology at the Royal Hospital for Children and Young People, Edinburgh and honorary senior clinical lecturer at the University of Edinburgh, UK. He said: “Large language models, like ChatGPT, have come into prominence in the last year and a half with their ability to seemingly understand natural language and provide responses that can adequately simulate a human-like conversation. These tools have several potential applications in medicine. My motivation to carry out this research was to assess how well LLMs are able to assist clinicians in real life.”

To investigate this, Dr Narayanan used clinical scenarios that occur frequently in paediatric respiratory medicine. The scenarios were provided by six other experts in paediatric respiratory medicine and covered topics including cystic fibrosis, asthma, sleep-disordered breathing, breathlessness and chest infections. All were scenarios with no obvious diagnosis and no published evidence, guideline or expert consensus pointing to a specific diagnosis or plan.

Ten trainee doctors with less than four months of clinical experience in paediatrics were given an hour in which they could use the internet, but not any chatbots, to solve each scenario with a descriptive answer of 200 to 400 words. Each scenario was also presented to the three chatbots.

All the responses were scored by six paediatric respiratory experts for correctness, comprehensiveness, usefulness, plausibility, and coherence. They were also asked to say whether they thought each response was human- or chatbot-generated and to give each response an overall score out of nine.

Solutions provided by ChatGPT version 3.5 scored an average of seven out of nine overall and were believed to be more human-like than responses from the other chatbots. Bard scored an average of six out of nine and was rated more ‘coherent’ than the trainee doctors, but in other respects was no better or worse. Bing scored an average of four out of nine, the same as the trainee doctors overall. Experts reliably identified Bing and Bard responses as non-human.

Dr Narayanan said: “Our study is the first, to our knowledge, to test LLMs against trainee doctors in situations that reflect real-life clinical practice. We did this by allowing the trainee doctors to have full access to resources available on the internet, as they would in real life. This moves the focus away from testing memory, where there is a clear advantage for LLMs. Therefore, this study shows us another way we could be using LLMs and how close we are to regular day-to-day clinical application.

"We have not directly tested how LLMs would work in patient facing roles. However, it could be used by triage nurses, trainee doctors and primary care physicians, who are often the first to review a patient."

The researchers did not find any obvious instances of ‘hallucinations’ (seemingly made-up information) from any of the three LLMs. "Even though, in our study, we did not see any instance of hallucination by LLMs, we need to be aware of this possibility and build mitigations against it," Dr Narayanan added. Bing, Bard and the trainee doctors did, however, occasionally give answers that were judged irrelevant to the context.

Dr Narayanan and his colleagues are now planning to test chatbots against more senior doctors and to look at newer and more advanced LLMs.

Hilary Pinnock is ERS Education Council Chair and Professor of Primary Care Respiratory Medicine at The University of Edinburgh, UK, and was not involved in the research. She said: "This is a fascinating study. It is encouraging, but maybe also a bit scary, to see how a widely available AI tool like ChatGPT can provide solutions to complex cases of respiratory illness in children. It certainly points the way to a brave new world of AI-supported care.

"However, as the researchers point out, before we start to use AI in routine clinical practice, we need to be confident that it will not create errors either through ‘hallucinating’ fake information or because it has been trained on data that does not equitably represent the population we serve. As the researchers have demonstrated, AI holds out the promise of a new way of working, but we need extensive testing of clinical accuracy and safety, pragmatic assessment of organisational efficiency, and exploration of the societal implications before we embed this technology in routine care."
