A research group led by Dr. Hirotaka Takita and Associate Professor Daiju Ueda at Osaka Metropolitan University’s Graduate School of Medicine conducted a meta-analysis of generative AI’s diagnostic capabilities using 83 research papers published between June 2018 and June 2024, covering a wide range of medical specialties. Of the large language models (LLMs) analyzed, ChatGPT was the most commonly studied.
The comparative evaluation revealed that medical specialists had 15.8% higher diagnostic accuracy than generative AI, whose average diagnostic accuracy was 52.1%. The latest generative AI models, however, sometimes showed accuracy on par with non-specialist doctors.
"This research shows that generative AI’s diagnostic capabilities are comparable to non-specialist doctors. It could be used in medical education to support non-specialist doctors and assist in diagnostics in areas with limited medical resources." stated Dr. Takita. "Further research, such as evaluations in more complex clinical scenarios, performance evaluations using actual medical records, improving the transparency of AI decision-making, and verification in diverse patient groups, is needed to verify AI’s capabilities."
Takita H, Kabata D, Walston SL, Tatekawa H, Saito K, Tsujimoto Y, Miki Y, Ueda D. A systematic review and meta-analysis of diagnostic performance comparison between generative AI and physicians. NPJ Digit Med. 2025 Mar 22;8(1):175. doi: 10.1038/s41746-025-01543-z.