Towards an AI Diagnosis Like the Doctor's

Artificial intelligence (AI) is an important innovation in diagnostics, because it can quickly learn to recognize abnormalities that a doctor would also label as a disease. But the way these systems work is often opaque, and doctors still have a better "overall picture" when they make a diagnosis. In a new publication, researchers from Radboudumc show how they can make an AI system reveal how it works, and let it diagnose more like a doctor, thus making AI systems more relevant to clinical practice.

Doctor vs AI

In recent years, artificial intelligence has been on the rise in diagnosis from medical imaging. A doctor can look at an X-ray or biopsy to identify abnormalities, but increasingly this can also be done by an AI system by means of "deep learning" (see 'Background: what is deep learning?' below). Such a system learns to arrive at a diagnosis on its own, and in some cases it does this as well as or better than experienced doctors.

There are two major differences compared to a human doctor: first, AI is often not transparent about how it analyzes the images, and second, these systems are quite "lazy". The AI looks at what is needed for a particular diagnosis, and then stops. This means it does not always identify all abnormalities in a scan, even if the diagnosis is correct. A doctor, especially when considering the treatment plan, looks at the big picture: what do I see? Which anomalies should be removed or treated during surgery?

AI more like the doctor

To make AI systems more attractive for clinical practice, Cristina González Gonzalo, PhD candidate at the A-eye Research and Diagnostic Image Analysis Group of Radboudumc, developed a two-sided innovation for diagnostic AI. She did this using eye scans showing abnormalities of the retina, specifically diabetic retinopathy and age-related macular degeneration. These abnormalities can be easily recognized by both a doctor and AI, but they also tend to occur in groups. A classic AI system would find one or a few spots and stop the analysis. In the process developed by González Gonzalo, however, the AI goes through the picture over and over again, learning to ignore the places it has already passed and thus discovering new ones. Moreover, the AI also shows which areas of the eye scan it deemed suspicious, making the diagnostic process transparent.

An iterative process

A basic AI system arrives at a diagnosis from a single assessment of the eye scan, and thanks to the first contribution by González Gonzalo, it can show how it arrived at that diagnosis. This visual explanation shows that the system is indeed lazy, stopping the analysis once it has obtained just enough information to make a diagnosis. That is why she also made the process iterative in an innovative way, forcing the AI to look harder and form more of the 'complete picture' that radiologists would have.
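The paper builds on an interpretability framework to highlight which regions drove the diagnosis. As a minimal sketch of one common "visual explanation" technique, here is gradient-based saliency in PyTorch; the network and the saliency method are illustrative assumptions, not necessarily the exact approach used in the paper:

import torch
from torchvision import models

# Hypothetical setup: any image classifier works here; the paper's
# architecture and interpretability method may differ.
model = models.resnet18(weights=None)  # placeholder, untrained network
model.eval()

def saliency_map(model, image):
    """Gradient-based saliency: measures how strongly each pixel
    influences the predicted class score, yielding a heatmap of
    'suspicious' regions."""
    image = image.clone().requires_grad_(True)
    scores = model(image.unsqueeze(0))
    top_class = scores.argmax().item()
    scores[0, top_class].backward()
    # Aggregate gradient magnitude over the color channels.
    return image.grad.abs().max(dim=0).values

# Usage with a dummy 3-channel image standing in for a fundus scan.
img = torch.rand(3, 224, 224)
heatmap = saliency_map(model, img)  # high values = influential regions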

How did the system learn to look at the same eye scan with 'fresh eyes'? It ignored the familiar parts by digitally filling in the abnormalities already found with healthy tissue from around the abnormality. The results of all the assessment rounds are then added together, and that produces the final diagnosis. In the study, this approach improved the sensitivity of the detection of diabetic retinopathy and age-related macular degeneration by 11.2 ± 2.0% per image. The project shows that it is possible to have an AI system assess images more like a doctor while making transparent how it does so, which might make these systems easier to trust and thus to adopt in the clinic.
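In outline, the iterative procedure described above could look like the following sketch. The callables, the saliency threshold, and the pooling rule are illustrative assumptions for readability, not the paper's exact implementation:

import numpy as np

def iterative_assessment(classify, explain, inpaint, image, rounds=3):
    """Sketch of the iterative idea: diagnose, localize the visual
    evidence, 'erase' it with healthy-looking tissue, then look again
    with 'fresh eyes'. classify, explain, and inpaint are placeholder
    callables supplied by the caller."""
    evidence = np.zeros(image.shape[:2])   # accumulated suspicious regions
    scores = []                            # per-round diagnostic scores
    current = image.copy()
    for _ in range(rounds):
        scores.append(classify(current))   # diagnosis for this pass
        heatmap = explain(classify, current)
        evidence = np.maximum(evidence, heatmap)
        mask = heatmap > 0.5               # assumed saliency threshold
        if not mask.any():
            break                          # no new abnormalities found
        current = inpaint(current, mask)   # hide what was already found
    # Combine all rounds into the final diagnosis (assumed: mean score).
    return np.mean(scores, axis=0), evidence

The key design point is the inpainting step: by filling detected lesions with surrounding healthy-looking tissue, each new pass can no longer rely on the evidence it already used, so the model is forced to find what it previously ignored.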

Background: what is 'deep learning'?

Deep learning is a term used for systems that learn in a way similar to how our brain works. They consist of networks of electronic 'neurons', each of which learns to recognize one aspect of the desired image. The system then follows the principles of 'learning by doing' and 'practice makes perfect'. It is fed more and more images carrying relevant labels, in this case whether there is an anomaly in the retina and, if so, which disease it is. The system learns which characteristics belong to those diseases, and the more pictures it sees, the better it recognizes those characteristics in undiagnosed images. We do something similar with small children: we repeatedly hold up an object, say an apple, and say that it is an apple. After some time, we no longer have to say it, even though each apple is slightly different. Another major advantage of these systems is that they complete their training much faster than humans and can work 24 hours a day.
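As a concrete, if deliberately toy, illustration of 'learning by doing': a minimal supervised training loop in PyTorch. The tiny network, the random stand-in data, and the three-class labeling scheme are assumptions made purely for brevity:

import torch
from torch import nn

# Toy stand-ins: 100 random "retina images" with labels
# 0 = healthy, 1 = diabetic retinopathy, 2 = macular degeneration.
images = torch.rand(100, 3, 64, 64)
labels = torch.randint(0, 3, (100,))

# A deliberately small network of artificial 'neurons'.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 3),  # one output per possible diagnosis
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# 'Practice makes perfect': repeatedly show the labeled examples
# and nudge the weights to reduce the prediction error.
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()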

C. González-Gonzalo, B. Liefers, B. van Ginneken, C.I. Sánchez.
Iterative augmentation of visual evidence for weakly-supervised lesion localization in deep interpretability frameworks: application to color fundus images.
IEEE Transactions on Medical Imaging, 2020. doi: 10.1109/TMI.2020.2994463.
