We Could Soon Use AI to Detect Brain Tumors

A new paper in Biology Methods and Protocols, published by Oxford University Press, shows that scientists can train artificial intelligence (AI) models to distinguish brain tumors from healthy tissue. AI models can already find brain tumors in MRI images almost as well as a human radiologist.

Researchers have made sustained progress in applying artificial intelligence (AI) to medicine. AI is particularly promising in radiology, where waiting for technicians to process medical images can delay patient treatment. Convolutional neural networks are powerful tools that allow researchers to train AI models on large image datasets to recognize and classify images; in this way the networks "learn" to distinguish between pictures. The networks also have the capacity for "transfer learning": scientists can reuse a model trained on one task for a new, related project.
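The transfer-learning idea described above can be sketched in a few lines: a feature extractor trained on one task is frozen and reused, and only a small classifier head is retrained for the new task. The toy NumPy sketch below is purely illustrative and is not the authors' network; the random "pretrained" weights, the synthetic data, and the logistic-regression head are all assumptions standing in for a real pretrained convolutional model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained" feature extractor: a fixed random projection standing in
# for convolutional layers already trained on a source task (here, imagine
# camouflaged-animal detection). In real transfer learning these frozen
# weights would come from the source model, not random initialization.
W_frozen = rng.normal(size=(64, 16)) / 8.0   # frozen "pretrained" weights

def extract_features(x):
    """Frozen feature extractor: reused as-is, never retrained."""
    return np.maximum(x @ W_frozen, 0.0)     # ReLU activation

# Synthetic target-task data: 200 flattened "images" with binary labels
# (stand-ins for healthy vs. cancerous scans in the study's setting).
X = rng.normal(size=(200, 64))
w_true = rng.normal(size=16)
y = (extract_features(X) @ w_true > 0).astype(float)

# Transfer step: train ONLY a new logistic-regression head on top of the
# frozen features, leaving the feature extractor untouched.
feats = extract_features(X)
w_head = np.zeros(16)
lr = 0.5
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head)))   # sigmoid predictions
    w_head -= lr * feats.T @ (p - y) / len(y)     # gradient step, head only

accuracy = float(np.mean((feats @ w_head > 0) == y))
```

Because the frozen features already separate the classes, training the small head alone recovers high accuracy on the new task; this is the same economy that makes transfer learning attractive when labeled medical images are scarce.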

Although detecting camouflaged animals and classifying brain tumors involve very different sorts of images, the researchers behind this study believed there was a parallel between an animal hiding through natural camouflage and a group of cancerous cells blending in with the surrounding healthy tissue. The learned process of generalization - the grouping of different things under the same object identity - is essential to understanding how a network can detect camouflaged objects, and such training could be particularly useful for detecting tumors.

In this retrospective study of public-domain MRI data, the researchers investigated how neural network models can be trained on brain cancer imaging data while introducing a unique camouflaged-animal detection transfer learning step to improve the networks' tumor detection skills.

Using MRIs of cancerous and healthy control brains from public online repositories (including Kaggle, the Cancer Imaging Archive of the NIH National Cancer Institute, and the VA Boston Healthcare System), the researchers trained the networks to distinguish healthy from cancerous MRIs, to identify the area affected by cancer, and to classify the cancer appearance prototype (what type of cancer it looks like). The networks were almost perfect at detecting normal brain images, with only 1-2 false negatives, and at distinguishing between cancerous and healthy brains. The first network had an average accuracy of 85.99% at detecting brain cancer, while the other had an accuracy of 83.85%.

A key feature of the networks is the multitude of ways in which their decisions can be explained, allowing medical professionals and patients alike to place greater trust in the models. Deep models often lack transparency, and as the field grows the ability to explain how networks make their decisions becomes important. Following this research, the network can generate images that highlight the specific areas behind its tumor-positive or tumor-negative classification. This would allow radiologists to cross-validate their own decisions with those of the network and add confidence, almost like a second robotic radiologist who can point to the telltale area of an MRI that indicates a tumor. In the future, the researchers believe it will be important to focus on creating deep network models whose decisions can be described in intuitive ways, so that artificial intelligence can occupy a transparent supporting role in clinical environments.

While the networks struggled more to distinguish between types of brain cancer in all cases, it was still clear that the cancer types had distinct internal representations within the networks. Accuracy and clarity improved as the researchers trained the networks in camouflage detection, with transfer learning leading to a measurable increase in accuracy.

While the best-performing proposed model was about 6% less accurate than standard human detection, the research successfully demonstrates the quantitative improvement brought about by this training paradigm. The researchers believe that this paradigm, combined with the comprehensive application of explainability methods, promotes the transparency needed in future clinical AI research.

"Advances in AI permit more accurate detection and recognition of patterns," said the paper's lead author, Arash Yazdanbakhsh. "This consequently allows for better imaging-based diagnosis aid and screening, but also necessitates more explanation for how AI accomplishes the task. Aiming for AI explainability enhances communication between humans and AI in general. This is particularly important between medical professionals and AI designed for medical purposes. Clear and explainable models are better positioned to assist diagnosis, track disease progression, and monitor treatment."

Faris Rustom, Ezekiel Moroze, Pedram Parva, Haluk Ogmen, Arash Yazdanbakhsh.
Deep learning and transfer learning for brain tumor detection and classification.
Biology Methods and Protocols, 2024. doi: 10.1093/biomethods/bpae080
