We Could Soon Use AI to Detect Brain Tumors

A new paper in Biology Methods and Protocols, published by Oxford University Press, shows that scientists can train artificial intelligence (AI) models to distinguish brain tumors from healthy tissue. AI models can already find brain tumors in MRI images almost as well as a human radiologist.

Researchers have made sustained progress in AI for use in medicine. AI is particularly promising in radiology, where waiting for technicians to process medical images can delay patient treatment. Convolutional neural networks are powerful tools that allow researchers to train AI models on large image datasets to recognize and classify images; in this way, the networks can "learn" to distinguish between pictures. The networks also have the capacity for "transfer learning": scientists can reuse a model trained on one task for a new, related project.
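
To make the mechanics concrete, here is a minimal sketch of that transfer-learning pattern, assuming PyTorch and torchvision; the ResNet backbone and the two-class head are illustrative choices, not the architecture reported in the paper:

```python
import torch.nn as nn
from torchvision import models

# Start from a convolutional network pretrained on a large image dataset
# (ImageNet); its early layers already encode generic visual features.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze the pretrained feature extractor so its learned filters are reused.
for param in model.parameters():
    param.requires_grad = False

# Replace only the final classification layer for the new, related task
# (here: two illustrative classes, e.g. "healthy" vs. "tumor").
model.fc = nn.Linear(model.fc.in_features, 2)
```

Because only the small replacement layer is trained from scratch, the new task can often be learned from far fewer images than training a full network would require.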

Although detecting camouflaged animals and classifying brain tumors involve very different sorts of images, the researchers involved in this study believed that there was a parallel between an animal hiding through natural camouflage and a group of cancerous cells blending in with the surrounding healthy tissue. The learned process of generalization - the grouping of different things under the same object identity - is essential to understanding how a network can detect camouflaged objects. Such training could be particularly useful for detecting tumors.

In this retrospective study of public-domain MRI data, the researchers investigated how neural network models can be trained on brain cancer imaging data while introducing a unique camouflaged-animal-detection transfer-learning step to improve the networks' tumor detection skills.
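
A hedged sketch of what such a two-stage schedule can look like in code follows; the fine-tuning loop, the tiny random stand-in datasets, and all hyperparameters are placeholders rather than the study's actual configuration:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

def fine_tune(model, loader, epochs=1, lr=1e-4):
    """Generic fine-tuning loop; not the paper's exact training recipe."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss_fn(model(images), labels).backward()
            optimizer.step()
    return model

def dummy_loader(n=8):
    """Random images with binary labels, standing in for a real dataset."""
    return DataLoader(TensorDataset(torch.randn(n, 3, 224, 224),
                                    torch.randint(0, 2, (n,))), batch_size=4)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # binary output head

# Stage 1: adapt the pretrained network to camouflaged-animal detection.
model = fine_tune(model, dummy_loader())
# Stage 2: reuse those weights and fine-tune on brain MRI classification.
model = fine_tune(model, dummy_loader())
```

The key design choice is that the same weights flow through both stages, so whatever the network learns about separating hidden objects from their background is available when it turns to tumors.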

Using MRIs of cancerous and healthy control brains from public online repositories (including Kaggle, the Cancer Imaging Archive of the NIH National Cancer Institute, and the VA Boston Healthcare System), the researchers trained the networks to distinguish healthy from cancerous MRIs, to delineate the area affected by cancer, and to identify the cancer appearance prototype (the type of cancer an image most resembles). The researchers found that the networks were almost perfect at detecting normal brain images, with only 1-2 false negatives, and at distinguishing between cancerous and healthy brains. The first network had an average accuracy of 85.99% at detecting brain cancer, while the other had an accuracy of 83.85%.
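
For readers unfamiliar with how such figures are derived, the sketch below shows the standard way to compute accuracy and false-negative counts from a binary classifier's outputs, using scikit-learn; the label vectors are invented for illustration and are not the study's data:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical ground truth and predictions: 0 = healthy, 1 = cancerous.
y_true = [0, 0, 0, 1, 1, 1, 1, 1]
y_pred = [0, 0, 0, 1, 0, 1, 1, 1]

accuracy = accuracy_score(y_true, y_pred)

# ravel() unpacks the 2x2 confusion matrix; a false negative is a
# cancerous scan the model labeled healthy - the costliest error here.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"accuracy = {accuracy:.2%}, false negatives = {fn}")
```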

A key feature of the networks is the multitude of ways in which their decisions can be explained, allowing for increased trust in the models from medical professionals and patients alike. Deep models often lack transparency, and as the field grows the ability to explain how networks make their decisions becomes important. Following this research, the networks can generate images that highlight the specific areas that factored into their tumor-positive or tumor-negative classifications. This would allow radiologists to cross-validate their own decisions with those of the network and add confidence, almost like a second robotic radiologist who can show the telltale area of an MRI that indicates a tumor. In the future, the researchers believe it will be important to focus on creating deep network models whose decisions can be described in intuitive ways, so artificial intelligence can occupy a transparent supporting role in clinical environments.
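
One widely used way to produce such explanatory images is a gradient-based saliency map, sketched below under stated assumptions: the pretrained ResNet stands in for the trained tumor classifier, and the random tensor stands in for a real MRI slice. The paper's own explainability methods are not reproduced here.

```python
import torch
from torchvision import models

# Stand-in classifier; in practice this would be the trained tumor network.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# One input "image" (random here), with gradients tracked back to pixels.
image = torch.randn(1, 3, 224, 224, requires_grad=True)

score = model(image).max()  # score of the most likely class
score.backward()            # d(score)/d(pixel) for every input pixel

# Pixels with large gradient magnitude influenced the decision most; the
# resulting heat map can be overlaid on the scan for a radiologist to review.
saliency = image.grad.abs().max(dim=1)[0].squeeze()
print(saliency.shape)  # torch.Size([224, 224])
```

Overlaying such a map on the original scan is what lets the "second robotic radiologist" point to the telltale region rather than merely report a verdict.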

While the networks struggled more to distinguish between types of brain cancer in all cases, it was still clear that the cancer types had distinct internal representations within the networks. Both accuracy and the clarity of those representations improved as the researchers trained the networks on camouflage detection; the transfer-learning step led to an increase in accuracy.

While the best-performing proposed model was about 6% less accurate than standard human detection, the research successfully demonstrates the quantitative improvement brought on by this training paradigm. The researchers believe that this paradigm, combined with the comprehensive application of explainability methods, promotes necessary transparency in future clinical AI research.

"Advances in AI permit more accurate detection and recognition of patterns," said the paper's lead author, Arash Yazdanbakhsh. "This consequently allows for better imaging-based diagnosis aid and screening, but also necessitate more explanation for how AI accomplishes the task. Aiming for AI explainability enhances communication between humans and AI in general. This is particularly important between medical professionals and AI designed for medical purposes. Clear and explainable models are better positioned to assist diagnosis, track disease progression, and monitor treatment."

Faris Rustom, Ezekiel Moroze, Pedram Parva, Haluk Ogmen, Arash Yazdanbakhsh. Deep learning and transfer learning for brain tumor detection and classification. Biology Methods and Protocols, 2024. doi: 10.1093/biomethods/bpae080
