Deep Learning Model Helps Detect Lung Tumors on CT

A new deep learning model shows promise in detecting and segmenting lung tumors, according to a study published in Radiology, a journal of the Radiological Society of North America (RSNA). The findings of the study could have important implications for lung cancer treatment.

According to the American Cancer Society, lung cancer is the second most common cancer among men and women in the U.S. and the leading cause of cancer death.

Accurate detection and segmentation of lung tumors on CT scans are critical for monitoring cancer progression, evaluating treatment responses, and planning radiation therapy. Currently, experienced clinicians manually identify and segment lung tumors on medical images, a labor-intensive process that is subject to physician variability.

While artificial intelligence deep learning methods have been applied to lung tumor detection and segmentation, prior studies have been limited by small datasets, reliance on manual inputs, and a focus on segmenting single lung tumors. These limitations highlight the need for models capable of robust, automated tumor delineation across diverse clinical settings.

In this study, a unique, large-scale dataset consisting of routinely collected pre-radiation treatment CT simulation scans and their associated clinical 3D segmentations was used to develop a near-expert-level lung tumor detection and segmentation model. The primary aim was to develop a model that accurately identifies and segments lung tumors on CT scans from different medical centers.

"To the best of our knowledge, our training dataset is the largest collection of CT scans and clinical tumor segmentations reported in the literature for constructing a lung tumor detection and segmentation model," said the study’s lead author, Mehr Kashyap, M.D., resident physician in the Department of Medicine at Stanford University School of Medicine in Stanford, California.

For the retrospective study, an ensemble 3D U-Net deep learning model was trained for lung tumor detection and segmentation using 1,504 CT scans containing 1,828 segmented lung tumors. The model was then tested on 150 CT scans, and model-predicted tumor volumes were compared with physician-delineated volumes. Performance metrics included sensitivity, specificity, false positive rate, and the Dice similarity coefficient (DSC). The DSC quantifies the overlap between two segmentations: a value of 0 represents no overlap, while a value of 1 represents perfect overlap. The model's segmentations were compared with those of three physicians to generate a model-physician DSC value for each pairing.
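
To make the DSC concrete, the minimal sketch below computes it for two binary segmentation masks. The NumPy implementation, the toy 2D masks, and the function name are illustrative assumptions for this article, not code from the study, which works with 3D CT volumes.

```python
import numpy as np

def dice_similarity_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient: 2 * |A intersect B| / (|A| + |B|).

    Returns 0.0 for no overlap and 1.0 for perfect overlap.
    """
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / total

# Toy 2D example; real evaluation would use 3D tumor segmentation volumes.
model_mask = np.array([[0, 1, 1],
                       [0, 1, 0],
                       [0, 0, 0]])
physician_mask = np.array([[0, 1, 1],
                           [0, 0, 0],
                           [0, 0, 0]])
print(dice_similarity_coefficient(model_mask, physician_mask))  # 0.8
```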

The model achieved 92% sensitivity (92/100) and 82% specificity (41/50) in detecting lung tumors on the combined 150-CT scan test set.
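
For context, these figures follow from the standard definitions: sensitivity = true positives / (true positives + false negatives) = 92/100 = 92%, and specificity = true negatives / (true negatives + false positives) = 41/50 = 82%. Reading the denominators as 100 test scans containing tumors and 50 without is an inference from the reported counts rather than a detail stated here.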

For a subset of 100 CT scans with a single lung tumor each, the median model-physician and physician-physician segmentation DSCs were 0.77 and 0.80, respectively. Segmentation time was shorter for the model than for physicians.

Dr. Kashyap believes that the use of a 3D U-Net architecture in developing the model provides an advantage over approaches using a 2D architecture.

"By capturing rich interslice information, our 3D model is theoretically capable of identifying smaller lesions that 2D models may be unable to distinguish from structures such as blood vessels and airways," he said.

One limitation of the model was its tendency to underestimate tumor volume, resulting in poorer performance on very large tumors. Because of this, Dr. Kashyap cautions that the model should be implemented in a physician-supervised workflow, allowing clinicians to identify and discard incorrectly identified lesions and lower-quality segmentations.

The researchers suggest that future research should focus on applying the model to estimate total lung tumor burden and evaluate treatment response over time, comparing it to existing methods. They also recommend assessing the model’s ability to predict clinical outcomes on the basis of estimated tumor burden, particularly when combined with other prognostic models using diverse clinical data.

"Our study represents an important step toward automating lung tumor identification and segmentation," Dr. Kashyap said. "This approach could have wide-ranging implications, including its incorporation in automated treatment planning, tumor burden quantification, treatment response assessment and other radiomic applications."

Kashyap M, Wang X, Panjwani N, Hasan M, Zhang Q, Huang C, Bush K, Chin A, Vitzthum LK, Dong P, Zaky S, Loo BW, Diehn M, Xing L, Li R, Gensheimer MF.
Automated Deep Learning-Based Detection and Segmentation of Lung Tumors at CT.
Radiology. 2025 Jan;314(1):e233029. doi: 10.1148/radiol.233029
