New Technique Makes Brain Scans Better

People who suffer a stroke often undergo a brain scan at the hospital, allowing doctors to determine the location and extent of the damage. Researchers who study the effects of strokes would love to be able to analyze these images, but the resolution is often too low for many analyses. To help scientists take advantage of this untapped wealth of data from hospital scans, a team of MIT researchers, working with doctors at Massachusetts General Hospital and many other institutions, has devised a way to boost the quality of these scans so they can be used for large-scale studies of how strokes affect different people and how they respond to treatment.

"These images are quite unique because they are acquired in routine clinical practice when a patient comes in with a stroke," says Polina Golland, an MIT professor of electrical engineering and computer science. "You couldn't stage a study like that."

Using these scans, researchers could study how genetic factors influence stroke survival or how people respond to different treatments. They could also use this approach to study other disorders such as Alzheimer's disease.

Golland is the senior author of the paper, which will be presented at the Information Processing in Medical Imaging conference during the week of June 25. The paper's lead author is Adrian Dalca, a postdoc in MIT's Computer Science and Artificial Intelligence Laboratory. Other authors are Katie Bouman, an MIT graduate student; William Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering at MIT; Natalia Rost, director of the acute stroke service at MGH; and Mert Sabuncu, an assistant professor of electrical and computer engineering at Cornell University.

Filling in data
Scanning the brain with magnetic resonance imaging (MRI) produces many 2-D "slices" that can be combined to form a 3-D representation of the brain.
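As a rough sketch (not the researchers' code), reconstructing a volume from slices amounts to stacking 2-D arrays along a third axis; the array sizes below are illustrative assumptions:

```python
import numpy as np

# Stand-in 2-D slices (random arrays in place of real MRI data).
n_slices, height, width = 30, 256, 256
slices = [np.random.rand(height, width) for _ in range(n_slices)]

# Stack the slices along a new axis to form a 3-D brain volume.
volume = np.stack(slices, axis=0)
print(volume.shape)  # (30, 256, 256)
```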

For clinical scans of patients who have had a stroke, images are taken rapidly due to limited scanning time. As a result, the scans are very sparse, meaning that the image slices are taken about 5-7 millimeters apart. (The in-slice resolution is 1 millimeter.)

For scientific studies, researchers usually obtain much higher-resolution images, with slices only 1 millimeter apart, which requires keeping subjects in the scanner for a much longer period of time. Scientists have developed specialized computer algorithms to analyze these images, but these algorithms don't work well on the much more plentiful but lower-quality patient scans taken in hospitals.
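A back-of-the-envelope comparison shows how much data the clinical scans leave out; the 150-millimeter coverage used below is an assumed figure for illustration, not from the paper:

```python
# Assumed ~150 mm of brain coverage along the slice direction (illustrative).
field_of_view_mm = 150

clinical_spacing_mm = 6   # clinical scans: roughly 5-7 mm between slices
research_spacing_mm = 1   # research scans: about 1 mm between slices

clinical_slices = field_of_view_mm // clinical_spacing_mm   # ~25 slices
research_slices = field_of_view_mm // research_spacing_mm   # ~150 slices
print(clinical_slices, research_slices)
```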

The MIT researchers, along with their collaborators at MGH and other hospitals, were interested in taking advantage of the vast numbers of patient scans, which would allow them to learn much more than can be gleaned from smaller studies that produce higher-quality scans.

"These research studies are very small because you need volunteers, but hospitals have hundreds of thousands of images. Our motivation was to take advantage of this huge set of data," Dalca says.

The new approach essentially fills in the data that is missing from each patient scan. It does so by drawing on information from the entire collection of scans and using it to recreate the anatomical features that are missing from any individual scan.

"The key idea is to generate an image that is anatomically plausible, and to an algorithm looks like one of those research scans, and is completely consistent with clinical images that were acquired," Golland says. "Once you have that, you can apply every state-of-the-art algorithm that was developed for the beautiful research images and run the same analysis, and get the results as if these were the research images."

Once these research-quality images are generated, researchers can then run a set of algorithms designed to help with analyzing anatomical features. These include the alignment of slices and a process called skull-stripping that eliminates everything but the brain from the images.
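As a simplified illustration of this kind of preprocessing, the sketch below fakes a skull-stripping step by thresholding the volume and keeping the largest connected component; real research pipelines use far more sophisticated tools, and none of the values here come from the paper:

```python
import numpy as np
from scipy import ndimage

# Crude skull-stripping stand-in on a synthetic volume.
volume = np.random.rand(150, 256, 256)

foreground = volume > 0.5                        # crude intensity threshold
labels, n = ndimage.label(foreground)            # connected components
sizes = ndimage.sum(foreground, labels, range(1, n + 1))
brain_mask = labels == (np.argmax(sizes) + 1)    # keep the largest component

stripped = np.where(brain_mask, volume, 0.0)     # zero out everything else
```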

Throughout this process, the algorithm keeps track of which pixels came from the original scans and which were filled in afterward, so that analyses done later, such as measuring the extent of brain damage, can be performed only on information from the original scans.

"In a sense, this is a scaffold that allows us to bring the image into the collection as if it were a high-resolution image, and then make measurements only on the pixels where we have the information," Golland says.

Higher quality
Now that the MIT team has developed this technique for enhancing low-quality images, they plan to apply it to a large set of stroke images obtained by the MGH-led consortium, which includes about 4,000 scans from 12 hospitals.

"Understanding spatial patterns of the damage that is done to the white matter promises to help us understand in more detail how the disease interacts with cognitive abilities of the person, with their ability to recover from stroke, and so on," Golland says.

The researchers also hope to apply this technique to scans of patients with other brain disorders.

"It opens up lots of interesting directions," Golland says. "Images acquired in routine medical practice can give anatomical insight, because we lift them up to that quality that the algorithms can analyze."

The research was funded by the National Institute of Neurological Disorders and Stroke and the National Institute of Biomedical Imaging and Bioengineering.

Dalca AV, Bouman KL, Freeman WT, Rost NS, Sabuncu MR, Golland P. Population Based Image Imputation. In: Information Processing in Medical Imaging (IPMI) 2017, Jun 25, pp. 659-671. Springer, Cham.
