New Technique Makes Brain Scans Better

People who suffer a stroke often undergo a brain scan at the hospital, allowing doctors to determine the location and extent of the damage. Researchers who study the effects of strokes would love to be able to analyze these images, but the resolution is often too low for many analyses. To help scientists take advantage of this untapped wealth of data from hospital scans, a team of MIT researchers, working with doctors at Massachusetts General Hospital and many other institutions, has devised a way to boost the quality of these scans so they can be used for large-scale studies of how strokes affect different people and how they respond to treatment.

"These images are quite unique because they are acquired in routine clinical practice when a patient comes in with a stroke," says Polina Golland, an MIT professor of electrical engineering and computer science. "You couldn't stage a study like that."

Using these scans, researchers could study how genetic factors influence stroke survival or how people respond to different treatments. They could also use this approach to study other disorders such as Alzheimer's disease.

Golland is the senior author of the paper, which will be presented at the Information Processing in Medical Imaging conference during the week of June 25. The paper's lead author is Adrian Dalca, a postdoc in MIT's Computer Science and Artificial Intelligence Laboratory. Other authors are Katie Bouman, an MIT graduate student; William Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering at MIT; Natalia Rost, director of the acute stroke service at MGH; and Mert Sabuncu, an assistant professor of electrical and computer engineering at Cornell University.

Filling in data
Scanning the brain with magnetic resonance imaging (MRI) produces many 2-D "slices" that can be combined to form a 3-D representation of the brain.
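To make the geometry concrete, here is a minimal sketch (not code from the study; the array shapes are invented for illustration) of how a series of 2-D slices becomes a 3-D volume:

```python
import numpy as np

# Illustrative only: shapes are invented for the example.
# An MRI series is a stack of 2-D slices; stacking them along a
# third axis produces a 3-D volume of the brain.
slices = [np.random.rand(256, 256) for _ in range(26)]  # 26 axial slices
volume = np.stack(slices, axis=-1)
print(volume.shape)  # (256, 256, 26)
```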

For clinical scans of patients who have had a stroke, images are taken rapidly due to limited scanning time. As a result, the scans are very sparse, meaning that the image slices are taken about 5-7 millimeters apart. (The in-slice resolution is 1 millimeter.)

For scientific studies, researchers usually obtain much higher-resolution images, with slices only 1 millimeter apart, which requires keeping subjects in the scanner for a much longer period of time. Scientists have developed specialized computer algorithms to analyze these images, but these algorithms don't work well on the much more plentiful but lower-quality patient scans taken in hospitals.
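To see why the gap matters, consider a naive baseline (not the authors' method): simply interpolating the clinical volume along the slice axis. This matches the grid of a research scan but cannot restore anatomy that was never imaged. A hypothetical example, assuming a 6-millimeter slice spacing:

```python
import numpy as np
from scipy import ndimage

# Hypothetical clinical volume: 1 mm in-plane resolution, 6 mm slice spacing.
clinical = np.random.rand(256, 256, 26)

# Linear interpolation along the slice axis to ~1 mm spacing restores the
# grid size of a research scan, but not the missing anatomical detail.
upsampled = ndimage.zoom(clinical, zoom=(1, 1, 6), order=1)
print(clinical.shape, "->", upsampled.shape)  # (256, 256, 26) -> (256, 256, 156)
```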

The MIT researchers, along with their collaborators at MGH and other hospitals, were interested in taking advantage of the vast numbers of patient scans, which would allow them to learn much more than can be gleaned from smaller studies that produce higher-quality scans.

"These research studies are very small because you need volunteers, but hospitals have hundreds of thousands of images. Our motivation was to take advantage of this huge set of data," Dalca says.

The new approach involves essentially filling in the data that is missing from each patient scan. This is done by drawing on information from the entire collection of scans to recreate the anatomical features missing from each individual scan.

"The key idea is to generate an image that is anatomically plausible, and to an algorithm looks like one of those research scans, and is completely consistent with clinical images that were acquired," Golland says. "Once you have that, you can apply every state-of-the-art algorithm that was developed for the beautiful research images and run the same analysis, and get the results as if these were the research images."

Once these research-quality images are generated, researchers can then run a set of algorithms designed to help with analyzing anatomical features. These include the alignment of slices and a process called skull-stripping that eliminates everything but the brain from the images.
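In practice, skull-stripping is done with dedicated neuroimaging tools; purely as an illustration of the concept, a crude version can be sketched with thresholding and morphological operations:

```python
import numpy as np
from scipy import ndimage

def crude_skull_strip(volume):
    """Crude illustration of skull-stripping (real pipelines use dedicated
    tools): threshold intensities, keep the largest connected component,
    and fill holes to get a rough brain mask."""
    mask = volume > volume.mean()                    # naive intensity cutoff
    labels, n = ndimage.label(mask)                  # connected components
    if n == 0:
        return np.zeros_like(volume)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    mask = labels == (np.argmax(sizes) + 1)          # keep largest component
    mask = ndimage.binary_fill_holes(mask)
    return volume * mask
```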

Throughout this process, the algorithm keeps track of which pixels came from the original scans and which were filled in afterward, so that analyses done later, such as measuring the extent of brain damage, can be performed only on information from the original scans.

"In a sense, this is a scaffold that allows us to bring the image into the collection as if it were a high-resolution image, and then make measurements only on the pixels where we have the information," Golland says.

Higher quality
Now that the MIT team has developed this technique for enhancing low-quality images, they plan to apply it to a large set of stroke images obtained by an MGH-led consortium: about 4,000 scans from 12 hospitals.

"Understanding spatial patterns of the damage that is done to the white matter promises to help us understand in more detail how the disease interacts with cognitive abilities of the person, with their ability to recover from stroke, and so on," Golland says.

The researchers also hope to apply this technique to scans of patients with other brain disorders.

"It opens up lots of interesting directions," Golland says. "Images acquired in routine medical practice can give anatomical insight, because we lift them up to that quality that the algorithms can analyze."

The research was funded by the National Institute of Neurological Disorders and Stroke and the National Institute of Biomedical Imaging and Bioengineering.

Dalca AV, Bouman KL, Freeman WT, Rost NS, Sabuncu MR, Golland P.
Population Based Image Imputation.
In International Conference on Information Processing in Medical Imaging 2017 Jun 25 (pp. 659-671). Springer, Cham.
