DMCIE: Diffusion model with concatenation of inputs and errors for enhanced brain tumor segmentation in MRI images
📌 Goal:
Enhance brain tumor segmentation accuracy by combining multimodal MRI inputs with diffusion-reconstructed error maps, enabling precise tumor region correction and improved clinical reliability.
🧠 Domain: Medical Imaging & Generative AI
🎯 Task: Brain Tumor Segmentation
📂 Dataset: BraTS 2020 – Multimodal MRI Dataset (T1, T1ce, T2, FLAIR + Ground Truth Masks)
The Goal:
Traditional segmentation models are limited by their inability to recover from their own prediction errors. DMCIE introduces a novel pipeline in which a diffusion model reconstructs error maps between predicted and ground-truth tumor masks. These refined error maps, concatenated with the original MRI modalities, guide a secondary segmentation network and yield superior tumor delineation. The aim is to improve segmentation accuracy, especially in challenging tumor regions with irregular shapes or weak boundaries.
The Approach:
DMCIE employs a two-stage framework: a 3D U-Net first predicts an initial tumor mask from the multimodal MRI inputs (T1, T1ce, T2, FLAIR), and an error map highlighting discrepancies with the ground truth is generated. This error map, concatenated with the original inputs, is refined by a diffusion model that iteratively corrects misclassified and boundary regions.
Methodology & Process
📌 Stage 1 — Initial Segmentation
A 3D U-Net predicts preliminary tumor masks from multimodal MRI inputs.
📌 Stage 2 — Error Reconstruction via Diffusion
- Compute the error map between the U-Net output and the ground truth
- Concatenate the MRI modalities with the error map and pass the result through a denoising diffusion model
- The model focuses on boundary refinement and error-prone regions rather than the entire mask
- Final segmentation = initial mask + reconstructed error map
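The Stage 2 data flow can be sketched in a few lines. This is a minimal NumPy illustration of the input/output plumbing only (the diffusion network itself is omitted); the array shapes and helper names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def make_error_map(pred_mask, gt_mask):
    # Signed error: +1 where tumor was missed, -1 where it was falsely predicted.
    return gt_mask.astype(np.int8) - pred_mask.astype(np.int8)

def build_diffusion_input(mri_modalities, error_map):
    # Concatenate the 4 MRI channels with the error map along the channel axis.
    return np.concatenate(
        [mri_modalities, error_map[None].astype(np.float32)], axis=0
    )

def apply_correction(pred_mask, reconstructed_error):
    # Final segmentation = initial mask + reconstructed error, clipped to {0, 1}.
    corrected = pred_mask.astype(np.int8) + reconstructed_error
    return np.clip(corrected, 0, 1).astype(np.uint8)

# Toy volumes: 4 modalities (T1, T1ce, T2, FLAIR), each 8x8x8 voxels.
mri = np.random.rand(4, 8, 8, 8).astype(np.float32)
gt = np.zeros((8, 8, 8), dtype=np.uint8)
gt[2:6, 2:6, 2:6] = 1
pred = np.zeros_like(gt)
pred[3:6, 2:6, 2:6] = 1  # initial mask misses one slab of the tumor

err = make_error_map(pred, gt)
x = build_diffusion_input(mri, err)
print(x.shape)  # (5, 8, 8, 8)

# If the diffusion model reconstructed the error perfectly, the
# correction recovers the ground truth exactly in this toy case.
final = apply_correction(pred, err)
print((final == gt).all())  # True
```

In the real pipeline the reconstructed error comes from the denoising network rather than from the ground truth; the toy case above just demonstrates that the additive correction rule recovers the target mask when the error map is exact.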
📌 Training & Evaluation
Metrics: Dice Score, HD95
Fair comparison with nnU-Net, CorrDiff, MedSegDiff, SF-Diff, BerDiff under identical conditions
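Of the two reported metrics, the Dice score is the simpler to state: twice the overlap between prediction and ground truth, divided by the total mask size (HD95 additionally needs distance transforms, e.g. via SciPy). A minimal NumPy sketch, with illustrative toy masks:

```python
import numpy as np

def dice_score(pred, gt):
    # Dice = 2*|A ∩ B| / (|A| + |B|); defined as 1.0 for two empty masks.
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

# Toy 2D masks: ground truth has 16 positive pixels, prediction has 12,
# all of which lie inside the ground-truth region.
gt = np.zeros((8, 8), dtype=np.uint8)
gt[2:6, 2:6] = 1
pred = np.zeros_like(gt)
pred[3:6, 2:6] = 1

print(round(dice_score(pred, gt), 4))  # 2*12 / (12 + 16) = 0.8571
```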
The Result:
The proposed DMCIE method was evaluated on the BraTS2020 dataset. Compared to the initial 3D U-Net segmentation, DMCIE improves the Dice score by +5.18% and reduces HD95 by 2.07 mm. It shows gains in boundary accuracy across diverse tumor shapes and maintains spatial coherence even in fragmented cases.
| Model | Dice ↑ | HD95 ↓ |
|---|---|---|
| 3D U-Net | 88.28% | 8.01 mm |
| nnU-Net | 87.66% | 9.33 mm |
| CorrDiff | 90.68% | 7.88 mm |
| SF-Diff | 92.03% | 6.82 mm |
| MedSegDiff | 91.32% | 8.68 mm |
| BerDiff | 89.98% | 8.56 mm |
| DMCIE (Proposed) | 93.46% | 5.94 mm |
Conclusion:
DMCIE introduces an effective error-guided correction mechanism for binary brain tumor segmentation, using multimodal MRI data to enhance segmentation accuracy. By modeling and correcting segmentation errors during diffusion, DMCIE achieves anatomically precise and well-localized tumor segmentation.
View the complete Manuscript here: