This project presents a deep learning-based solution for brain tumor segmentation using multimodal MRI scans and U-Net architecture.

Multimodal Brain Tumor Segmentation using Deep Learning

📌 Overview

This repository presents a deep learning-based approach to brain tumor segmentation using multimodal MRI scans. We implement and compare two fusion strategies—Input-Level Fusion and Feature-Input Fusion—based on a U-Net architecture. The project utilizes the BraTS 2020 dataset for training and evaluation.

Note: This repository contains multiple implementation files from ongoing, long-term experimentation with different approaches to improving the segmentation results.

🧠 Motivation

Brain tumor segmentation is essential for treatment planning and clinical decision-making. Manual annotation is labor-intensive and subjective, motivating the need for automated methods. This project addresses the challenges of tumor heterogeneity and modality differences by leveraging multimodal data fusion.

📁 Dataset

  • Source: BraTS 2020 Challenge

  • Modalities: T1, T1ce, T2, FLAIR

  • Classes:

    • 0: Background
    • 1: Necrotic Core
    • 2: Edema
    • 3: Enhancing Tumor

⚙️ Preprocessing

  • Skull Stripping using Otsu’s method and morphological operations
  • Intensity Normalization (Min-Max scaling)
  • Data Augmentation (rotation, flipping)
  • Slice-wise preparation for 2D segmentation
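
The skull-stripping and normalization steps above can be sketched roughly as follows. This is a minimal NumPy illustration, not the repository's actual code: the function names are hypothetical, Otsu's threshold is reimplemented inline for self-containment, and the morphological clean-up step is omitted.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Otsu's method: pick the threshold that maximizes between-class variance."""
    hist, bin_edges = np.histogram(image.ravel(), bins=nbins)
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    hist = hist.astype(float)
    weight1 = np.cumsum(hist)                      # pixels at or below each bin
    weight2 = np.cumsum(hist[::-1])[::-1]          # pixels at or above each bin
    mean1 = np.cumsum(hist * bin_centers) / np.maximum(weight1, 1e-12)
    mean2 = (np.cumsum((hist * bin_centers)[::-1])
             / np.maximum(weight2[::-1], 1e-12))[::-1]
    variance12 = weight1[:-1] * weight2[1:] * (mean1[:-1] - mean2[1:]) ** 2
    return bin_centers[:-1][np.argmax(variance12)]

def preprocess_slice(slice_2d):
    """Skull-strip via an Otsu foreground mask, then min-max normalize to [0, 1]."""
    mask = slice_2d > otsu_threshold(slice_2d)
    stripped = slice_2d * mask
    lo, hi = stripped.min(), stripped.max()
    return (stripped - lo) / (hi - lo + 1e-8)      # eps guards all-zero slices
```

In practice the Otsu mask would be refined with morphological opening/closing (e.g., `scipy.ndimage.binary_opening`) before masking, as the bullet list above indicates.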

🏗️ Model Architecture

  • Base: U-Net
  • Fusion Strategies:
    • Input-Level Fusion: Stack all modalities as 4-channel input
    • Feature-Input Fusion: Extract features separately and merge at bottleneck
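
The key difference between the two strategies is where the four modalities are combined. The sketch below illustrates this with NumPy shapes only, using a toy mean-pooling function as a stand-in for the U-Net encoder; it is not the repository's model code.

```python
import numpy as np

H, W = 128, 128
# Four synthetic single-channel MRI slices (T1, T1ce, T2, FLAIR)
t1, t1ce, t2, flair = (np.random.rand(H, W) for _ in range(4))

# Input-level fusion: stack all modalities as one 4-channel input tensor,
# which a single U-Net then consumes end to end.
input_fused = np.stack([t1, t1ce, t2, flair], axis=-1)   # shape (H, W, 4)

def toy_encoder(x, depth=4):
    """Stand-in for a U-Net encoder: halve spatial dims `depth` times via 2x2 mean pooling."""
    for _ in range(depth):
        h, w = x.shape
        x = x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return x

# Feature-input fusion: encode each modality separately,
# then merge the encoded features at the bottleneck.
bottleneck = np.stack([toy_encoder(m) for m in (t1, t1ce, t2, flair)], axis=-1)
```

With four pooling stages, a 128x128 slice reduces to an 8x8 bottleneck per modality; the decoder would then operate on the concatenated bottleneck features.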

🧪 Training Details

  • Loss Function: Categorical Crossentropy + Dice Coefficient
  • Optimizer: Adam (LR=0.0001)
  • Epochs: 20
  • Batch Size: 8
  • Metrics: Dice Score, Precision, Sensitivity, Specificity
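
The combined objective (categorical crossentropy plus a Dice term) can be written as a soft Dice coefficient turned into a loss. The sketch below is an illustrative NumPy formulation under those assumptions, not the exact Keras loss used in the notebook.

```python
import numpy as np

def dice_coefficient(y_true, y_pred, eps=1e-6):
    """Soft Dice over one-hot ground truth and predicted class probabilities."""
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

def combined_loss(y_true, y_pred, eps=1e-6):
    """Categorical crossentropy plus Dice loss (1 - Dice coefficient)."""
    cce = -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=-1))
    return cce + (1.0 - dice_coefficient(y_true, y_pred))
```

A perfect prediction drives both terms toward zero, while the Dice term keeps the loss sensitive to small foreground classes (necrotic core, enhancing tumor) that plain crossentropy under-weights.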

📊 Results

  • Quantitative metrics and qualitative results provided in the notebook

  • Visualizations include Dice score comparisons and segmentation overlays

    Final Scores:
      - Training Accuracy: 0.99, Validation Accuracy: 0.99
      - Training Loss: 0.042, Validation Loss: 0.038
      - Training Dice Coefficient: 0.401, Validation Dice Coefficient: 0.407

🔍 Future Work

  • Incorporate transformer-based models (e.g., Swin UNETR)
  • Evaluate performance on other datasets (BraTS 2021, TCIA)
  • Integrate clinical feedback for further validation

🙌 Acknowledgments

  • BraTS Challenge organizers for dataset access
  • Keras & TensorFlow for model development
  • Research inspiration from literature on multimodal fusion in medical imaging
