
Commit 0e19ca1: "update readme" (1 parent: fd22d5b)

File tree: 1 file changed (+107, -144 lines)

readme.md (107 additions, 144 deletions)

@@ -1,144 +1,107 @@
# Welcome to the new nnU-Net!

Click [here](https://github.com/MIC-DKFZ/nnUNet/tree/nnunetv1) if you were looking for the old one instead.

Coming from V1? Check out the [TLDR Migration Guide](documentation/tldr_migration_guide_from_v1.md). Reading the rest of the documentation is still strongly recommended ;-)

## **2024-04-18 UPDATE: New residual encoder UNet presets available!**
Residual encoder UNet presets substantially improve segmentation performance.
They ship for a variety of GPU memory targets. It's all awesome stuff, promised!
Read more :point_right: [here](documentation/resenc_presets.md) :point_left:

Also check out our [new paper](https://arxiv.org/pdf/2404.09556.pdf) on systematically benchmarking recent developments in medical image segmentation. You might be surprised!

# What is nnU-Net?
Image datasets are enormously diverse: image dimensionality (2D, 3D), modalities/input channels (RGB images, CT, MRI, microscopy, ...), image sizes, voxel sizes, class ratios, target structure properties and more vary substantially between datasets. Traditionally, a new problem requires a tailored solution to be manually designed and optimized - a process that is error-prone, does not scale, and whose success is overwhelmingly determined by the skill of the experimenter. Even for experts, this process is anything but simple: not only are there many design choices and data properties to consider, but they are also tightly interconnected, rendering reliable manual pipeline optimization all but impossible!

![nnU-Net overview](documentation/assets/nnU-Net_overview.png)

**nnU-Net is a semantic segmentation method that automatically adapts to a given dataset. It analyzes the provided training cases and automatically configures a matching U-Net-based segmentation pipeline. No expertise required on your end! You can simply train the models and use them for your application.**

Upon release, nnU-Net was evaluated on 23 datasets belonging to competitions from the biomedical domain. Despite competing with handcrafted solutions for each respective dataset, nnU-Net's fully automated pipeline scored several first places on open leaderboards! Since then, nnU-Net has stood the test of time: it continues to be used as a baseline and method development framework ([9 out of 10 challenge winners at MICCAI 2020](https://arxiv.org/abs/2101.00232) and 5 out of 7 at MICCAI 2021 built their methods on top of nnU-Net, and [we won AMOS2022 with nnU-Net](https://amos22.grand-challenge.org/final-ranking/))!

Please cite the [following paper](https://www.nature.com/articles/s41592-020-01008-z) when using nnU-Net:

    Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2021). nnU-Net: a self-configuring
    method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211.

## What can nnU-Net do for you?
If you are a **domain scientist** (biologist, radiologist, ...) looking to analyze your own images, nnU-Net provides an out-of-the-box solution that is all but guaranteed to deliver excellent results on your individual dataset. Simply convert your dataset into the nnU-Net format and enjoy the power of AI - no expertise required!
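For orientation, the expected raw-data layout can be sketched as follows. This is a minimal sketch assuming the nnU-Net v2 conventions; the dataset name, channel name, label names and case identifier are illustrative placeholders - consult the dataset format documentation for the authoritative specification.

```shell
# Sketch of the nnU-Net v2 raw-data layout (all names are placeholders).
base="${nnUNet_raw:-./nnUNet_raw}/Dataset001_Example"
mkdir -p "$base/imagesTr" "$base/labelsTr" "$base/imagesTs"

# Training images carry a 4-digit channel suffix; labels share the case ID.
touch "$base/imagesTr/case_0001_0000.nii.gz" "$base/labelsTr/case_0001.nii.gz"

# Minimal dataset.json describing channels, labels and file ending.
cat > "$base/dataset.json" <<'EOF'
{
  "channel_names": {"0": "MRI"},
  "labels": {"background": 0, "tumor": 1},
  "numTraining": 1,
  "file_ending": ".nii.gz"
}
EOF
```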
If you are an **AI researcher** developing segmentation methods, nnU-Net:
- offers a fantastic out-of-the-box baseline algorithm to compete against
- can act as a method development framework to test your contribution on a large number of datasets without having to tune individual pipelines (for example when evaluating a new loss function)
- provides a strong starting point for further dataset-specific optimizations. This is particularly useful when competing in segmentation challenges
- provides a new perspective on the design of segmentation methods: maybe you can find better connections between dataset properties and best-fitting segmentation pipelines?

## What is the scope of nnU-Net?
nnU-Net is built for semantic segmentation. It can handle 2D and 3D images with arbitrary input modalities/channels. It understands voxel spacings and anisotropies and is robust even when classes are highly imbalanced.

nnU-Net relies on supervised learning, which means that you need to provide training cases for your application. The number of required training cases varies heavily with the complexity of the segmentation problem; no one-size-fits-all number can be given here! nnU-Net does not require more training cases than other solutions - maybe even fewer, thanks to our extensive use of data augmentation.

nnU-Net expects to process entire images at once during preprocessing and postprocessing, so it cannot handle enormous images. As a reference: we tested images from 40x40x40 voxels all the way up to 1500x1500x1500 in 3D and from 40x40 up to ~30000x30000 in 2D! If your RAM allows it, larger is always possible.

## How does nnU-Net work?
Given a new dataset, nnU-Net systematically analyzes the provided training cases and creates a 'dataset fingerprint'. nnU-Net then creates several U-Net configurations for each dataset:
- `2d`: a 2D U-Net (for 2D and 3D datasets)
- `3d_fullres`: a 3D U-Net that operates at high image resolution (for 3D datasets only)
- `3d_lowres` → `3d_cascade_fullres`: a 3D U-Net cascade in which a first 3D U-Net operates on low-resolution images and a second, high-resolution 3D U-Net then refines the predictions of the former (for 3D datasets with large image sizes only)

**Note that not all U-Net configurations are created for all datasets. In datasets with small image sizes, the U-Net cascade (and with it the `3d_lowres` configuration) is omitted because the patch size of the full-resolution U-Net already covers a large part of the input images.**

nnU-Net configures its segmentation pipelines based on a three-step recipe:
- **Fixed parameters** are not adapted. During the development of nnU-Net we identified a robust configuration (that is, certain architecture and training properties) that can simply be used all the time. This includes, for example, nnU-Net's loss function, (most of) the data augmentation strategy and the learning rate.
- **Rule-based parameters** use the dataset fingerprint to adapt certain segmentation pipeline properties via hard-coded heuristic rules. For example, the network topology (pooling behavior and depth of the network architecture) is adapted to the patch size; the patch size, network topology and batch size are optimized jointly under a given GPU memory constraint.
- **Empirical parameters** are essentially trial and error, for example the selection of the best U-Net configuration for the given dataset (2D, 3D full resolution, 3D low resolution, 3D cascade) and the optimization of the postprocessing strategy.
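On the command line, this recipe maps onto nnU-Net v2's standard workflow. The sketch below assumes nnU-Net v2 is installed and a hypothetical dataset ID `001`; the commands are only invoked when the CLI is actually on the PATH, and exact flags may vary between versions (check `nnUNetv2_train -h`).

```shell
# Illustrative nnU-Net v2 workflow for a hypothetical dataset 001.
run_nnunet_pipeline() {
  # 1) Extract the dataset fingerprint, plan and preprocess.
  nnUNetv2_plan_and_preprocess -d "$1" --verify_dataset_integrity
  # 2) Train the desired configuration (here: all 5 folds of 3d_fullres).
  for fold in 0 1 2 3 4; do
    nnUNetv2_train "$1" 3d_fullres "$fold"
  done
  # 3) Let nnU-Net pick the best configuration/ensemble and postprocessing.
  nnUNetv2_find_best_configuration "$1" -c 3d_fullres
}

# Only run when nnU-Net v2 is actually installed.
command -v nnUNetv2_train >/dev/null && run_nnunet_pipeline 001 \
  || echo "nnU-Net v2 not on PATH; commands shown for reference only"
```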
## How to get started?
Read these:
- [Installation instructions](documentation/installation_instructions.md)
- [Dataset conversion](documentation/dataset_format.md)
- [Usage instructions](documentation/how_to_use_nnunet.md)

Additional information:
- [Learning from sparse annotations (scribbles, slices)](documentation/ignore_label.md)
- [Region-based training](documentation/region_based_training.md)
- [Manual data splits](documentation/manual_data_splits.md)
- [Pretraining and finetuning](documentation/pretraining_and_finetuning.md)
- [Intensity normalization in nnU-Net](documentation/explanation_normalization.md)
- [Manually editing nnU-Net configurations](documentation/explanation_plans_files.md)
- [Extending nnU-Net](documentation/extending_nnunet.md)
- [What is different in V2?](documentation/changelog.md)

Competitions:
- [AutoPET II](documentation/competitions/AutoPETII.md)

[//]: # (- [Ignore label](documentation/ignore_label.md))

## Where does nnU-Net perform well and where does it not?
nnU-Net excels at segmentation problems that need to be solved by training from scratch, for example research applications featuring non-standard image modalities and input channels, challenge datasets from the biomedical domain, the majority of 3D segmentation problems, etc. We have yet to find a dataset for which nnU-Net's working principle fails!

Note: On standard segmentation problems, such as 2D RGB images in ADE20k and Cityscapes, fine-tuning a foundation model (pretrained on a large corpus of similar images, e.g. ImageNet-22k or JFT-300M) will provide better performance than nnU-Net! That is simply because these models allow much better initialization. Foundation models are not supported by nnU-Net because they 1) are not useful for segmentation problems that deviate from the standard setting (see the above-mentioned datasets), 2) typically only support 2D architectures and 3) conflict with our core design principle of carefully adapting the network topology to each dataset (if the topology is changed, one can no longer transfer pretrained weights!).

## What happened to the old nnU-Net?
The core of the old nnU-Net was hacked together in a short period of time while participating in the Medical Segmentation Decathlon challenge in 2018. Consequently, code structure and quality were not the best. Many features were added later on and didn't quite fit the nnU-Net design principles. Overall quite messy, really. And annoying to work with.

nnU-Net V2 is a complete overhaul. The "delete everything and start again" kind. So everything is better (in the author's opinion haha). While the segmentation performance [remains the same](https://docs.google.com/spreadsheets/d/13gqjIKEMPFPyMMMwA1EML57IyoBjfC3-QCTn4zRN_Mg/edit?usp=sharing), a lot of cool stuff has been added. It is now also much easier to use nnU-Net as a development framework and to manually fine-tune its configuration for new datasets. A big driver for the reimplementation was also the emergence of [Helmholtz Imaging](http://helmholtz-imaging.de), prompting us to extend nnU-Net to more image formats and domains. Take a look [here](documentation/changelog.md) for some highlights.

# Acknowledgements
<img src="documentation/assets/HI_Logo.png" height="100px" />

<img src="documentation/assets/dkfz_logo.png" height="100px" />

nnU-Net is developed and maintained by the Applied Computer Vision Lab (ACVL) of [Helmholtz Imaging](http://helmholtz-imaging.de) and the [Division of Medical Image Computing](https://www.dkfz.de/en/mic/index.php) at the [German Cancer Research Center (DKFZ)](https://www.dkfz.de/en/index.html).
# [MICCAI 2025 PANTHER] Multi-Stage Fine-Tuning and Ensembling for Pancreatic Tumor Segmentation in MRI

This repository contains the official implementation of our MICCAI 2025 challenge paper:

### **A Multi-Stage Fine-Tuning and Ensembling Strategy for Pancreatic Tumor Segmentation in Diagnostic and Therapeutic MRI**

We present a **cascaded pre-training, fine-tuning, and ensembling framework** for **pancreatic ductal adenocarcinoma (PDAC) segmentation** in MRI. Our method, built on **nnU-Net v2**, leverages:
- Multi-stage cascaded fine-tuning (foundation model → CT lesions → target MRI modalities)
- Systematic augmentation ablations (aggressive vs. default)
- Metric-aware heterogeneous ensembling ("mix of experts")
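The staged fine-tuning above can be sketched with nnU-Net v2's `-pretrained_weights` mechanism, which initializes a new training run from a previous stage's checkpoint. This is a hypothetical sketch: the dataset IDs, checkpoint paths and trainer setup below are placeholders, not our released configuration.

```shell
# Hypothetical sketch of cascaded fine-tuning; IDs and paths are placeholders.
finetune_stages() {
  # Stage 1: fine-tune a foundation-model initialization on CT lesion data.
  nnUNetv2_train 501 3d_fullres all \
      -pretrained_weights /path/to/foundation_checkpoint.pth
  # Stage 2: fine-tune the resulting CT-lesion model on the target MRI task.
  nnUNetv2_train 502 3d_fullres all \
      -pretrained_weights /path/to/stage1_checkpoint_final.pth
}

# Only run when nnU-Net v2 is actually installed.
command -v nnUNetv2_train >/dev/null && finetune_stages \
  || echo "nnU-Net v2 not on PATH; commands shown for reference only"
```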
This approach achieved **first place** in the **PANTHER challenge**, with robust performance on both **diagnostic T1W (Task 1)** and **therapeutic T2W MR-Linac (Task 2)** scans.

> **Authors**: Omer Faruk Durugol\*, Maximilian Rokuss\*, Yannick Kirchhoff, Klaus H. Maier-Hein
> \*Equal contribution
> **Paper**: [![arXiv](https://img.shields.io/badge/arXiv-2508.21775-b31b1b.svg)](https://arxiv.org/abs/2508.21775)
> **Challenge**: [PANTHER](https://panther.grand-challenge.org)

---

## News/Updates
- 🏆 **Aug 2025**: Achieved **1st place in both tasks** of the [PANTHER Challenge](https://panther.grand-challenge.org)!
- 📄 **Aug 2025**: Paper preprint released on [arXiv](https://arxiv.org/abs/2508.21775).

---

## 🚀 Usage

This section provides a complete guide to reproducing our training and inference workflow. Our method is developed entirely within the nnU-Net v2 framework.

### Installation and Setup

First, set up an environment with PyTorch and clone this repository to access our custom code. We refer to the nnU-Net v2 [installation instructions](https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/installation_instructions.md) for a detailed explanation, but provide a quick rundown on how to set up the environment:

#### 1. Create and activate your Conda environment (recommended)
```bash
conda create -n panther python=3.9
conda activate panther
```

#### 2. Install [PyTorch](https://pytorch.org/get-started/locally) as described on their website (conda/pip)

#### 3. Clone this repository
```bash
git clone https://github.com/MIC-DKFZ/panther.git
cd panther
```

#### 4. Install nnU-Net v2 and our project in editable mode
This makes our custom trainers and plans available to the framework.
```bash
pip install -e .
```

#### 5. Set environment variables
We recommend editing the `.bashrc` (or your shell's corresponding startup file) in your home folder by adding the following lines:
```bash
export nnUNet_raw="/panther/nnUNet_raw"
export nnUNet_preprocessed="/panther/nnUNet_preprocessed"
export nnUNet_results="/panther/nnUNet_results"
```
(Change the paths according to your preferred directory structure.)
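After editing the startup file, you can apply and sanity-check the variables in your current session. The paths below are examples under `$HOME`; substitute your own layout.

```shell
# Apply example nnU-Net paths for the current session (adjust to your setup).
export nnUNet_raw="$HOME/panther/nnUNet_raw"
export nnUNet_preprocessed="$HOME/panther/nnUNet_preprocessed"
export nnUNet_results="$HOME/panther/nnUNet_results"

# Create the directories so nnU-Net can write into them.
mkdir -p "$nnUNet_raw" "$nnUNet_preprocessed" "$nnUNet_results"

# Sanity check: warn about any variable that is missing.
for v in nnUNet_raw nnUNet_preprocessed nnUNet_results; do
  [ -n "$(printenv "$v")" ] || echo "WARNING: $v is not set"
done
```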
### Pre-trained Checkpoints

Coming soon!

### Evaluation

Coming soon!

---

## 📂 Data

We trained and evaluated on the **PANTHER Challenge dataset**:
👉 [https://zenodo.org/records/15192302](https://zenodo.org/records/15192302)

Pretraining leveraged:

* [MultiTalentV2](https://zenodo.org/records/13753413)
* Pancreatic lesion CT datasets (MSD + PANORAMA)

---

## 📚 Citation

If you find this repository useful, please cite:

```bibtex
@article{durugol2025multistagefinetuningensemblingstrategy,
      title={A Multi-Stage Fine-Tuning and Ensembling Strategy for Pancreatic Tumor Segmentation in Diagnostic and Therapeutic MRI},
      author={Omer Faruk Durugol and Maximilian Rokuss and Yannick Kirchhoff and Klaus H. Maier-Hein},
      year={2025},
      eprint={2508.21775},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2508.21775},
}
```

---

## 📬 Contact

For questions, issues, or collaborations, please reach out:
📧 [maximilian.rokuss@dkfz-heidelberg.de](mailto:maximilian.rokuss@dkfz-heidelberg.de)
