
Commit a13ca37

committed: [+] update docs
1 parent ad5ed05 commit a13ca37

File tree: 3 files changed (+137, −40 lines)


CITATION.cff

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
+cff-version: 1.2.0
+message: "If you use this code, please cite our paper as below."
+title: "PSAT: Pediatric Segmentation Approaches via Adult Augmentations and Transfer Learning"
+authors:
+  - family-names: Kirscher
+    given-names: Tristan
+    orcid: https://orcid.org/0009-0004-6646-6548
+version: "0.5.0"
+year: 2025
+journal: "MICCAI"
+license: MIT
+url: "https://github.com/ICANS-Strasbourg/PSAT"
+id: "arXiv:xxxx.xxxxx"
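CFF metadata is plain YAML, so a quick sanity check is easy to script. A minimal sketch using only the standard library (for real validation, the `cffconvert` tool or a full YAML parser is the robust option); it only handles flat `key: value` lines like those above:

```python
# Sketch: check that a flat CITATION.cff contains the CFF-required
# top-level keys. Handles only simple "key: value" lines; nested
# structures (e.g. author entries) are intentionally ignored.
REQUIRED = {"cff-version", "message", "title", "authors"}

def top_level_keys(text: str) -> set:
    keys = set()
    for line in text.splitlines():
        # A top-level key starts at column 0 and is not a list item.
        if line and not line[0].isspace() and not line.startswith("-") and ":" in line:
            keys.add(line.split(":", 1)[0].strip())
    return keys

cff = """cff-version: 1.2.0
message: "If you use this code, please cite our paper as below."
title: "PSAT: Pediatric Segmentation Approaches via Adult Augmentations and Transfer Learning"
authors:
  - family-names: Kirscher
"""
print(sorted(REQUIRED - top_level_keys(cff)))  # → []
```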

README.md

Lines changed: 83 additions & 16 deletions
@@ -2,18 +2,39 @@
 
 [![Python package](https://github.com/ICANS-Strasbourg/PSAT/actions/workflows/python-package.yml/badge.svg?branch=main)](https://github.com/ICANS-Strasbourg/PSAT/actions/workflows/python-package.yml)
 
-This repository contains the code and configuration files for PSAT (Pediatric Segmentation Approaches via Adult Augmentations and Transfer Learning).
+Pediatric Segmentation Approaches via Adult Augmentations and Transfer Learning
+
+---
+
+## Table of Contents
+- [Overview](#overview)
+- [Features](#features)
+- [Citation](#citation)
+- [Checkpoints & Pretrained Models](#checkpoints--pretrained-models)
+- [Quickstart](#quickstart)
+- [Usage](#usage)
+- [Dependencies](#dependencies)
+- [Documentation](#documentation)
+- [Running Tests](#running-tests)
+- [Contributing](#contributing)
+- [License](#license)
+
+---
 
 ## Overview
 
-PSAT addresses pediatric segmentation challenges by combining:
-- **Training Plans:** Derived from adult, pediatric, or mixed data ($P_a$, $P_p$, $P_m$).
-- **Learning Sets:** Adult-only, pediatric-only, or mixed ($S_a$, $S_p$, $S_m$).
-- **Augmentations:** Default ($A_d$) and contraction-based ($A_c$) strategies.
-- **Transfer Learning:** Direct inference ($T_o$), fine-tuning ($T_p$), or continual learning ($T_m$).
+**PSAT** addresses pediatric segmentation challenges by leveraging adult, pediatric, and mixed datasets, advanced augmentation strategies, and transfer learning. It is designed for researchers and practitioners working on medical image segmentation, especially in pediatric contexts.
 
 <img src="resources/images/PSAT_overview.png" alt="PSAT Overview" style="width:80%; max-width:1000px; display:block; margin: 0 auto;">
 
+## Features
+- **Flexible Training Plans:** Use adult, pediatric, or mixed data ($P_a$, $P_p$, $P_m$)
+- **Customizable Learning Sets:** Adult-only, pediatric-only, or mixed ($S_a$, $S_p$, $S_m$)
+- **Augmentation Strategies:** Default ($A_d$) and contraction-based ($A_c$)
+- **Transfer Learning:** Direct inference ($T_o$), fine-tuning ($T_p$), continual learning ($T_m$)
+- **Pretrained Models:** Ready-to-use checkpoints for nnU-Net
+- **Evaluation Scripts:** For fast metrics computation
+
 ## Citation
 If you use this code, please cite our paper:
 
@@ -27,18 +48,56 @@ If you use this code, please cite our paper:
 }
 ```
 
+This repository includes a `CITATION.cff` file for standardized citation metadata. You can also use the "Cite this repository" button on GitHub to obtain citation formats automatically.
+
+
+## Checkpoints & Pretrained Models
+
+We provide two model checkpoints for nnU-Net:
+- **mixed_model_continual_learning.zip**
+- **pure_pediatric_model.zip**
+
+### Installing Pretrained Models
+
+1. **Download Pretrained Weights:**
+   - Go to the [GitHub Releases](https://github.com/ICANS-Strasbourg/PSAT/releases) page.
+   - Download `mixed_model_continual_learning.zip` and `pure_pediatric_model.zip`.
+   - Place them in `resources/checkpoints/`.
+
+2. **Install the Checkpoint Using nnU-Net:**
+   ```bash
+   nnUNetv2_install_pretrained_model_from_zip resources/checkpoints/mixed_model_continual_learning.zip
+   nnUNetv2_install_pretrained_model_from_zip resources/checkpoints/pure_pediatric_model.zip
+   ```
+
+3. **Run Inference:**
+   After installing a checkpoint, run inference on your images:
+   ```bash
+   nnUNetv2_predict -i <input_images_dir> -o <output_dir> -d <dataset_id> -c <configuration> -f 0 -tr <trainer_name>
+   ```
+   Replace `<input_images_dir>`, `<output_dir>`, `<dataset_id>`, `<configuration>`, and `<trainer_name>` as appropriate. See the [nnUNet documentation](https://github.com/MIC-DKFZ/nnUNet) for details.
+
+For more details, see the [Resources](resources/resources.md) section.
+
 ## Quickstart
 
-Install dependencies:
-```bash
-pip install -r requirements.txt
-```
+1. **Install dependencies:**
+   ```bash
+   pip install -r requirements.txt
+   ```
 
-Run metrics evaluation (example):
-```bash
-python scripts/compute_metrics.py <ground_truth_dir> <predictions_dir>
-```
-Replace `<ground_truth_dir>` and `<predictions_dir>` with your folder paths containing NIfTI files.
+2. **Evaluate Metrics (Example):**
+   ```bash
+   python scripts/compute_metrics.py <ground_truth_dir> <predictions_dir>
+   ```
+   Replace `<ground_truth_dir>` and `<predictions_dir>` with your folder paths containing NIfTI files.
+
+## Usage
+
+- **Preprocessing, Training, and Inference:**
+  - See the [nnUNet documentation](https://github.com/MIC-DKFZ/nnUNet) and [nnUNet/nnUNet.md](nnUNet/nnUNet.md) for details on running full pipelines.
+- **Scripts:**
+  - Utility scripts are in the `scripts/` directory. See [scripts/scripts.md](scripts/scripts.md) for usage.
 
 ## Dependencies
 - nibabel
@@ -48,7 +107,7 @@ Replace `<ground_truth_dir>` and `<predictions_dir>` with your folder paths containing NIfTI files.
 - scipy
 - surface-distance
 
-(See `requirements.txt` for full list.)
+(See `requirements.txt` for the full list.)
 
 ## Documentation
 
@@ -66,3 +125,11 @@ Install dependencies listed in `requirements.txt` and run:
 ```bash
 pytest -q
 ```
+
+## Contributing
+
+Contributions are welcome! Please open issues or pull requests for bug fixes, improvements, or new features.
+
+## License
+
+This project is licensed under the MIT License. See [LICENSE](LICENSE) for details.
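The Quickstart's metrics step compares ground-truth and predicted segmentations. As an illustration of the core overlap measure such evaluations typically report (the actual contents of `scripts/compute_metrics.py` are not shown in this diff, so this is a sketch, not the repository's implementation), here is a plain-Python Dice coefficient:

```python
# Sketch: Dice similarity coefficient for one label in a pair of
# segmentation arrays (plain nested lists standing in for NIfTI voxel
# data). Dice = 2*|A ∩ B| / (|A| + |B|).
def dice(gt, pred, label):
    gt_flat = [v for row in gt for v in row]
    pred_flat = [v for row in pred for v in row]
    a = sum(v == label for v in gt_flat)
    b = sum(v == label for v in pred_flat)
    inter = sum(g == label and p == label for g, p in zip(gt_flat, pred_flat))
    if a + b == 0:
        return 1.0  # label absent from both: treat as perfect agreement
    return 2.0 * inter / (a + b)

gt   = [[0, 1, 1],
        [0, 1, 0]]
pred = [[0, 1, 0],
        [0, 1, 1]]
print(dice(gt, pred, label=1))  # → 0.6666666666666666
```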

resources/resources.md

Lines changed: 41 additions & 24 deletions
@@ -1,40 +1,57 @@
 # Resources
 
-## TCIA Dataset
+## TCIA Dataset: Pediatric CT Segmentation
+
+- **Source:** [TCIA Pediatric CT Segmentation Collection](https://www.cancerimagingarchive.net/collection/pediatric-ct-seg/)
+- **Description:** A collection of pediatric CT scans with expert-annotated organ and tumor segmentations, suitable for training and evaluating medical image segmentation models.
+- **Metadata:**
+  - `TCIA/meta.csv`: Contains patient IDs, scan information, and basic demographic data for each case in the dataset.
 
 ## TotalSegmentator Dataset
 
-## Checkpoints
+- **Source:** [TotalSegmentator on Zenodo](https://zenodo.org/records/10047292)
+- **Description:** A large-scale dataset of adult CT scans with comprehensive multi-organ segmentations, designed for general-purpose medical image segmentation tasks.
+- **Metadata:**
+  - `TotalSegmentator/meta.csv`: Contains scan identifiers, acquisition parameters, and summary statistics for each scan in the dataset.
+
+## Using Pretrained nnU-Net Models
+
+We provide two nnU-Net v2 model checkpoints:
+- `mixed_model_continual_learning.zip`
+- `pure_pediatric_model.zip`
 
-We provide two model checkpoints that you can use directly with nnU-Net:
+You can use these as pretrained weights for inference or further fine-tuning with nnU-Net v2.
 
-- **mixed_model_continual_learning.zip**
+### 1. Download the Model Weights
 
-- **pure_pediatric_model.zip**
+Go to the [GitHub Releases](https://github.com/ICANS-Strasbourg/PSAT/releases) page and download the desired zip files. Place them in a directory of your choice (e.g., `resources/checkpoints/`).
 
-### Installing Pretrained Models
+### 2. Install the Pretrained Model
 
-nnU-Net offers a convenient utility to install pretrained models from a zip archive. Follow these steps:
+Use the nnU-Net v2 utility to install the model from the zip file:
 
-1. **Downloading Pretrained Weights:**
-   Go to the [GitHub Releases](https://github.com/ICANS-Strasbourg/PSAT/releases) page.
-
-   Download:
-   - `mixed_model_continual_learning.zip`
-   - `pure_pediatric_model.zip`
-
-   Place them in `resources/checkpoints/`.
+```bash
+# For the mixed model:
+nnUNetv2_install_pretrained_model_from_zip resources/checkpoints/mixed_model_continual_learning.zip
 
-2. **Install the Checkpoint Using nnU-Net**
-   Use the `nnUNet_install_pretrained_model_from_zip` command in your terminal.
+# For the pure pediatric model:
+nnUNetv2_install_pretrained_model_from_zip resources/checkpoints/pure_pediatric_model.zip
+```
 
-```bash
-# To install the mixed model checkpoint:
-nnUNetv2_install_pretrained_model_from_zip resources/checkpoints/mixed_model_continual_learning.zip
+### 3. Run Inference with the Installed Model
 
-# To install the pure pediatric model checkpoint:
-nnUNetv2_install_pretrained_model_from_zip resources/checkpoints/pure_pediatric_model.zip
+After installation, you can run inference using nnU-Net v2. For example:
 
-3. **Running Inference**
+```bash
+nnUNetv2_predict -d <DATASET_ID> -i <INPUT_FOLDER> -o <OUTPUT_FOLDER> -c <CONFIGURATION> -f <FOLD>
+```
+- `<DATASET_ID>`: The dataset number or name (e.g., `297` for TotalSegmentator, `797` for the mixed dataset).
+- `<INPUT_FOLDER>`: Folder with your images (NIfTI format).
+- `<OUTPUT_FOLDER>`: Where predictions will be saved.
+- `<CONFIGURATION>`: Model configuration (e.g., `3d_fullres`, `2d`).
+- `<FOLD>`: Fold number (usually `0`, or `all` for all folds).
 
-Once you have installed a checkpoint, you can run inference on your input images using the `nnUNet_predict` command.
+**Example:**
+```bash
+nnUNetv2_predict -d 797 -i imagesTs/ -o predictions/ -c 3d_fullres -f all
+```
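Both datasets ship a `meta.csv` alongside the scans. A minimal stdlib sketch for loading such a file; the column names used here (`patient_id`, `age`) are illustrative assumptions, so check the header of the actual file you downloaded:

```python
# Sketch: load a dataset meta.csv with the standard library.
# Column names below are illustrative, not guaranteed to match the
# real TCIA or TotalSegmentator files.
import csv
import io

def load_meta(fileobj):
    # Each row becomes a dict keyed by the CSV header.
    return list(csv.DictReader(fileobj))

sample = io.StringIO("patient_id,age\nP001,4\nP002,11\n")
rows = load_meta(sample)
print(len(rows), rows[0]["patient_id"])  # → 2 P001
```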
