
Commit 50abf3b

update readme
1 parent af03e23 commit 50abf3b

File tree

3 files changed: +20 -14 lines changed


examples/JSRT/README.md

(file mode changed from 100644 to 100755)
Lines changed: 9 additions & 6 deletions
@@ -1,17 +1,20 @@
-# Heart segmentation from JSRT dataset
+# Heart segmentation from 2D X-ray images
 
 ![image_example](./picture/JPCLN001.png)
 ![label_example](./picture/JPCLN001_seg.png)
 
-In this example, we use U-Net to segment the heart from X-Ray images. First we download the images from internet, then edit the configuration file for training and testing. During training, we use tensorboard to observe the performance of the network at different iterations. We then apply the trained model to testing images and obtain quantitative evaluation results.
+In this example, we use 2D U-Net to segment the heart from X-ray images. First we download the images from the Internet, then edit the configuration file for training and testing. During training, we use TensorBoard to observe the performance of the network at different iterations. We then apply the trained model to the testing images and obtain quantitative evaluation results.
+
+If you don't want to train the model by yourself, you can download a pre-trained model [here][model_link] and jump to the `Testing and evaluation` section.
 
 ## Data and preprocessing
 1. The JSRT dataset is available at the [JSRT website][jsrt_link]. It consists of 247 chest radiographs. Create a new folder as `JSRT_root`, then download the images and save them into a single folder, like `JSRT_root/All247images`.
 2. The annotation of this dataset is provided by the [SCR database][scr_link]. Download the annotations and move the unzipped folder `scratch` to `JSRT_root/scratch`.
 3. Create two new folders `JSRT_root/image` and `JSRT_root/label` for preprocessing.
 4. Set `JSRT_root` according to your computer in `image_convert.py` and run `python image_convert.py` for preprocessing. This command converts the raw image format to png and resizes all images to 256x256. The processed images and labels are saved in `JSRT_root/image` and `JSRT_root/label` respectively.
-5. Set `JSRT_root` according to your computer in `write_csv_files.py` and run `python write_csv_files.py` to randomly split the 247 images into training (180), validation (20) and testing (47) sets. The output csv files are saved in `config`.
+5. Set `JSRT_root` according to your computer in `write_csv_files.py` and run `python write_csv_files.py` to randomly split the 247 images into training (180 images), validation (20 images) and testing (47 images) sets. The output csv files are saved in `config`.
 
+[model_link]:https://drive.google.com/open?id=1pYwt0lRiV_QrCJe5ef9IsLf4NKyrFRRD
 [jsrt_link]:http://db.jsrt.or.jp/eng.php
 [scr_link]:https://www.isi.uu.nl/Research/Databases/SCR/
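
As a rough illustration of step 4 in the hunk above, the sketch below shows one way a raw JSRT scan could be converted to a 256x256 png. It is a minimal sketch, not the repository's actual `image_convert.py`; the assumed file layout (headerless 2048x2048 grayscale, 12-bit values stored as big-endian 16-bit integers) and the intensity inversion should be verified against your copy of the data.

```python
# Minimal sketch of a raw-to-png conversion for JSRT images (illustrative,
# not the repository's image_convert.py). Assumes headerless 2048x2048
# grayscale data, 12-bit values stored as big-endian 16-bit integers.
import numpy as np
from PIL import Image

def convert_jsrt_image(raw_path, png_path, out_size=(256, 256)):
    data = np.fromfile(raw_path, dtype='>u2').reshape(2048, 2048)
    data = 4095 - data                      # invert so bone appears bright (assumption)
    img = (data / 4095.0 * 255).astype(np.uint8)
    Image.fromarray(img).resize(out_size).save(png_path)

convert_jsrt_image('JSRT_root/All247images/JPCLN001.IMG',
                   'JSRT_root/image/JPCLN001.png')
```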

@@ -29,18 +32,18 @@ python ../../pymic/train_infer/train_infer.py train config/train_test.cfg
 ![avg_loss](./picture/jsrt_avg_loss.png)
 
 ## Testing and evaluation
-1. When training is finished. Run the following command to obtain segmentation results of testing images:
+1. Run the following command to obtain segmentation results of the testing images. If you use [the pretrained model][model_link], you need to edit `checkpoint_name` in `config/train_test.cfg`.
 
 ```bash
 mkdir result
 python ../../pymic/train_infer/train_infer.py test config/train_test.cfg
 ```
 
-2. Then edit `config/evaluation.cfg` by setting `ground_truth_folder` as your `JSRT_root/label`, and run the following command to obtain quantitative evaluation results in terms of dice.
+2. Then edit `config/evaluation.cfg` by setting `ground_truth_folder_list` to your `JSRT_root/label`, and run the following command to obtain quantitative evaluation results in terms of the Dice score.
 
 ```bash
 python ../../pymic/util/evaluation.py config/evaluation.cfg
 ```
 
-The obtained dice score by default setting should be close to 94.59+/-3.16. You can set `metric = assd` in `config/evaluation.cfg` and run the evaluation command again. You will get average symmetric surface distance (assd) evaluation results. By default setting, the assd is close to 2.21+/-1.23 pixels.
+With the default setting, the obtained Dice score should be close to 94.59+/-3.16%. You can set `metric = assd` in `config/evaluation.cfg` and run the evaluation command again to obtain average symmetric surface distance (assd) results. With the default setting, the assd is close to 2.21+/-1.23 pixels.
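
For reference, the Dice score reported by the evaluation step measures the overlap between a segmentation and its ground truth. Below is a minimal, illustrative computation for a single binary mask; it is a sketch of the metric itself, not the code in `pymic/util/evaluation.py`.

```python
# Dice score for one binary segmentation vs. its ground truth
# (a sketch of the metric, not pymic's implementation).
import numpy as np

def binary_dice(seg, gt):
    seg = np.asarray(seg) > 0
    gt = np.asarray(gt) > 0
    intersection = np.logical_and(seg, gt).sum()
    return 2.0 * intersection / (seg.sum() + gt.sum())

# Example: perfect overlap gives 1.0, no overlap gives 0.0.
mask = np.array([[0, 1], [1, 1]])
print(binary_dice(mask, mask))   # 1.0
```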

examples/fetal_hc/README.md

(file mode changed from 100644 to 100755)
Lines changed: 8 additions & 5 deletions
@@ -2,14 +2,17 @@
 
 <img src="./picture/001_HC.png" width="256" height="256"/> <img src="./picture/001_HC_seg.png" width="256" height="256"/>
 
-In this example, we use U-Net to segment the fetal brain from ultrasound images. First we download the images from internet, then edit the configuration file for training and testing. During training, we use tensorboard to observe the performance of the network at different iterations. We then apply the trained model to testing images and obtain quantitative evaluation results.
+In this example, we use 2D U-Net to segment the fetal brain from ultrasound images. First we download the images from the Internet, then edit the configuration file for training and testing. During training, we use TensorBoard to observe the performance of the network at different iterations. We then apply the trained model to the testing images and obtain quantitative evaluation results.
+
+If you don't want to train the model by yourself, you can download a pre-trained model [here][model_link] and jump to the `Testing and evaluation` section.
 
 ## Data and preprocessing
 1. We use the `HC18` dataset for this example. The images are available from the [website][hc18_link]. Download the HC18 training set, which consists of 999 2D ultrasound images and their annotations. Create a new folder as `HC_root`, then download the images and save them in a sub-folder, like `HC_root/training_set`.
 2. The annotations of this dataset are contours. We need to convert them into binary masks for segmentation. Therefore, create a folder `HC_root/training_set_label` for preprocessing.
 4. Set `HC_root` according to your computer in `get_ground_truth.py` and run `python get_ground_truth.py` for preprocessing. This command converts the contours into binary masks of the brain, and the masks are saved in `HC_root/training_set_label`.
-5. Set `HC_root` according to your computer in `write_csv_files.py` and run `python write_csv_files.py` to randomly split the official HC18 training set into our own training (780), validation (70) and testing (149) sets. The output csv files are saved in `config`.
+5. Set `HC_root` according to your computer in `write_csv_files.py` and run `python write_csv_files.py` to randomly split the official HC18 training set into our own training (780 images), validation (70 images) and testing (149 images) sets. The output csv files are saved in `config`.
 
+[model_link]:https://drive.google.com/open?id=1pYwt0lRiV_QrCJe5ef9IsLf4NKyrFRRD
 [hc18_link]:https://hc18.grand-challenge.org/
 
 ## Training
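
As a rough illustration of the contour-to-mask conversion described in the preprocessing steps above, the sketch below fills a closed contour to produce a binary mask. It is illustrative, not the repository's `get_ground_truth.py`, and it assumes each annotation png contains a closed bright contour on a dark background.

```python
# Turn a contour annotation into a filled binary mask (illustrative sketch,
# not the repository's get_ground_truth.py).
import numpy as np
from PIL import Image
from scipy.ndimage import binary_fill_holes

def contour_to_mask(annotation_png, mask_png):
    contour = np.asarray(Image.open(annotation_png).convert('L')) > 127
    mask = binary_fill_holes(contour)       # fill the inside of the closed contour
    Image.fromarray((mask * 255).astype(np.uint8)).save(mask_png)

contour_to_mask('HC_root/training_set/001_HC_Annotation.png',
                'HC_root/training_set_label/001_HC.png')
```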
@@ -26,18 +29,18 @@ python ../../pymic/train_infer/train_infer.py train config/train_test.cfg
 ![avg_loss](./picture/train_avg_loss.png)
 
 ## Testing and evaluation
-1. When training is finished. Run the following command to obtain segmentation results of testing images:
+1. Run the following command to obtain segmentation results of the testing images. If you use [the pretrained model][model_link], you need to edit `checkpoint_name` in `config/train_test.cfg`.
 
 ```bash
 mkdir result
 python ../../pymic/train_infer/train_infer.py test config/train_test.cfg
 ```
 
-2. Then edit `config/evaluation.cfg` by setting `ground_truth_folder` as your `HC_root/training_set_label`, and run the following command to obtain quantitative evaluation results in terms of dice.
+2. Then edit `config/evaluation.cfg` by setting `ground_truth_folder_list` to your `HC_root/training_set_label`, and run the following command to obtain quantitative evaluation results in terms of the Dice score.
 
 ```bash
 python ../../pymic/util/evaluation.py config/evaluation.cfg
 ```
 
-The obtained dice score by default setting should be close to 97.05+/-3.63. You can set `metric = assd` in `config/evaluation.cfg` and run the evaluation command again. You will get average symmetric surface distance (assd) evaluation results. By default setting, the assd is close to 7.83+/-11.88 pixels. We find that the assd values are high for the segmentation results. You can try your efforts to improve the performance with different networks or training strategies by changing the configuration file `config/train_test.cfg`.
+With the default setting, the obtained Dice score should be close to 97.05+/-3.63%. You can set `metric = assd` in `config/evaluation.cfg` and run the evaluation command again to obtain average symmetric surface distance (assd) results. With the default setting, the assd is close to 7.83+/-11.88 pixels. We find that the assd values are relatively high for these segmentation results. You can try to improve the performance with different networks or training strategies by changing the configuration file `config/train_test.cfg`.
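
Since assd is highlighted above, here is a rough 2D sketch of how average symmetric surface distance can be computed with distance transforms. It is illustrative, not pymic's implementation.

```python
# Average symmetric surface distance (assd) for 2D binary masks, via
# Euclidean distance transforms (illustrative sketch, not pymic's code).
import numpy as np
from scipy import ndimage

def assd(seg, gt):
    seg, gt = np.asarray(seg) > 0, np.asarray(gt) > 0
    # Surface = foreground pixels removed by a one-pixel erosion.
    seg_surf = seg ^ ndimage.binary_erosion(seg)
    gt_surf = gt ^ ndimage.binary_erosion(gt)
    d_to_gt = ndimage.distance_transform_edt(~gt_surf)    # distance to gt surface
    d_to_seg = ndimage.distance_transform_edt(~seg_surf)  # distance to seg surface
    dists = np.concatenate([d_to_gt[seg_surf], d_to_seg[gt_surf]])
    return dists.mean()
```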

examples/prostate/README.md

Lines changed: 3 additions & 3 deletions
@@ -3,12 +3,12 @@
 
 In this example, we use 3D U-Net to segment the prostate from 3D MRI. First we download the images from the Internet, then edit the configuration file for training and testing. During training, we use TensorBoard to observe the performance of the network at different iterations. We then apply the trained model to the testing images and obtain quantitative evaluation results.
 
-If you don't want to train the model by your self, you can download a pre-trained model [here][model_link] and jump to the `Testing and evaluation` section.
+If you don't want to train the model by yourself, you can download a pre-trained model [here][model_link] and jump to the `Testing and evaluation` section.
 
 ## Data and preprocessing
 1. We use the `Promise12` dataset for this example. The images are available from the [website][promise12_link]. Download the training set, which consists of 50 3D MR images and their annotations. The whole dataset consists of three parts. Create a new folder such as `data/promise12`, then download the images and save them in sub-folders, like `data/promise12/TrainingData_Part1`, `data/promise12/TrainingData_Part2`, and `data/promise12/TrainingData_Part3`.
-2. Before we use these data, some preprocessing steps are needed, such as resampling them into a uniform resolution and crop the images to a smaller size. Create two folders `data/promise12/preprocess/image` and `data/promise12/preprocess/label`, then set `data_dir` in `preprocess.py` to according to your system. Run `python preprocess.py` for preprocessing.
-3. Open `write_csv_files.py` and set `data_dir` accordingly, such as `data/promise12/preprocess`. Run run `python write_csv_files.py` to randomly split the dataset into our own training (35), validation (5) and testing (10) sets. The output csv files are saved in `config`.
+2. Before using these data, some preprocessing steps are needed, such as resampling the images to a uniform resolution and cropping them to a smaller size. Create two folders `data/promise12/preprocess/image` and `data/promise12/preprocess/label`, then set the value of `data_dir` in `preprocess.py` according to your system. Run `python preprocess.py` for preprocessing.
+3. Open `write_csv_files.py` and set `data_dir` accordingly, such as `data/promise12/preprocess`. Run `python write_csv_files.py` to randomly split the dataset into our own training (35 images), validation (5 images) and testing (10 images) sets. The output csv files are saved in `config/data`.
 
 [model_link]:https://drive.google.com/open?id=1pYwt0lRiV_QrCJe5ef9IsLf4NKyrFRRD
 [promise12_link]:https://promise12.grand-challenge.org/
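
As a rough illustration of the resampling mentioned in step 2 above, the sketch below resamples a volume to a uniform voxel spacing with SimpleITK. It is illustrative, not the repository's `preprocess.py`; the 1 mm target spacing and the example filename are assumptions.

```python
# Resample a 3D MR volume to a uniform voxel spacing (illustrative sketch,
# not the repository's preprocess.py).
import SimpleITK as sitk

def resample_to_spacing(image, new_spacing=(1.0, 1.0, 1.0), is_label=False):
    old_spacing = image.GetSpacing()
    old_size = image.GetSize()
    # Keep the physical extent: new_size * new_spacing == old_size * old_spacing.
    new_size = [int(round(osz * osp / nsp))
                for osz, osp, nsp in zip(old_size, old_spacing, new_spacing)]
    # Nearest neighbour for label maps so no new label values are invented.
    interp = sitk.sitkNearestNeighbor if is_label else sitk.sitkLinear
    return sitk.Resample(image, new_size, sitk.Transform(), interp,
                         image.GetOrigin(), new_spacing, image.GetDirection(),
                         0, image.GetPixelID())

img = resample_to_spacing(
    sitk.ReadImage('data/promise12/TrainingData_Part1/Case00.mhd'))
```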
