
Commit 0f83eb3
Merge pull request #11 from HiLab-git/dev (Dev)
2 parents: 76382ce + 55bfa53


58 files changed (+841, -141 lines)

.gitignore

Lines changed: 5 additions & 5 deletions

```diff
@@ -1,5 +1,5 @@
-PyMIC_data/ACDC/*
-PyMIC_data/Fetal_HC/*
-PyMIC_data/JSRT/*
-PyMIC_data/MyoPS/*
-PyMIC_data/Promis12/*
+PyMIC_data/ACDC/
+PyMIC_data/Fetal_HC
+PyMIC_data/JSRT
+PyMIC_data/MyoPS
+PyMIC_data/Promise12
```

README.md

Lines changed: 4 additions & 2 deletions

````diff
@@ -5,7 +5,7 @@
 The latest released version of PyMIC can be installed by:
 
 ```bash
-pip install PYMIC
+pip install PYMIC==0.3.1.1
 ```
 
 To use the latest development version, you can download the source code [here][PyMIC_link], and install it by:
@@ -29,7 +29,8 @@ Currently we provide the following examples in this repository:
 |Fully supervised segmentation|[JSRT2][JSRT2_link]|Using a customized network and loss function for the JSRT dataset|
 |Fully supervised segmentation|[Fetal_HC][fetal_hc_link]|Using a 2D UNet for fetal head segmentation from 2D ultrasound images|
 |Fully supervised segmentation|[Prostate][prostate_link]|Using a 3D UNet for prostate segmentation from 3D MRI|
-|Semi-supervised segmentation|[seg_ssl/ACDC][ssl_acdc_link]|Comparing different semi-supervised methods for heart structure segmentation|
+|Semi-supervised segmentation|[seg_ssl/ACDC][ssl_acdc_link]|Semi-supervised methods for heart structure segmentation using 2D CNNs|
+|Semi-supervised segmentation|[seg_ssl/AtriaSeg][ssl_atrial_link]|Semi-supervised methods for left atrial segmentation using 3D CNNs|
 |Weakly-supervised segmentation|[seg_wsl/ACDC][wsl_acdc_link]|Segmentation of heart structure with scribble annotations|
 |Noisy label learning|[seg_nll/JSRT][nll_jsrt_link]|Comparing different NLL methods for learning from noisy labels|
 
@@ -43,6 +44,7 @@ Currently we provide the following examples in this repository:
 [fetal_hc_link]:segmentation/fetal_hc
 [prostate_link]:segmentation/prostate
 [ssl_acdc_link]:seg_ssl/ACDC
+[ssl_atrial_link]:seg_ssl/AtriaSeg/
 [wsl_acdc_link]:seg_wsl/ACDC
 [nll_jsrt_link]:seg_nll/JSRT
 
````

classification/AntBee/README.md

Lines changed: 6 additions & 6 deletions

````diff
@@ -13,14 +13,14 @@ In this example, we finetune a pretrained resnet18 for classification of images
 [data_link]:https://download.pytorch.org/tutorial/hymenoptera_data.zip
 
 ## Finetuning all layers of resnet18
-1. Here we use resnet18 for finetuning, and update all the layers. Open the configure file `config/train_test_ce1.cfg`. In the `network` section we can find details for the network. Here `update_layers = 0` means updating all the layers.
+1. Here we use resnet18 for finetuning, and update all the layers. Open the configure file `config/train_test_ce1.cfg`. In the `network` section we can find details for the network. Here `update_mode = all` means updating all the layers.
 ```bash
 # type of network
 net_type = resnet18
 pretrain = True
 input_chns = 3
 # finetune all the layers
-update_layers = 0
+update_mode = all
 ```
 
 Then start to train by running:
@@ -48,20 +48,20 @@ pymic_run test config/train_test_ce1.cfg
 pymic_eval_cls config/evaluation.cfg
 ```
 
-The obtained accuracy by default setting should be around 0.9412, and the AUC will be around 0.976.
+The obtained accuracy by default setting should be around 0.9477, and the AUC will be around 0.9745.
 
 3. Run `python show_roc.py` to show the receiver operating characteristic curve.
 
 ![roc](./picture/roc.png)
 
 ## Finetuning the last layer of resnet18
-Similarly to the above example, we further try to only finetune the last layer of resnet18 for the same classification task. Use a different configure file `config/train_test_ce2.cfg` for training and testing, where `update_layers = -1` in the `network` section means updating the last layer only:
+Similarly to the above example, we further try to only finetune the last layer of resnet18 for the same classification task. Use a different configure file `config/train_test_ce2.cfg` for training and testing, where `update_mode = last` in the `network` section means updating the last layer only:
 ```bash
 net_type = resnet18
 pretrain = True
 input_chns = 3
 # finetune the last layer only
-update_layers = -1
+update_mode = last
 ```
 
-Edit `config/evaluation.cfg` accordingly for evaluation.
+Edit `config/evaluation.cfg` accordingly for evaluation. The corresponding accuracy and AUC would be around 0.9477 and 0.9778, respectively.
````
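The renamed `update_mode` option replaces the old numeric `update_layers` switch. As an illustration only (this commit does not show PyMIC's actual implementation), selecting trainable parameters by name could look like the sketch below, assuming torchvision-style naming where resnet18's final classification layer is called `fc`:

```python
def select_trainable(param_names, update_mode):
    """Pick which named parameters receive gradient updates.
    update_mode = "all":  finetune every layer;
    update_mode = "last": finetune only the final (fc) layer.
    Hypothetical helper mirroring the `update_mode` config option."""
    if update_mode == "all":
        return set(param_names)
    if update_mode == "last":
        return {n for n in param_names if n.startswith("fc.")}
    raise ValueError("update_mode should be 'all' or 'last'")

# resnet18-style parameter names (abbreviated):
names = ["conv1.weight", "layer4.1.conv2.weight", "fc.weight", "fc.bias"]
select_trainable(names, "last")  # -> {"fc.weight", "fc.bias"}
```

In a real training loop, the parameters outside this set would have `requires_grad` set to `False` before the optimizer is built.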

classification/AntBee/config/train_test_ce1.cfg

Lines changed: 12 additions & 11 deletions

```diff
@@ -14,8 +14,8 @@ train_batch_size = 4
 modal_num = 1
 
 # data transforms
-train_transform = [Rescale, RandomCrop, RandomFlip, NormalizeWithMeanStd]
-valid_transform = [Rescale, CenterCrop, NormalizeWithMeanStd]
+train_transform = [Rescale, RandomCrop, RandomFlip, NormalizeWithMeanStd, LabelToProbability]
+valid_transform = [Rescale, CenterCrop, NormalizeWithMeanStd, LabelToProbability]
 test_transform = [Rescale, CenterCrop, NormalizeWithMeanStd]
 
 Rescale_output_size = [256, 256]
@@ -39,7 +39,7 @@ net_type = resnet18
 pretrain = True
 input_chns = 3
 # finetune all the layers
-update_layers = 0
+update_mode = all
 
 # number of classes
 class_num = 2
@@ -56,19 +56,20 @@ learning_rate = 1e-3
 momentum = 0.9
 weight_decay = 1e-5
 
-# for lr schedular (MultiStepLR)
-lr_scheduler = MultiStepLR
-lr_gamma = 0.1
-lr_milestones = [500, 1000]
+# for lr schedular (StepLR)
+lr_scheduler = StepLR
+lr_gamma = 0.5
+lr_step = 500
 
-ckpt_save_dir = model/resnet18_ce1
-ckpt_prefix = resnet18
+ckpt_save_dir = model/resnet18_ce1
+ckpt_prefix = resnet18
 
 # iteration
 iter_start = 0
-iter_max = 1500
+iter_max = 2000
 iter_valid = 100
-iter_save = 1500
+iter_save = 2000
+early_stop_patience = 1000
 
 [testing]
 # list of gpus
```
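The scheduler change above swaps `MultiStepLR` (decay by `lr_gamma` at fixed milestones) for `StepLR` (decay by `lr_gamma` every `lr_step` iterations). A minimal sketch of the resulting schedule, assuming the scheduler is stepped once per training iteration:

```python
def step_lr(base_lr, gamma, step, it):
    """Learning rate after `it` iterations under a StepLR schedule:
    multiplied by `gamma` once every `step` iterations."""
    return base_lr * gamma ** (it // step)

# with learning_rate = 1e-3, lr_gamma = 0.5, lr_step = 500:
step_lr(1e-3, 0.5, 500, 0)     # -> 1e-3    for iterations 0-499
step_lr(1e-3, 0.5, 500, 500)   # -> 5e-4    for iterations 500-999
step_lr(1e-3, 0.5, 500, 1000)  # -> 2.5e-4  afterwards, up to iter_max
```

Compared with the old milestones `[500, 1000]` and `lr_gamma = 0.1`, the new setting decays more gently and keeps decaying past iteration 1000.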

classification/AntBee/config/train_test_ce2.cfg

Lines changed: 10 additions & 10 deletions

```diff
@@ -14,8 +14,8 @@ train_batch_size = 4
 modal_num = 1
 
 # data transforms
-train_transform = [Rescale, RandomCrop, RandomFlip, NormalizeWithMeanStd]
-valid_transform = [Rescale, CenterCrop, NormalizeWithMeanStd]
+train_transform = [Rescale, RandomCrop, RandomFlip, NormalizeWithMeanStd, LabelToProbability]
+valid_transform = [Rescale, CenterCrop, NormalizeWithMeanStd, LabelToProbability]
 test_transform = [Rescale, CenterCrop, NormalizeWithMeanStd]
 
 Rescale_output_size = [256, 256]
@@ -39,8 +39,7 @@ net_type = resnet18
 pretrain = True
 input_chns = 3
 # finetune the last layer only
-update_layers = -1
-
+update_mode = last
 
 # number of classes
 class_num = 2
@@ -57,19 +56,20 @@ learning_rate = 1e-3
 momentum = 0.9
 weight_decay = 1e-5
 
-# for lr schedular (MultiStepLR)
-lr_scheduler = MultiStepLR
-lr_gamma = 0.1
-lr_milestones = [500, 1000]
+# for lr schedular (StepLR)
+lr_scheduler = StepLR
+lr_gamma = 0.5
+lr_step = 500
 
 ckpt_save_dir = model/resnet18_ce2
 ckpt_prefix = resnet18
 
 # iteration
 iter_start = 0
-iter_max = 1500
+iter_max = 2000
 iter_valid = 100
-iter_save = 1500
+iter_save = 2000
+early_stop_patience = 1000
 
 [testing]
 # list of gpus
```
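`LabelToProbability` is newly appended to the train and valid transforms in both configurations. For a classification task it can be understood as converting an integer class label into a one-hot probability vector, which probability-based losses expect. The function below is a hypothetical stand-in for illustration, not PyMIC's implementation:

```python
def label_to_probability(label, class_num):
    """Map an integer class label to a one-hot probability vector,
    illustrating what a LabelToProbability-style transform produces."""
    prob = [0.0] * class_num
    prob[label] = 1.0
    return prob

# with class_num = 2 as in these configs:
label_to_probability(1, 2)  # -> [0.0, 1.0]
```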

classification/CHNCXR/README.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -20,7 +20,7 @@ net_type = resnet18
 pretrain = True
 input_chns = 3
 # finetune all the layers
-update_layers = 0
+update_mode = all
 ```
 
 Start to train by running:
@@ -48,12 +48,12 @@ pymic_run test config/net_resnet18.cfg
 pymic_eval_cls config/evaluation.cfg
 ```
 
-The obtained accuracy by default setting should be around 0.8571, and the AUC is 0.94.
+The obtained accuracy by default setting should be around 0.8271, and the AUC is 0.9343.
 
 3. Run `python show_roc.py` to show the receiver operating characteristic curve.
 
 ![roc](./picture/roc.png)
 
 
 ## Finetuning vgg16
-Similarly to the above example, we further try to finetune vgg16 for the same classification task. Use a different configure file `config/net_vg16.cfg` for training and testing. Edit `config/evaluation.cfg` accordingly for evaluation. The iteration number for the highest accuracy on the validation set was 2300, and the accuracy will be around 0.8797.
+Similarly to the above example, we further try to finetune vgg16 for the same classification task. Use a different configure file `config/net_vg16.cfg` for training and testing. Edit `config/evaluation.cfg` accordingly for evaluation. The accuracy and AUC would be around 0.8571 and 0.9271, respectively.
````

classification/CHNCXR/config/net_resnet18.cfg

Lines changed: 9 additions & 8 deletions

```diff
@@ -14,8 +14,8 @@ train_batch_size = 4
 modal_num = 1
 
 # data transforms
-train_transform = [Rescale, RandomCrop, RandomFlip, GrayscaleToRGB, NormalizeWithMeanStd]
-valid_transform = [Rescale, CenterCrop, GrayscaleToRGB, NormalizeWithMeanStd]
+train_transform = [Rescale, RandomCrop, RandomFlip, GrayscaleToRGB, NormalizeWithMeanStd, LabelToProbability]
+valid_transform = [Rescale, CenterCrop, GrayscaleToRGB, NormalizeWithMeanStd, LabelToProbability]
 test_transform = [Rescale, CenterCrop, GrayscaleToRGB, NormalizeWithMeanStd]
 
 Rescale_output_size = [256, 256]
@@ -37,7 +37,7 @@ net_type = resnet18
 pretrain = True
 input_chns = 3
 # finetune all the layers
-update_layers = 0
+update_mode = all
 
 # number of classes
 class_num = 2
@@ -54,10 +54,10 @@ learning_rate = 1e-3
 momentum = 0.9
 weight_decay = 1e-5
 
-# for lr schedular (MultiStepLR)
-lr_scheduler = MultiStepLR
-lr_gamma = 0.1
-lr_milestones = [1500, 3000]
+# for lr schedular (StepLR)
+lr_scheduler = StepLR
+lr_gamma = 0.5
+lr_step = 1000
 
 ckpt_save_dir = model/resnet18
 ckpt_prefix = resnet18
@@ -66,7 +66,8 @@ ckpt_prefix = resnet18
 iter_start = 0
 iter_max = 5000
 iter_valid = 100
-iter_save = 1000
+iter_save = 5000
+early_stop_patience = 2000
 
 [testing]
 # list of gpus
```
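`early_stop_patience` is a new option introduced by this commit. Its usual semantics (assumed here; the diff itself does not define it) is to stop training when the validation performance has not improved for that many iterations. A minimal sketch of the stopping rule:

```python
def should_stop(best_iter, current_iter, patience):
    """Stop when no validation improvement has been seen for
    `patience` iterations since the best checkpoint."""
    return current_iter - best_iter >= patience

# with early_stop_patience = 2000 and validation every 100 iterations:
should_stop(best_iter=1000, current_iter=2900, patience=2000)  # -> False
should_stop(best_iter=1000, current_iter=3000, patience=2000)  # -> True
```

This lets `iter_max = 5000` act as an upper bound while letting training end early once the model stops improving.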

classification/CHNCXR/config/net_vgg16.cfg

Lines changed: 9 additions & 8 deletions

```diff
@@ -14,8 +14,8 @@ train_batch_size = 4
 modal_num = 1
 
 # data transforms
-train_transform = [Rescale, RandomCrop, RandomFlip, GrayscaleToRGB, NormalizeWithMeanStd]
-valid_transform = [Rescale, CenterCrop, GrayscaleToRGB, NormalizeWithMeanStd]
+train_transform = [Rescale, RandomCrop, RandomFlip, GrayscaleToRGB, NormalizeWithMeanStd, LabelToProbability]
+valid_transform = [Rescale, CenterCrop, GrayscaleToRGB, NormalizeWithMeanStd, LabelToProbability]
 test_transform = [Rescale, CenterCrop, GrayscaleToRGB, NormalizeWithMeanStd]
 
 Rescale_output_size = [256, 256]
@@ -37,7 +37,7 @@ net_type = vgg16
 pretrain = True
 input_chns = 3
 # finetune all the layers
-update_layers = 0
+update_mode = all
 
 # number of classes
 class_num = 2
@@ -54,10 +54,10 @@ learning_rate = 1e-3
 momentum = 0.9
 weight_decay = 1e-5
 
-# for lr schedular (MultiStepLR)
-lr_scheduler = MultiStepLR
-lr_gamma = 0.1
-lr_milestones = [1500, 3000]
+# for lr schedular (StepLR)
+lr_scheduler = StepLR
+lr_gamma = 0.5
+lr_step = 1000
 
 ckpt_save_dir = model/vgg16
 ckpt_prefix = vgg16
@@ -66,7 +66,8 @@ ckpt_prefix = vgg16
 iter_start = 0
 iter_max = 5000
 iter_valid = 100
-iter_save = 1000
+iter_save = 5000
+early_stop_patience = 2000
 
 [testing]
 # list of gpus
```

seg_nll/JSRT/README.md

Lines changed: 17 additions & 1 deletion

````diff
@@ -71,6 +71,22 @@ pymic_run train config/unet_gce.cfg
 pymic_run test config/unet_gce.cfg
 ```
 
+### CLSLSR
+The CLSLSR method estimates errors in the original noisy label and obtains pixel-level weight maps based on an initial model, and then uses the weight maps to suppress noise in a standard supervised learning procedure. Assuming that the initial model is the baseline method, run the following command to obtain the weight maps:
+
+```bash
+python clslsr_get_condience config/unet_ce.cfg
+```
+
+The weight maps will be saved in `$root_dir/slsr_conf`. Then train the new model and do inference by:
+
+```bash
+pymic_run train config/unet_clslsr.cfg
+pymic_run test config/unet_clslsr.cfg
+```
+
+Note that the weight maps for training images are specified in the configuration file `train_csv = config/data/jsrt_train_mix_clslsr.csv`.
+
 ### Co-Teaching
 The configuration file for Co-Teaching is `config/unet2d_cot.cfg`. The corresponding setting is:
 
@@ -128,7 +144,7 @@ pymic_run test config/unet_dast.cfg
 Use `pymic_eval_seg config/evaluation.cfg` for quantitative evaluation of the segmentation results. You need to edit `config/evaluation.cfg` first, for example:
 
 ```bash
-metric = dice
+metric_list = [dice, assd]
 label_list = [255]
 organ_name = lung
 
````
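The idea behind the CLSLSR weight maps added above can be illustrated with a pixel-wise weighted cross-entropy, where low-confidence (likely mislabeled) pixels contribute less to the loss. This is only a sketch of the principle, not the loss implemented by `unet_clslsr.cfg`:

```python
import math

def weighted_pixel_ce(probs, labels, weights):
    """Binary cross-entropy averaged over pixels, with each pixel's
    contribution scaled by a confidence weight in [0, 1]; a weight of
    0 removes a (likely noisy) pixel from the loss entirely."""
    total, wsum = 0.0, 0.0
    for p, y, w in zip(probs, labels, weights):
        ce = -(y * math.log(p) + (1 - y) * math.log(1 - p))
        total += w * ce
        wsum += w
    return total / max(wsum, 1e-8)

# the second pixel's label is suspect, so its weight is set to 0:
weighted_pixel_ce([0.9, 0.1], [1, 1], [1.0, 0.0])  # == -log(0.9)
```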

Lines changed: 16 additions & 0 deletions

```python
# -*- coding: utf-8 -*-
from __future__ import print_function, division
import sys
from pymic.net_run_nll.nll_clslsr import get_confidence_map


if __name__ == "__main__":
    """
    The main function to get the confidence map during inference.
    """
    if(len(sys.argv) < 2):
        print('Number of arguments should be 2. e.g.')
        print('    python nll_clslsr.py config.cfg')
        exit()
    cfg_file = str(sys.argv[1])
    get_confidence_map(cfg_file)
```
