Commit 484d25b

Merge pull request #12 from HiLab-git/dev
Dev
2 parents 0f83eb3 + 211421d commit 484d25b

41 files changed (+463, −159 lines)

README.md

Lines changed: 5 additions & 5 deletions
````diff
@@ -2,10 +2,10 @@
 [PyMIC][PyMIC_link] is a PyTorch-based toolkit for medical image computing with annotation-efficient deep learning. Here we provide a set of examples to show how it can be used for image classification and segmentation tasks. For annotation-efficient learning, we show examples of Semi-Supervised Learning (SSL), Weakly Supervised Learning (WSL) and Noisy Label Learning (NLL), respectively. Beginners can follow the examples by just editing the configuration files for model training, testing and evaluation. Advanced users can easily develop their own modules, such as customized networks and loss functions.
 
 ## Install PyMIC
-The latest released version of PyMIC can be installed by:
+The released version of PyMIC (v0.4.0) is required for these examples, and it can be installed by:
 
 ```bash
-pip install PYMIC==0.3.1.1
+pip install PYMIC==0.4.0
 ```
 
 To use the latest development version, you can download the source code [here][PyMIC_link], and install it by:
@@ -15,7 +15,7 @@ python setup.py install
 ```
 
 ## Data
-The datasets for the examples can be downloaded from [Google Drive][google_link] or [Baidu Disk][baidu_link] (extraction code: n07g). Extract the files to `PyMIC_data` after the download.
+The datasets for the examples can be downloaded from [Google Drive][google_link] or [Baidu Disk][baidu_link] (extraction code: xlwg). Extract the files to `PyMIC_data` after downloading.
 
 
 ## List of Examples
@@ -35,8 +35,8 @@ Currently we provide the following examples in this repository:
 |Noisy label learning|[seg_nll/JSRT][nll_jsrt_link]|Comparing different NLL methods for learning from noisy labels|
 
 [PyMIC_link]: https://github.com/HiLab-git/PyMIC
-[google_link]:https://drive.google.com/file/d/1-LrMHsX7ZdBto2iC1WnbFFZ0tDeJQFHy/view?usp=sharing
-[baidu_link]:https://pan.baidu.com/s/15mjc0QqH75xztmc23PPWQQ
+[google_link]:https://drive.google.com/file/d/1eZakSEBr_zfIHFTAc96OFJix8cUBf-KR/view?usp=sharing
+[baidu_link]:https://pan.baidu.com/s/1tN0inIrVYtSxTVRfErD9Bw
 [AntBee_link]:classification/AntBee
 [CHNCXR_link]:classification/CHNCXR
 [JSRT_link]:segmentation/JSRT
````
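The pin moves from `0.3.1.1` to `0.4.0` because the examples depend on CLI entry points (`pymic_train`, `pymic_test`) from the newer release. As a small illustration (plain Python, not part of PyMIC), an exact `==` pin can be checked by comparing parsed version tuples:

```python
def version_tuple(v: str):
    """Parse a dotted version string like '0.4.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def matches_pin(installed: str, pinned: str) -> bool:
    """True when the installed version equals the pinned release (an == pin)."""
    return version_tuple(installed) == version_tuple(pinned)

# The commit moves the pin from 0.3.1.1 to 0.4.0.
print(matches_pin("0.4.0", "0.4.0"))    # True
print(matches_pin("0.3.1.1", "0.4.0"))  # False
```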

classification/AntBee/README.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -26,7 +26,7 @@ update_mode = all
 Then start to train by running:
 
 ```bash
-pymic_run train config/train_test_ce1.cfg
+pymic_train config/train_test_ce1.cfg
 ```
 
 2. During training or after training, run `tensorboard --logdir model/resnet18_ce1` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in a browser to observe the average loss and accuracy during training, as shown in the following images, where the blue and red curves are for the training set and validation set, respectively. The iteration number that obtained the highest accuracy on the validation set was 400; it may differ depending on the hardware environment. After training, you can find the trained models in `./model/resnet18_ce1`.
@@ -39,7 +39,7 @@ pymic_run train config/train_test_ce1.cfg
 
 ```bash
 mkdir result
-pymic_run test config/train_test_ce1.cfg
+pymic_test config/train_test_ce1.cfg
 ```
 
 2. Then run the following command to obtain quantitative evaluation results in terms of accuracy.
````
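The last step above evaluates classification accuracy on the test set. As a rough, hypothetical illustration (the list-based interface below is not PyMIC's actual evaluation API), accuracy is simply the fraction of matching label/prediction pairs:

```python
def accuracy(labels, predictions):
    """Fraction of predictions that match the ground-truth labels."""
    if len(labels) != len(predictions):
        raise ValueError("labels and predictions must have equal length")
    correct = sum(1 for y, p in zip(labels, predictions) if y == p)
    return correct / len(labels)

# Hypothetical ant/bee labels (0 = ant, 1 = bee) and model outputs.
print(accuracy([0, 1, 1, 0], [0, 1, 0, 0]))  # 0.75
```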

classification/AntBee/config/train_test_ce1.cfg

Lines changed: 2 additions & 1 deletion
```diff
@@ -77,5 +77,6 @@ gpus = [0]
 
 # checkpoint mode can be [0-latest, 1-best, 2-specified]
 ckpt_mode = 1
-output_csv = result/resnet18_ce1.csv
+output_dir = result
+output_csv = resnet18_ce1.csv
 save_probability = True
```
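The commit splits the single `output_csv` path into `output_dir` plus a bare file name. Assuming the runner simply joins the two (an assumption about PyMIC's behavior, not shown in this diff), the resulting path is unchanged:

```python
import os

def output_path(output_dir: str, output_csv: str) -> str:
    """Join the configured output directory and CSV file name."""
    return os.path.join(output_dir, output_csv)

# Same location as the old single-path setting, on POSIX systems:
print(output_path("result", "resnet18_ce1.csv"))  # result/resnet18_ce1.csv
```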

classification/AntBee/config/train_test_ce2.cfg

Lines changed: 2 additions & 1 deletion
```diff
@@ -77,5 +77,6 @@ gpus = [0]
 
 # checkpoint mode can be [0-latest, 1-best, 2-specified]
 ckpt_mode = 1
-output_csv = result/resnet18_ce2.csv
+output_dir = result
+output_csv = resnet18_ce2.csv
 save_probability = True
```

classification/CHNCXR/README.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -26,7 +26,7 @@ update_mode = all
 Start to train by running:
 
 ```bash
-pymic_run train config/net_resnet18.cfg
+pymic_train config/net_resnet18.cfg
 ```
 
 2. During training or after training, run `tensorboard --logdir model/resnet18` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in a browser to observe the average loss and accuracy during training, as shown in the following images, where the blue and red curves are for the training set and validation set, respectively. The iteration number that obtained the highest accuracy on the validation set was 1800; it may differ depending on the hardware environment. After training, you can find the trained models in `./model/resnet18`.
@@ -39,7 +39,7 @@ pymic_run train config/net_resnet18.cfg
 
 ```bash
 mkdir result
-pymic_run test config/net_resnet18.cfg
+pymic_test config/net_resnet18.cfg
 ```
 
 2. Then run the following command to obtain quantitative evaluation results in terms of accuracy.
````

classification/CHNCXR/config/net_resnet18.cfg

Lines changed: 2 additions & 1 deletion
```diff
@@ -75,5 +75,6 @@ gpus = [0]
 
 # checkpoint mode can be [0-latest, 1-best, 2-specified]
 ckpt_mode = 1
-output_csv = result/resnet18.csv
+output_dir = result
+output_csv = resnet18.csv
 save_probability = True
```

classification/CHNCXR/config/net_vgg16.cfg

Lines changed: 2 additions & 1 deletion
```diff
@@ -75,5 +75,6 @@ gpus = [0]
 
 # checkpoint mode can be [0-latest, 1-best, 2-specified]
 ckpt_mode = 1
-output_csv = result/vgg16.csv
+output_dir = result
+output_csv = vgg16.csv
 save_probability = True
```

seg_nll/JSRT/README.md

Lines changed: 38 additions & 17 deletions
````diff
@@ -39,7 +39,10 @@ The dataset setting is similar to that in the `segmentation/JSRT` demo. See `con
 
 ```bash
 ...
-task_type = seg
+tensor_type = float
+task_type = seg
+supervise_type = fully_sup
+
 root_dir = ../../PyMIC_data/JSRT
 train_csv = config/data/jsrt_train_mix.csv
 valid_csv = config/data/jsrt_valid.csv
@@ -51,8 +54,8 @@ loss_type = CrossEntropyLoss
 The following commands are used for training and inference with this method, respectively:
 
 ```bash
-pymic_run train config/unet_ce.cfg
-pymic_run test config/unet_ce.cfg
+pymic_train config/unet_ce.cfg
+pymic_test config/unet_ce.cfg
 ```
 
 ### GCE Loss
@@ -67,8 +70,8 @@ loss_type = GeneralizedCELoss
 The following commands are used for training and inference with this method, respectively:
 
 ```bash
-pymic_run train config/unet_gce.cfg
-pymic_run test config/unet_gce.cfg
+pymic_train config/unet_gce.cfg
+pymic_test config/unet_gce.cfg
 ```
 
 ### CLSLSR
@@ -81,33 +84,45 @@ python clslsr_get_condience config/unet_ce.cfg
 The weight maps will be saved in `$root_dir/slsr_conf`. Then train the new model and do inference by:
 
 ```bash
-pymic_run train config/unet_clslsr.cfg
-pymic_run test config/unet_clslsr.cfg
+pymic_train config/unet_clslsr.cfg
+pymic_test config/unet_clslsr.cfg
 ```
 
 Note that the weight maps for training images are specified in the configuration file `train_csv = config/data/jsrt_train_mix_clslsr.csv`.
 
 ### Co-Teaching
-The configuration file for Co-Teaching is `config/unet2d_cot.cfg`. The corresponding setting is:
+The configuration file for Co-Teaching is `config/unet2d_cot.cfg`. Note that for the following methods, `supervise_type` should be set to `noisy_label`.
 
 ```bash
-nll_method = CoTeaching
+[dataset]
+...
+supervise_type = noisy_label
+...
+
+[noisy_label_learning]
+method_name = CoTeaching
 co_teaching_select_ratio = 0.8
 rampup_start = 1000
 rampup_end = 8000
 ```
 
 The following commands are used for training and inference with this method, respectively:
 ```bash
-pymic_nll train config/unet_cot.cfg
-pymic_nll test config/unet_cot.cfg
+pymic_train config/unet_cot.cfg
+pymic_test config/unet_cot.cfg
 ```
 
 ### TriNet
 The configuration file for TriNet is `config/unet_trinet.cfg`. The corresponding setting is:
 
 ```bash
-nll_method = TriNet
+[dataset]
+...
+supervise_type = noisy_label
+...
+
+[noisy_label_learning]
+method_name = TriNet
 trinet_select_ratio = 0.9
 rampup_start = 1000
 rampup_end = 8000
@@ -116,15 +131,21 @@ rampup_end = 8000
 The following commands are used for training and inference with this method, respectively:
 
 ```bash
-pymic_nll train config/unet_trinet.cfg
-pymic_nll test config/unet_trinet.cfg
+pymic_train config/unet_trinet.cfg
+pymic_test config/unet_trinet.cfg
 ```
 
 ### DAST
 The configuration file for DAST is `config/unet_dast.cfg`. The corresponding setting is:
 
 ```bash
-nll_method = DAST
+[dataset]
+...
+supervise_type = noisy_label
+...
+
+[noisy_label_learning]
+method_name = DAST
 dast_dbc_w = 0.1
 dast_st_w = 0.1
 dast_rank_length = 20
@@ -136,8 +157,8 @@ rampup_end = 8000
 The commands for training and inference are:
 
 ```bash
-pymic_nll train config/unet_dast.cfg
-pymic_run test config/unet_dast.cfg
+pymic_train config/unet_dast.cfg
+pymic_test config/unet_dast.cfg
 ```
 
 ## Evaluation
````
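The settings shown in the hunks above are INI-style sections. PyMIC has its own configuration reader, so this is only a sketch assuming standard INI semantics (via Python's `configparser`); the fragment mirrors the Co-Teaching settings with surrounding options elided:

```python
import configparser

# Fragment modeled on the new noisy-label configuration layout.
CFG = """
[dataset]
supervise_type = noisy_label

[noisy_label_learning]
method_name = CoTeaching
co_teaching_select_ratio = 0.8
rampup_start = 1000
rampup_end = 8000
"""

parser = configparser.ConfigParser()
parser.read_string(CFG)

# The NLL method is chosen via method_name once supervise_type is noisy_label.
print(parser["dataset"]["supervise_type"])            # noisy_label
print(parser["noisy_label_learning"]["method_name"])  # CoTeaching
print(parser["noisy_label_learning"].getfloat("co_teaching_select_ratio"))  # 0.8
```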

seg_nll/JSRT/config/unet_ce.cfg

Lines changed: 3 additions & 4 deletions
```diff
@@ -1,8 +1,9 @@
 [dataset]
 # tensor type (float or double)
-tensor_type = float
+tensor_type = float
+task_type = seg
+supervise_type = fully_sup
 
-task_type = seg
 root_dir = ../../PyMIC_data/JSRT
 train_csv = config/data/jsrt_train_mix.csv
 valid_csv = config/data/jsrt_valid.csv
@@ -64,8 +65,6 @@ ReduceLROnPlateau_patience = 2000
 ckpt_save_dir = model/unet_ce
 ckpt_prefix = unet_ce
 
-# start iter
-iter_start = 0
 iter_max = 10000
 iter_valid = 100
 iter_save = [10000]
```
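The diff removes the explicit `iter_start = 0` line. Assuming the trainer now falls back to a default of 0 when the key is absent (an assumption; the reader code is not shown in this commit), the behavior can be sketched with `configparser` fallbacks. The `[training]` section name here is also an assumption:

```python
import configparser

# Fragment with iter_start deliberately omitted, as in the updated cfg files.
CFG = """
[training]
iter_max = 10000
iter_valid = 100
"""

parser = configparser.ConfigParser()
parser.read_string(CFG)
train = parser["training"]

# iter_start is no longer in the file, so a fallback default is used instead.
iter_start = train.getint("iter_start", fallback=0)
iter_max = train.getint("iter_max")
print(iter_start, iter_max)  # 0 10000
```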

seg_nll/JSRT/config/unet_clslsr.cfg

Lines changed: 3 additions & 4 deletions
```diff
@@ -1,8 +1,9 @@
 [dataset]
 # tensor type (float or double)
-tensor_type = float
+tensor_type = float
+task_type = seg
+supervise_type = fully_sup
 
-task_type = seg
 root_dir = ../../PyMIC_data/JSRT
 train_csv = config/data/jsrt_train_mix_clslsr.csv
 valid_csv = config/data/jsrt_valid.csv
@@ -65,8 +66,6 @@ early_stop_patience = 4000
 ckpt_save_dir = model/unet_clslsr
 ckpt_prefix = unet_clslsr
 
-# start iter
-iter_start = 0
 iter_max = 10000
 iter_valid = 100
 iter_save = [10000]
```
