README.md: 5 additions & 5 deletions

@@ -2,10 +2,10 @@
 [PyMIC][PyMIC_link] is a PyTorch-based toolkit for medical image computing with annotation-efficient deep learning. Here we provide a set of examples to show how it can be used for image classification and segmentation tasks. For annotation-efficient learning, we show examples of Semi-Supervised Learning (SSL), Weakly Supervised Learning (WSL) and Noisy Label Learning (NLL), respectively. Beginners can follow the examples by just editing the configuration files for model training, testing and evaluation. Advanced users can easily develop their own modules, such as customized networks and loss functions.
 
 ## Install PyMIC
-The latest released version of PyMIC can be installed by:
+The released version of PyMIC (v0.4.0) is required for these examples, and it can be installed by:
 
 ```bash
-pip install PYMIC==0.3.1.1
+pip install PYMIC==0.4.0
 ```
 
 To use the latest development version, you can download the source code [here][PyMIC_link], and install it by:
@@ -15,7 +15,7 @@ python setup.py install
 ```
 
 ## Data
-The datasets for the examples can be downloaded from [Google Drive][google_link] or [Baidu Disk][baidu_link] (extraction code: n07g). Extract the files to `PyMIC_data` after the download.
+The datasets for the examples can be downloaded from [Google Drive][google_link] or [Baidu Disk][baidu_link] (extraction code: xlwg). Extract the files to `PyMIC_data` after downloading.
 
 
 ## List of Examples
@@ -35,8 +35,8 @@ Currently we provide the following examples in this repository:
 |Noisy label learning|[seg_nll/JSRT][nll_jsrt_link]|Comparing different NLL methods for learning from noisy labels|
classification/AntBee/README.md: 2 additions & 2 deletions

@@ -26,7 +26,7 @@ update_mode = all
 Then start to train by running:
 
 ```bash
-pymic_run train config/train_test_ce1.cfg
+pymic_train config/train_test_ce1.cfg
 ```
 
 2. During training or after training, run `tensorboard --logdir model/resnet18_ce1` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in a browser to observe the average loss and accuracy during training, as shown in the following images, where the blue and red curves are for the training set and validation set, respectively. The iteration that obtained the highest accuracy on the validation set was 400, but this may differ depending on the hardware environment. After training, you can find the trained models in `./model/resnet18_ce1`.
classification/CHNCXR/README.md: 2 additions & 2 deletions

@@ -26,7 +26,7 @@ update_mode = all
 Start to train by running:
 
 ```bash
-pymic_run train config/net_resnet18.cfg
+pymic_train config/net_resnet18.cfg
 ```
 
 2. During training or after training, run `tensorboard --logdir model/resnet18` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in a browser to observe the average loss and accuracy during training, as shown in the following images, where the blue and red curves are for the training set and validation set, respectively. The iteration that obtained the highest accuracy on the validation set was 1800, but this may differ depending on the hardware environment. After training, you can find the trained models in `./model/resnet18`.
 The weight maps will be saved in `$root_dir/slsr_conf`. Then train the new model and do inference by:
 
 ```bash
-pymic_run train config/unet_clslsr.cfg
-pymic_run test config/unet_clslsr.cfg
+pymic_train config/unet_clslsr.cfg
+pymic_test config/unet_clslsr.cfg
 ```
 
 Note that the weight maps for training images are specified in the configuration file `train_csv = config/data/jsrt_train_mix_clslsr.csv`.
 ### Co-Teaching
-The configuration file for Co-Teaching is `config/unet2d_cot.cfg`. The corresponding setting is:
+The configuration file for Co-Teaching is `config/unet2d_cot.cfg`. Note that for the following methods, `supervise_type` should be set to `noisy_label`.
 
 ```bash
-nll_method = CoTeaching
+[dataset]
+...
+supervise_type = noisy_label
+...
+
+[noisy_label_learning]
+method_name = CoTeaching
 co_teaching_select_ratio = 0.8
 rampup_start = 1000
 rampup_end = 8000
 ```
 
 The following commands are used for training and inference with this method, respectively:
 ```bash
-pymic_nll train config/unet_cot.cfg
-pymic_nll test config/unet_cot.cfg
+pymic_train config/unet_cot.cfg
+pymic_test config/unet_cot.cfg
 ```
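The `co_teaching_select_ratio` setting above controls the small-loss selection at the heart of Co-Teaching: two networks each rank a batch by their own per-sample loss, and the cleanest-looking fraction selected by one network is used to update the other. The following is a minimal plain-Python sketch of that selection step, not PyMIC's actual implementation; the function and variable names are illustrative:

```python
def co_teaching_select(losses_a, losses_b, select_ratio=0.8):
    """Small-loss selection used in Co-Teaching (illustrative sketch).

    Each network ranks the batch by its own per-sample loss; the
    smallest-loss fraction is treated as likely-clean and is used to
    update the *other* network.
    """
    k = int(select_ratio * len(losses_a))
    by_a = sorted(range(len(losses_a)), key=lambda i: losses_a[i])
    by_b = sorted(range(len(losses_b)), key=lambda i: losses_b[i])
    # Net A is updated on samples net B considers clean, and vice versa.
    return by_b[:k], by_a[:k]

# With select_ratio = 0.8, the highest-loss 20% of the batch is dropped
# by each network (here, the sample at index 4 in both cases):
train_a, train_b = co_teaching_select([0.1, 2.0, 0.3, 0.2, 5.0],
                                      [0.2, 1.5, 0.1, 0.4, 4.0])
```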
 
 ### TriNet
 The configuration file for TriNet is `config/unet_trinet.cfg`. The corresponding setting is:
 
 ```bash
-nll_method = TriNet
+[dataset]
+...
+supervise_type = noisy_label
+...
+
+[noisy_label_learning]
+method_name = TriNet
 trinet_select_ratio = 0.9
 rampup_start = 1000
 rampup_end = 8000
@@ -116,15 +131,21 @@ rampup_end = 8000
 The following commands are used for training and inference with this method, respectively:
 
 ```bash
-pymic_nll train config/unet_trinet.cfg
-pymic_nll test config/unet_trinet.cfg
+pymic_train config/unet_trinet.cfg
+pymic_test config/unet_trinet.cfg
 ```
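Both the Co-Teaching and TriNet configurations share `rampup_start` and `rampup_end`, which schedule how aggressively samples are filtered as training progresses. PyMIC's exact schedule is not shown in these snippets; a common choice, sketched below under that assumption, is a linear ramp from keeping the whole batch to keeping only the configured ratio (e.g. `trinet_select_ratio`):

```python
def rampup_select_ratio(iteration, rampup_start=1000, rampup_end=8000,
                        final_ratio=0.9):
    """Linearly ramp the kept fraction from 1.0 down to final_ratio.

    Illustrative only: before rampup_start all samples are kept; after
    rampup_end the configured select ratio is used unchanged.
    """
    if iteration <= rampup_start:
        return 1.0
    if iteration >= rampup_end:
        return final_ratio
    t = (iteration - rampup_start) / (rampup_end - rampup_start)
    return 1.0 + t * (final_ratio - 1.0)
```

For example, with the settings above the kept fraction is 1.0 at iteration 1000, 0.95 halfway through the ramp at iteration 4500, and 0.9 from iteration 8000 onward.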
 
 ### DAST
 The configuration file for DAST is `config/unet_dast.cfg`. The corresponding setting is: