[PyMIC][PyMIC_link] is a PyTorch-based toolkit for medical image computing with annotation-efficient deep learning. Here we provide a set of examples to show how it can be used for image classification and segmentation tasks. For annotation-efficient learning, we show examples of Semi-Supervised Learning (SSL), Weakly Supervised Learning (WSL) and Noisy Label Learning (NLL), respectively. For beginners, you can follow the examples by just editing the configuration files for model training, testing and evaluation. For advanced users, you can easily develop your own modules, such as customized networks and loss functions.
## Install PyMIC
The latest released version of PyMIC can be installed by:
```bash
pip install PYMIC
```
To use the latest development version, you can download the source code [here][PyMIC_link], and install it by:
```bash
python setup.py install
```
## List of Examples
Currently we provide the following examples in this repository:
|Category|Example|Remarks|
|---|---|---|
|Classification|[AntBee][AntBee_link]|Finetuning a resnet18 for Ant and Bee classification|
|Classification|[CHNCXR][CHNCXR_link]|Finetuning resnet18 and vgg16 for normal/tuberculosis X-ray image classification|
|Fully supervised segmentation|[JSRT][JSRT_link]|Using a 2D UNet for lung segmentation from chest X-ray images|
|Fully supervised segmentation|[JSRT2][JSRT2_link]|Using a customized network and loss function for the JSRT dataset|
|Fully supervised segmentation|[Fetal_HC][fetal_hc_link]|Using a 2D UNet for fetal head segmentation from 2D ultrasound images|
|Fully supervised segmentation|[Prostate][prostate_link]|Using a 3D UNet for prostate segmentation from 3D MRI|
|Semi-supervised segmentation|[seg_ssl/ACDC][ssl_acdc_link]|Comparing different semi-supervised methods for heart structure segmentation|
|Weakly-supervised segmentation|[seg_wsl/ACDC][wsl_acdc_link]|Segmentation of heart structures with scribble annotations|
|Noisy label learning|[seg_nll/JSRT][nll_jsrt_link]|Comparing different NLL methods for learning from noisy labels|
[PyMIC_link]: https://github.com/HiLab-git/PyMIC
[AntBee_link]:classification/AntBee
# classification/AntBee/README.md
In this example, we finetune a pretrained resnet18 for classification of images with two categories: ants and bees. This example is a PyMIC implementation of PyTorch's "Transfer Learning for Computer Vision" tutorial; the original tutorial can be found [here][torch_tutorial]. In PyMIC's implementation, we only need to edit the configuration file to run the code.
## Data and preprocessing
1. The dataset contains about 120 training images each for ants and bees, and 75 validation images for each class. Download the data from [here][data_link] and extract it to `PyMIC_data`. The paths for the training and validation sets should then be `PyMIC_data/hymenoptera_data/train` and `PyMIC_data/hymenoptera_data/val`, respectively.
2. Run `python write_csv_files.py` to create two csv files storing the paths and labels of training and validation images. They are `train_data.csv` and `valid_data.csv` and saved in `./config`.
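Conceptually, each of these CSV files pairs an image path with its class label. A minimal Python sketch of producing such a file (the paths, label coding and column names here are hypothetical illustrations, not the actual output of `write_csv_files.py`):

```python
import csv

# Hypothetical (image path, class label) pairs; the real entries are
# generated by scanning the extracted dataset folders.
samples = [
    ("hymenoptera_data/train/ants/img_001.jpg", 0),  # 0 = ant (assumed coding)
    ("hymenoptera_data/train/bees/img_002.jpg", 1),  # 1 = bee (assumed coding)
]

with open("train_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image", "label"])  # assumed column names
    writer.writerows(samples)
```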
## Finetuning all layers of resnet18
1. Here we use resnet18 for finetuning and update all the layers. Open the configuration file `config/train_test_ce1.cfg`; the `network` section gives the details of the network, where `update_layers = 0` means updating all the layers:
```bash
# type of network
net_type = resnet18
pretrain = True
input_chns = 3
# finetune all the layers
update_layers = 0
```

Then start to train by running:
```bash
pymic_run train config/train_test_ce1.cfg
```
2. During or after training, run `tensorboard --logdir model/resnet18_ce1` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in a browser to observe the average loss and accuracy during training, where the blue and red curves are for the training and validation sets, respectively. The iteration number that obtained the highest accuracy on the validation set was 400 in our run; it may differ depending on the hardware environment. After training, you can find the trained models in `./model/resnet18_ce1`.
1. Run the following command to obtain classification results of testing images. By default we use the best performing checkpoint based on the validation set. You can set `ckpt_mode` to 0 in `config/train_test_ce1.cfg` to use the latest checkpoint.
```bash
mkdir result
pymic_run test config/train_test_ce1.cfg
```
2. Then run the following command to obtain quantitative evaluation results in terms of accuracy.
```bash
pymic_eval_cls config/evaluation.cfg
```
The accuracy obtained with the default setting should be around 0.9412, and the AUC around 0.976.
3. Run `python show_roc.py` to show the receiver operating characteristic curve.
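For intuition about what such a script computes, here is a self-contained sketch of an ROC curve and its AUC, obtained by sweeping a decision threshold over predicted probabilities (an illustration with made-up labels and scores, not the code of `show_roc.py`):

```python
def roc_points(labels, scores):
    """(FPR, TPR) points obtained by sweeping a decision threshold."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for l, s in zip(labels, scores) if s >= t and l == 1)
        fp = sum(1 for l, s in zip(labels, scores) if s >= t and l == 0)
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# Made-up ground truth (1 = positive class) and predicted probabilities
labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.3, 0.1]
curve = roc_points(labels, scores)
```

A perfectly ranked classifier yields an AUC of 1.0 under this construction, and random scores approach 0.5.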
## Finetuning the last layer of resnet18
Similar to the above example, we further try to finetune only the last layer of resnet18 for the same classification task. Use a different configuration file `config/train_test_ce2.cfg` for training and testing, where `update_layers = -1` in the `network` section means updating the last layer only:
```bash
net_type = resnet18
pretrain = True
input_chns = 3
# finetune the last layer only
update_layers = -1
```

Edit `config/evaluation.cfg` accordingly for evaluation.
# classification/CHNCXR/README.md
In this example, we finetune a pretrained resnet18 and vgg16 for classification of X-ray images with two categories: normal and tuberculosis.
## Data and preprocessing
1. We use the Shenzhen Hospital X-ray Set for this experiment. The dataset contains images in JPEG format: 326 normal X-rays and 336 abnormal X-rays showing various manifestations of tuberculosis. The images are available in `PyMIC_data/CHNCXR`.
2. Run `python write_csv_files.py` to randomly split the entire dataset into 70% for training, 10% for validation and 20% for testing. The output files are `cxr_train.csv`, `cxr_valid.csv` and `cxr_test.csv` under folder `./config`.
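The 70%/10%/20% split above can be sketched as follows (a simplified illustration, not the actual logic of `write_csv_files.py`):

```python
import random

def split_dataset(items, train_frac=0.7, valid_frac=0.1, seed=0):
    """Shuffle and split items into train/valid/test; test gets the remainder."""
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed for a reproducible split
    n_train = int(len(items) * train_frac)
    n_valid = int(len(items) * valid_frac)
    return (items[:n_train],
            items[n_train:n_train + n_valid],
            items[n_train + n_valid:])

# 326 normal + 336 tuberculosis images = 662 samples in total
train_set, valid_set, test_set = split_dataset(range(662))
```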
## Finetuning resnet18
1. First, we use resnet18 for finetuning and update all the layers. The configuration file is `config/net_resnet18.cfg`. The network settings are:
```bash
net_type = resnet18
pretrain = True
input_chns = 3
# finetune all the layers
update_layers = 0
```

Start to train by running:
```bash
pymic_run train config/net_resnet18.cfg
```
2. During or after training, run `tensorboard --logdir model/resnet18` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in a browser to observe the average loss and accuracy during training, where the blue and red curves are for the training and validation sets, respectively. The iteration number that obtained the highest accuracy on the validation set was 1800 in our run; it may differ depending on the hardware environment. After training, you can find the trained models in `./model/resnet18`.
2. Then run the following command to obtain quantitative evaluation results in terms of accuracy.
```bash
pymic_eval_cls config/evaluation.cfg
```
The accuracy obtained with the default setting should be around 0.8571, and the AUC around 0.94.
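Accuracy here is simply the fraction of test images whose predicted label matches the ground truth, as in this minimal sketch (with made-up labels; the real evaluation is configured via `config/evaluation.cfg`):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    assert len(y_true) == len(y_pred)
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Made-up labels: 0 = normal, 1 = tuberculosis
acc = accuracy([0, 1, 1, 0, 1], [0, 1, 0, 0, 1])  # 4 of 5 correct -> 0.8
```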
## Finetuning vgg16
Similar to the above example, we further try to finetune vgg16 for the same classification task. Use a different configuration file `config/net_vg16.cfg` for training and testing, and edit `config/evaluation.cfg` accordingly for evaluation. The iteration number with the highest accuracy on the validation set was 2300, and the accuracy will be around 0.8797.