Commit cdf6f86

update readme
replace pymic_run with pymic_train or pymic_test
1 parent 9fb9ed8 commit cdf6f86

File tree

9 files changed, +18 −14 lines changed

classification/AntBee/README.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -26,7 +26,7 @@ update_mode = all
 Then start to train by running:

 ```bash
-pymic_run train config/train_test_ce1.cfg
+pymic_train config/train_test_ce1.cfg
 ```

 2. During training or after training, run `tensorboard --logdir model/resnet18_ce1` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in a browser to observe the average loss and accuracy during training, as shown in the following images, where blue and red curves are for the training set and validation set respectively. The iteration number that obtained the highest accuracy on the validation set was 400; it may differ depending on the hardware environment. After training, you can find the trained models in `./model/resnet18_ce1`.
@@ -39,7 +39,7 @@ pymic_run train config/train_test_ce1.cfg

 ```bash
 mkdir result
-pymic_run test config/train_test_ce1.cfg
+pymic_test config/train_test_ce1.cfg
 ```

 2. Then run the following command to obtain quantitative evaluation results in terms of accuracy.
````
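Across all the README files, this commit's change is mechanical: `pymic_run train <cfg>` becomes `pymic_train <cfg>`, and `pymic_run test <cfg>` becomes `pymic_test <cfg>`. A hypothetical helper sketching that rewrite for shell snippets in your own docs (the function is illustrative, not part of PyMIC):

```python
def migrate_command(cmd: str) -> str:
    """Rewrite an old-style pymic_run invocation to the new entry points.

    pymic_run train <cfg> -> pymic_train <cfg>
    pymic_run test <cfg>  -> pymic_test <cfg>
    Other commands are returned unchanged.
    """
    parts = cmd.split()
    if len(parts) >= 3 and parts[0] == "pymic_run" and parts[1] in ("train", "test"):
        return " ".join([f"pymic_{parts[1]}"] + parts[2:])
    return cmd

print(migrate_command("pymic_run train config/train_test_ce1.cfg"))
# pymic_train config/train_test_ce1.cfg
```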

classification/AntBee/config/train_test_ce1.cfg

Lines changed: 2 additions & 1 deletion

```diff
@@ -77,5 +77,6 @@ gpus = [0]

 # checkpoint mode can be [0-latest, 1-best, 2-specified]
 ckpt_mode = 1
-output_csv = result/resnet18_ce1.csv
+output_dir = result
+output_csv = resnet18_ce1.csv
 save_probability = True
```
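The config change splits the single output path into `output_dir` and `output_csv`. Assuming PyMIC joins the two fields when writing results (an assumption about PyMIC's internals, not shown in this diff; the `[testing]` section name is also assumed here), the new settings resolve to the same file as before:

```python
import configparser
import os

# New-style testing settings from train_test_ce1.cfg (excerpt);
# the [testing] section header is an assumption for illustration.
cfg_text = """
[testing]
ckpt_mode = 1
output_dir = result
output_csv = resnet18_ce1.csv
save_probability = True
"""

parser = configparser.ConfigParser()
parser.read_string(cfg_text)

# Assumption: PyMIC joins output_dir and output_csv to build the output path.
out_path = os.path.join(parser["testing"]["output_dir"],
                        parser["testing"]["output_csv"])
print(out_path)  # result/resnet18_ce1.csv on POSIX systems
```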

classification/AntBee/config/train_test_ce2.cfg

Lines changed: 2 additions & 1 deletion

```diff
@@ -77,5 +77,6 @@ gpus = [0]

 # checkpoint mode can be [0-latest, 1-best, 2-specified]
 ckpt_mode = 1
-output_csv = result/resnet18_ce2.csv
+output_dir = result
+output_csv = restnet18_ce2.csv
 save_probability = True
```

classification/CHNCXR/README.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -26,7 +26,7 @@ update_mode = all
 Start to train by running:

 ```bash
-pymic_run train config/net_resnet18.cfg
+pymic_train config/net_resnet18.cfg
 ```

 2. During training or after training, run `tensorboard --logdir model/resnet18` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in a browser to observe the average loss and accuracy during training, as shown in the following images, where blue and red curves are for the training set and validation set respectively. The iteration number that obtained the highest accuracy on the validation set was 1800; it may differ depending on the hardware environment. After training, you can find the trained models in `./model/resnet18`.
@@ -39,7 +39,7 @@ pymic_run train config/net_resnet18.cfg

 ```bash
 mkdir result
-pymic_run test config/net_resnet18.cfg
+pymic_test config/net_resnet18.cfg
 ```

 2. Then run the following command to obtain quantitative evaluation results in terms of accuracy.
````

classification/CHNCXR/config/net_resnet18.cfg

Lines changed: 2 additions & 1 deletion

```diff
@@ -75,5 +75,6 @@ gpus = [0]

 # checkpoint mode can be [0-latest, 1-best, 2-specified]
 ckpt_mode = 1
-output_csv = result/resnet18.csv
+output_dir = result
+output_csv = resnet18.csv
 save_probability = True
```
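Each config carries the comment `# checkpoint mode can be [0-latest, 1-best, 2-specified]`: 0 selects the latest checkpoint, 1 the best one on the validation set, and 2 a user-specified one. A small illustrative lookup of those values (the function is hypothetical, not PyMIC API):

```python
# Meaning of ckpt_mode, per the comment in the config files:
#   0 - latest checkpoint, 1 - best checkpoint, 2 - user-specified checkpoint.
CKPT_MODES = {0: "latest", 1: "best", 2: "specified"}

def describe_ckpt_mode(mode: int) -> str:
    """Return the checkpoint-selection strategy for a ckpt_mode value."""
    try:
        return CKPT_MODES[mode]
    except KeyError:
        raise ValueError(f"ckpt_mode must be 0, 1 or 2, got {mode}")

print(describe_ckpt_mode(1))  # best
```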

classification/CHNCXR/config/net_vgg16.cfg

Lines changed: 2 additions & 1 deletion

```diff
@@ -75,5 +75,6 @@ gpus = [0]

 # checkpoint mode can be [0-latest, 1-best, 2-specified]
 ckpt_mode = 1
-output_csv = result/vgg16.csv
+output_dir = result
+output_csv = vgg16.csv
 save_probability = True
```

segmentation/JSRT/README.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -16,7 +16,7 @@ In this example, we use 2D U-Net to segment the lung from X-Ray images. First we
 1. Start to train by running:

 ```bash
-pymic_run train config/unet.cfg
+pymic_train config/unet.cfg
 ```

 2. During training or after training, run `tensorboard --logdir model/unet` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in a browser to observe the average Dice score and loss during training, as shown in the following images, where red and blue curves are for the training set and validation set respectively. We can observe some over-fitting on the training set.
@@ -28,7 +28,7 @@ pymic_run train config/unet.cfg
 1. Run the following command to obtain segmentation results of testing images. By default we use the latest checkpoint. You can set `ckpt_mode` to 1 in `config/unet.cfg` to use the best-performing checkpoint based on the validation set.

 ```bash
-pymic_run test config/unet.cfg
+pymic_test config/unet.cfg
 ```

 2. Then edit `config/evaluation.cfg` by setting `ground_truth_folder_root` to your `JSRT_root`, and run the following command to obtain quantitative evaluation results in terms of Dice.
````
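The segmentation examples all evaluate results with the Dice score, defined for two binary masks A and B as 2|A∩B| / (|A| + |B|). PyMIC's own evaluation is driven by `config/evaluation.cfg`; the sketch below is only an illustration of the metric itself:

```python
def dice_score(pred, truth):
    """Dice score of two binary masks given as flat 0/1 sequences:
    2 * |intersection| / (|pred| + |truth|)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks agree perfectly.
    return 2.0 * inter / total if total else 1.0

# Toy masks: intersection 3, foreground sizes 4 and 4 -> 2*3/8 = 0.75.
print(dice_score([1, 1, 1, 0, 1], [1, 1, 1, 1, 0]))  # 0.75
```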

segmentation/fetal_hc/README.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -15,7 +15,7 @@ In this example, we use 2D U-Net to segment the fetal brain from ultrasound imag
 1. Start to train by running:

 ```bash
-pymic_run train config/unet.cfg
+pymic_train config/unet.cfg
 ```

 2. During training or after training, run `tensorboard --logdir model/unet` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in a browser to observe the average Dice score and loss during training, as shown in the following images, where red and blue curves are for the training set and validation set respectively.
@@ -27,7 +27,7 @@ pymic_run train config/unet.cfg
 1. Run the following command to obtain segmentation results of testing images based on the best-performing checkpoint on the validation set. By default we use sliding window inference to get better results. You can also edit the `testing` section of `config/unet.cfg` to use other inference strategies.

 ```bash
-pymic_run test config/unet.cfg
+pymic_test config/unet.cfg
 ```

 2. Use the following command to obtain quantitative evaluation results in terms of Dice.
````

segmentation/prostate/README.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -13,7 +13,7 @@ In this example, we use 3D U-Net with deep supervision to segment the prostate f
 1. Start to train by running:

 ```bash
-pymic_run train config/unet3d.cfg
+pymic_train config/unet3d.cfg
 ```

 Note that we set `multiscale_pred = True`, `deep_supervise = True` and `loss_type = [DiceLoss, CrossEntropyLoss]` in the configuration file. We also use Mixup for data
@@ -28,7 +28,7 @@ augmentation by setting `mixup_probability=0.5`.
 1. Run the following command to obtain segmentation results of testing images. By default we set `ckpt_mode` to 1, which means using the best-performing checkpoint based on the validation set.

 ```bash
-pymic_run test config/unet3d.cfg
+pymic_test config/unet3d.cfg
 ```

 2. Run the following command to obtain quantitative evaluation results in terms of Dice.
````
