@@ -52,6 +52,30 @@ If multi-scale testing is used, we adopt scales: 0.5,0.75,1.0,1.25,1.5,1.75,2.0
| HRNetV2-W48 | 60 classes | No | Yes | Yes | 48.3 | [OneDrive](https://1drv.ms/u/s!Aus8VCZ_C_33gQEHDQrZCiv4R5mf)/[BaiduYun(Access Code:9uf8)](https://pan.baidu.com/s/1pgYt8P8ht2HOOzcA0F7Kag) |
| HRNetV2-W48 + OCR | 60 classes | No | Yes | Yes | 50.1 | [GoogleDrive](https://drive.google.com/file/d/1ZAZ94GME3wmijF7ax5bqa0P3KxNLPUXR/view?usp=sharing)/[BaiduYun(Access Code:gtkb)](https://pan.baidu.com/s/13AYjwzh1LJSlipJwNpJ3Uw) |

+ 4. Performance on the COCO-Stuff dataset. The models are trained and tested with the input size of 520x520.
+ If multi-scale testing is used, we adopt scales: 0.5,0.75,1.0,1.25,1.5,1.75,2.0 (the same as EncNet, DANet, etc.); a sketch of this testing protocol is given after the table.
+
+ | model | OHEM | Multi-scale | Flip | mIoU | Link |
+ | :--: | :--: | :--: | :--: | :--: | :--: |
+ | HRNetV2-W48 | Yes | No | No | 36.2 | [GoogleDrive](https://drive.google.com/open?id=1tXSWTCNyG4ETLfROJM1L6Lswg8wj5WvL)/[BaiduYun(Access Code:92gw)](https://pan.baidu.com/s/1VAV6KThH1Irzv9HZgLWE2Q) |
+ | HRNetV2-W48 + OCR | Yes | No | No | 39.7 | [GoogleDrive](https://drive.google.com/open?id=1yMJ7-1-7LbbWotrqj1S4vM6M6Nj0feXv)/[BaiduYun(Access Code:sjc4)](https://pan.baidu.com/s/1HFSYyVwKBG3E6y76gcPjDA) |
+ | HRNetV2-W48 | Yes | Yes | Yes | 37.9 | [GoogleDrive](https://drive.google.com/open?id=1tXSWTCNyG4ETLfROJM1L6Lswg8wj5WvL)/[BaiduYun(Access Code:92gw)](https://pan.baidu.com/s/1VAV6KThH1Irzv9HZgLWE2Q) |
+ | HRNetV2-W48 + OCR | Yes | Yes | Yes | 40.6 | [GoogleDrive](https://drive.google.com/open?id=1yMJ7-1-7LbbWotrqj1S4vM6M6Nj0feXv)/[BaiduYun(Access Code:sjc4)](https://pan.baidu.com/s/1HFSYyVwKBG3E6y76gcPjDA) |
+
+ **Note:** We currently reproduce the HRNet+OCR results on COCO-Stuff with PyTorch 0.4.1; PyTorch 1.1.0 should also work.
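+
+ Below is a minimal sketch of this multi-scale + flip testing protocol (an illustration, not the repository's `tools/test.py`; `model` is assumed to return per-pixel class logits):
+ ````python
+ import torch
+ import torch.nn.functional as F
+
+ SCALES = [0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0]
+
+ @torch.no_grad()
+ def multi_scale_flip_inference(model, image, scales=SCALES):
+     """Average logits over scales and horizontal flips (illustrative)."""
+     _, _, h, w = image.shape
+     avg_logits = 0.0
+     for s in scales:
+         # Resize the input to the current scale.
+         scaled = F.interpolate(image, scale_factor=s, mode='bilinear',
+                                align_corners=False)
+         logits = model(scaled)
+         # Predict on the horizontally flipped input and flip the logits back.
+         flipped_logits = model(torch.flip(scaled, dims=[3]))
+         logits = logits + torch.flip(flipped_logits, dims=[3])
+         # Upsample to the original resolution and accumulate across scales.
+         avg_logits = avg_logits + F.interpolate(
+             logits, size=(h, w), mode='bilinear', align_corners=False)
+     return avg_logits.argmax(dim=1)  # per-pixel class labels
+ ````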
+
+ 5. Performance on the ADE20K dataset. The models are trained and tested with the input size of 520x520.
+ If multi-scale testing is used, we adopt scales: 0.5,0.75,1.0,1.25,1.5,1.75,2.0 (the same as EncNet, DANet, etc.).
+
+ | model | OHEM | Multi-scale | Flip | mIoU | Link |
+ | :--: | :--: | :--: | :--: | :--: | :--: |
+ | HRNetV2-W48 | Yes | No | No | 43.1 | [GoogleDrive](https://drive.google.com/open?id=1OlTm8k3fIQpZXmOKXipd5BdxVYtbSWt-)/[BaiduYun(Access Code:f6xf)](https://pan.baidu.com/s/11neVkzxx27qS2-mPFW9dfg) |
+ | HRNetV2-W48 + OCR | Yes | No | No | 44.5 | [GoogleDrive](https://drive.google.com/open?id=1JEzwhkcPUc-HXnq5ErbWy0vWnNpI9sZ8)/[BaiduYun(Access Code:peg4)](https://pan.baidu.com/s/1HLhjiLIdgaOHs0SzEtkgkQ) |
+ | HRNetV2-W48 | Yes | Yes | Yes | 44.2 | [GoogleDrive](https://drive.google.com/open?id=1OlTm8k3fIQpZXmOKXipd5BdxVYtbSWt-)/[BaiduYun(Access Code:f6xf)](https://pan.baidu.com/s/11neVkzxx27qS2-mPFW9dfg) |
+ | HRNetV2-W48 + OCR | Yes | Yes | Yes | 45.5 | [GoogleDrive](https://drive.google.com/open?id=1JEzwhkcPUc-HXnq5ErbWy0vWnNpI9sZ8)/[BaiduYun(Access Code:peg4)](https://pan.baidu.com/s/1HLhjiLIdgaOHs0SzEtkgkQ) |
+
+ **Note:** We currently reproduce the HRNet+OCR results on ADE20K with PyTorch 0.4.1; PyTorch 1.1.0 should also work.
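+
+ The OHEM column above refers to online hard example mining on the pixel-wise cross-entropy loss: only the hardest pixels contribute to the gradient. A minimal sketch of one common formulation (the `thresh`/`min_kept` values here are illustrative, not the repository's exact settings):
+ ````python
+ import torch
+ import torch.nn.functional as F
+
+ def ohem_cross_entropy(logits, target, ignore_index=255,
+                        thresh=0.7, min_kept=100000):
+     """Cross-entropy averaged over the hardest pixels only (sketch)."""
+     # Unreduced per-pixel loss, flattened across the batch.
+     pixel_losses = F.cross_entropy(logits, target, ignore_index=ignore_index,
+                                    reduction='none').view(-1)
+     valid = target.view(-1) != ignore_index
+     pixel_losses = pixel_losses[valid]
+
+     # Probability the model assigns to the ground-truth class of each pixel.
+     prob = F.softmax(logits, dim=1)
+     tmp = target.clone()
+     tmp[tmp == ignore_index] = 0  # placeholder so gather() stays in range
+     gt_prob = prob.gather(1, tmp.unsqueeze(1)).view(-1)[valid]
+
+     # A pixel is "hard" if its ground-truth probability falls below `thresh`;
+     # keep at least `min_kept` pixels so the loss never becomes too sparse.
+     sorted_prob, order = gt_prob.sort()
+     if sorted_prob.numel() > min_kept:
+         thresh = max(thresh, sorted_prob[min_kept - 1].item())
+     kept = pixel_losses[order][sorted_prob < thresh]
+     return kept.mean() if kept.numel() > 0 else pixel_losses.mean()
+ ````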
+
## Quick start
### Install
1. For the LIP dataset, install PyTorch=0.4.1 following the [official instructions](https://pytorch.org/). For Cityscapes and PASCAL-Context, we use PyTorch=1.1.0.
@@ -92,6 +116,20 @@ $SEG_ROOT/data
│   ├── res
│   └── VOCdevkit
│       └── VOC2010
+ ├── cocostuff
+ │   ├── train
+ │   │   ├── image
+ │   │   └── label
+ │   └── val
+ │       ├── image
+ │       └── label
+ ├── ade20k
+ │   ├── train
+ │   │   ├── image
+ │   │   └── label
+ │   └── val
+ │       ├── image
+ │       └── label
├── list
│   ├── cityscapes
│   │   ├── test.lst
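+
+ Given the `cocostuff`/`ade20k` layout above, a split list pairing each image with its label can be generated as below (a hypothetical helper; the exact `.lst` format the repository expects may differ):
+ ````python
+ import os
+
+ def write_list(data_root, dataset, split, out_path):
+     """Write 'image_path label_path' lines for one split (illustrative)."""
+     image_dir = os.path.join(data_root, dataset, split, 'image')
+     label_dir = os.path.join(data_root, dataset, split, 'label')
+     with open(out_path, 'w') as f:
+         for name in sorted(os.listdir(image_dir)):
+             stem = os.path.splitext(name)[0]
+             label = os.path.join(label_dir, stem + '.png')  # labels assumed .png
+             if os.path.exists(label):  # keep only properly paired samples
+                 f.write('{} {}\n'.format(os.path.join(image_dir, name), label))
+
+ # e.g. write_list('data', 'cocostuff', 'train', 'data/list/cocostuff/train.lst')
+ ````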
@@ -172,6 +210,22 @@ python tools/test.py --cfg experiments/lip/seg_hrnet_w48_473x473_sgd_lr7e-3_wd5e
                     TEST.FLIP_TEST True \
                     TEST.NUM_SAMPLES 0
````
+ Evaluating HRNet+OCR on the COCO-Stuff validation set with multi-scale and flip testing:
+ ````bash
+ python tools/test.py --cfg experiments/cocostuff/seg_hrnet_ocr_w48_520x520_ohem_sgd_lr1e-3_wd1e-4_bs_16_epoch110.yaml \
+                      DATASET.TEST_SET list/cocostuff/testval.lst \
+                      TEST.MODEL_FILE hrnet_ocr_cocostuff_3965_torch04.pth \
+                      TEST.SCALE_LIST 0.5,0.75,1.0,1.25,1.5,1.75,2.0 \
+                      TEST.MULTI_SCALE True TEST.FLIP_TEST True
+ ````
+ Evaluating HRNet+OCR on the ADE20K validation set with multi-scale and flip testing:
+ ````bash
+ python tools/test.py --cfg experiments/ade20k/seg_hrnet_ocr_w48_520x520_ohem_sgd_lr2e-2_wd1e-4_bs_16_epoch120.yaml \
+                      DATASET.TEST_SET list/ade20k/testval.lst \
+                      TEST.MODEL_FILE hrnet_ocr_ade20k_4451_torch04.pth \
+                      TEST.SCALE_LIST 0.5,0.75,1.0,1.25,1.5,1.75,2.0 \
+                      TEST.MULTI_SCALE True TEST.FLIP_TEST True
+ ````

## Other applications of HRNet
* [Human pose estimation](https://github.com/leoxiaobin/deep-high-resolution-net.pytorch)