# Tutorial 11: Test Results Submission

## Panoptic segmentation test results submission

The following sections introduce how to produce the prediction results of panoptic segmentation models on the COCO test-dev set and submit the predictions to the [COCO evaluation server](https://competitions.codalab.org/competitions/19507).

### Prerequisites

- Download the [COCO test dataset images](http://images.cocodataset.org/zips/test2017.zip), the [testing image info](http://images.cocodataset.org/annotations/image_info_test2017.zip), and the [panoptic train/val annotations](http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip), then unzip them: put `test2017` under `data/coco/`, and put the json and annotation files under `data/coco/annotations/`.

```shell
# suppose data/coco/ does not exist
mkdir -pv data/coco/

# download the images and annotations
wget -P data/coco/ http://images.cocodataset.org/zips/test2017.zip
wget -P data/coco/ http://images.cocodataset.org/annotations/image_info_test2017.zip
wget -P data/coco/ http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip

# unzip them
unzip data/coco/test2017.zip -d data/coco/
unzip data/coco/image_info_test2017.zip -d data/coco/
unzip data/coco/panoptic_annotations_trainval2017.zip -d data/coco/

# remove zip files (optional)
rm -rf data/coco/test2017.zip data/coco/image_info_test2017.zip data/coco/panoptic_annotations_trainval2017.zip
```
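
Optionally, you can sanity-check the downloads before moving on. This is only a rough check: the exact file count is not critical, but `test2017` should contain roughly 40k images, and the json files should now sit under `data/coco/annotations/`.

```shell
# optional sanity check of the layout described above
ls data/coco/annotations/       # the image_info_* and panoptic_* files should be here
ls data/coco/test2017 | wc -l   # roughly 40k test images are expected
```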

- Run the following command to update the category information in the testing image info. Since the attribute `isthing` is missing from the category information in 'image_info_test-dev2017.json', we need to update it with the category information from 'panoptic_val2017.json'.

```shell
python tools/misc/gen_coco_panoptic_test_info.py data/coco/annotations
```
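
As an optional sanity check, you can confirm that the generated 'panoptic_image_info_test-dev2017.json' now carries the `isthing` attribute. The one-liner below assumes the output file follows the standard COCO info format with a top-level `categories` list:

```shell
# every category entry should now contain the 'isthing' key; this should print True
python -c "import json; cats = json.load(open('data/coco/annotations/panoptic_image_info_test-dev2017.json'))['categories']; print(all('isthing' in c for c in cats))"
```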

After completing the above preparations, the directory structure of `data` should look like this:

```text
data
`-- coco
    |-- annotations
    |   |-- image_info_test-dev2017.json
    |   |-- image_info_test2017.json
    |   |-- panoptic_image_info_test-dev2017.json
    |   |-- panoptic_train2017.json
    |   |-- panoptic_train2017.zip
    |   |-- panoptic_val2017.json
    |   `-- panoptic_val2017.zip
    `-- test2017
```

### Inference on COCO test-dev

The commands to perform inference on test2017 are shown below:

```shell
# test with single gpu
CUDA_VISIBLE_DEVICES=0 python tools/test.py \
    ${CONFIG_FILE} \
    ${CHECKPOINT_FILE} \
    --format-only \
    --cfg-options data.test.ann_file=data/coco/annotations/panoptic_image_info_test-dev2017.json data.test.img_prefix=data/coco/test2017 \
    --eval-options jsonfile_prefix=${WORK_DIR}/results

# test with four gpus
CUDA_VISIBLE_DEVICES=0,1,3,4 bash tools/dist_test.sh \
    ${CONFIG_FILE} \
    ${CHECKPOINT_FILE} \
    4 \
    --format-only \
    --cfg-options data.test.ann_file=data/coco/annotations/panoptic_image_info_test-dev2017.json data.test.img_prefix=data/coco/test2017 \
    --eval-options jsonfile_prefix=${WORK_DIR}/results

# test with slurm
GPUS=8 tools/slurm_test.sh \
    ${PARTITION} \
    ${JOB_NAME} \
    ${CONFIG_FILE} \
    ${CHECKPOINT_FILE} \
    --format-only \
    --cfg-options data.test.ann_file=data/coco/annotations/panoptic_image_info_test-dev2017.json data.test.img_prefix=data/coco/test2017 \
    --eval-options jsonfile_prefix=${WORK_DIR}/results
```
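
Whichever launcher you use, the results are written under the `jsonfile_prefix` passed in `--eval-options`. With `jsonfile_prefix=${WORK_DIR}/results` as above, `WORK_DIR` should end up containing the json file and the mask directory used in the renaming step below:

```shell
ls ${WORK_DIR}
# expected contents (see the last section of this tutorial):
#   results.panoptic.json   the panoptic predictions
#   panoptic/               the directory where the masks are stored
```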

Example:

Suppose we perform inference on `test2017` using a pretrained MaskFormer model with a ResNet-50 backbone.
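
If the checkpoint is not present locally, download it into `checkpoints/` first. The URL below is assumed to follow the MMDetection model zoo layout for this config; double-check it against the MaskFormer page of the model zoo before use.

```shell
mkdir -p checkpoints/
# assumed model-zoo URL; verify it on the MaskFormer README before use
wget -P checkpoints/ https://download.openmmlab.com/mmdetection/v2.0/maskformer/maskformer_r50_mstrain_16x1_75e_coco/maskformer_r50_mstrain_16x1_75e_coco_20220221_141956-bc2699cb.pth
```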

```shell
# test with single gpu
CUDA_VISIBLE_DEVICES=0 python tools/test.py \
    configs/maskformer/maskformer_r50_mstrain_16x1_75e_coco.py \
    checkpoints/maskformer_r50_mstrain_16x1_75e_coco_20220221_141956-bc2699cb.pth \
    --format-only \
    --cfg-options data.test.ann_file=data/coco/annotations/panoptic_image_info_test-dev2017.json data.test.img_prefix=data/coco/test2017 \
    --eval-options jsonfile_prefix=work_dirs/maskformer/results
```

### Rename files and zip results

After inference, the panoptic segmentation results (a json file and a directory where the masks are stored) will be in `WORK_DIR`. We should rename them according to the naming convention described on [COCO's website](https://cocodataset.org/#upload). Finally, we need to compress the json file and the mask directory into a zip file, and rename the zip file according to the same naming convention. Note that the zip file should **directly** contain the above two files.

The commands to rename files and zip the results are:

```shell
# In WORK_DIR, we have the panoptic segmentation results: 'panoptic' and 'results.panoptic.json'.
cd ${WORK_DIR}

# replace '[algorithm_name]' with the name of the algorithm you used.
mv ./panoptic ./panoptic_test-dev2017_[algorithm_name]_results
mv ./results.panoptic.json ./panoptic_test-dev2017_[algorithm_name]_results.json
zip panoptic_test-dev2017_[algorithm_name]_results.zip -ur panoptic_test-dev2017_[algorithm_name]_results panoptic_test-dev2017_[algorithm_name]_results.json
```
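
Because the evaluation server expects the json file and the mask directory at the top level of the archive, it is worth listing the zip before uploading. A quick check (with '[algorithm_name]' replaced as above):

```shell
# the listing should show panoptic_test-dev2017_[algorithm_name]_results.json and the
# entries of panoptic_test-dev2017_[algorithm_name]_results/ directly at the top level,
# not nested inside an extra parent directory
unzip -l panoptic_test-dev2017_[algorithm_name]_results.zip | head
```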