## YOLO2COCO
English | [简体中文](../README.md)

<p align="left">
    <a href=""><img src="https://img.shields.io/badge/Python-3.6+-aff.svg"></a>
    <a href=""><img src="https://img.shields.io/badge/OS-Linux%2C%20Win%2C%20Mac-pink.svg"></a>
    <a href="https://github.com/RapidAI/YOLO2COCO/graphs/contributors"><img src="https://img.shields.io/github/contributors/RapidAI/YOLO2COCO?color=9ea"></a>
    <a href="https://github.com/RapidAI/YOLO2COCO/stargazers"><img src="https://img.shields.io/github/stars/RapidAI/YOLO2COCO?color=ccf"></a>
    <a href="./LICENSE"><img src="https://img.shields.io/badge/License-Apache%202-dfd.svg"></a>
</p>
#### YOLOV5 format data → COCO
- Background images can be added to training simply by placing them in the `background_images` directory.
- The conversion program automatically scans this directory and adds these images to the training set, allowing seamless integration with subsequent [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) training.
- YOLOV5 training data directory structure (see `dataset/YOLOV5` for details):
  ```text
  YOLOV5
  ├── classes.txt
  ├── background_images  # usually images that are easily confused with the object to be detected
  │   └── bg1.jpeg
  ├── images
  │   ├── images(13).jpg
  │   └── images(3).jpg
  ├── labels
  │   ├── images(13).txt
  │   └── images(3).txt
  ├── train.txt
  └── val.txt
  ```
- Convert
  ```shell
  python yolov5_2_coco.py --dir_path dataset/YOLOV5 --mode_list train,val
  ```
  - `--dir_path`: directory where the prepared dataset is located
  - `--mode_list`: which splits to generate JSON annotations for; each listed mode must have a corresponding txt file, and any combination can be given (e.g. `train,val,test`)

- The structure of the converted directory (see `dataset/YOLOV5_COCO_format` for details; the underlying box conversion is sketched after this listing):
  ```text
  YOLOV5_COCO_format
  ├── annotations
  │   ├── instances_train2017.json
  │   └── instances_val2017.json
  ├── train2017
  │   ├── 000000000001.jpg
  │   └── 000000000002.jpg  # this is the background image
  └── val2017
      └── 000000000001.jpg
  ```
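At its core, the conversion rescales YOLO's normalized, center-based boxes into COCO's absolute, top-left-based boxes and wraps them in COCO annotation records. Below is a minimal, self-contained sketch of that math; it is not the implementation in `yolov5_2_coco.py`, and the function name and sample values are purely illustrative.

```python
# Illustrative sketch only, not the code in yolov5_2_coco.py.
# A YOLO label line is "class_id cx cy w h" with all values normalized to [0, 1];
# a COCO bbox is [x_min, y_min, width, height] in absolute pixels.


def yolo_to_coco_bbox(yolo_box, img_w, img_h):
    """Convert one normalized YOLO box to a COCO-style absolute bbox."""
    cx, cy, w, h = yolo_box
    x_min = (cx - w / 2) * img_w
    y_min = (cy - h / 2) * img_h
    return [x_min, y_min, w * img_w, h * img_h]


if __name__ == "__main__":
    # e.g. the line "0 0.5 0.5 0.2 0.4" in labels/xxx.txt, for a 640x480 image
    bbox = yolo_to_coco_bbox([0.5, 0.5, 0.2, 0.4], img_w=640, img_h=480)
    print(bbox)  # [256.0, 144.0, 128.0, 192.0]

    # A COCO annotation record then wraps the converted box (ids are placeholders):
    annotation = {
        "id": 1,
        "image_id": 1,
        "category_id": 1,  # COCO category ids conventionally start at 1
        "bbox": bbox,
        "area": bbox[2] * bbox[3],
        "iscrowd": 0,
    }
    print(annotation)
```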

#### YOLOV5 YAML description file → COCO
- The YOLOV5 YAML-described dataset directory needs to contain the following:
  ```text
  YOLOV5_yaml
  ├── images
  │   ├── train
  │   │   ├── images(13).jpg
  │   │   └── images(3).jpg
  │   └── val
  │       ├── images(13).jpg
  │       └── images(3).jpg
  ├── labels
  │   ├── train
  │   │   ├── images(13).txt
  │   │   └── images(3).txt
  │   └── val
  │       ├── images(13).txt
  │       └── images(3).txt
  └── sample.yaml
  ```

- Convert
  ```shell
  python yolov5_yaml_2_coco.py --yaml_path dataset/YOLOV5_yaml/sample.yaml
  ```
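For orientation, a YOLOv5-style dataset YAML usually declares the image directories, the number of classes, and the class names. The block below is only a sketch with placeholder paths and names; the keys actually expected by `yolov5_yaml_2_coco.py` are whatever `dataset/YOLOV5_yaml/sample.yaml` defines.

```yaml
# Placeholder sketch of a YOLOv5 dataset YAML, not the shipped sample.yaml
train: dataset/YOLOV5_yaml/images/train   # training images
val: dataset/YOLOV5_yaml/images/val       # validation images

nc: 2                                     # number of classes
names: ["cat", "dog"]                     # class names, index-aligned with label ids
```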

#### Darknet format data → COCO
- Darknet training data directory structure (see `dataset/darknet` for details):
  ```text
  darknet
  ├── class.names
  ├── gen_config.data
  ├── gen_train.txt
  ├── gen_valid.txt
  └── images
      ├── train
      └── valid
  ```

- Convert
  ```shell
  python darknet2coco.py --data_path dataset/darknet/gen_config.data
  ```
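For context, a Darknet `.data` file is a small key-value config pointing at the class names file and the train/valid image lists. A typical layout is sketched below with placeholder values; see `dataset/darknet/gen_config.data` for the actual contents.

```text
classes = 2
train   = dataset/darknet/gen_train.txt
valid   = dataset/darknet/gen_valid.txt
names   = dataset/darknet/class.names
```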

#### Visualize images in COCO format
```shell
python coco_visual.py --vis_num 1 \
                      --json_path dataset/YOLOV5_COCO_format/annotations/instances_train2017.json \
                      --img_dir dataset/YOLOV5_COCO_format/train2017
```

- `--vis_num`: index of the image to view
- `--json_path`: path to the JSON annotation file for the images to view
- `--img_dir`: directory containing the images to view
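If you prefer to inspect the generated annotations programmatically, the sketch below loads an `instances_*.json` and draws one image's boxes with OpenCV. It is not the repo's `coco_visual.py` (for example, it treats the index as a plain list position, which may not match what `--vis_num` means there); it only illustrates the COCO structure being visualized.

```python
# Minimal COCO visualization sketch (not the repo's coco_visual.py).
# Requires: pip install opencv-python
import json

import cv2


def draw_coco_boxes(json_path, img_dir, index, out_path="vis.jpg"):
    """Draw the boxes of the index-th image in a COCO instances json."""
    with open(json_path, "r", encoding="utf-8") as f:
        coco = json.load(f)

    # Pick one image entry and collect the annotations that reference it.
    img_info = coco["images"][index]
    anns = [a for a in coco["annotations"] if a["image_id"] == img_info["id"]]
    id_to_name = {c["id"]: c["name"] for c in coco["categories"]}

    img = cv2.imread(f"{img_dir}/{img_info['file_name']}")
    assert img is not None, "image file not found"
    for ann in anns:
        x, y, w, h = map(int, ann["bbox"])  # COCO bbox: [x_min, y_min, width, height]
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(img, id_to_name[ann["category_id"]], (x, max(y - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imwrite(out_path, img)


if __name__ == "__main__":
    draw_coco_boxes(
        "dataset/YOLOV5_COCO_format/annotations/instances_train2017.json",
        "dataset/YOLOV5_COCO_format/train2017",
        index=0,
    )
```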

#### Related information
- [MSCOCO Data Annotation Details](https://blog.csdn.net/wc781708249/article/details/79603522)