
Commit fb9768e

Add English README
1 parent bc8ffbe commit fb9768e

File tree: 2 files changed (+120 / -3 lines)


README.md

Lines changed: 12 additions & 3 deletions
````diff
@@ -1,13 +1,22 @@
 ## YOLO2COCO
+简体中文 | [English](./docs/README_en.md)
+
+<p align="left">
+    <a href=""><img src="https://img.shields.io/badge/Python-3.6+-aff.svg"></a>
+    <a href=""><img src="https://img.shields.io/badge/OS-Linux%2C%20Win%2C%20Mac-pink.svg"></a>
+    <a href="https://github.com/RapidAI/YOLO2COCO/graphs/contributors"><img src="https://img.shields.io/github/contributors/RapidAI/RapidOCR?color=9ea"></a>
+    <a href="https://github.com/RapidAI/YOLO2COCO/stargazers"><img src="https://img.shields.io/github/stars/RapidAI/YOLO2COCO?color=ccf"></a>
+    <a href="./LICENSE"><img src="https://img.shields.io/badge/License-Apache%202-dfd.svg"></a>
+</p>

 #### YOLOV5-format data → COCO
 - Background images can be added to training by placing them directly in the `background_images` directory.
-- The conversion program automatically scans this directory and adds the images to the training set, integrating seamlessly with subsequent YOLOX training
+- The conversion program automatically scans this directory and adds the images to the training set, integrating seamlessly with subsequent [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) training
 - YOLOV5 training-format directory structure (see `dataset/YOLOV5` for details):
 ```text
 YOLOV5
 ├── classes.txt
-├── background_images   # background images, usually ones easily confused with the objects to be detected
+├── background_images   # usually images easily confused with the objects to be detected
 │   └── bg1.jpeg
 ├── images
 │   ├── images(13).jpg
@@ -42,7 +51,7 @@
 #### YOLOV5 YAML description file → COCO
 - The YOLOV5 yaml data file must contain:
 ```text
-YOLOV5_yaml/
+YOLOV5_yaml
 ├── images
 │   ├── train
 │   │   ├── images(13).jpg
````

docs/README_en.md

Lines changed: 108 additions & 0 deletions
(new file)
## YOLO2COCO

English | [简体中文](../README.md)

<p align="left">
    <a href=""><img src="https://img.shields.io/badge/Python-3.6+-aff.svg"></a>
    <a href=""><img src="https://img.shields.io/badge/OS-Linux%2C%20Win%2C%20Mac-pink.svg"></a>
    <a href="https://github.com/RapidAI/YOLO2COCO/graphs/contributors"><img src="https://img.shields.io/github/contributors/RapidAI/YOLO2COCO?color=9ea"></a>
    <a href="https://github.com/RapidAI/YOLO2COCO/stargazers"><img src="https://img.shields.io/github/stars/RapidAI/YOLO2COCO?color=ccf"></a>
    <a href="../LICENSE"><img src="https://img.shields.io/badge/License-Apache%202-dfd.svg"></a>
</p>

#### YOLOV5-format data → COCO
- Background images can be added to training by placing them directly in the `background_images` directory.
- The conversion program automatically scans this directory and adds the images to the training set, allowing seamless integration with subsequent [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) training.
- YOLOV5 training-format directory structure (see `dataset/YOLOV5` for details):
```text
YOLOV5
├── classes.txt
├── background_images   # usually images easily confused with the objects to be detected
│   └── bg1.jpeg
├── images
│   ├── images(13).jpg
│   └── images(3).jpg
├── labels
│   ├── images(13).txt
│   └── images(3).txt
├── train.txt
└── val.txt
```
- Convert:
```shell
python yolov5_2_coco.py --dir_path dataset/YOLOV5 --mode_list train,val
```
- `--dir_path`: directory containing the prepared dataset
- `--mode_list`: which JSON files to generate; each mode requires a corresponding txt file, and modes can be combined (e.g. `train,val,test`)
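Under the hood, converters like `yolov5_2_coco.py` must map each YOLO label (normalized box center and size) to COCO's absolute `[x_min, y_min, width, height]` boxes. A minimal sketch of that transformation, assuming the usual YOLO label convention (the function name is illustrative, not this repo's API):

```python
def yolo_to_coco_bbox(cx, cy, w, h, img_w, img_h):
    """Convert a YOLO-normalized box to a COCO absolute box.

    YOLO labels store the box center and size as fractions of the
    image dimensions; COCO stores the top-left corner and size in pixels.
    """
    box_w = w * img_w
    box_h = h * img_h
    x_min = cx * img_w - box_w / 2
    y_min = cy * img_h - box_h / 2
    return [x_min, y_min, box_w, box_h]

# A centered half-size box in a 640x480 image:
print(yolo_to_coco_bbox(0.5, 0.5, 0.5, 0.5, 640, 480))  # [160.0, 120.0, 320.0, 240.0]
```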
- The structure of the converted directory (see `dataset/YOLOV5_COCO_format` for details):
```text
YOLOV5_COCO_format
├── annotations
│   ├── instances_train2017.json
│   └── instances_val2017.json
├── train2017
│   ├── 000000000001.jpg
│   └── 000000000002.jpg   # this is the background image
└── val2017
    └── 000000000001.jpg
```
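The generated `instances_*.json` files follow the standard COCO detection layout: three top-level arrays named `images`, `annotations`, and `categories`. A minimal hand-built instance (the keys are the standard COCO fields; the concrete values are illustrative):

```python
import json

# Minimal COCO-style detection annotation file (illustrative values).
coco = {
    "images": [
        {"id": 1, "file_name": "000000000001.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "bbox": [160.0, 120.0, 320.0, 240.0],  # [x_min, y_min, width, height]
            "area": 320.0 * 240.0,
            "iscrowd": 0,
        },
    ],
    "categories": [
        {"id": 1, "name": "person", "supercategory": "person"},
    ],
}

# Serialize the same way a converter would when writing instances_train2017.json.
text = json.dumps(coco, indent=2)
```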
#### YOLOV5 YAML description file → COCO
- The YOLOV5 yaml data file must contain:
```text
YOLOV5_yaml
├── images
│   ├── train
│   │   ├── images(13).jpg
│   │   └── images(3).jpg
│   └── val
│       ├── images(13).jpg
│       └── images(3).jpg
├── labels
│   ├── train
│   │   ├── images(13).txt
│   │   └── images(3).txt
│   └── val
│       ├── images(13).txt
│       └── images(3).txt
└── sample.yaml
```

- Convert:
```shell
python yolov5_yaml_2_coco.py --yaml_path dataset/YOLOV5_yaml/sample.yaml
```
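The `sample.yaml` referenced above follows the usual YOLOv5 dataset-description layout; a sketch of what such a file typically contains (paths and class names are illustrative, not taken from this repo):

```yaml
# Image directories, typically relative to the yaml file
train: images/train
val: images/val

# Number of classes and their names, matching the label indices
nc: 2
names: ["person", "car"]
```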

#### Darknet-format data → COCO
- Darknet training-data directory structure (see `dataset/darknet` for details):
```text
darknet
├── class.names
├── gen_config.data
├── gen_train.txt
├── gen_valid.txt
└── images
    ├── train
    └── valid
```

- Convert:
```shell
python darknet2coco.py --data_path dataset/darknet/gen_config.data
```
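A darknet `.data` file is a plain `key = value` listing; `gen_config.data` would typically look like the sketch below (values are illustrative; the exact keys consumed are defined in `darknet2coco.py`):

```text
classes = 2
train   = gen_train.txt
valid   = gen_valid.txt
names   = class.names
```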

#### Visualize images in COCO format
```shell
python coco_visual.py --vis_num 1 \
    --json_path dataset/YOLOV5_COCO_format/annotations/instances_train2017.json \
    --img_dir dataset/YOLOV5_COCO_format/train2017
```

- `--vis_num`: index of the image to view
- `--json_path`: path to the annotation JSON file
- `--img_dir`: directory containing the images
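For a quick sanity check of a generated annotation file without the visualizer, the standard library is enough. A small sketch (the `summarize` helper is illustrative, not part of this repo):

```python
import json

def summarize(coco):
    """Return (image count, annotation count, category names) for a COCO dict."""
    names = [c["name"] for c in coco["categories"]]
    return len(coco["images"]), len(coco["annotations"]), names

# Normally you would load the file the converter wrote, e.g.:
#   with open("dataset/YOLOV5_COCO_format/annotations/instances_train2017.json") as f:
#       coco = json.load(f)
# Here a tiny inline dict stands in for it:
coco = {
    "images": [{"id": 1, "file_name": "000000000001.jpg"}],
    "annotations": [{"id": 1, "image_id": 1, "category_id": 1, "bbox": [0, 0, 10, 10]}],
    "categories": [{"id": 1, "name": "person"}],
}
print(summarize(coco))  # (1, 1, ['person'])
```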

#### Related information
- [MSCOCO Data Annotation Details](https://blog.csdn.net/wc781708249/article/details/79603522)
