Commit ff9bc39

Bump version to v2.20.0
2 parents fb5463e + 815e7a5 commit ff9bc39

58 files changed: +1813 additions, -252 deletions

.pre-commit-config.yaml

Lines changed: 6 additions & 0 deletions

@@ -43,3 +43,9 @@ repos:
     hooks:
       - id: docformatter
         args: ["--in-place", "--wrap-descriptions", "79"]
+  - repo: https://github.com/open-mmlab/pre-commit-hooks
+    rev: master # Use the ref you want to point at
+    hooks:
+      - id: check-algo-readme
+      - id: check-copyright
+        args: ["mmdet"] # replace the dir_to_check with your expected directory to check
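With these hooks in place, the new checks can be exercised locally via `pre-commit run check-copyright --all-files` (assuming `pre-commit` is installed and `pre-commit install` has been run once in the repo). The in-file comment leaves the `rev` choice open; pinning it to a released tag rather than `master` is the reproducible option.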

README.md

Lines changed: 10 additions & 8 deletions

@@ -13,10 +13,10 @@
 <img src="https://user-images.githubusercontent.com/12907710/137271636-56ba1cd2-b110-4812-8221-b4c120320aa9.png"/>

-[📘Documentation](https://mmdetection.readthedocs.io/en/v2.19.1/) |
-[🛠️Installation](https://mmdetection.readthedocs.io/en/v2.19.1/get_started.html) |
-[👀Model Zoo](https://mmdetection.readthedocs.io/en/v2.19.1/model_zoo.html) |
-[🆕Update News](https://mmdetection.readthedocs.io/en/v2.19.1/changelog.html) |
+[📘Documentation](https://mmdetection.readthedocs.io/en/v2.20.0/) |
+[🛠️Installation](https://mmdetection.readthedocs.io/en/v2.20.0/get_started.html) |
+[👀Model Zoo](https://mmdetection.readthedocs.io/en/v2.20.0/model_zoo.html) |
+[🆕Update News](https://mmdetection.readthedocs.io/en/v2.20.0/changelog.html) |
 [🚀Ongoing Projects](https://github.com/open-mmlab/mmdetection/projects) |
 [🤔Reporting Issues](https://github.com/open-mmlab/mmdetection/issues/new/choose)

@@ -60,11 +60,10 @@ This project is released under the [Apache 2.0 license](LICENSE).

 ## Changelog

-**2.19.1** was released on 14/12/2021:
+**2.20.0** was released on 27/12/2021:

-- Release [YOLOX](configs/yolox/README.md) COCO pretrained models
-- Add abstracts and sketches of the papers in the readmes
-- Fix some weight initialization bugs
+- Support [TOOD](configs/tood/README.md): Task-aligned One-stage Object Detection (ICCV 2021 Oral)
+- Support resuming from the latest checkpoint automatically

 Please refer to [changelog.md](docs/en/changelog.md) for details and release history.

@@ -149,6 +148,7 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
 - [x] [YOLOX (ArXiv'2021)](configs/yolox/README.md)
 - [x] [SOLO (ECCV'2020)](configs/solo/README.md)
 - [x] [QueryInst (ICCV'2021)](configs/queryinst/README.md)
+- [x] [TOOD (ICCV'2021)](configs/tood/README.md)
 </details>

 Some other methods are also supported in [projects using MMDetection](./docs/en/projects.md).

@@ -209,3 +209,5 @@ If you use this toolbox or benchmark in your research, please cite this project.
 - [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
 - [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab few-shot learning toolbox and benchmark.
 - [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
+- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
+- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
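The "resuming from the latest checkpoint automatically" item above boils down to locating the newest checkpoint in the work directory and resuming from it. A minimal, self-contained sketch of that idea follows; the helper name, glob pattern, and work-dir path are illustrative assumptions, not mmdet's actual implementation:

```python
import glob
import os


def find_latest_checkpoint(work_dir, suffix='pth'):
    """Return the newest checkpoint file in ``work_dir``, or None.

    Illustrative sketch only; mmdet's real helper may differ in name
    and in how it picks the latest checkpoint.
    """
    candidates = glob.glob(os.path.join(work_dir, f'*.{suffix}'))
    if not candidates:
        return None
    # Treat the most recently written file as the latest checkpoint.
    return max(candidates, key=os.path.getmtime)


latest = find_latest_checkpoint('./work_dirs/tood_r50_fpn_1x_coco')
if latest is not None:
    print(f'resuming from {latest}')
```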

README_zh-CN.md (Chinese content translated below)

Lines changed: 10 additions & 8 deletions

@@ -13,10 +13,10 @@
 <img src="https://user-images.githubusercontent.com/12907710/137271636-56ba1cd2-b110-4812-8221-b4c120320aa9.png"/>

-[📘Documentation](https://mmdetection.readthedocs.io/zh_CN/v2.19.1/) |
-[🛠️Installation](https://mmdetection.readthedocs.io/zh_CN/v2.19.1/get_started.html) |
-[👀Model Zoo](https://mmdetection.readthedocs.io/zh_CN/v2.19.1/model_zoo.html) |
-[🆕Changelog](https://mmdetection.readthedocs.io/en/v2.19.1/changelog.html) |
+[📘Documentation](https://mmdetection.readthedocs.io/zh_CN/v2.20.0/) |
+[🛠️Installation](https://mmdetection.readthedocs.io/zh_CN/v2.20.0/get_started.html) |
+[👀Model Zoo](https://mmdetection.readthedocs.io/zh_CN/v2.20.0/model_zoo.html) |
+[🆕Changelog](https://mmdetection.readthedocs.io/en/v2.20.0/changelog.html) |
 [🚀Ongoing Projects](https://github.com/open-mmlab/mmdetection/projects) |
 [🤔Reporting Issues](https://github.com/open-mmlab/mmdetection/issues/new/choose)

@@ -59,10 +59,9 @@ MMDetection is an open-source object detection toolbox based on PyTorch. It is a part of the [Ope

 ## Changelog

-The latest version, **2.19.1**, was released on 2021.12.14:
-- Release [YOLOX](configs/yolox/README.md) COCO pretrained models
-- Add abstracts and sketches of the papers to the READMEs
-- Fix some weight initialization bugs
+The latest version, **2.20.0**, was released on 2021.12.27:
+- Support [TOOD](configs/tood/README.md): Task-aligned One-stage Object Detection (an ICCV 2021 Oral method)
+- Support automatically resuming training from the latest checkpoint

 Please read the [changelog](docs/changelog.md) for more details on version updates and release history.

@@ -146,6 +145,7 @@ MMDetection is an open-source object detection toolbox based on PyTorch. It is a part of the [Ope
 - [x] [YOLOX (ArXiv'2021)](configs/yolox/README.md)
 - [x] [SOLO (ECCV'2020)](configs/solo/README.md)
 - [x] [QueryInst (ICCV'2021)](configs/queryinst/README.md)
+- [x] [TOOD (ICCV'2021)](configs/tood/README.md)
 </details>

 Some other supported algorithms are listed in [projects using MMDetection](./docs/zh_cn/projects.md).

@@ -206,6 +206,8 @@ MMDetection is an open-source project jointly contributed by researchers and engineers from various colleges and companies
 - [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark
 - [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab few-shot learning toolbox and benchmark
 - [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark
+- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark
+- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark

 ## Welcome to the OpenMMLab Community

configs/resnest/metafile.yml

Lines changed: 1 addition & 1 deletion

@@ -11,7 +11,7 @@ Collections:
     Paper:
       URL: https://arxiv.org/abs/2004.08955
       Title: 'ResNeSt: Split-Attention Networks'
-    README: configs/renest/README.md
+    README: configs/resnest/README.md
     Code:
       URL: https://github.com/open-mmlab/mmdetection/blob/v2.7.0/mmdet/models/backbones/resnest.py#L273
       Version: v2.7.0

configs/strong_baselines/README.md

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 # Strong Baselines

-We train Mask R-CNN with large-scale jittor and longer schedule as strong baselines.
+We train Mask R-CNN with large-scale jitter and longer schedule as strong baselines.
 The modifications follow those in [Detectron2](https://github.com/facebookresearch/detectron2/tree/master/configs/new_baselines).

 ## Results and models

configs/tood/README.md

Lines changed: 44 additions & 0 deletions

@@ -0,0 +1,44 @@
# TOOD: Task-aligned One-stage Object Detection

## Abstract

<!-- [ABSTRACT] -->

One-stage object detection is commonly implemented by optimizing two sub-tasks: object classification and localization, using heads with two parallel branches, which might lead to a certain level of spatial misalignment in predictions between the two tasks. In this work, we propose a Task-aligned One-stage Object Detection (TOOD) that explicitly aligns the two tasks in a learning-based manner. First, we design a novel Task-aligned Head (T-Head) which offers a better balance between learning task-interactive and task-specific features, as well as a greater flexibility to learn the alignment via a task-aligned predictor. Second, we propose Task Alignment Learning (TAL) to explicitly pull closer (or even unify) the optimal anchors for the two tasks during training via a designed sample assignment scheme and a task-aligned loss. Extensive experiments are conducted on MS-COCO, where TOOD achieves a 51.1 AP at single-model single-scale testing. This surpasses the recent one-stage detectors by a large margin, such as ATSS (47.7 AP), GFL (48.2 AP), and PAA (49.0 AP), with fewer parameters and FLOPs. Qualitative results also demonstrate the effectiveness of TOOD for better aligning the tasks of object classification and localization.

<!-- [IMAGE] -->
<div align=center>
<img src="https://user-images.githubusercontent.com/12907710/145400075-e08191f5-8afa-4335-9b3b-27926fc9a26e.png"/>
</div>

<!-- [PAPER_TITLE: TOOD: Task-aligned One-stage Object Detection] -->
<!-- [PAPER_URL: https://arxiv.org/abs/2108.07755] -->

## Citation

<!-- [ALGORITHM] -->

```latex
@inproceedings{feng2021tood,
  title={TOOD: Task-aligned One-stage Object Detection},
  author={Feng, Chengjian and Zhong, Yujie and Gao, Yu and Scott, Matthew R and Huang, Weilin},
  booktitle={ICCV},
  year={2021}
}
```

## Results and Models

| Backbone          | Style   | Anchor Type  | Lr schd | Multi-scale Training | Mem (GB) | Inf time (fps) | box AP | Config | Download |
|:-----------------:|:-------:|:------------:|:-------:|:--------------------:|:--------:|:--------------:|:------:|:------:|:--------:|
| R-50              | pytorch | Anchor-free  | 1x      | N                    | 4.1      |                | 42.4   | [config](./tood_r50_fpn_1x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/tood/tood_r50_fpn_1x_coco/tood_r50_fpn_1x_coco_20211210_103425-20e20746.pth) &#124; [log](https://download.openmmlab.com/mmdetection/v2.0/tood/tood_r50_fpn_1x_coco/tood_r50_fpn_1x_coco_20211210_103425.log) |
| R-50              | pytorch | Anchor-based | 1x      | N                    | 4.1      |                | 42.4   | [config](./tood_r50_fpn_anchor_based_1x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/tood/tood_r50_fpn_anchor_based_1x_coco/tood_r50_fpn_anchor_based_1x_coco_20211214_100105-b776c134.pth) &#124; [log](https://download.openmmlab.com/mmdetection/v2.0/tood/tood_r50_fpn_anchor_based_1x_coco/tood_r50_fpn_anchor_based_1x_coco_20211214_100105.log) |
| R-50              | pytorch | Anchor-free  | 2x      | Y                    | 4.1      |                | 44.5   | [config](./tood_r50_fpn_mstrain_2x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/tood/tood_r50_fpn_mstrain_2x_coco/tood_r50_fpn_mstrain_2x_coco_20211210_144231-3b23174c.pth) &#124; [log](https://download.openmmlab.com/mmdetection/v2.0/tood/tood_r50_fpn_mstrain_2x_coco/tood_r50_fpn_mstrain_2x_coco_20211210_144231.log) |
| R-101             | pytorch | Anchor-free  | 2x      | Y                    | 6.0      |                | 46.1   | [config](./tood_r101_fpn_mstrain_2x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/tood/tood_r101_fpn_mstrain_2x_coco/tood_r101_fpn_mstrain_2x_coco_20211210_144232-a18f53c8.pth) &#124; [log](https://download.openmmlab.com/mmdetection/v2.0/tood/tood_r101_fpn_mstrain_2x_coco/tood_r101_fpn_mstrain_2x_coco_20211210_144232.log) |
| R-101-dcnv2       | pytorch | Anchor-free  | 2x      | Y                    | 6.2      |                | 49.3   | [config](./tood_r101_fpn_dconv_c3-c5_mstrain_2x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/tood/tood_r101_fpn_dconv_c3-c5_mstrain_2x_coco/tood_r101_fpn_dconv_c3-c5_mstrain_2x_coco_20211210_213728-4a824142.pth) &#124; [log](https://download.openmmlab.com/mmdetection/v2.0/tood/tood_r101_fpn_dconv_c3-c5_mstrain_2x_coco/tood_r101_fpn_dconv_c3-c5_mstrain_2x_coco_20211210_213728.log) |
| X-101-64x4d       | pytorch | Anchor-free  | 2x      | Y                    | 10.2     |                | 47.6   | [config](./tood_x101_64x4d_fpn_mstrain_2x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/tood/tood_x101_64x4d_fpn_mstrain_2x_coco/tood_x101_64x4d_fpn_mstrain_2x_coco_20211211_003519-a4f36113.pth) &#124; [log](https://download.openmmlab.com/mmdetection/v2.0/tood/tood_x101_64x4d_fpn_mstrain_2x_coco/tood_x101_64x4d_fpn_mstrain_2x_coco_20211211_003519.log) |
| X-101-64x4d-dcnv2 | pytorch | Anchor-free  | 2x      | Y                    |          |                |        | [config](./tood_x101_64x4d_fpn_dconv_c4-c5_mstrain_2x_coco.py) | [model]() &#124; [log]() |

[1] *1x and 2x mean the model is trained for 90K and 180K iterations, respectively.* \
[2] *All results are obtained with a single model and without any test-time data augmentation such as multi-scale testing and flipping.* \
[3] *`dcnv2` denotes deformable convolutional networks v2.*
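For reference, the released checkpoints above plug into MMDetection's standard high-level inference API. A short sketch follows; the file paths and demo image are illustrative, and the checkpoint from the table must be downloaded locally first:

```python
from mmdet.apis import inference_detector, init_detector

# Illustrative paths: the config ships with this commit, the checkpoint
# comes from the download links in the table above.
config_file = 'configs/tood/tood_r50_fpn_1x_coco.py'
checkpoint_file = 'tood_r50_fpn_1x_coco_20211210_103425-20e20746.pth'

# Build the detector and load the pretrained weights.
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# Run inference on a single image and save the visualization.
result = inference_detector(model, 'demo/demo.jpg')
model.show_result('demo/demo.jpg', result, out_file='tood_demo_result.jpg')
```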

configs/tood/metafile.yml

Lines changed: 95 additions & 0 deletions

@@ -0,0 +1,95 @@
Collections:
  - Name: TOOD
    Metadata:
      Training Data: COCO
      Training Techniques:
        - SGD
      Training Resources: 8x V100 GPUs
      Architecture:
        - TOOD
    Paper:
      URL: https://arxiv.org/abs/2108.07755
      Title: 'TOOD: Task-aligned One-stage Object Detection'
    README: configs/tood/README.md
    Code:
      URL: https://github.com/open-mmlab/mmdetection/blob/v2.20.0/mmdet/models/detectors/tood.py#L7
      Version: v2.20.0

Models:
  - Name: tood_r101_fpn_mstrain_2x_coco
    In Collection: TOOD
    Config: configs/tood/tood_r101_fpn_mstrain_2x_coco.py
    Metadata:
      Training Memory (GB): 6.0
      Epochs: 24
    Results:
      - Task: Object Detection
        Dataset: COCO
        Metrics:
          box AP: 46.1
    Weights: https://download.openmmlab.com/mmdetection/v2.0/tood/tood_r101_fpn_mstrain_2x_coco/tood_r101_fpn_mstrain_2x_coco_20211210_144232-a18f53c8.pth

  - Name: tood_x101_64x4d_fpn_mstrain_2x_coco
    In Collection: TOOD
    Config: configs/tood/tood_x101_64x4d_fpn_mstrain_2x_coco.py
    Metadata:
      Training Memory (GB): 10.2
      Epochs: 24
    Results:
      - Task: Object Detection
        Dataset: COCO
        Metrics:
          box AP: 47.6
    Weights: https://download.openmmlab.com/mmdetection/v2.0/tood/tood_x101_64x4d_fpn_mstrain_2x_coco/tood_x101_64x4d_fpn_mstrain_2x_coco_20211211_003519-a4f36113.pth

  - Name: tood_r101_fpn_dconv_c3-c5_mstrain_2x_coco
    In Collection: TOOD
    Config: configs/tood/tood_r101_fpn_dconv_c3-c5_mstrain_2x_coco.py
    Metadata:
      Training Memory (GB): 6.2
      Epochs: 24
    Results:
      - Task: Object Detection
        Dataset: COCO
        Metrics:
          box AP: 49.3
    Weights: https://download.openmmlab.com/mmdetection/v2.0/tood/tood_r101_fpn_dconv_c3-c5_mstrain_2x_coco/tood_r101_fpn_dconv_c3-c5_mstrain_2x_coco_20211210_213728-4a824142.pth

  - Name: tood_r50_fpn_anchor_based_1x_coco
    In Collection: TOOD
    Config: configs/tood/tood_r50_fpn_anchor_based_1x_coco.py
    Metadata:
      Training Memory (GB): 4.1
      Epochs: 12
    Results:
      - Task: Object Detection
        Dataset: COCO
        Metrics:
          box AP: 42.4
    Weights: https://download.openmmlab.com/mmdetection/v2.0/tood/tood_r50_fpn_anchor_based_1x_coco/tood_r50_fpn_anchor_based_1x_coco_20211214_100105-b776c134.pth

  - Name: tood_r50_fpn_1x_coco
    In Collection: TOOD
    Config: configs/tood/tood_r50_fpn_1x_coco.py
    Metadata:
      Training Memory (GB): 4.1
      Epochs: 12
    Results:
      - Task: Object Detection
        Dataset: COCO
        Metrics:
          box AP: 42.4
    Weights: https://download.openmmlab.com/mmdetection/v2.0/tood/tood_r50_fpn_1x_coco/tood_r50_fpn_1x_coco_20211210_103425-20e20746.pth

  - Name: tood_r50_fpn_mstrain_2x_coco
    In Collection: TOOD
    Config: configs/tood/tood_r50_fpn_mstrain_2x_coco.py
    Metadata:
      Training Memory (GB): 4.1
      Epochs: 24
    Results:
      - Task: Object Detection
        Dataset: COCO
        Metrics:
          box AP: 44.5
    Weights: https://download.openmmlab.com/mmdetection/v2.0/tood/tood_r50_fpn_mstrain_2x_coco/tood_r50_fpn_mstrain_2x_coco_20211210_144231-3b23174c.pth

configs/tood/tood_r101_fpn_dconv_c3-c5_mstrain_2x_coco.py (file name inferred from the config links in the README above)

Lines changed: 7 additions & 0 deletions

@@ -0,0 +1,7 @@
_base_ = './tood_r101_fpn_mstrain_2x_coco.py'

model = dict(
    backbone=dict(
        dcn=dict(type='DCNv2', deformable_groups=1, fallback_on_stride=False),
        stage_with_dcn=(False, True, True, True)),
    bbox_head=dict(num_dcn=2))
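The `dconv_c3-c5` suffix corresponds to `stage_with_dcn=(False, True, True, True)`, which enables DCNv2 in ResNet stages c3-c5 while the first stage keeps plain convolutions; `num_dcn=2` appears to make the first two stacked convs of the TOOD head deformable as well, though that reading is inferred from the parameter name. This variant is the `R-101-dcnv2` row (49.3 box AP) in the table above.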

configs/tood/tood_r101_fpn_mstrain_2x_coco.py (file name inferred from the config links in the README above)

Lines changed: 7 additions & 0 deletions

@@ -0,0 +1,7 @@
_base_ = './tood_r50_fpn_mstrain_2x_coco.py'

model = dict(
    backbone=dict(
        depth=101,
        init_cfg=dict(type='Pretrained',
                      checkpoint='torchvision://resnet101')))

configs/tood/tood_r50_fpn_1x_coco.py (file name inferred from the config links in the README above)

Lines changed: 74 additions & 0 deletions

@@ -0,0 +1,74 @@
_base_ = [
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(
    type='TOOD',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch',
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        start_level=1,
        add_extra_convs='on_output',
        num_outs=5),
    bbox_head=dict(
        type='TOODHead',
        num_classes=80,
        in_channels=256,
        stacked_convs=6,
        feat_channels=256,
        anchor_type='anchor_free',
        anchor_generator=dict(
            type='AnchorGenerator',
            ratios=[1.0],
            octave_base_scale=8,
            scales_per_octave=1,
            strides=[8, 16, 32, 64, 128]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[.0, .0, .0, .0],
            target_stds=[0.1, 0.1, 0.2, 0.2]),
        initial_loss_cls=dict(
            type='FocalLoss',
            use_sigmoid=True,
            activated=True,  # use probability instead of logit as input
            gamma=2.0,
            alpha=0.25,
            loss_weight=1.0),
        loss_cls=dict(
            type='QualityFocalLoss',
            use_sigmoid=True,
            activated=True,  # use probability instead of logit as input
            beta=2.0,
            loss_weight=1.0),
        loss_bbox=dict(type='GIoULoss', loss_weight=2.0)),
    train_cfg=dict(
        initial_epoch=4,
        initial_assigner=dict(type='ATSSAssigner', topk=9),
        assigner=dict(type='TaskAlignedAssigner', topk=13),
        alpha=1,
        beta=6,
        allowed_border=-1,
        pos_weight=-1,
        debug=False),
    test_cfg=dict(
        nms_pre=1000,
        min_bbox_size=0,
        score_thr=0.05,
        nms=dict(type='nms', iou_threshold=0.6),
        max_per_img=100))
# optimizer
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)

# custom hooks
custom_hooks = [dict(type='SetEpochInfoHook')]
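The `initial_epoch`, `initial_assigner`, and `assigner` keys in `train_cfg` encode TOOD's two-phase label assignment: ATSS handles the first four epochs, after which Task Alignment Learning takes over, with `SetEpochInfoHook` feeding the current epoch to the head. A toy sketch of that schedule follows; it assumes nothing about mmdet internals, and the real switching logic lives inside `TOODHead`:

```python
# Toy sketch of the two-phase assignment schedule in ``train_cfg`` above.
train_cfg = dict(
    initial_epoch=4,
    initial_assigner=dict(type='ATSSAssigner', topk=9),
    assigner=dict(type='TaskAlignedAssigner', topk=13))


def select_assigner(epoch, cfg):
    # ATSS warms the detector up; task-aligned assignment (TAL) takes over
    # once predictions are reliable enough to score task alignment.
    if epoch < cfg['initial_epoch']:
        return cfg['initial_assigner']
    return cfg['assigner']


assert select_assigner(0, train_cfg)['type'] == 'ATSSAssigner'
assert select_assigner(4, train_cfg)['type'] == 'TaskAlignedAssigner'
```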
