## Changelog

### v2.24.0 (26/4/2022)

#### Highlights

- Support [Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation](https://arxiv.org/abs/2012.07177)
- Support automatically scaling LR according to the number of GPUs and samples per GPU
- Support `ClassAwareSampler`, which improves performance on the OpenImages dataset

#### New Features

- Support [Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation](https://arxiv.org/abs/2012.07177), see [example configs](configs/simple_copy_paste/mask_rcnn_r50_fpn_syncbn-all_rpn-2conv_ssj_scp_32x2_270k_coco.py) (#7501)
- Support `ClassAwareSampler`. Users can set

  ```python
  data = dict(train_dataloader=dict(class_aware_sampler=dict(num_sample_class=1)))
  ```

  in the config to enable it. Examples can be found in [the configs of the OpenImages dataset](https://github.com/open-mmlab/mmdetection/tree/master/configs/openimages/faster_rcnn_r50_fpn_32x2_cas_1x_openimages.py). (#7436)

- Support automatically scaling LR according to the number of GPUs and samples per GPU. (#7482)
  Each config contains a corresponding auto-scaling LR setting as below:

  ```python
  auto_scale_lr = dict(enable=True, base_batch_size=N)
  ```

  where `N` is the batch size for which the learning rate in the config was tuned (it equals `samples_per_gpu` multiplied by the number of GPUs used to train this config).
  By default, we set `enable=False` so that existing usage is not affected. Users can set `enable=True` in a config or append `--auto-scale-lr` to the training command to enable this feature, and should check that `base_batch_size` is correct in customized configs.
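
  As a rough sketch of the arithmetic this feature applies (assuming the usual linear scaling rule; the numbers below are illustrative and not taken from a real config):

  ```python
  # Illustrative only: assumes linear LR scaling; the values are made up for
  # this example and do not come from any particular MMDetection config.
  samples_per_gpu = 2      # per-GPU batch size of the hypothetical run
  num_gpus = 4             # number of GPUs actually used for this run
  base_batch_size = 16     # auto_scale_lr.base_batch_size from the config
  base_lr = 0.02           # learning rate tuned for base_batch_size

  actual_batch_size = samples_per_gpu * num_gpus              # 8
  scaled_lr = base_lr * actual_batch_size / base_batch_size   # 0.01
  print(scaled_lr)  # the LR is halved because the actual batch size is half of base_batch_size
  ```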

- Support setting dataloader arguments in the config and add functions to handle config compatibility. (#7668)
  The comparison between the old and new usages is as below.

  <table align="center">
    <thead>
      <tr align='center'>
        <td>Before v2.24.0</td>
        <td>Since v2.24.0</td>
      </tr>
    </thead>
    <tbody><tr valign='top'>
    <th>

  ```python
  data = dict(
      samples_per_gpu=64, workers_per_gpu=4,
      train=dict(type='xxx', ...),
      val=dict(type='xxx', samples_per_gpu=4, ...),
      test=dict(type='xxx', ...),
  )
  ```

    </th>
    <th>

  ```python
  # A recommended, clearer configuration
  data = dict(
      train=dict(type='xxx', ...),
      val=dict(type='xxx', ...),
      test=dict(type='xxx', ...),
      # Use a different batch size during inference.
      train_dataloader=dict(samples_per_gpu=64, workers_per_gpu=4),
      val_dataloader=dict(samples_per_gpu=8, workers_per_gpu=2),
      test_dataloader=dict(samples_per_gpu=8, workers_per_gpu=2),
  )

  # The old style still works and can be combined with the new per-dataloader settings
  data = dict(
      samples_per_gpu=64,  # only works for train_dataloader
      workers_per_gpu=4,   # only works for train_dataloader
      train=dict(type='xxx', ...),
      val=dict(type='xxx', ...),
      test=dict(type='xxx', ...),
      # Use a different batch size during inference.
      val_dataloader=dict(samples_per_gpu=8, workers_per_gpu=2),
      test_dataloader=dict(samples_per_gpu=8, workers_per_gpu=2),
  )
  ```

    </th></tr>
  </tbody></table>

- Support the memory profiler hook. Users can use it to monitor memory usage during training as below (#7560)

  ```python
  custom_hooks = [
      dict(type='MemoryProfilerHook', interval=50)
  ]
  ```

- Support running on PyTorch with MLU chips (#7578)
- Support re-splitting data batches with tags (#7641)
- Support the `DiceCost` used by [K-Net](https://arxiv.org/abs/2106.14855) in `MaskHungarianAssigner` (#7716)
- Support splitting COCO data for semi-supervised object detection (#7431)
- Support `pathlib.Path` in `Config.fromfile` (#7685)
- Support using a file client in the OpenImages dataset (#7433)
- Add a probability parameter to the `Mosaic` transform, see the sketch after this list (#7371)
- Support specifying the interpolation mode in the `Resize` pipeline, see the sketch after this list (#7585)
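
A hypothetical pipeline snippet illustrating the last two items above; the argument names `prob` and `interpolation` are assumptions (the changelog does not spell out the API), so check the transform documentation for the exact signatures:

```python
# Hypothetical usage sketch: `prob` and `interpolation` are assumed argument
# names for the two features above, not confirmed by this changelog.
train_pipeline = [
    dict(type='Mosaic', img_scale=(640, 640), prob=0.5),  # apply Mosaic with 50% probability
    dict(type='Resize',
         img_scale=(1333, 800),
         keep_ratio=True,
         interpolation='bilinear'),  # pick the interpolation mode explicitly
]
```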

#### Bug Fixes

- Avoid invalid bboxes after `deform_sampling` (#7567)
- Fix the issue that the argument `color_theme` does not take effect when exporting the confusion matrix (#7701)
- Fix `end_level` in necks, which should be the index of the last input backbone level (#7502)
- Fix the bug that `mix_results` may be None in `MultiImageMixDataset` (#7530)
- Fix the bug in the ResNet plugin when two plugins are used (#7797)

#### Improvements

- Enhance `load_json_logs` of analyze_logs.py for resumed training logs (#7732)
- Add the argument `out_file` to image_demo.py (#7676)
- Allow mixed precision training with `SimOTAAssigner` (#7516)
- Update `INF` to 100000.0 to match the value used in the official YOLOX (#7778)
- Add documentation for:
  - how to get the channels of a new backbone (#7642)
  - how to unfreeze the backbone network (#7570)
  - how to train a Fast R-CNN model (#7549)
  - proposals in Deformable DETR (#7690)
  - the from-scratch install script in get_started.md (#7575)
- Release pre-trained models of:
  - [Mask2Former](configs/mask2former) (#7595, #7709)
  - RetinaNet with a ResNet-18 backbone (#7387)
  - RetinaNet with an EfficientNet backbone (#7646)

#### Contributors

A total of 27 developers contributed to this release.
Thanks @jovialio, @zhangsanfeng2022, @HarryZJ, @jamiechoi1995, @nestiank, @PeterH0323, @RangeKing, @Y-M-Y, @mattcasey02, @weiji14, @Yulv-git, @xiefeifeihu, @FANG-MING, @meng976537406, @nijkah, @sudz123, @CCODING04, @SheffieldCao, @Czm369, @BIGWangYuDong, @zytx121, @jbwang1997, @chhluo, @jshilong, @RangiLyu, @hhaAndroid, @ZwwWayne

### v2.23.0 (28/3/2022)

#### Highlights

#### Contributors

A total of 27 developers contributed to this release.
Thanks @ZwwWayne, @haofanwang, @shinya7y, @chhluo, @yangrisheng, @triple-Mu, @jbwang1997, @HikariTJU, @imflash217, @274869388, @zytx121, @matrixgame2018, @jamiechoi1995, @BIGWangYuDong, @JingweiZhang12, @Xiangxu-0103, @hhaAndroid, @jshilong, @osbm, @ceroytres, @bunge-bedstraw-herb, @Youth-Got, @daavoo, @jiangyitong, @RangiLyu, @CCODING04, @yarkable

### v2.22.0 (24/2/2022)

#### Highlights

#### Contributors

A total of 20 developers contributed to this release.
Thanks @ZwwWayne, @hhaAndroid, @RangiLyu, @AronLin, @BIGWangYuDong, @jbwang1997, @zytx121, @chhluo, @shinya7y, @LuooChen, @dvansa, @siatwangmin, @del-zhenwu, @vikashranjan26, @haofanwang, @jamiechoi1995, @HJoonKwon, @yarkable, @zhijian-liu, @RangeKing

### v2.21.0 (8/2/2022)

#### Breaking Changes