## Introduction
English | [简体中文](README_zh-CN.md)

MMDetection3D is an open source object detection toolbox based on PyTorch, towards the next-generation platform for general 3D detection. It is a part of the [OpenMMLab](https://openmmlab.com/) project.

The main branch works with **PyTorch 1.8+**.
<details open>
<summary>Major features</summary>

- **Support multi-modality/single-modality detectors out of the box**

  It directly supports multi-modality/single-modality detectors, including MVXNet, VoteNet, PointPillars, etc.

- **Support indoor/outdoor 3D detection out of the box**

  It directly supports popular indoor and outdoor 3D detection datasets, including ScanNet, SUNRGB-D, Waymo, nuScenes, Lyft, and KITTI. For the nuScenes dataset, we also support the [nuImages dataset](https://github.com/open-mmlab/mmdetection3d/tree/main/configs/nuimages).

- **Natural integration with 2D detection**

| Methods | MMDetection3D | OpenPCDet | votenet | Det3D |
| :-----: | :-----------: | :-------: | :-----: | :---: |
| SECOND | 40 | 30 | ✗ | ✗ |
| Part-A2 | 17 | 14 | ✗ | ✗ |
</details>

Like [MMDetection](https://github.com/open-mmlab/mmdetection) and [MMCV](https://github.com/open-mmlab/mmcv), MMDetection3D can also be used as a library to support different projects on top of it.
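As a hedged sketch of this library-style usage (not an official snippet from this README), the example below builds a detector with the high-level APIs in `mmdet3d.apis` and runs inference on a single point cloud. The config, checkpoint, and point cloud paths are placeholders to be replaced with files from the model zoo, and minor API details may differ between versions.

```python
# A minimal sketch of using MMDetection3D as a library (paths are placeholders).
from mmdet3d.apis import init_model, inference_detector

config_file = 'configs/pointpillars/pointpillars_kitti-3d-car.py'  # placeholder config name
checkpoint_file = 'checkpoints/pointpillars_kitti-3d-car.pth'       # placeholder checkpoint

# Build the model from the config and load the pretrained weights.
model = init_model(config_file, checkpoint_file, device='cuda:0')

# Run inference on one KITTI-style point cloud file (.bin); the demo path is a placeholder.
result = inference_detector(model, 'demo/data/kitti/000008.bin')

# The result holds the predicted 3D boxes, labels and scores
# (the exact return structure varies slightly across versions).
print(result)
```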
## What's New
### Highlight
**We have renamed the branch `1.1` to `main` and switched the default branch from `master` to `main`. We encourage users to migrate to the latest version, though it comes with some cost. Please refer to [Migration Guide](docs/en/migration.md) for more details.**

We have constructed a comprehensive LiDAR semantic segmentation benchmark on SemanticKITTI, including the Cylinder3D, MinkUNet and SPVCNN methods. Notably, the improved MinkUNetv2 can achieve 70.3 mIoU on the validation set of SemanticKITTI. We have also supported the training of BEVFusion and an occupancy prediction method, TPVFormer, in our `projects`. More new features about 3D perception are on the way. Please stay tuned!

**v1.1.1** was released on 30/5/2023:

- Support [TPVFormer](https://arxiv.org/pdf/2302.07817.pdf) in `projects`
- Support the training of BEVFusion in `projects`
- Support LiDAR-based 3D semantic segmentation benchmark

Please refer to [changelog.md](docs/en/notes/changelog.md) for details and release history.
## Installation
Please refer to [Installation](https://mmdetection3d.readthedocs.io/en/latest/get_started.html) for installation instructions.
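As a quick, hedged sanity check once the packages are installed (this snippet is not taken from the installation guide itself), the following confirms that MMDetection3D and its core dependencies import correctly and reports their versions:

```python
# Minimal environment check for an MMDetection3D installation.
import torch
import mmengine
import mmcv
import mmdet
import mmdet3d

# The main branch targets PyTorch 1.8+; print versions to confirm the stack.
print('PyTorch:      ', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('MMEngine:     ', mmengine.__version__)
print('MMCV:         ', mmcv.__version__)
print('MMDetection:  ', mmdet.__version__)
print('MMDetection3D:', mmdet3d.__version__)
```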
## Getting Started
For detailed user guides and advanced guides, please refer to our [documentation](https://mmdetection3d.readthedocs.io/en/latest/). We provide guidance for a quick run [with existing datasets](docs/en/user_guides/train_test.md) and [with new datasets](docs/en/user_guides/2_new_data_model.md) for beginners, as well as tutorials on [the configuration system](docs/en/user_guides/config.md), [customizing datasets](docs/en/advanced_guides/customize_dataset.md), [designing data pipelines](docs/en/user_guides/data_pipeline.md), [customizing models](docs/en/advanced_guides/customize_models.md), [customizing runtime settings](docs/en/advanced_guides/customize_runtime.md) and the [Waymo dataset](docs/en/advanced_guides/datasets/waymo_det.md).
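As an illustrative, hedged sketch (not taken from the guides themselves) of what the training entry point roughly does, the snippet below builds an MMEngine `Runner` from a config file and starts training. The config path and work directory are placeholders, the corresponding dataset must already be prepared as described in the docs, and small details may differ between versions.

```python
# A minimal training sketch using MMEngine's Runner, roughly what tools/train.py does.
from mmengine.config import Config
from mmengine.runner import Runner

# Placeholder config; pick any config file shipped with MMDetection3D.
cfg = Config.fromfile('configs/pointpillars/pointpillars_kitti-3d-car.py')
cfg.work_dir = 'work_dirs/pointpillars_kitti_example'  # placeholder output directory

# Build the runner from the config and start training
# (assumes the corresponding dataset has already been prepared).
runner = Runner.from_cfg(cfg)
runner.train()
```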
## Benchmark and model zoo

Results and models are available in the [model zoo](docs/en/model_zoo.md).

**Note:** All of the about **300+ models and methods from 40+ papers** in 2D detection supported by [MMDetection](https://github.com/open-mmlab/mmdetection/blob/3.x/docs/en/model_zoo.md) can be trained or used in this codebase.
## FAQ
Please refer to [FAQ](docs/en/notes/faq.md) for frequently asked questions. When updating the version of MMDetection3D, please also check the [compatibility doc](docs/en/notes/compatibility.md) to be aware of the BC-breaking updates introduced in each version.
## Contributing
We appreciate all contributions to improve MMDetection3D. Please refer to [CONTRIBUTING.md](docs/en/notes/contribution_guides.md) for the contributing guideline.
## Acknowledgement
MMDetection3D is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors, as well as the users who give valuable feedback. We hope that the toolbox and benchmark can serve the growing research community by providing a flexible toolkit for reimplementing existing methods and developing new 3D detectors.
## Citation
If you find this project useful in your research, please consider citing it.
## License
This project is released under the [Apache 2.0 license](LICENSE).