Commit 6e1d6a4

[Feature] Support CENet in Projects (#2619)
* add cenet in projects
* add aux head for cenet
* fix some potential bugs
* update param_scheduler
* finish the readme
* remove redundant config
* add fps
1 parent 5cd59f7 commit 6e1d6a4

12 files changed: +1414 -1 lines changed

projects/CENet/README.md

Lines changed: 127 additions & 0 deletions
@@ -0,0 +1,127 @@
# CENet: Toward Concise and Efficient LiDAR Semantic Segmentation for Autonomous Driving

> [CENet: Toward Concise and Efficient LiDAR Semantic Segmentation for Autonomous Driving](https://arxiv.org/abs/2207.12691)

<!-- [ALGORITHM] -->

## Abstract

Accurate and fast scene understanding is one of the challenging tasks for autonomous driving, which requires taking full advantage of LiDAR point clouds for semantic segmentation. In this paper, we present a concise and efficient image-based semantic segmentation network, named CENet. In order to improve the descriptive power of learned features and reduce the computational as well as time complexity, our CENet integrates convolution with larger kernel size instead of MLP, carefully-selected activation functions, and multiple auxiliary segmentation heads with corresponding loss functions into its architecture. Quantitative and qualitative experiments conducted on publicly available benchmarks, SemanticKITTI and SemanticPOSS, demonstrate that our pipeline achieves much better mIoU and inference performance compared with state-of-the-art models. The code will be available at https://github.com/huixiancheng/CENet.
<div align=center>
<img src="https://github.com/open-mmlab/mmdetection3d/assets/55445986/2c268392-0e0c-4e93-bb9d-dc3417c56dad" width="800"/>
</div>

## Introduction

We implement CENet and provide the results and pretrained checkpoints on the SemanticKITTI dataset.
## Usage

<!-- For a typical model, this section should contain the commands for training and testing. You are also suggested to dump your environment specification to env.yml by `conda env export > env.yml`. -->

### Training commands

In MMDetection3D's root directory, run the following command to train the model:

```bash
python tools/train.py projects/CENet/configs/cenet-64x512_4xb4_semantickitti.py
```

For multi-GPU training, run:

```bash
python -m torch.distributed.launch --nnodes=1 --node_rank=0 --nproc_per_node=${NUM_GPUS} --master_port=29506 --master_addr="127.0.0.1" tools/train.py projects/CENet/configs/cenet-64x512_4xb4_semantickitti.py --launcher pytorch
```
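
Alternatively, the same multi-GPU run can be launched through the distributed wrapper shipped with MMDetection3D; a minimal sketch assuming the standard `tools/dist_train.sh` script:

```bash
# Sketch: equivalent multi-GPU training via the repository's launcher script.
bash tools/dist_train.sh projects/CENet/configs/cenet-64x512_4xb4_semantickitti.py ${NUM_GPUS}
```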

### Testing commands

In MMDetection3D's root directory, run the following command to test the model:

```bash
python tools/test.py projects/CENet/configs/cenet-64x512_4xb4_semantickitti.py ${CHECKPOINT_PATH}
```

## Results and models

### SemanticKITTI

| Backbone | Input resolution | Mem (GB) | Inf time (fps) | mIoU | Download |
| :----------------------------------------------------: | :--------------: | :------: | :------------: | :---: | :----------------------: |
| [CENet](./configs/cenet-64x512_4xb4_semantickitti.py) | 64\*512 | | 41.7 | 61.10 | [model](<>) \| [log](<>) |
| [CENet](./configs/cenet-64x1024_4xb4_semantickitti.py) | 64\*1024 | | 26.8 | 62.20 | [model](<>) \| [log](<>) |
| [CENet](./configs/cenet-64x2048_4xb4_semantickitti.py) | 64\*2048 | | 14.1 | 62.64 | [model](<>) \| [log](<>) |
**Note**

- We report point-based mIoU instead of range-view-based mIoU.
- The reported mIoU is the best result obtained by evaluating after each training epoch, which is consistent with the official code.
- If your settings differ from ours, we strongly suggest enabling `auto_scale_lr` to achieve comparable results, e.g. via the command sketched below.
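
A minimal sketch of enabling it from the command line (this assumes the config defines an `auto_scale_lr` base batch size, as MMDetection3D configs typically do):

```bash
# Sketch: rescale the learning rate automatically when the effective batch size
# differs from the 4 GPUs x 4 samples-per-GPU setup used for the reported results.
python tools/train.py projects/CENet/configs/cenet-64x512_4xb4_semantickitti.py --auto-scale-lr
```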
## Citation

```latex
@inproceedings{cheng2022cenet,
  title={Cenet: Toward Concise and Efficient Lidar Semantic Segmentation for Autonomous Driving},
  author={Cheng, Hui-Xian and Han, Xian-Feng and Xiao, Guo-Qiang},
  booktitle={2022 IEEE International Conference on Multimedia and Expo (ICME)},
  pages={01--06},
  year={2022},
  organization={IEEE}
}
```

## Checklist

<!-- Here is a checklist illustrating a usual development workflow of a successful project, and also serves as an overview of this project's progress. The PIC (person in charge) or contributors of this project should check all the items that they believe have been finished, which will further be verified by codebase maintainers via a PR.

OpenMMLab's maintainer will review the code to ensure the project's quality. Reaching the first milestone means that this project suffices the minimum requirement of being merged into 'projects/'. But this project is only eligible to become a part of the core package upon attaining the last milestone.

Note that keeping this section up-to-date is crucial not only for this project's developers but the entire community, since there might be some other contributors joining this project and deciding their starting point from this list. It also helps maintainers accurately estimate time and effort on further code polishing, if needed.

A project does not necessarily have to be finished in a single PR, but it's essential for the project to at least reach the first milestone in its very first PR. -->

- [x] Milestone 1: PR-ready, and acceptable to be one of the `projects/`.

  - [x] Finish the code

    <!-- The code's design shall follow existing interfaces and convention. For example, each model component should be registered into `mmdet3d.registry.MODELS` and configurable via a config file. -->

  - [x] Basic docstrings & proper citation

    <!-- Each major object should contain a docstring, describing its functionality and arguments. If you have adapted the code from other open-source projects, don't forget to cite the source project in docstring and make sure your behavior is not against its license. Typically, we do not accept any code snippet under GPL license. [A Short Guide to Open Source Licenses](https://medium.com/nationwide-technology/a-short-guide-to-open-source-licenses-cf5b1c329edd) -->

  - [x] Test-time correctness

    <!-- If you are reproducing the result from a paper, make sure your model's inference-time performance matches that in the original paper. The weights usually could be obtained by simply renaming the keys in the official pre-trained weights. This test could be skipped though, if you are able to prove the training-time correctness and check the second milestone. -->

  - [x] A full README

    <!-- As this template does. -->

- [x] Milestone 2: Indicates a successful model implementation.

  - [x] Training-time correctness

    <!-- If you are reproducing the result from a paper, checking this item means that you should have trained your model from scratch based on the original paper's specification and verified that the final result matches the report within a minor error range. -->

- [ ] Milestone 3: Good to be a part of our core package!

  - [ ] Type hints and docstrings

    <!-- Ideally *all* the methods should have [type hints](https://www.pythontutorial.net/python-basics/python-type-hints/) and [docstrings](https://google.github.io/styleguide/pyguide.html#381-docstrings). [Example](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/mmdet3d/models/detectors/fcos_mono3d.py) -->

  - [ ] Unit tests

    <!-- Unit tests for each module are required. [Example](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/tests/test_models/test_dense_heads/test_fcos_mono3d_head.py) -->

  - [ ] Code polishing

    <!-- Refactor your code according to reviewer's comment. -->

  - [ ] Metafile.yml

    <!-- It will be parsed by MIM and Inferencer. [Example](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/configs/fcos3d/metafile.yml) -->

- [ ] Move your modules into the core package following the codebase's file hierarchy structure.

  <!-- In particular, you may have to refactor this README into a standard one. [Example](/configs/textdet/dbnet/README.md) -->

- [ ] Refactor your modules into the core package following the codebase's file hierarchy structure.

projects/CENet/cenet/__init__.py

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
# Copyright (c) OpenMMLab. All rights reserved.
from .boundary_loss import BoundaryLoss
from .cenet_backbone import CENet
from .range_image_head import RangeImageHead
from .range_image_segmentor import RangeImageSegmentor
from .transforms_3d import SemkittiRangeView

__all__ = [
    'CENet', 'RangeImageHead', 'RangeImageSegmentor', 'SemkittiRangeView',
    'BoundaryLoss'
]
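
For context, the classes registered above are typically made visible to the MMDetection3D registries through MMEngine's `custom_imports` mechanism; a minimal sketch of the relevant config fragment (the import path is assumed to mirror the package layout above):

```python
# Sketch of an MMEngine config fragment: import the CENet package so that the
# classes registered in projects/CENet/cenet/__init__.py (e.g. 'RangeImageSegmentor',
# 'BoundaryLoss') can be referenced by their `type` strings when the model is built.
custom_imports = dict(
    imports=['projects.CENet.cenet'],
    allow_failed_imports=False)
```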

projects/CENet/cenet/boundary_loss.py

Lines changed: 75 additions & 0 deletions
@@ -0,0 +1,75 @@
# Copyright (c) OpenMMLab. All rights reserved.
import torch
from torch import Tensor, nn
from torch.nn import functional as F

from mmdet3d.registry import MODELS


def one_hot(label: Tensor,
            n_classes: int,
            requires_grad: bool = True) -> Tensor:
    """Return One Hot Label."""
    device = label.device
    one_hot_label = torch.eye(
        n_classes, device=device, requires_grad=requires_grad)[label]
    one_hot_label = one_hot_label.transpose(1, 3).transpose(2, 3)

    return one_hot_label


@MODELS.register_module()
class BoundaryLoss(nn.Module):
    """Boundary loss."""

    def __init__(self, theta0=3, theta=5, loss_weight: float = 1.0) -> None:
        super(BoundaryLoss, self).__init__()
        self.theta0 = theta0
        self.theta = theta
        self.loss_weight = loss_weight

    def forward(self, pred: Tensor, gt: Tensor) -> Tensor:
        """Forward function.

        Args:
            pred (Tensor): The output from model.
            gt (Tensor): Ground truth map.

        Returns:
            Tensor: Loss tensor.
        """
        pred = F.softmax(pred, dim=1)
        n, c, _, _ = pred.shape

        # one-hot vector of ground truth
        one_hot_gt = one_hot(gt, c)

        # boundary map
        gt_b = F.max_pool2d(
            1 - one_hot_gt,
            kernel_size=self.theta0,
            stride=1,
            padding=(self.theta0 - 1) // 2)
        gt_b -= 1 - one_hot_gt

        pred_b = F.max_pool2d(
            1 - pred,
            kernel_size=self.theta0,
            stride=1,
            padding=(self.theta0 - 1) // 2)
        pred_b -= 1 - pred

        gt_b = gt_b.view(n, c, -1)
        pred_b = pred_b.view(n, c, -1)

        # Precision, Recall
        P = torch.sum(pred_b * gt_b, dim=2) / (torch.sum(pred_b, dim=2) + 1e-7)
        R = torch.sum(pred_b * gt_b, dim=2) / (torch.sum(gt_b, dim=2) + 1e-7)

        # Boundary F1 Score
        BF1 = 2 * P * R / (P + R + 1e-7)

        # summing BF1 Score for each class and average over mini-batch
        loss = torch.mean(1 - BF1)

        return self.loss_weight * loss
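
For reference, a minimal standalone sketch of exercising this loss; the batch size, class count and range-image resolution below are illustrative assumptions, not values taken from the CENet configs:

```python
# Hypothetical usage sketch for BoundaryLoss (run from the MMDetection3D root so
# that the `projects` package is importable). Shapes are illustrative only.
import torch

from projects.CENet.cenet.boundary_loss import BoundaryLoss

loss_fn = BoundaryLoss(theta0=3, theta=5, loss_weight=1.0)

logits = torch.randn(2, 20, 64, 512)         # (batch, num_classes, H, W) raw predictions
labels = torch.randint(0, 20, (2, 64, 512))  # (batch, H, W) ground-truth class indices

# The loss is 1 - BF1 averaged over classes and batch, where BF1 = 2PR / (P + R)
# is computed on boundary maps extracted from prediction and target via max pooling.
loss = loss_fn(logits, labels)
print(loss)
```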
