
Commit 0471e80
refactoring
1 parent a8ddd22 commit 0471e80


76 files changed: +6111 / -95 lines

README.md

Lines changed: 50 additions & 95 deletions
@@ -1,129 +1,84 @@
-# BiSeNetV2 is coming
+# BiSeNetV1 & BiSeNetV2

-BiSeNetV2 is faster and requires less memory, you can try BiSeNetV2 on cityscapes like this:
-```
-$ export CUDA_VISIBLE_DEVICES=0,1
-$ python -m torch.distributed.launch --nproc_per_node=2 bisenetv2/train.py --fp16
-```
-This would train the model and then compute the mIOU on eval set.
+My implementation of [BiSeNetV1](https://arxiv.org/abs/1808.00897) and [BiSeNetV2](https://arxiv.org/abs/2004.02147).

-~~I barely achieve mIOU of around 71. Though I can boost the performace by adding more regularizations and pretraining, as this would be beyond the scope of the paper, let's wait for the official implementation and see how they achieved that mIOU of 73.~~

-Here is the tips how I achieved 74.39 mIOU:
-1. larger training scale range: In the paper, they say the images are first resized to range (0.75, 2), then 1024x2048 patches are cropped and resized to 512x1024, which equals to first resized to (0.375, 1) then crop with 512x1024 patches. In my implementation, I first rescale the image by range of (0.25, 2), and then directly crop 512x1024 patches to train.
+The mIOU results of the models trained and evaluated on the cityscapes train/val set:
+| model | ss | ssc | msf | mscf | fps | link |
+|------|:--:|:---:|:---:|:----:|:---:|:----:|
+| bisenetv1 | 74.85 | 76.46 | 77.36 | 78.72 | - | [download](https://drive.google.com/file/d/1e1_E7OrpjTaD5Rael7Fus5lg-uGZ5TUZ/view?usp=sharing) |
+| bisenetv2 | 74.39 | 74.44 | 76.10 | 75.94 | - | [download](https://drive.google.com/file/d/1r_F-KZg-3s2pPcHRIuHZhZ0DQ0wocudk/view?usp=sharing) |

-2. original inference scale: In the paper, they first rescale the image into 512x1024 to run inference, then rescale back to original size of 1024x2048. In my implementation, I directly use original size of 1024x2048 to inference.
+> Where **ss** means single-scale evaluation, **ssc** single-scale crop evaluation, **msf** multi-scale evaluation with flip augmentation, and **mscf** multi-scale crop evaluation with flip augmentation. The eval scales of the multi-scale evaluations are `[0.5, 0.75, 1.0, 1.25, 1.5, 1.75]`, and the crop size of the crop evaluations is `[1024, 1024]`.

-3. colorjitter as augmentations.
+Note that the models have a fairly large variance: the results of repeated training runs can differ by a noticeable margin. For example, if you train bisenetv2 several times, you will see its **ss** result vary between roughly 72.1 and 74.4.

-Note that, like bisenetv1, bisenetv2 also has a relatively big variance. Here is the mIOU after training 5 times on my platform:

-| #No. | 1 | 2 | 3 | 4 | 5 |
-|:---|:---|:---|:---|:---|:---|
-| mIOU | 74.28 | 72.96 | 73.73 | 74.39 | 73.77 |
+## platform
+My platform is like this:
+* ubuntu 16.04
+* cuda 10.1.243
+* cudnn 7
+* miniconda python 3.6.9
+* pytorch 1.6.0

-You can download the pretrained model with mIOU of 74.39 following this [link](https://drive.google.com/file/d/1r_F-KZg-3s2pPcHRIuHZhZ0DQ0wocudk/view?usp=sharing).

+## get started
+With a pretrained weight, you can run inference on a single image like this:
+```
+$ python tools/demo.py --model bisenetv2 --weight-path /path/to/your/weights.pth --img-path ./example.jpg
+```
+This runs inference on the image and saves the result image to `./res.jpg`.


-# BiSeNet
-My implementation of [BiSeNet](https://arxiv.org/abs/1808.00897). My environment is pytorch1.0 and python3, the code is not tested with other environments, but it should also work on similar environments.
+## prepare dataset

+1. cityscapes

-### Get cityscapes dataset
-Register and download the dataset from the official [website](https://www.cityscapes-dataset.com/). Then decompress them in the `data/` directory:
+Register and download the dataset from the official [website](https://www.cityscapes-dataset.com/). Then decompress them into the `datasets/cityscapes` directory:
 ```
-$ mkdir -p data
-$ mv /path/to/leftImg8bit_trainvaltest.zip data
-$ mv /path/to/gtFine_trainvaltest.zip data
-$ cd data
-$ unzip leftImg8bit_trainvaltest.zip
-$ unzip gtFine_trainvaltest.zip
-```
-
-### Train and evaluation
-Just run the train script:
-```
-$ CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 train.py
-```
-This would take almost one day on a two 1080ti gpu, and the mIOU will also be computed after training is done.
-You can also run the evaluate alone script after training:
-```
-$ python evaluate.py
+$ mv /path/to/leftImg8bit_trainvaltest.zip datasets/cityscapes
+$ mv /path/to/gtFine_trainvaltest.zip datasets/cityscapes
+$ cd datasets/cityscapes
+$ unzip leftImg8bit_trainvaltest.zip
+$ unzip gtFine_trainvaltest.zip
 ```

+2. custom dataset

-### Pretrained models
-In order to prove myself not a cheater, I prepared pretrained models. You may download the model [here](https://pan.baidu.com/s/1z4z01v8kiqyj0fxUB89KNw) with extraction code `4efc`. Download the `model_final.pth` file and put it in the `res/` directory and then run:
+If you want to train on your own dataset, you should first generate annotation files with the following format:
 ```
-$ python evaluate.py
+munster_000002_000019_leftImg8bit.png,munster_000002_000019_gtFine_labelIds.png
+frankfurt_000001_079206_leftImg8bit.png,frankfurt_000001_079206_gtFine_labelIds.png
+...
 ```
-After half a hour, you will see the result of 78.45 mIOU.
+Each line is a pair of training image path and ground-truth label path, separated by a single comma `,`.
+Then set the `im_root` and `train/val_im_anns` fields in the configuration files accordingly.

-I recommend you to use the 'diss' version which does not contain the `spatial path`. This version is faster and lighter without performance reduction. You can download the pretrained model with this [link](https://pan.baidu.com/s/1wWhYZcABWMceZdmJWF_wxQ) and the extraction code is `4fbx`. Put this `model_final_diss.pth` file under your `res/` directory and then you can run this script to test it:
+## train
+To train a model, run commands like this:
 ```
-$ python diss/evaluate.py
+$ export CUDA_VISIBLE_DEVICES=0,1
+$ python -m torch.distributed.launch --nproc_per_node=2 tools/train.py --model bisenetv2 # or bisenetv1
 ```
-This model achieves 78.48 mIOU.
-

-Note:
-Since I used randomly generated seed for the random operations, the results may fluctuate within the range of [78.17, 78.72], depending on the specific random status during training. I am lucky to have captured a result of 78.4+ mIOU. If you want to train your own model from scratch, please make sure that you are lucky too.
+Note that although `bisenetv2` has fewer FLOPs, it requires many more training iterations, so the training time of `bisenetv1` is shorter.


-### fp16
-If your gpu supports fp16 mode, and you would like to train with in the mixed precision mode, you can do like this:
+## finetune from trained model
+You can also load trained model weights and finetune from them:
 ```
-$ CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 fp16/train.py
+$ export CUDA_VISIBLE_DEVICES=0,1
+$ python -m torch.distributed.launch --nproc_per_node=2 tools/train.py --finetune-from ./res/model_final.pth --model bisenetv2 # or bisenetv1
 ```
-Note that, I tested this training in fp16 mode with `pytorch1.3` and `apex` of commit `95d6c007ec9cca4231`. This environment configuration may not be same with training other models(I did not tested training the other model in this environment).

-Also, in this fp16 model, I used the `sync-bn` officially provided by pytorch, rather than the `inplace-abn`.

-
-### Demo
-You can run inference on a single model like this:
-```python
-python demo.py --ckpt res/model_final.pth --img_path ./pic.jpg
+## eval pretrained models
+You can also evaluate a trained model like this:
 ```
-
-
-
-### Tricks:
-These are the tricks that I find might be useful:
-1. use online hard example mining loss. This let the model be trained more efficiently.
-2. do not add weight decay when bn parameters and bias parameters of nn.Conv2d or nn.Linear are tuned.
-3. use a 10 times larger lr at the model output layers.
-4. use crop evaluation. We do not want the eval scale to be too far away from the train scale, so we crop the chips from the images to do evaluation and then combine the results to make the final prediction.
-5. multi-scale training and multi-scale-flip evaluating. On each scale, the scores of the original image and its flipped version are summed up, and then the exponential of the sum is computed to be the prediction of this scale.
-6. warmup of 1000 iters to make sure the model better initialized.
-
-
-
-## Diss this paper:
-
-#### Old Iron Double Hit 666
-
-<p align='center'>
-<img src='pic.jpg'>
-</p>
-
-Check it out:
-
-The authors have proposed a new model structure which is claimed to achieve the state of the art of 78.4 mIOU on cityscapes. However, I do not think this two-branch structure is the key to this result. It is the tricks which are used to train the model that really helps.
-
-Yao~ Yao~
-
-If we need some features with a downsample rate of 1/8, we can simply use the resnet feature of the layer `res3b1`, like what the [DeepLabv3+](https://arxiv.org/abs/1802.02611) does. It is actually not necessary to add the so-called spatial path. To prove this, I changed the model a little by replacing the spatial path feature with the resnet `res3b1` feature. The associated code is in the `diss` folder. We can train the modified model like this:
-```
-$ CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 diss/train.py
+$ python tools/evaluate.py --model bisenetv1 --weight-path /path/to/your/weight.pth
 ```
-After 20h training, you can see a result of `mIOU=78.48`, which is still close to the result reported in the paper(mIOU=78.4).
-
-What is worth mentioning is that, the modified model can be trained faster than the original version and requires less memory since we have eliminated the cost brought by the spatial path.

-Yao Yao Yao~
+### Be aware that this is the refactored version of the original codebase. You can find the original implementation in the `old` directory.

-From the experiment, we can know that this model proposed in the paper is just some encoder-decoder structure with some attention modules added to improve its complication. By using the u-shape model with the same tricks, we can still achieve the same result. Therefore, I feel that the real contribution of this paper is the successful usage of the training and evaluating tricks, though the authors made little mention of these tricks and only advocates their model structures in the paper.

-Skr Skr~
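A minimal sketch of how the comma-separated annotation files described in the new README could be generated. The directory layout, filename suffixes, and the assumption that paths are written relative to the config's `im_root` are illustrative guesses, not part of this commit:

```python
import os
import os.path as osp

def write_ann_file(im_root, im_dir, lb_dir, out_path,
                   im_suffix='_leftImg8bit.png', lb_suffix='_gtFine_labelIds.png'):
    # Pair each image with the label that shares its prefix and write one
    # "image,label" line per sample, with paths assumed relative to im_root.
    lines = []
    for name in sorted(os.listdir(osp.join(im_root, im_dir))):
        if not name.endswith(im_suffix):
            continue
        prefix = name[:-len(im_suffix)]
        lines.append(f'{osp.join(im_dir, name)},{osp.join(lb_dir, prefix + lb_suffix)}')
    with open(out_path, 'w') as f:
        f.write('\n'.join(lines) + '\n')

# Hypothetical layout: images under <im_root>/images, labels under <im_root>/labels.
write_ann_file('./datasets/mydataset', 'images', 'labels', './datasets/mydataset/train.txt')
```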

configs/__init__.py

Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
+
+from .bisenetv1 import cfg as bisenetv1_cfg
+from .bisenetv2 import cfg as bisenetv2_cfg
+
+
+
+class cfg_dict(object):
+
+    def __init__(self, d):
+        self.__dict__ = d
+
+
+cfg_factory = dict(
+    bisenetv1=cfg_dict(bisenetv1_cfg),
+    bisenetv2=cfg_dict(bisenetv2_cfg),
+)
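Given the `cfg_dict` wrapper above, which copies a plain dict into `__dict__` so its entries read as attributes, a config would presumably be picked up like this (a small usage sketch, assuming the `configs` package is imported from the repository root):

```python
from configs import cfg_factory

# Select the config registered for the model to be trained.
cfg = cfg_factory['bisenetv2']

# cfg_dict exposes the dict entries as attributes.
print(cfg.model_type, cfg.lr_start, cfg.max_iter, cfg.cropsize)
```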

configs/bisenetv1.py

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+
+cfg = dict(
+    model_type='bisenetv1',
+    num_aux_heads=2,
+    lr_start=1e-2,
+    weight_decay=5e-4,
+    warmup_iters=1000,
+    max_iter=80000,
+    im_root='./datasets/cityscapes',
+    train_im_anns='./datasets/cityscapes/train.txt',
+    val_im_anns='./datasets/cityscapes/val.txt',
+    scales=[0.75, 2.],
+    cropsize=[1024, 1024],
+    ims_per_gpu=8,
+    use_fp16=True,
+    use_sync_bn=False,
+    respth='./res',
+)
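The `lr_start`, `warmup_iters`, and `max_iter` fields suggest a warmup-then-decay learning-rate schedule (the old README mentions a 1000-iteration warmup). Below is a rough sketch of such a schedule assuming a polynomial decay after warmup; the exponent and warmup start factor are made-up values and the repository's actual scheduler may differ:

```python
def lr_at_iter(it, lr_start=1e-2, warmup_iters=1000, max_iter=80000,
               power=0.9, warmup_start_factor=0.1):
    """Hypothetical warmup + polynomial-decay schedule driven by the config fields above."""
    if it < warmup_iters:
        # Ramp linearly from a fraction of lr_start up to lr_start.
        alpha = it / warmup_iters
        return lr_start * (warmup_start_factor + (1.0 - warmup_start_factor) * alpha)
    # Decay polynomially from lr_start towards zero over the remaining iterations.
    progress = (it - warmup_iters) / (max_iter - warmup_iters)
    return lr_start * (1.0 - progress) ** power

# Learning rate at the start, right after warmup, midway, and near the end of training.
print(lr_at_iter(0), lr_at_iter(1000), lr_at_iter(40000), lr_at_iter(79999))
```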

configs/bisenetv2.py

Lines changed: 19 additions & 0 deletions
@@ -0,0 +1,19 @@
+
+## bisenetv2
+cfg = dict(
+    model_type='bisenetv2',
+    num_aux_heads=4,
+    lr_start=5e-2,
+    weight_decay=5e-4,
+    warmup_iters=1000,
+    max_iter=150000,
+    im_root='./datasets/cityscapes',
+    train_im_anns='./datasets/cityscapes/train.txt',
+    val_im_anns='./datasets/cityscapes/val.txt',
+    scales=[0.25, 2.],
+    cropsize=[512, 1024],
+    ims_per_gpu=8,
+    use_fp16=True,
+    use_sync_bn=False,
+    respth='./res',
+)
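The `scales=[0.25, 2.]` and `cropsize=[512, 1024]` fields match the training augmentation described in the old README (rescale by a random factor in (0.25, 2), then crop 512x1024 patches). A minimal PIL sketch of that rescale-and-crop step; the function name, padding value, and ignore label of 255 are illustrative assumptions rather than the repository's own code:

```python
import random
from PIL import Image

def rand_rescale_crop(im, lb, scales=(0.25, 2.0), cropsize=(512, 1024)):
    """Randomly rescale an image/label pair, then crop a cropsize[0] x cropsize[1] patch."""
    scale = random.uniform(*scales)
    W, H = im.size
    w, h = int(W * scale), int(H * scale)
    im = im.resize((w, h), Image.BILINEAR)
    lb = lb.resize((w, h), Image.NEAREST)  # nearest keeps label ids intact
    ch, cw = cropsize
    if w < cw or h < ch:
        # Pad when the rescaled image is smaller than the crop window.
        pw, ph = max(w, cw), max(h, ch)
        pad_im = Image.new('RGB', (pw, ph), (0, 0, 0))
        pad_lb = Image.new('L', (pw, ph), 255)  # 255 assumed to be the ignore label
        pad_im.paste(im, (0, 0))
        pad_lb.paste(lb, (0, 0))
        im, lb = pad_im, pad_lb
        w, h = pw, ph
    x = random.randint(0, w - cw)
    y = random.randint(0, h - ch)
    box = (x, y, x + cw, y + ch)
    return im.crop(box), lb.crop(box)
```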

datasets/cityscapes/gtFine

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+/data1/zzy/datasets/cityscapes/gtFine/

datasets/cityscapes/leftImg8bit

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+/data1/zzy/datasets/cityscapes/leftImg8bit/
