Commit c372939

Code available.
1 parent bcbc3a1 commit c372939

37 files changed: +6407 −4 lines

.gitignore

Lines changed: 23 additions & 0 deletions
# PyCharm
.idea

# MAC OS
.DS_Store

# pytest
.coverage
.pytest
.pytest_cache

# Python
*__pycache__*
*.pth

# Redundant files
.nfs*

# Log files
log

# Trash
*.nfs*

LICENSE

Lines changed: 13 additions & 0 deletions
Copyright (c) 2021 Qualcomm Technologies, Inc.

All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted (subject to the limitations in the disclaimer below) provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer:

* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

* Neither the name of Qualcomm Technologies, Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

NO EXPRESS OR IMPLIED LICENSES TO ANY PARTY'S PATENT RIGHTS ARE GRANTED BY THIS LICENSE. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

README.md

Lines changed: 88 additions & 4 deletions
# InverseForm

This repository provides the InverseForm module.

Shubhankar Borse, Ying Wang, Yizhe Zhang, Fatih Porikli, "InverseForm: A Loss Function for Structured Boundary-Aware Segmentation", CVPR 2021. [[arXiv]](https://arxiv.org/abs/2104.02745)

Qualcomm AI Research (Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.)

## Reference

If you find our work useful for your research, please cite:

```latex
@inproceedings{borse2021inverseform,
  title={InverseForm: A Loss Function for Structured Boundary-Aware Segmentation},
  author={Borse, Shubhankar and Wang, Ying and Zhang, Yizhe and Porikli, Fatih},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2021}
}
```

## Method

InverseForm is a novel boundary-aware loss term for semantic segmentation that efficiently learns the degree of parametric transformation between estimated and target boundaries.

![InverseForm framework](display/inverseform_framework.png)

This plug-in loss term complements the cross-entropy loss in capturing boundary transformations, and yields consistent, significant performance improvements on segmentation backbones without increasing their size or computational complexity.
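In spirit, the training objective is the per-pixel cross-entropy plus a weighted boundary term. The following framework-agnostic NumPy sketch is illustrative only: `lambda_if`, the helper names, and the simple L2 stand-in for the learned InverseForm distance are assumptions, not the paper's implementation.

```python
import numpy as np

def cross_entropy(probs, labels):
    # probs: (N, C) softmax outputs; labels: (N,) integer class ids
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def boundary_term(pred_boundary, gt_boundary):
    # Stand-in distance between boundary maps; InverseForm instead learns a
    # measure based on the parametric transform relating the two boundaries.
    return np.mean((pred_boundary - gt_boundary) ** 2)

def total_loss(probs, labels, pred_boundary, gt_boundary, lambda_if=0.1):
    # Cross-entropy on the segmentation plus the weighted boundary term.
    return cross_entropy(probs, labels) + lambda_if * boundary_term(pred_boundary, gt_boundary)
```

When the predicted boundary matches the target, the extra term vanishes and the objective reduces to plain cross-entropy.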
Here is an example demo from our state-of-the-art model trained on the Cityscapes benchmark.

<img src="display/if_photos_gif.gif" width="425"/> <img src="display/if_labels_gif.gif" width="425"/>

This repository contains the implementation of the InverseForm module presented in the paper. It can also run inference on the Cityscapes validation set with models trained using the InverseForm framework. The same models can be validated with the InverseForm block removed, so that no additional compute is added during inference. Here are some of the models over which you can run inference with and without the InverseForm block (right-most column of the table below):

| Model         | mIoU (trained w/o InverseForm) | mIoU (trained w/ InverseForm) |
| :-----------: | :----------------------------: | :---------------------------: |
| HRNet-18      | 77.0%                          | 77.6%                         |
| OCRNet-48     | 86.0%                          | 86.3%                         |
| OCRNet-48-HMS | 86.7%                          | 87.0%                         |
## Setup environment

The code has been tested with PyTorch 1.3 and NVIDIA Apex. A Dockerfile is available under the docker/ folder.

## Cityscapes path

utils/config.py holds the dataset/directory information. Please update CITYSCAPES_DIR to point to your Cityscapes directory. You can download the dataset from https://www.cityscapes-dataset.com/.

## Inference on Cityscapes

To run inference, this directory needs to be added to your PYTHONPATH:

```bash
export PYTHONPATH="${PYTHONPATH}:/path/to/this/dir"
```

Here are commands to run inference on the models shown above. These examples use 8 GPUs; you can run with 1/2/4 GPUs by updating the --nproc_per_node argument.
*Checkpoints coming soon!*

* HRNet-18-IF
```bash
python -m torch.distributed.launch --nproc_per_node=8 experiment/validation.py --output_dir "/path/to/output/dir" --model_path "checkpoints/hrnet18_IF_checkpoint.pth" --has_edge True
```
* OCRNet-48-IF
```bash
python -m torch.distributed.launch --nproc_per_node=8 experiment/validation.py --output_dir "/path/to/output/dir" --model_path checkpoints/hrnet48_OCR_IF_checkpoint.pth --arch "ocrnet.HRNet" --hrnet_base "48" --has_edge True
```
* HMS-OCRNet-48-IF
```bash
python -m torch.distributed.launch --nproc_per_node=8 experiment/validation.py --output_dir "/path/to/output/dir" --model_path checkpoints/hrnet48_OCR_HMS_IF_checkpoint.pth --arch "ocrnet.HRNet_Mscale" --hrnet_base "48" --has_edge True
```

To remove the InverseForm operation during inference, simply run without the --has_edge flag. You will notice no drop in performance compared to running with the operation.
## Acknowledgements

This repository shares code with the following repositories:

* Hierarchical Multi-Scale Attention (HMS): https://github.com/NVIDIA/semantic-segmentation
* HRNet-OCR: https://github.com/HRNet/HRNet-Semantic-Segmentation

We would like to acknowledge the researchers who made these repositories open-source.

display/if_labels_gif.gif (1.91 MB)

display/if_photos_gif.gif (8.48 MB)

display/inverseform_framework.png (530 KB)

docker/Dockerfile

Lines changed: 30 additions & 0 deletions
FROM nvcr.io/nvidia/pytorch:19.10-py3

RUN pip install numpy
RUN pip install runx==0.0.6
RUN pip install sklearn
RUN pip install h5py
RUN pip install jupyter
RUN pip install scikit-image
RUN pip install pillow
RUN pip install piexif
RUN pip install cffi
RUN pip install tqdm
RUN pip install dominate
RUN pip install opencv-python
RUN pip install nose
RUN pip install ninja
RUN pip install fire

RUN apt-get update
RUN apt-get install libgtk2.0-dev -y && rm -rf /var/lib/apt/lists/*

# Install Apex
RUN cd /home/ && git clone https://github.com/NVIDIA/apex.git apex && cd apex && python setup.py install --cuda_ext --cpp_ext
WORKDIR /home/

RUN apt-get update \
    && apt-get install -y wget curl sudo software-properties-common

# Add sudo support
RUN echo "%users ALL = (ALL) NOPASSWD: ALL" >> /etc/sudoers
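To build and enter the container, something like the following should work; the image tag `inverseform` and the mount path are assumptions for illustration, not taken from the repository.

```shell
# Build the image from the repository root (tag name is hypothetical).
docker build -t inverseform -f docker/Dockerfile .

# Run with GPU access, mounting a local Cityscapes copy (path is hypothetical).
docker run --gpus all -it -v /path/to/cityscapes:/data inverseform
```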

experiment/validation.py

Lines changed: 130 additions & 0 deletions
# Copyright (c) 2021 Qualcomm Technologies, Inc.
# All Rights Reserved.

from __future__ import absolute_import
from __future__ import division
from apex import amp
from runx.logx import logx
import numpy as np
import torch
import argparse
import os
import sys
import time
import fire
from utils.config import assert_and_infer_cfg, cfg
from utils.misc import AverageMeter, eval_metrics
from utils.misc import ImageDumper
from utils.trnval_utils import eval_minibatch
from utils.progress_bar import printProgressBar
from models.loss.utils import get_loss
from models.model_loader import load_model
from library.datasets.get_dataloaders import return_dataloader
import models
import warnings

if not sys.warnoptions:
    warnings.simplefilter("ignore")

torch.backends.cudnn.benchmark = True


def set_apex_params(local_rank):
    """
    Set distributed parameters for Apex.
    """
    # Default to single-process values in case the launcher did not
    # export WORLD_SIZE/RANK.
    world_size = 1
    global_rank = 0
    if 'WORLD_SIZE' in os.environ:
        world_size = int(os.environ['WORLD_SIZE'])
        global_rank = int(os.environ['RANK'])

    print('GPU {} has Rank {}'.format(local_rank, global_rank))
    torch.cuda.set_device(local_rank)
    torch.distributed.init_process_group(backend='nccl',
                                         init_method='env://')
    return world_size, global_rank


def inference(val_loader, net, arch, loss_fn, epoch, calc_metrics=True):
    """
    Run inference over the dataloader with the given network.
    """
    len_dataset = len(val_loader)
    net.eval()
    val_loss = AverageMeter()
    iou_acc = 0

    for val_idx, data in enumerate(val_loader):
        input_images, labels, edge, img_names, _ = data

        # Run network
        assets, _iou_acc = \
            eval_minibatch(data, net, loss_fn, val_loss, calc_metrics,
                           val_idx)
        iou_acc += _iou_acc
        if val_idx + 1 < len_dataset:
            printProgressBar(val_idx + 1, len_dataset, 'Progress')

    logx.msg("\n")
    if calc_metrics:
        eval_metrics(iou_acc, net, val_loss, epoch, arch)


def main(output_dir, model_path, has_edge=False, model_summary=False, arch='ocrnet.AuxHRNet',
         hrnet_base='18', num_workers=4, split='val', batch_size=2, crop_size='1024,2048',
         apex=True, syncbn=True, fp16=True, local_rank=0):

    # Distributed processing
    if apex:
        world_size, global_rank = set_apex_params(local_rank)
    else:
        world_size = 1
        global_rank = 0
        local_rank = 0

    # Logging
    logx.initialize(logdir=output_dir,
                    tensorboard=True,
                    global_rank=global_rank)

    # Build config
    assert_and_infer_cfg(output_dir, global_rank, apex, syncbn, arch, hrnet_base,
                         fp16, has_edge)

    # Dataloader
    val_loader = return_dataloader(num_workers, batch_size)

    # Loss function
    loss_fn = get_loss(has_edge)

    assert model_path is not None, 'need pytorch model for inference'

    # Load network
    checkpoint = torch.load(model_path, map_location=torch.device('cpu'))
    logx.msg("Loading weights from: {}".format(model_path))
    net = models.get_net(arch, loss_fn)
    if fp16:
        net = amp.initialize(net, opt_level='O1', verbosity=0)
    net = models.wrap_network_in_dataparallel(net, apex)
    load_model(net, checkpoint)

    # Summary of MACs / #params
    if model_summary:
        from thop import profile
        img = torch.randn(1, 3, 1024, 2048).cuda()
        mask = torch.randn(1, 1, 1024, 2048).cuda()
        macs, params = profile(net, inputs=({'images': img, 'gts': mask},))
        print(f'macs {macs} params {params}')
        sys.exit()

    torch.cuda.empty_cache()

    # Run inference
    inference(val_loader, net, arch, loss_fn, epoch=0)


if __name__ == '__main__':
    fire.Fire(main)
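As a side note on `set_apex_params`: `torch.distributed.launch` exports per-worker environment variables that the function reads. A minimal stdlib illustration, with made-up values:

```python
import os

# torch.distributed.launch sets WORLD_SIZE and RANK for each worker process;
# the values below are invented examples for illustration.
os.environ['WORLD_SIZE'] = '8'
os.environ['RANK'] = '3'

# set_apex_params reads them like this:
world_size = int(os.environ['WORLD_SIZE'])
global_rank = int(os.environ['RANK'])
print(world_size, global_rank)  # 8 3
```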

library/data/cityscapes.py

Lines changed: 18 additions & 0 deletions
import os
import os.path as path
from utils.config import cfg
import library.data.cityscapes_labels as cityscapes_labels


def find_directories(root):
    """
    Find the city folders in the validation set.
    """
    trn_path = path.join(root, 'leftImg8bit', 'train')
    val_path = path.join(root, 'leftImg8bit', 'val')

    trn_directories = ['train/' + c for c in os.listdir(trn_path)]
    trn_directories = sorted(trn_directories)  # sort to ensure reproducibility
    val_directories = ['val/' + c for c in os.listdir(val_path)]

    return val_directories
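For reference, `find_directories` expects the standard Cityscapes layout `<root>/leftImg8bit/{train,val}/<city>/`. The sketch below builds a temporary tree with invented city folders and mirrors the function's listing of the validation split (sorted here for a deterministic result):

```python
import os
import tempfile

# Build a throwaway directory tree in the Cityscapes layout. The city names
# under 'train' and 'val' are real Cityscapes cities, used as examples.
root = tempfile.mkdtemp()
for split, cities in [('train', ['aachen', 'bochum']),
                      ('val', ['frankfurt', 'lindau', 'munster'])]:
    for city in cities:
        os.makedirs(os.path.join(root, 'leftImg8bit', split, city))

# Mirror the listing find_directories performs for the validation split.
val_directories = sorted('val/' + c
                         for c in os.listdir(os.path.join(root, 'leftImg8bit', 'val')))
print(val_directories)  # ['val/frankfurt', 'val/lindau', 'val/munster']
```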
