
Commit 3340504

Author: BIT-MJY

Merge branch 'BIT-MJY-pytorch_version' into pytorch-version

This is a PyTorch implementation of OverlapNet (delta_head_only) developed by Junyi Ma. In many cases, OverlapNet is used only for place recognition, while other methods such as ICP estimate the poses. This version of OverlapNet is therefore convenient for such tasks.

2 parents e40df50 + 82e30c4, commit 3340504
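The workflow the commit message describes (OverlapNet proposes a loop-closure candidate, a classical method such as ICP refines the relative pose) might look roughly like the sketch below. The `find_loop_candidate` helper is hypothetical and stands in for this repo's inference code; only the KITTI `.bin` layout and the Open3D ICP call reflect real APIs.

```python
import numpy as np
import open3d as o3d

def load_kitti_scan(path):
    """KITTI velodyne .bin files store float32 (x, y, z, intensity) rows."""
    points = np.fromfile(path, dtype=np.float32).reshape(-1, 4)
    cloud = o3d.geometry.PointCloud()
    cloud.points = o3d.utility.Vector3dVector(points[:, :3].astype(np.float64))
    return cloud

# Hypothetical step: OverlapNet proposes the best-overlapping previous frame.
# candidate_idx = find_loop_candidate(current_frame_features, feature_map_db)

source = load_kitti_scan("000100.bin")  # current scan
target = load_kitti_scan("000005.bin")  # retrieved loop-closure candidate
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=1.0,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print(result.transformation)  # 4x4 relative pose for the loop-closure constraint
```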


61 files changed: +1456 −3132 lines

README.md

Lines changed: 39 additions & 170 deletions
@@ -2,26 +2,13 @@
 
 ### OverlapNet was nominated as the Best System Paper at Robotics: Science and Systems (RSS) 2020
 
-This repo contains the code for our RSS2020 paper, OverlapNet.
-
+**This repo contains the PyTorch implementation of OverlapNet.**
+
 OverlapNet is a modified Siamese network that predicts the overlap and relative yaw angle of a pair of range images generated by 3D LiDAR scans.
 
 Developed by [Xieyuanli Chen](https://www.ipb.uni-bonn.de/people/xieyuanli-chen/) and [Thomas Läbe](https://www.ipb.uni-bonn.de/people/thomas-laebe/).
 
-
-<img src="pics/architecture.png" width="800">
-
-Pipeline overview of OverlapNet.
-
-
-### Table of Contents
-0. [Introduction](#OverlapNet-was-nominated-as-the-Best-System-Paper-at-Robotics:-Science-and-Systems-(RSS)-2020)
-0. [Publication](#Publication)
-0. [Logs](#Logs)
-0. [Dependencies](#Dependencies)
-0. [How to use](#How-to-use)
-0. [Application](#Application)
-0. [License](#License)
+This PyTorch implementation (delta_head_only) was developed by [Junyi Ma](https://github.com/BIT-MJY).
 
 
 ## Publication
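As an aside on the description above (overlap and yaw predicted from a pair of range images through a shared encoder), here is a minimal PyTorch sketch of that Siamese idea. Every layer size is hypothetical, and the delta head here regresses only the overlap, in the spirit of the delta_head_only variant; the repo's real architecture lives in `modules/`.

```python
import torch
import torch.nn as nn

class OverlapNetSketch(nn.Module):
    """Shared-encoder ("Siamese") overlap regressor; all sizes are hypothetical."""

    def __init__(self, in_channels=1):
        super().__init__()
        # The same encoder weights process both range images.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=(5, 15), stride=(1, 2)),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=(3, 15), stride=(1, 2)),
            nn.ReLU(),
        )
        # "Delta" head: compares the two feature volumes, outputs overlap in [0, 1].
        self.delta_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid()
        )

    def forward(self, scan_a, scan_b):
        feat_a, feat_b = self.encoder(scan_a), self.encoder(scan_b)
        # The element-wise feature difference feeds the overlap head.
        return self.delta_head(torch.abs(feat_a - feat_b))

# Example: overlap score for one pair of 64x900 range images.
# net = OverlapNetSketch()
# score = net(torch.randn(1, 1, 64, 900), torch.randn(1, 1, 64, 900))
```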
@@ -45,51 +32,9 @@ The extended journal version of OverlapNet is [here](http://www.ipb.uni-bonn.de/
   issn = {1573-7527}
 }
 
-## Logs
-### Version 1.1
-- Added a method to the Infer class for inference with multiple frames versus multiple frames.
-- Updated the TensorFlow version in the dependencies.
-- Fixed bugs in generating the ground-truth overlap and yaw.
-- Added an application and a link to our overlap-based MCL implementation.
-### Version 1.0
-Initial open-source submission.
-
-## Dependencies
-
-We use standalone Keras with a TensorFlow backend as the neural-network library.
-
-To train and test on a whole dataset, you need an NVIDIA GPU.
-The demos are still fast enough when running the neural network on a CPU.
-
-To use a GPU, you first need to install the NVIDIA driver and CUDA, so have fun!
-
-- CUDA installation guide: [link](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html)
-
-- System dependencies:
-
-```bash
-sudo apt-get update
-sudo apt-get install -y python3-pip python3-tk
-sudo -H pip3 install --upgrade pip
-```
-
-- Python dependencies (these may also work with versions other than those in the requirements file):
-
-```bash
-sudo -H pip3 install -r requirements.txt
-```
-
 
 
-## How to use
-
-This repository contains the neural network for detecting loop-closure candidates.
-
-For a complete online LiDAR preprocessing pipeline, one can find a fast implementation in our [SuMa++](https://github.com/PRBonn/semantic_suma).
-
-In this repository, we provide demos to show the functionality. In addition, we explain
-how to train a model.
-
+<<<<<<< HEAD
 ### Demos
 
 ##### Demo 1: generate different types of data from the LiDAR scan
@@ -133,132 +78,56 @@ If you follow the recommended [data structure](#data-structure) below, you extra
 Otherwise, you need to specify the data paths in both `config/network.yml` and `config/demo.yml` accordingly,
 
 and then run the third demo script with one command line:
+=======
+>>>>>>> 2895d50a954918fc301b84c3e11d811ff67d7e25
 
+#### Training
 ```bash
-python3 demo/demo3_lcd.py
+cd tools
+python training.py
 ```
 
-You will get an animation like this:
-
-<img src="pics/demo3.gif" width="600">
-
-
-##### Demo 4: Generate ground truth overlap and yaw for training and testing
-To run demo 4, you only need the raw KITTI odometry data. We use the same
-setup as in demo 3.
-
-Run the fourth demo script with one command line:
-
+#### Testing on KITTI00
 ```bash
-python3 demo/demo4_gen_gt_files.py
+cd tools
+python gen_feature_map_kitti00.py
+python testing_kitti00.py
 ```
 
-This generates the ground-truth data in `data/preprocess_data_demo/ground_truth` and produces a plot like this:
-
-<img src="pics/demo4.png" width="600">
-
-The colors represent the ground-truth overlap value of each frame with respect to the given current frame, which is located at (0.0, 0.0).
-
-
-### Train and test a model
-
-For a quick test of the training and testing procedures, you can use our preprocessed data
-as used in [demo3](#demo-3-loop-closure-detection).
-
-We only provide the geometry-based preprocessed data, but it is also possible to generate the other inputs
-(semantics, intensity) yourself.
-
-A simple example of generating different types of data from a LiDAR scan is given in [demo1](#demos).
-
-For 3D LiDAR semantic segmentation, we provide a fast C++ inference library,
-[rangenetlib](https://github.com/PRBonn/rangenet_lib).
 
 #### Data structure
 
-For training a new model with OverlapNet, you first need to generate the preprocessed data and the ground-truth overlap and yaw angles; examples are given in [demo1](#demos) and [demo4](#demo-4-generate-ground-truth-overlap-and-yaw-for-training-and-testing).
-
 The recommended data structure is as follows:
 
 ```bash
-data
-├── 07
-│   ├── calib.txt
-│   ├── covariance.txt
-│   ├── poses.txt
-│   ├── depth
-│   │   ├── 000000.npy
-│   │   ├── 000001.npy
-│   │   └── ...
-│   ├── normal
-│   │   ├── 000000.npy
-│   │   ├── 000001.npy
-│   │   └── ...
-│   ├── velodyne
-│   │   ├── 000000.bin
-│   │   ├── 000001.bin
-│   │   └── ...
-│   └── ground_truth
-│       ├── ground_truth_overlap_yaw.npz
-│       ├── test_set.npz
-│       └── train_set.npz
-└── model_geo.weight
-```
-
-
-#### Training
-
-The code for training can be found in `src/two_heads/training.py`.
-
-If you download our preprocessed data, please put it into the folder `data`.
-If you want to use another directory, please change the parameter `data_root_folder` in
-the configuration file `network.yml`.
-
-Notice that the default weight file is set in the configuration file with the
-parameter `pretrained_weightsfilename`.
-If you want to train a completely new model from scratch, leave this parameter empty.
-Otherwise you will fine-tune the provided model.
-
-Then you can start the training with
-
-```
-python3 src/two_heads/training.py config/network.yml
+├── config
+├── dataset_full
+│   ├── 00
+│   │   ├── depth_map
+│   │   │   ├── 000000.png
+│   │   │   ├── ...
+│   │   │   └── 004541.png
+│   │   ├── intensity_map
+│   │   │   ├── 000000.npz
+│   │   │   ├── ...
+│   │   │   └── 004541.npz
+│   │   ├── normal_map
+│   │   │   ├── 000000.png
+│   │   │   ├── ...
+│   │   │   └── 004541.png
+│   │   └── overlaps
+│   │       ├── train_set.npz
+│   │       └── test_set.npz
+│   ├── 01
+│   ├── ...
+│   └── 10
+├── loop_gt_seq00_0.3overlap_inactive.npz
+├── modules
+└── tools
+    ├── amodel1.pth.tar
+    └── feature_map_kitti00
 ```
 
-All configuration data is in the yml file. You will find path definitions and training parameters there. The
-main path settings are:
-- `experiments_path`: the folder where all the training data and results (log files, tensorboard logs, network weights)
-  will be saved. Default is `/tmp`; change this according to your needs.
-- `data_root_folder`: the dataset folder. It should contain the sequence folders of the dataset, e.g. `00`, `01`, ... For the provided preprocessed data, it should be `07`.
-
-We provide tensorboard logs in `experiment_path/testname/tblog` for visualizing training and validation details.
-
-#### Testing
-
-Once a model has been trained (and thus a `.weight` file with the network weights is available), the performance of the network
-can be evaluated. You can start the testing script in the same manner as the training:
-
-```
-python3 src/two_heads/testing.py config/network.yml
-```
-
-The configuration file should have the following additional settings:
-- `pretrained_weightsfilename`: the weight filename mentioned above
-- `testing_seqs`: sequences to test on, e.g. `00 01`. (Please comment out `training_seqs`.) The pairs on which the tests are computed are in
-  the file `ground_truth/ground_truth_overlap_yaw.npz`. If one still uses the parameter `training_seqs`, the validation
-  is done on the test sets of the sequences (`ground_truth/validation_set.npz`), which contain only a small amount of data used
-  for validation during training.
-
-Note that the provided pre-trained model and preprocessed ground truth assume that the current frame finds loop closures only among previous frames.
-
-
-## Application
-### [Overlap-based Monte Carlo Localization](https://github.com/PRBonn/overlap_localization)
-This repo contains the code for our IROS2020 paper: Learning an Overlap-based Observation Model for 3D LiDAR Localization.
-
-It uses OverlapNet to train an observation model for Monte Carlo Localization and achieves global localization with 3D LiDAR scans.
-
-
 ## License
 
 Copyright 2020, Xieyuanli Chen, Thomas Läbe, Cyrill Stachniss, Photogrammetry and Robotics Lab, University of Bonn.
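A note on the new `#### Training` step above: `tools/training.py` presumably fits an overlap regressor on the pairs stored in `overlaps/train_set.npz`. A schematic single optimization step, with placeholder model, shapes, and loss (none of it taken from the repo), might look like this:

```python
import torch
import torch.nn as nn

# Placeholder Siamese encoder and delta head; 5 input channels would match
# depth (1) + intensity (1) + normals (3) from the data layout above.
encoder = nn.Sequential(
    nn.Conv2d(5, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Sequential(nn.Linear(16, 1), nn.Sigmoid())

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-3
)
loss_fn = nn.MSELoss()

# One synthetic batch of scan pairs with ground-truth overlaps.
scan_a, scan_b = torch.randn(4, 5, 64, 900), torch.randn(4, 5, 64, 900)
gt_overlap = torch.rand(4, 1)  # stored in train_set.npz in the real pipeline

pred = head(torch.abs(encoder(scan_a) - encoder(scan_b)))
loss = loss_fn(pred, gt_overlap)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```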

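The data layout pairs per-frame `depth_map`/`normal_map` PNGs with `intensity_map` NPZ archives, matching the `use_depth`/`use_intensity`/`use_normals` switches in `config/config.yml`. A minimal loading sketch follows; the array key inside each `.npz` is an assumption, so inspect `archive.files` before relying on it.

```python
import numpy as np
from PIL import Image

def load_frame(seq_dir, frame_id, use_depth=True, use_intensity=True, use_normals=True):
    """Stack the enabled channels of one frame into a (C, H, W) float array."""
    name = f"{frame_id:06d}"
    channels = []
    if use_depth:
        depth = np.asarray(Image.open(f"{seq_dir}/depth_map/{name}.png"), dtype=np.float32)
        channels.append(depth[None])                        # (1, H, W)
    if use_intensity:
        archive = np.load(f"{seq_dir}/intensity_map/{name}.npz")
        intensity = archive[archive.files[0]].astype(np.float32)  # assumed single-array archive
        channels.append(intensity[None])                    # (1, H, W)
    if use_normals:
        normals = np.asarray(Image.open(f"{seq_dir}/normal_map/{name}.png"), dtype=np.float32)
        channels.append(np.moveaxis(normals, -1, 0))        # (3, H, W)
    return np.concatenate(channels, axis=0)

# Example: first frame of sequence 00.
# inputs = load_frame("../dataset_full/00", 0)
```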
config/config.yml

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
+dataHandler:
+  # the folder of the KITTI datasets
+  dataset_folder: "../dataset_full/"
+  # input channels
+  use_depth: True
+  use_intensity: True
+  use_normals: True
+
+
+trainHandler:
+  train_seqs: ["08", "03", "04", "05", "06", "07", "09"]
+  valid_seqs: ["02"]
+
+testHandler:
+  # the path of the pre-trained model
+  pretrained_model: "../tools/amodel1.pth.tar"
+  # the folder of the KITTI00 feature maps
+  # (dst in gen_feature_map_kitti00.py, src in testing_kitti00.py)
+  features_folder: "../tools/feature_map_kitti00"
+  # the path of the ground-truth mapping (containing gt loops for each frame)
+  gt_path: "../loop_gt_seq00_0.3overlap_inactive.npz"
+
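The new config is plain YAML, so any script can read it with PyYAML. A minimal sketch; the keys mirror the file above, but how the repo's own scripts consume them is not shown here.

```python
import yaml

# Parse config/config.yml into nested dicts.
with open("config/config.yml") as f:
    cfg = yaml.safe_load(f)

print(cfg["dataHandler"]["dataset_folder"])    # "../dataset_full/"
print(cfg["trainHandler"]["train_seqs"])       # ['08', '03', ..., '09']
print(cfg["testHandler"]["pretrained_model"])  # "../tools/amodel1.pth.tar"
```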

config/demo.yml

Lines changed: 0 additions & 57 deletions
This file was deleted.
