All code related to the ["DROW: Real-Time Deep Learning based Wheelchair Detection in 2D Range Data"](https://arxiv.org/abs/1603.02636) paper.
We already created this repo to allow for sharing links in advance and possibly having discussions in the issues. We're waiting for feedback on the paper and will publish code for running and training DROW in the near future.

Preliminary, guarantee-free, AS-IS code for running in ROS is available in the [STRANDS repositories](https://github.com/strands-project/strands_perception_people/tree/indigo-devel/wheelchair_detector).
# DROW Laser Dataset

You can obtain our full dataset here on GitHub in the releases section.

This is the laser-based detection dataset released with the paper "DROW: Real-Time Deep Learning based Wheelchair Detection in 2D Range Data", available at https://arxiv.org/abs/1603.02636 and published at ICRA'17.

PLEASE read the paper carefully before asking about the dataset, as we describe it at length in Section III.A.
## Citations

If you use this dataset in your work, please cite the following:

> Beyer, L., Hermans, A., & Leibe, B. (2017). DROW: Real-Time Deep Learning-Based Wheelchair Detection in 2-D Range Data. IEEE Robotics and Automation Letters, 2(2), 585-592.

BibTeX:
```
@article{BeyerHermans2016RAL,
  title   = {{DROW: Real-Time Deep Learning based Wheelchair Detection in 2D Range Data}},
  author  = {Beyer*, Lucas and Hermans*, Alexander and Leibe, Bastian},
  journal = {{IEEE Robotics and Automation Letters (RA-L)}},
  year    = {2016}
}
```
## License

The whole dataset is published under the MIT license, [roughly meaning](https://tldrlegal.com/license/mit-license) you can use it for whatever you want as long as you credit us.
However, we encourage you to contribute any extensions back, so that this repository stays the central place for the dataset.

One exception to these licensing terms is the `reha` subset of the dataset, which we have converted from TU Ilmenau's data.
The [original dataset](https://www.tu-ilmenau.de/de/neurob/data-sets-code/people-detection-in-2d-laser-range-data/) was released under the [CC-BY-NC-SA 3.0 Unported License](http://creativecommons.org/licenses/by-nc-sa/3.0/), and our conversion of it included herein keeps that license.
## Data Recording Setup

The exact recording setup is described in Section III.A of our paper.
In short, the data was recorded with a SICK S300 laser scanner covering 225° in 450 points, mounted at a height of 37 cm.
Recording happened in an elderly care facility. The test set is completely disjoint from the train and validation sets, as it was recorded in a different aisle of the facility.
## Data Annotation Setup

Again, the exact setup is described in the paper.
We used [this annotator](https://github.com/lucasb-eyer/laser-detection-annotator) to create the annotations.
Instead of annotating all laser scans, we annotate small batches spread throughout every sequence, as follows:
A batch consists of 100 frames, out of which we annotate every 5th frame, resulting in 20 annotated frames per batch.
Within a sequence, we only annotate every 4th batch, leading to a total of 5% of the laser scans being annotated.
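
Purely to illustrate how these numbers combine, here is a small Python sketch. It is not part of the dataset tooling, and how batches are aligned within a sequence is an assumption made only for this illustration.

```python
# Illustrative only: assumes batches are laid out back-to-back from frame 0.
def annotated_frames(n_frames, batch_size=100, frame_step=5, batch_step=4):
    """Frame indices that would carry annotations under the scheme above."""
    frames = []
    for batch_index, batch_start in enumerate(range(0, n_frames, batch_size)):
        if batch_index % batch_step != 0:
            continue  # only every 4th batch is annotated
        batch_end = min(batch_start + batch_size, n_frames)
        frames.extend(range(batch_start, batch_end, frame_step))  # every 5th frame
    return frames

print(len(annotated_frames(10_000)) / 10_000)  # -> 0.05, i.e. the 5% mentioned above
```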
## Dataset Use and Format

We highly recommend you use the `load_scan` and `load_dets` functions in `utils.py` for loading raw laser scans and detection annotations, respectively.
Please see the code's doc-comments or the DROW reference code for details on how to use them.
Please note that each scan (or frame), as well as each detection, comes with a **sequence number that is only unique within a file, but not across files**.
### Detailed format description

If you want to load the files yourself regardless, this is their format:

One recording consists of a `.csv` file which contains all raw laser-scans, and one file per type of annotation, currently `.wc` for wheelchairs and `.wa` for walking-aids, with more to come.
The `.csv` files contain one line per scan: the first value is the sequence number of that scan, followed by 450 floating-point values representing the distance at which each laser point hit something.
There is at least one "magic value" for that distance, `29.96`, which means N/A.
Note that the laser values go from left to right, i.e. the first value corresponds to the leftmost laser point from the robot's point of view.
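
As an illustration only, a minimal Python sketch for parsing such a `.csv` file could look like the following. The function names are ours, mapping the `29.96` magic value to NaN is just one option, and the exact beam-angle grid is an assumption based on the 225°/450-point setup and the left-to-right convention above; for real use, prefer `load_scan` from `utils.py`.

```python
import numpy as np

NO_RETURN = 29.96  # "magic value" meaning N/A, see above

def load_scans_csv(path):
    """Parse a raw-scan .csv: each line is 'seq,d1,...,d450' (leftmost point first)."""
    seqs, scans = [], []
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            values = line.strip().split(",")
            seqs.append(int(values[0]))
            scans.append([float(v) for v in values[1:]])
    scans = np.asarray(scans)
    scans[scans >= NO_RETURN] = np.nan  # one way to mask out non-returns
    return np.asarray(seqs), scans

def beam_angles(n_points=450, fov_deg=225.0):
    """Per-beam angles in radians, first beam leftmost (positive = left).
    That the outermost beams sit exactly at ±fov/2 is an assumption."""
    return np.radians(np.linspace(fov_deg / 2.0, -fov_deg / 2.0, n_points))
```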
The `.wa`/`.wc` files again contain one line per annotated frame and start with a sequence number, which should be used to match the detections to the scan **in the corresponding `.csv` file only**.
Then follows a JSON-encoded list of `(r,φ)` pairs, which are the detections in polar coordinates.
For each detection, `r` represents the distance from the laser scanner and `φ ∈ [-π,π]` the angle in radians, with zero pointing straight ahead of the scanner ("up"), positive values going to the left and negative ones to the right.
There's an important difference between an empty frame and an un-annotated one:
An empty frame is present in the data as `123456,[]` and means that no detection of that type (person/wheelchair/walker) is present in the frame, whereas an un-annotated frame is simply not present in the file: its sequence number is skipped.
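
In the same spirit, here is a hedged sketch for reading the annotation files and converting detections to Cartesian coordinates. The helper names and the x-forward/y-left coordinate frame are our choices for this illustration, not part of the format.

```python
import json
import math

def load_detections(path):
    """Parse a .wc/.wa file: each line is 'seq,[[r, phi], ...]'."""
    dets = {}
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            seq, payload = line.strip().split(",", 1)
            dets[int(seq)] = json.loads(payload)  # [] = annotated but empty frame
    return dets  # sequence numbers missing here were simply never annotated

def polar_to_xy(r, phi):
    """Detection to Cartesian coordinates: x points forward ("up"), y to the left."""
    return r * math.cos(phi), r * math.sin(phi)

# Usage sketch: only match detections against the .csv of the *same* recording.
# seqs, scans = load_scans_csv("some_recording.csv")  # hypothetical file names
# wheelchairs = load_detections("some_recording.wc")  # keys are sequence numbers
```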