The **py-motmetrics** library provides a Python implementation of metrics for benchmarking multiple object trackers (MOT).
In particular **py-motmetrics** supports `CLEAR-MOT`[[1,2]](#References) metrics and `ID`[[4]](#References) metrics. Both metrics attempt to find a minimum cost assignment between ground truth objects and predictions. However, while CLEAR-MOT solves the assignment problem on a local per-frame basis, `ID-MEASURE` solves the bipartite graph matching by finding the minimum cost of objects and predictions over all frames. This [blog-post](https://web.archive.org/web/20190413133409/http://vision.cs.duke.edu:80/DukeMTMC/IDmeasures.html) by Ergys illustrates the differences in more detail.
## Features at a glance
*Variety of metrics* <br/>
Provides MOTA, MOTP, track quality measures, global ID measures and more. The results are [comparable](#MOTChallengeCompatibility) with the popular [MOTChallenge][MOTChallenge] benchmarks[(*1)](#asterixcompare).
*Distance agnostic* <br/>
Supports Euclidean, Intersection over Union and other distance measures.
*Complete event history* <br/>
Tracks all relevant per-frame events such as correspondences, misses, false alarms and switches.

*Flexible solver backend* <br/>
Support for switching minimum assignment cost solvers. Supports `scipy`, `ortools` and `munkres` out of the box.

*Easy to extend* <br/>
Events and summaries utilize [pandas][pandas] for data structures and analysis. New metrics can reuse values already computed by the metrics they depend on.
<a name="Metrics"></a>
## Metrics
**py-motmetrics** implements the following metrics. The metrics have been aligned with what is reported by [MOTChallenge][MOTChallenge] benchmarks.
<a name="MOTChallengeCompatibility"></a>
## MOTChallenge compatibility
**py-motmetrics** produces results compatible with popular [MOTChallenge][MOTChallenge] benchmarks[(*1)](#asterixcompare). Below are two results taken from MOTChallenge [Matlab devkit][devkit] corresponding to the results of the CEM tracker on the training set of the 2015 MOT 2DMark.
<a name="asterixcompare"></a>(*1) Besides naming conventions, the only obvious differences are
- Metric `FAR` is missing. This metric is given implicitly and can be recovered by `FalsePos / Frames * 100`.
- Metric `MOTP` seems to be off. To convert compute `(1 - MOTP) * 100`. [MOTChallenge][MOTChallenge] benchmarks compute `MOTP` as percentage, while **py-motmetrics** sticks to the original definition of average distance over number of assigned objects [[1]](#References).
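Both conversions are simple arithmetic; as an illustration (the numbers below are made up, not taken from a real benchmark run):

```python
# Recover MOTChallenge-style FAR from py-motmetrics counts.
num_false_pos = 1044   # FalsePos
num_frames = 600       # Frames
far = num_false_pos / num_frames * 100

# Convert py-motmetrics MOTP (average distance) to MOTChallenge MOTP (percent).
motp_avg_dist = 0.278
motp_percent = (1 - motp_avg_dist) * 100

print(far, motp_percent)
```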
For MOT16/17, you can run

```
python -m motmetrics.apps.evaluateTracking --help
```
## Installation
To install latest development version of **py-motmetrics** (usually a bit more recent than PyPi below)
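Assuming the project's usual GitHub location (cheind/py-motmetrics), the development install would look like:

```shell
pip install git+https://github.com/cheind/py-motmetrics.git
```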
Python 3.5/3.6/3.9 and numpy, pandas and scipy are required. If no binary packages are available for your platform and building from source fails, you might want to try a distribution like Conda (see below) to install the dependencies.
Alternatively, for development, clone or fork this repository and install it in editable mode.
```
pip install -e <path/to/setup.py>
```
### Install via Conda
In case you are using Conda, a simple way to run **py-motmetrics** is to create a virtual environment with all the necessary dependencies
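A sketch of such a setup; the environment name and Python version here are illustrative, not prescribed by the project:

```shell
conda create -n motmetrics-env python=3.9 numpy pandas scipy
conda activate motmetrics-env
pip install motmetrics
```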
```python
import motmetrics as mm
import numpy as np

# Create an accumulator that will be updated during each frame
acc = mm.MOTAccumulator(auto_id=True)

# Call update once per frame. For now, assume distances between
# frame objects / hypotheses are given.
acc.update(
    [1, 2],                 # Ground truth objects in this frame
    [1, 2, 3],              # Detector hypotheses in this frame
    [
        [0.1, np.nan, 0.3], # Distances from object 1 to hypotheses 1, 2, 3
        [0.5, 0.2, 0.3]     # Distances from object 2 to hypotheses 1, 2, 3
    ]
)
```
The code above updates an event accumulator with data from a single frame. Here we assume that pairwise object / hypothesis distances have already been computed. Note the `np.nan` inside the distance matrix: it signals that object `1` cannot be paired with hypothesis `2`. To inspect the current event history, simply print the events associated with the accumulator.
```python
print(acc.events) # a pandas DataFrame containing all events

"""
                Type OId HId    D
FrameId Event
0       0        RAW   1   1  0.1
        1        RAW   1   2  NaN
        2        RAW   1   3  0.3
        3        RAW   2   1  0.5
        4        RAW   2   2  0.2
        5        RAW   2   3  0.3
        6      MATCH   1   1  0.1
        7      MATCH   2   2  0.2
        8         FP NaN   3  NaN
"""
```
```python
print(acc.mot_events) # a pandas DataFrame containing MOT only events

"""
                Type OId HId    D
FrameId Event
0       6      MATCH   1   1  0.1
        7      MATCH   2   2  0.2
        8         FP NaN   3  NaN
"""
```
Meaning object `1` was matched to hypothesis `1` with distance 0.1. Similarly, object `2` was matched to hypothesis `2` with distance 0.2. Hypothesis `3` could not be matched to any remaining object and generated a false positive (FP). Possible assignments are computed by minimizing the total assignment distance (Kuhn-Munkres algorithm).
Object `2` is now tracked by hypothesis `3`, leading to a track switch. Note that although a pairing `(1, 3)` with cost less than 0.6 is possible, the algorithm prefers to continue track assignments from past frames, which is a property of MOT metrics.
### Computing metrics
Once the accumulator has been populated you can compute and display metrics. Continuing the example from above
Up until this point, we assumed the pairwise object / hypothesis distances to be known. Usually this is not the case. You are mostly given either rectangles or points (centroids) of related objects. To compute a distance matrix from them you can use the `motmetrics.distances` module as shown below.
#### Euclidean norm squared on points
```python
import numpy as np
import motmetrics as mm

# Object related points
o = np.array([
    [1., 2.],
    [2., 2.],
    [3., 2.],
])

# Hypothesis related points
h = np.array([
    [0., 0.],
    [1., 1.],
])

# Squared Euclidean distances; pairs farther apart than max_d2 become NaN.
C = mm.distances.norm2squared_matrix(o, h, max_d2=5.)

"""
[[ 5.  1.]
 [nan  2.]
 [nan  5.]]
"""
```
#### Intersection over union norm for 2D rectangles
For large datasets, solving the minimum cost assignment becomes the dominant runtime part. **py-motmetrics** therefore supports a number of solvers out of the box, including `scipy`, `ortools` and `munkres`.
```python
with lap.set_default_solver(mysolver):
    ...
```
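The `mysolver` referenced above could be any callable that maps a cost matrix to row/column index pairings. A purely illustrative (greedy, hence suboptimal) sketch:

```python
import numpy as np

def mysolver(costs):
    # Greedy assignment: repeatedly take the cheapest remaining
    # finite (row, col) pair. Illustrative only; not minimum-cost.
    costs = np.asarray(costs, dtype=float)
    rids, cids = [], []
    used_rows, used_cols = set(), set()
    for flat in np.argsort(costs, axis=None):  # NaNs sort last
        r, c = np.unravel_index(flat, costs.shape)
        if r not in used_rows and c not in used_cols and np.isfinite(costs[r, c]):
            rids.append(r)
            cids.append(c)
            used_rows.add(r)
            used_cols.add(c)
    return np.array(rids), np.array(cids)

rids, cids = mysolver([[1.0, 2.0], [2.0, 4.0]])
print(rids, cids)
```

The return convention (two index arrays) mirrors `scipy.optimize.linear_sum_assignment`; whether **py-motmetrics** expects exactly this shape should be checked against its `lap` module.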
## Running tests
**py-motmetrics** uses the pytest framework. To run the tests, simply `cd` into the source directory and run `pytest`.
<a name="References"></a>
## References

1. Bernardin, Keni, and Rainer Stiefelhagen. "Evaluating multiple object tracking performance: the CLEAR MOT metrics." EURASIP Journal on Image and Video Processing 2008.1 (2008): 1-10.
2. Milan, Anton, et al. "MOT16: A benchmark for multi-object tracking." arXiv preprint arXiv:1603.00831 (2016).
3. Li, Yuan, Chang Huang, and Ram Nevatia. "Learning to associate: HybridBoosted multi-target tracker for crowded scene." Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009.
4. Ristani, E., F. Solera, R. S. Zou, R. Cucchiara and C. Tomasi. "Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking." ECCV 2016 Workshop on Benchmarking Multi-Target Tracking.
## Docker
### Update ground truth and test data:
The `/data/train` directory should contain the MOT 2D 2015 ground truth files.