While benchmarking single object trackers is rather straightforward, measuring the performance of multiple object trackers needs careful design as multiple correspondence constellations can arise.
<div style="text-align:center;">

<br/>

*Pictures courtesy of Bernardin, Keni, and Rainer Stiefelhagen [[1]](#References)*
</div>
In particular **py-motmetrics** supports `CLEAR-MOT`[[1,2]](#References) metrics and `ID`[[4]](#References) metrics. Both metrics attempt to find a minimum cost assignment between ground truth objects and predictions. However, while CLEAR-MOT solves the assignment problem on a local per-frame basis, `ID-MEASURE` solves the bipartite graph matching by finding the minimum cost of objects and predictions over all frames. This [blog post](https://web.archive.org/web/20190413133409/http://vision.cs.duke.edu:80/DukeMTMC/IDmeasures.html) by Ergys illustrates the differences in more detail.
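Both metric families reduce to minimum cost assignment problems. The per-frame (CLEAR-MOT style) matching step can be sketched with SciPy's Hungarian solver — the gating threshold and helper name below are illustrative, not py-motmetrics internals:

```python
# Illustrative sketch of per-frame minimum cost matching (CLEAR-MOT
# style); not the py-motmetrics implementation. Pairs whose distance
# exceeds the gate are made prohibitively expensive and filtered out.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_frame(dist, max_d=0.6):
    """Return (object, hypothesis) index pairs with distance <= max_d."""
    cost = np.where(dist <= max_d, dist, 1e9)  # gate infeasible pairs
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols) if dist[r, c] <= max_d]

# Two ground truth objects vs. two hypotheses in a single frame.
dist = np.array([[0.1, 0.9],
                 [0.8, 0.2]])
print(match_frame(dist))  # [(0, 0), (1, 1)]
```

`ID-MEASURE` instead solves one such assignment over trajectories spanning all frames, which is why the two families can disagree on the same tracking result.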
### Features at a glance
- *Variety of metrics* <br/>
Provides MOTA, MOTP, track quality measures, global ID measures and more. The results are [comparable](#MOTChallengeCompatibility) with the popular [MOTChallenge][MOTChallenge] benchmarks.
- *Distance agnostic* <br/>
Supports Euclidean, Intersection over Union and other distance measures.
- *Complete event history* <br/>
Tracks all relevant per-frame events such as correspondences, misses, false alarms and switches.
- *Flexible solver backend* <br/>
Support for switching minimum cost assignment solvers. Supports `scipy`, `ortools`, `munkres` out of the box. Auto-tunes solver selection based on [availability and problem size](#SolverBackends).
- *Easy to extend* <br/>
Events and summaries use [pandas][pandas] for data structures and analysis. New metrics can reuse values already computed by the metrics they depend on.
`b` is now tracked by hypothesis `3`, leading to a track switch. Note that although a pairing `(a, 3)` with cost less than 0.6 is possible, the algorithm prefers to continue track assignments from past frames, which is a property of MOT metrics.

#### Computing metrics
Once the accumulator has been populated you can compute and display metrics. Continuing the example from above
Computing metrics for multiple accumulators or accumulator views is also possible:

```python
summary = mh.compute_many(
    [acc, acc.events.loc[0:1]],
    metrics=['num_frames', 'mota', 'motp'],
    names=['full', 'part'])
print(summary)

"""
      num_frames  mota      motp
full           3   0.5  0.340000
part           2   0.5  0.166667
"""
```
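The view `acc.events.loc[0:1]` works because the event log is a pandas DataFrame indexed by frame, so `.loc[0:1]` slices frames 0 through 1 inclusive. A toy illustration — the index and column names here are simplified assumptions, not the exact event schema:

```python
# Toy illustration of slicing an event log by frame with .loc: with a
# (FrameId, EventId) MultiIndex, .loc[0:1] keeps frames 0 and 1
# inclusive. Index/column names are simplified, not the exact schema.
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [(0, 0), (1, 0), (2, 0)], names=['FrameId', 'EventId'])
events = pd.DataFrame({'Type': ['MATCH', 'SWITCH', 'MISS']}, index=idx)

part = events.loc[0:1]  # frames 0 and 1 only
print(part['Type'].tolist())  # ['MATCH', 'SWITCH']
```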
Finally, you may want to reformat column names and how column values are displayed.

```python
strsummary = mm.io.render_summary(
    summary,
    formatters={'mota' : '{:.2%}'.format},
    namemap={'mota': 'MOTA', 'motp' : 'MOTP'}
)
print(strsummary)

"""
      num_frames   MOTA      MOTP
full           3 50.00%  0.340000
part           2 50.00%  0.166667
"""
```
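A formatter entry is just a callable applied to each value in its column; `'{:.2%}'.format` is the bound `str.format` method, rendering a ratio as a percentage:

```python
# A formatter is just a callable applied to each value in its column;
# '{:.2%}'.format renders a ratio as a percentage with two decimals.
fmt = '{:.2%}'.format
print(fmt(0.5))       # 50.00%
print(fmt(0.166667))  # 16.67%
```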
For MOTChallenge **py-motmetrics** provides predefined metric selectors, formatters and metric names, so that the result resembles what is provided via their Matlab `devkit`.
```python
summary = mh.compute_many(
    [acc, acc.events.loc[0:1]],
    metrics=mm.metrics.motchallenge_metrics,
    names=['full', 'part'])

strsummary = mm.io.render_summary(
    summary,
    formatters=mh.formatters,
    namemap=mm.io.motchallenge_metric_names
)
print(strsummary)
```
In order to generate an overall summary that computes the metrics jointly over all accumulators, pass `generate_overall=True`:
```python
summary = mh.compute_many(
    [acc, acc.events.loc[0:1]],
    metrics=mm.metrics.motchallenge_metrics,
    names=['full', 'part'],
    generate_overall=True
)

strsummary = mm.io.render_summary(
    summary,
    formatters=mh.formatters,
    namemap=mm.io.motchallenge_metric_names
)
print(strsummary)
```
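The `OVERALL` row produced this way is computed jointly over all event logs, which is not the same as averaging the per-sequence rows. A toy calculation with made-up numbers (hypothetical, not from this example) shows why for MOTA:

```python
# MOTA = 1 - errors / gt_objects, so sequences must be pooled before
# dividing; averaging per-sequence MOTA weights sequences equally
# regardless of size. Numbers below are made up for illustration.
seqs = [(2, 10), (4, 40)]  # (errors, ground-truth objects) per sequence

avg_mota = sum(1 - e / g for e, g in seqs) / len(seqs)
joint_mota = 1 - sum(e for e, _ in seqs) / sum(g for _, g in seqs)

print(round(avg_mota, 2))    # 0.85
print(round(joint_mota, 2))  # 0.88
```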
```python
# Object related points
o = np.array([
    ...
])

# Hypothesis related points
h = np.array([
    [0., 0],
    [1., 1],
])

C = mm.distances.norm2squared_matrix(o, h, max_d2=5.)
```
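A pure-NumPy sketch of what this computes — pairwise squared Euclidean distances, with pairs beyond the `max_d2` gate marked `NaN` (i.e. no pairing possible). This mirrors, but is not, the library implementation:

```python
# Pairwise squared Euclidean distances with gating: entries above
# max_d2 become NaN, meaning that object/hypothesis pair cannot match.
# A sketch mirroring mm.distances.norm2squared_matrix, not its code.
import numpy as np

def norm2squared(objs, hyps, max_d2=float('inf')):
    d = ((objs[:, None, :] - hyps[None, :, :]) ** 2).sum(axis=2).astype(float)
    d[d > max_d2] = np.nan
    return d

o = np.array([[1., 2.], [2., 2.]])
h = np.array([[0., 0.], [1., 1.]])
print(norm2squared(o, h, max_d2=5.))
```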
##### Intersection over union norm for 2D rectangles
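For rectangles the distance is typically taken as `1 - IoU`, so identical boxes cost 0 and disjoint boxes cost 1. A minimal sketch with `(x, y, width, height)` boxes — an illustration of the idea, not the library's `iou_matrix` code:

```python
# Distance between two axis-aligned rectangles as 1 - IoU; boxes are
# (x, y, width, height). Illustration only, not mm.distances.iou_matrix.
def iou_distance(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return 1.0 - inter / union

print(iou_distance((0, 0, 2, 2), (0, 0, 2, 2)))  # 0.0 (identical boxes)
print(iou_distance((0, 0, 2, 2), (1, 0, 2, 2)))  # ~0.667 (IoU = 1/3)
```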
For large datasets solving the minimum cost assignment becomes the dominant part of the runtime. **py-motmetrics** therefore supports these solvers out of the box

By default **py-motmetrics** will try to find a LAP solver in the order of the list above. In order to temporarily replace the default solver use
```python
costs = ...
mysolver = lambda x: ...  # solver code that returns pairings

with lap.set_default_solver(mysolver):
    ...
```
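A sketch of such a custom solver backed by SciPy; the callable contract (cost matrix in, row/column pairings out) is assumed here from the snippet above, so check `motmetrics.lap` in the source for the authoritative signature:

```python
# Hypothetical custom solver: takes a cost matrix, returns matched row
# and column indices. The exact contract expected by set_default_solver
# is assumed from the snippet above; consult motmetrics.lap to confirm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def mysolver(costs):
    rows, cols = linear_sum_assignment(np.asarray(costs))
    return rows, cols

r, c = mysolver([[4, 1], [2, 8]])
print(r.tolist(), c.tolist())  # [0, 1] [1, 0]
```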
<a name="References"></a>
### References
1. Bernardin, Keni, and Rainer Stiefelhagen. "Evaluating multiple object tracking performance: the CLEAR MOT metrics." EURASIP Journal on Image and Video Processing 2008.1 (2008): 1-10.
2. Milan, Anton, et al. "MOT16: A benchmark for multi-object tracking." arXiv preprint arXiv:1603.00831 (2016).
3. Li, Yuan, Chang Huang, and Ram Nevatia. "Learning to associate: HybridBoosted multi-target tracker for crowded scene." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
4. Ristani, Ergys, et al. "Performance measures and a data set for multi-target, multi-camera tracking." ECCV 2016 Workshop on Benchmarking Multi-Target Tracking.
### Docker

#### Update ground truth and test data:
The `/data/train` directory should contain the MOT 2D 2015 ground truth files.