Commit 447c622

Merge branch 'release/1.2'
Author: Heindl Christoph
2 parents 125304f + 18b769d

34 files changed: +3334 −792 lines

.travis.yml

Lines changed: 18 additions & 11 deletions
@@ -1,14 +1,15 @@
 language: python
+
 env:
-- PYTHON=3.5 PANDAS>=0.19.2
-- PYTHON=3.6 PANDAS>=0.19.2
+- PYTHON=2.7
+- PYTHON=3.5
+- PYTHON=3.6
+- PYTHON=3.7
+
 install:
 # Install conda
-- if [[ "$PYTHON" == "2.7" ]]; then
-    wget https://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh -O miniconda.sh;
-  else
-    wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh;
-  fi
+- wget https://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh -O miniconda.sh # Python 2.7
+- wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
 - bash miniconda.sh -b -p $HOME/miniconda
 - export PATH="$HOME/miniconda/bin:$PATH"
 - hash -r
@@ -18,12 +19,18 @@ install:
 - conda info -a
 
 # Install deps
-- deps='pip numpy scipy cython'
-- conda create -q -n pyenv python=$PYTHON pandas=$PANDAS $deps
+- conda create -q -n pyenv python=$PYTHON pip
 - source activate pyenv
 - python -m pip install -U pip
+- pip install -r requirements.txt
 - pip install pytest
+- pip install pytest-benchmark
 - pip install .
-- pip install ortools
+# Install solvers for tests.
+- pip install lap scipy ortools
+# lapsolver does not provide a version for python 2
+- pip install "lapsolver; python_version >= '3'"
+# munkres no longer supports python 2
+- pip install "munkres; python_version >= '3'"
 
-script: pytest
+script: pytest
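The added pip commands rely on PEP 508 environment markers (the quoted `"lapsolver; python_version >= '3'"` form) so that Python-3-only solvers are simply skipped on the 2.7 build. As a quick sanity check of what a given build ends up with, a standard-library-only probe might look like this (a sketch for a Python 3 environment, not part of this commit):

```python
# Sketch: report which of the solver packages installed above are importable.
# The package names mirror the pip commands in this .travis.yml.
import importlib.util

for name in ['lap', 'lapsolver', 'munkres', 'scipy', 'ortools']:
    found = importlib.util.find_spec(name) is not None
    print('{:10s} {}'.format(name, 'available' if found else 'missing'))
```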

Dockerfile

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-FROM ubuntu:latest
+FROM ubuntu:latest
 
 MAINTAINER Avgerinos Christos <[email protected]>
 
LICENSE

Lines changed: 3 additions & 1 deletion
@@ -1,6 +1,8 @@
 MIT License
 
-Copyright (c) 2017 Christoph Heindl
+Copyright (c) 2017-2020 Christoph Heindl
+Copyright (c) 2018 Toka
+Copyright (c) 2019-2020 Jack Valmadre
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

Readme.md

Lines changed: 48 additions & 39 deletions
@@ -10,22 +10,23 @@ While benchmarking single object trackers is rather straightforward, measuring t
 
 <div style="text-align:center;">
 
-![](motmetrics/etc/mot.png)<br/>
+![](./motmetrics/etc/mot.png)<br/>
+
 *Pictures courtesy of Bernardin, Keni, and Rainer Stiefelhagen [[1]](#References)*
 </div>
 
-In particular **py-motmetrics** supports `CLEAR-MOT`[[1,2]](#References) metrics and `ID`[[4]](#References) metrics. Both metrics attempt to find a minimum cost assignment between ground truth objects and predictions. However, while CLEAR-MOT solves the assignment problem on a local per-frame basis, `ID-MEASURE` solves the bipartite graph matching by finding the minimum cost of objects and predictions over all frames. This [blog-post](http://vision.cs.duke.edu/DukeMTMC/IDmeasures.html) by Ergys illustrates the differences in more detail.
+In particular **py-motmetrics** supports `CLEAR-MOT`[[1,2]](#References) metrics and `ID`[[4]](#References) metrics. Both metrics attempt to find a minimum cost assignment between ground truth objects and predictions. However, while CLEAR-MOT solves the assignment problem on a local per-frame basis, `ID-MEASURE` solves the bipartite graph matching by finding the minimum cost of objects and predictions over all frames. This [blog-post](https://web.archive.org/web/20190413133409/http://vision.cs.duke.edu:80/DukeMTMC/IDmeasures.html) by Ergys illustrates the differences in more detail.
 
 ### Features at a glance
 - *Variety of metrics* <br/>
 Provides MOTA, MOTP, track quality measures, global ID measures and more. The results are [comparable](#MOTChallengeCompatibility) with the popular [MOTChallenge][MOTChallenge] benchmarks.
 - *Distance agnostic* <br/>
 Supports Euclidean, Intersection over Union and other distances measures.
-- *Complete event history* <br/>
+- *Complete event history* <br/>
 Tracks all relevant per-frame events suchs as correspondences, misses, false alarms and switches.
-- *Flexible solver backend* <br/>
+- *Flexible solver backend* <br/>
 Support for switching minimum assignment cost solvers. Supports `scipy`, `ortools`, `munkres` out of the box. Auto-tunes solver selection based on [availability and problem size](#SolverBackends).
-- *Easy to extend* <br/>
+- *Easy to extend* <br/>
 Events and summaries are utilizing [pandas][pandas] for data structures and analysis. New metrics can reuse already computed values from depending metrics.
 
 <a name="Metrics"></a>
@@ -43,8 +44,6 @@ print(mh.list_metrics_markdown())
 Name|Description
 :---|:---
 num_frames|Total number of frames.
-obj_frequencies|Total number of occurrences of individual objects over all frames.
-pred_frequencies|Total number of occurrences of individual predictions over all frames.
 num_matches|Total number matches.
 num_switches|Total number of track switches.
 num_false_positives|Total number of false positives (false-alarms).
@@ -53,7 +52,6 @@ num_detections|Total number of detected objects including matches and switches.
 num_objects|Total number of unique object appearances over all frames.
 num_predictions|Total number of unique prediction appearances over all frames.
 num_unique_objects|Total number of unique object ids encountered.
-track_ratios|Ratio of assigned to total appearance count per unique object id.
 mostly_tracked|Number of objects tracked for at least 80 percent of lifespan.
 partially_tracked|Number of objects tracked between 20 and 80 percent of lifespan.
 mostly_lost|Number of objects tracked less than 20 percent of lifespan.
@@ -62,13 +60,18 @@ motp|Multiple object tracker precision.
 mota|Multiple object tracker accuracy.
 precision|Number of detected objects over sum of detected and false positives.
 recall|Number of detections over number of objects.
-id_global_assignment|ID measures: Global min-cost assignment for ID measures.
 idfp|ID measures: Number of false positive matches after global min-cost matching.
 idfn|ID measures: Number of false negatives matches after global min-cost matching.
 idtp|ID measures: Number of true positives matches after global min-cost matching.
 idp|ID measures: global min-cost precision.
 idr|ID measures: global min-cost recall.
 idf1|ID measures: global min-cost F1 score.
+obj_frequencies|`pd.Series` Total number of occurrences of individual objects over all frames.
+pred_frequencies|`pd.Series` Total number of occurrences of individual predictions over all frames.
+track_ratios|`pd.Series` Ratio of assigned to total appearance count per unique object id.
+id_global_assignment| `dict` ID measures: Global min-cost assignment for ID measures.
+
+
 
 <a name="MOTChallengeCompatibility"></a>
 ### MOTChallenge compatibility
@@ -78,11 +81,11 @@ idf1|ID measures: global min-cost F1 score.
 ```
 
 TUD-Campus
-IDF1 IDP IDR| Rcll Prcn FAR| GT MT PT ML| FP FN IDs FM| MOTA MOTP MOTAL
+IDF1 IDP IDR| Rcll Prcn FAR| GT MT PT ML| FP FN IDs FM| MOTA MOTP MOTAL
 55.8 73.0 45.1| 58.2 94.1 0.18| 8 1 6 1| 13 150 7 7| 52.6 72.3 54.3
 
 TUD-Stadtmitte
-IDF1 IDP IDR| Rcll Prcn FAR| GT MT PT ML| FP FN IDs FM| MOTA MOTP MOTAL
+IDF1 IDP IDR| Rcll Prcn FAR| GT MT PT ML| FP FN IDs FM| MOTA MOTP MOTAL
 64.5 82.0 53.1| 60.9 94.0 0.25| 10 5 4 1| 45 452 7 6| 56.4 65.4 56.9
 
 ```
@@ -103,6 +106,10 @@ You can compare tracker results to ground truth in MOTChallenge format by
 ```
 python -m motmetrics.apps.eval_motchallenge --help
 ```
+For MOT16/17, you can run
+```
+python -m motmetrics.apps.evaluateTracking --help
+```
 
 ### Installation
 
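Both apps are thin wrappers around the library. A rough equivalent in library code, with made-up file paths and assuming data in MOT15-2D layout, might be (a sketch of the kind of calls `eval_motchallenge` performs, not an excerpt from it):

```python
# Sketch with hypothetical paths: load ground truth and tracker output in
# MOTChallenge format, then build an accumulator for metric computation.
import motmetrics as mm

gt = mm.io.loadtxt('data/train/TUD-Campus/gt/gt.txt', fmt='mot15-2D')
ts = mm.io.loadtxt('data/test/TUD-Campus.txt', fmt='mot15-2D')

# Pair boxes per frame by intersection over union; 0.5 is a common cutoff.
acc = mm.utils.compare_to_groundtruth(gt, ts, 'iou', distth=0.5)
```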
@@ -212,7 +219,7 @@ frameid = acc.update(
     ['a', 'b'],
     [1],
     [
-        [0.2],
+        [0.2],
         [0.4]
     ]
 )
@@ -247,7 +254,7 @@ Event
 """
 ```
 
-`b` is now tracked by hypothesis `3` leading to a track switch. Note, although a pairing `(a, 3)` with cost less than 0.6 is possible, the algorithm prefers prefers to continue track assignments from past frames which is a property of MOT metrics.
+`b` is now tracked by hypothesis `3` leading to a track switch. Note, although a pairing `(a, 3)` with cost less than 0.6 is possible, the algorithm prefers prefers to continue track assignments from past frames which is a property of MOT metrics.
 
 #### Computing metrics
 Once the accumulator has been populated you can compute and display metrics. Continuing the example from above
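The hunks below only reindent these README snippets. For orientation, a self-contained version of the example they build on, from accumulator to computed summary, might look like this (toy values, a sketch rather than an excerpt from the commit):

```python
import numpy as np
import motmetrics as mm

acc = mm.MOTAccumulator(auto_id=True)

# Frame 0: objects 'a' and 'b' vs. hypotheses 1 and 2; entries are distances,
# np.nan forbids a pairing.
acc.update(['a', 'b'], [1, 2], [[0.1, np.nan], [np.nan, 0.3]])

# Frame 1: hypothesis 2 is gone; both objects compete for hypothesis 1.
acc.update(['a', 'b'], [1], [[0.2], [0.4]])

mh = mm.metrics.create()
summary = mh.compute(acc, metrics=['num_frames', 'num_matches', 'mota'], name='toy')
print(summary)
```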
@@ -267,9 +274,9 @@ Computing metrics for multiple accumulators or accumulator views is also possibl
 
 ```python
 summary = mh.compute_many(
-    [acc, acc.events.loc[0:1]],
-    metrics=['num_frames', 'mota', 'motp'],
-    names=['full', 'part'])
+    [acc, acc.events.loc[0:1]],
+    metrics=['num_frames', 'mota', 'motp'],
+    names=['full', 'part'])
 print(summary)
 
 """
@@ -279,34 +286,34 @@ part 2 0.5 0.166667
 """
 ```
 
-Finally, you may want to reformat column names and how column values are displayed.
+Finally, you may want to reformat column names and how column values are displayed.
 
 ```python
 strsummary = mm.io.render_summary(
-    summary,
-    formatters={'mota' : '{:.2%}'.format},
+    summary,
+    formatters={'mota' : '{:.2%}'.format},
     namemap={'mota': 'MOTA', 'motp' : 'MOTP'}
 )
 print(strsummary)
 
 """
 num_frames MOTA MOTP
 full 3 50.00% 0.340000
-part 2 50.00% 0.166667
+part 2 50.00% 0.166667
 """
 ```
 
 For MOTChallenge **py-motmetrics** provides predefined metric selectors, formatters and metric names, so that the result looks alike what is provided via their Matlab `devkit`.
 
 ```python
 summary = mh.compute_many(
-    [acc, acc.events.loc[0:1]],
-    metrics=mm.metrics.motchallenge_metrics,
+    [acc, acc.events.loc[0:1]],
+    metrics=mm.metrics.motchallenge_metrics,
     names=['full', 'part'])
 
 strsummary = mm.io.render_summary(
-    summary,
-    formatters=mh.formatters,
+    summary,
+    formatters=mh.formatters,
     namemap=mm.io.motchallenge_metric_names
 )
 print(strsummary)
@@ -322,15 +329,15 @@ In order to generate an overall summary that computes the metrics jointly over a
 
 ```python
 summary = mh.compute_many(
-    [acc, acc.events.loc[0:1]],
-    metrics=mm.metrics.motchallenge_metrics,
+    [acc, acc.events.loc[0:1]],
+    metrics=mm.metrics.motchallenge_metrics,
     names=['full', 'part'],
     generate_overall=True
 )
 
 strsummary = mm.io.render_summary(
-    summary,
-    formatters=mh.formatters,
+    summary,
+    formatters=mh.formatters,
     namemap=mm.io.motchallenge_metric_names
 )
 print(strsummary)
@@ -359,7 +366,7 @@ o = np.array([
 # Hypothesis related points
 h = np.array([
     [0., 0],
-    [1., 1],
+    [1., 1],
 ])
 
 C = mm.distances.norm2squared_matrix(o, h, max_d2=5.)
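For reference, the reindented snippet runs standalone as below. The object points `o` are made up here, since this hunk only shows the hypothesis array; entries whose squared distance exceeds `max_d2` come back as `nan`, ruling the pairing out:

```python
import numpy as np
import motmetrics as mm

o = np.array([[1., 2.], [2., 2.], [3., 2.]])  # object points (illustrative)
h = np.array([[0., 0.], [1., 1.]])            # hypothesis points from the hunk

# 3x2 matrix of squared Euclidean distances; pairs farther apart
# than max_d2 show up as nan.
C = mm.distances.norm2squared_matrix(o, h, max_d2=5.)
print(C)
```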
@@ -374,7 +381,7 @@ C = mm.distances.norm2squared_matrix(o, h, max_d2=5.)
 ##### Intersection over union norm for 2D rectangles
 ```python
 a = np.array([
-    [0, 0, 20, 100], # Format X, Y, Width, Height
+    [0, 0, 1, 2], # Format X, Y, Width, Height
     [0, 0, 0.8, 1.5],
 ])
 
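The companion call, visible as context in the next hunk, turns rectangles into a cost matrix of `1 - IoU` values. A standalone sketch, with the second rectangle set `b` made up for illustration:

```python
import numpy as np
import motmetrics as mm

a = np.array([
    [0, 0, 1, 2],      # Format X, Y, Width, Height
    [0, 0, 0.8, 1.5],
])
b = np.array([
    [0, 0, 1, 2],      # identical box -> distance 0
    [0, 0, 1, 1],
    [0.1, 0.2, 2, 2],  # illustrative values
])

# Entries are 1 - IoU; pairs whose distance exceeds max_iou become nan.
print(mm.distances.iou_matrix(a, b, max_iou=0.5))
```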

@@ -396,13 +403,13 @@ mm.distances.iou_matrix(a, b, max_iou=0.5)
 For large datasets solving the minimum cost assignment becomes the dominant runtime part. **py-motmetrics** therefore supports these solvers out of the box
 - `lapsolver` - https://github.com/cheind/py-lapsolver
 - `lapjv` - https://github.com/gatagat/lap
-- `scipy` - https://github.com/scipy/scipy/tree/master/scipy
+- `scipy` - https://github.com/scipy/scipy/tree/master/scipy
 - `ortools` - https://github.com/google/or-tools
 - `munkres` - http://software.clapper.org/munkres/
 
 A comparison for different sized matrices is shown below (taken from [here](https://github.com/cheind/py-lapsolver#benchmarks))
 
-Please note that the x-axis is scaled logarithmically. Missing bars indicate excessive runtime or errors in returned result.
+Please note that the x-axis is scaled logarithmically. Missing bars indicate excessive runtime or errors in returned result.
 ![](https://github.com/cheind/py-lapsolver/raw/master/lapsolver/etc/benchmark-dtype-numpy.float32.png)
 
 By default **py-motmetrics** will try to find a LAP solver in the order of the list above. In order to temporarly replace the default solver use
@@ -411,7 +418,7 @@ By default **py-motmetrics** will try to find a LAP solver in the order of the l
 costs = ...
 mysolver = lambda x: ... # solver code that returns pairings
 
-with lap.set_default_solver(mysolver):
+with lap.set_default_solver(mysolver):
     ...
 ```
 
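Concretely, assuming a solver is any callable that maps a cost matrix to row/column index arrays (the convention scipy's `linear_sum_assignment` follows), a custom solver could be plugged in as below. This is a sketch only; a production solver would also have to tolerate `nan` entries marking forbidden pairings:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

from motmetrics import lap

def mysolver(costs):
    # Return (row_indices, col_indices) pairings for a finite cost matrix.
    return linear_sum_assignment(costs)

costs = np.array([
    [6., 9., 1.],
    [10., 3., 2.],
    [8., 7., 4.],
])

with lap.set_default_solver(mysolver):
    rids, cids = lap.linear_sum_assignment(costs)
print(rids, cids)
```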
@@ -420,20 +427,20 @@ with lap.set_default_solver(mysolver):
 
 <a name="References"></a>
 ### References
-1. Bernardin, Keni, and Rainer Stiefelhagen. "Evaluating multiple object tracking performance: the CLEAR MOT metrics."
+1. Bernardin, Keni, and Rainer Stiefelhagen. "Evaluating multiple object tracking performance: the CLEAR MOT metrics."
 EURASIP Journal on Image and Video Processing 2008.1 (2008): 1-10.
 2. Milan, Anton, et al. "Mot16: A benchmark for multi-object tracking." arXiv preprint arXiv:1603.00831 (2016).
-3. Li, Yuan, Chang Huang, and Ram Nevatia. "Learning to associate: Hybridboosted multi-target tracker for crowded scene."
+3. Li, Yuan, Chang Huang, and Ram Nevatia. "Learning to associate: Hybridboosted multi-target tracker for crowded scene."
 Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009.
 4. Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking. E. Ristani, F. Solera, R. S. Zou, R. Cucchiara and C. Tomasi. ECCV 2016 Workshop on Benchmarking Multi-Target Tracking.
 
-### Docker
+### Docker
 
 #### Update ground truth and test data:
-/data/train directory should contain MOT 2D 2015 Ground Truth files.
+/data/train directory should contain MOT 2D 2015 Ground Truth files.
 /data/test directory should contain your results.
 
-You can check usage and directory listing at
+You can check usage and directory listing at
 https://github.com/cheind/py-motmetrics/blob/master/motmetrics/apps/eval_motchallenge.py
 
 #### Build Image
@@ -449,7 +456,9 @@ docker run desired-image-name
 ```
 MIT License
 
-Copyright (c) 2017 Christoph Heindl
+Copyright (c) 2017-2020 Christoph Heindl
+Copyright (c) 2018 Toka
+Copyright (c) 2019-2020 Jack Valmadre
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

appveyor.yml

Lines changed: 20 additions & 12 deletions
@@ -7,13 +7,13 @@ environment:
 matrix:
 - PYTHON: "C:\\Miniconda36-x64"
   PYTHON_VERSION: "3.6"
-  PYTHON_ARCH: "64"
+  PYTHON_ARCH: "64"
 - PYTHON: "C:\\Miniconda36"
   PYTHON_VERSION: "3.6"
-  PYTHON_ARCH: "32"
+  PYTHON_ARCH: "32"
 - PYTHON: "C:\\Miniconda35-x64"
   PYTHON_VERSION: "3.5"
-  PYTHON_ARCH: "64"
+  PYTHON_ARCH: "64"
 - PYTHON: "C:\\Miniconda35"
   PYTHON_VERSION: "3.5"
   PYTHON_ARCH: "32"
@@ -24,23 +24,31 @@ install:
 - conda config --set always_yes yes
 - conda update -q conda
 - conda config --set auto_update_conda no
-- conda install -q pip pytest numpy cython
+- conda install -q pip pytest pytest-benchmark numpy cython
 - python -m pip install -U pip
 - pip install wheel
 - pip install --upgrade --ignore-installed setuptools
-- pip install lapsolver
-
+# Install solvers for testing.
+- pip install lap lapsolver munkres
+# OR-Tools does not support 32-bit.
+# https://developers.google.com/optimization/install/python/windows
+- ps: >-
+    if ($env:PYTHON_ARCH -eq "64") {
+      cmd /c 'pip install ortools 2>&1'
+    }
+
 build_script:
 - python setup.py sdist
 - python setup.py bdist_wheel
-
+
 test_script:
-# Try building source wheel and install
+# Try building source wheel and install
+# Redirect stderr of pip within powershell.
 - ps: >-
     $wheel = cmd /r dir .\dist\*.tar.gz /b/s;
-    pip install --verbose $wheel
+    cmd /c "pip install --verbose $wheel 2>&1"
 - pytest --pyargs motmetrics
-
+
 on_success:
   ps: >-
     if ($env:APPVEYOR_REPO_BRANCH -eq "master") {
@@ -52,7 +60,7 @@ on_success:
     } else {
       Write-Output "Not deploying as this is not a tagged commit or commit on master"
     }
-
+
 artifacts:
 - path: "dist\\*.whl"
 - path: "dist\\*.tar.gz"
@@ -68,4 +76,4 @@ notifications:
 branches:
   only:
   - master
-  - develop
+  - develop
