Commit 1d26b5a

Author: Christoph.Heindl
Merge branch 'release/1.3.0'
2 parents 447c622 + 2871463

14 files changed: +702 -406 lines
.github/workflows/python-package.yml (new file, per the build badge added in Readme.md)

Lines changed: 41 additions & 0 deletions
@@ -0,0 +1,41 @@
+# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
+# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
+
+name: Python package
+
+on:
+  push:
+    branches: [ develop ]
+  pull_request:
+    branches: [ develop ]
+
+jobs:
+  build:
+
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        python-version: ["3.8", "3.9", "3.10"]
+
+    steps:
+    - uses: actions/checkout@v3
+    - name: Set up Python ${{ matrix.python-version }}
+      uses: actions/setup-python@v3
+      with:
+        python-version: ${{ matrix.python-version }}
+    - name: Install dependencies
+      run: |
+        python -m pip install --upgrade pip
+        python -m pip install flake8 pytest pytest-benchmark
+        if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
+        pip install lap scipy ortools lapsolver munkres
+    - name: Lint with flake8
+      run: |
+        # stop the build if there are Python syntax errors or undefined names
+        flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
+        # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
+        flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
+    - name: Test with pytest
+      run: |
+        pytest

.travis.yml

Lines changed: 0 additions & 36 deletions
This file was deleted.

Readme.md

Lines changed: 56 additions & 52 deletions
@@ -1,8 +1,6 @@
+[![PyPI version](https://badge.fury.io/py/motmetrics.svg)](https://badge.fury.io/py/motmetrics) [![Build Status](https://github.com/cheind/py-motmetrics/actions/workflows/python-package.yml/badge.svg)](https://github.com/cheind/py-motmetrics/actions/workflows/python-package.yml)
 
-
-[![PyPI version](https://badge.fury.io/py/motmetrics.svg)](https://badge.fury.io/py/motmetrics) [![](https://travis-ci.org/cheind/py-motmetrics.svg?branch=master)](https://travis-ci.org/cheind/py-motmetrics)
-
-## py-motmetrics
+# py-motmetrics
 
 The **py-motmetrics** library provides a Python implementation of metrics for benchmarking multiple object trackers (MOT).
 
@@ -17,9 +15,9 @@ While benchmarking single object trackers is rather straightforward, measuring t
 
 In particular **py-motmetrics** supports `CLEAR-MOT`[[1,2]](#References) metrics and `ID`[[4]](#References) metrics. Both metrics attempt to find a minimum cost assignment between ground truth objects and predictions. However, while CLEAR-MOT solves the assignment problem on a local per-frame basis, `ID-MEASURE` solves the bipartite graph matching by finding the minimum cost of objects and predictions over all frames. This [blog-post](https://web.archive.org/web/20190413133409/http://vision.cs.duke.edu:80/DukeMTMC/IDmeasures.html) by Ergys illustrates the differences in more detail.
 
-### Features at a glance
+## Features at a glance
 - *Variety of metrics* <br/>
-Provides MOTA, MOTP, track quality measures, global ID measures and more. The results are [comparable](#MOTChallengeCompatibility) with the popular [MOTChallenge][MOTChallenge] benchmarks.
+Provides MOTA, MOTP, track quality measures, global ID measures and more. The results are [comparable](#MOTChallengeCompatibility) with the popular [MOTChallenge][MOTChallenge] benchmarks [(*1)](#asterixcompare).
 - *Distance agnostic* <br/>
 Supports Euclidean, Intersection over Union and other distance measures.
 - *Complete event history* <br/>
@@ -30,7 +28,7 @@ Support for switching minimum assignment cost solvers. Supports `scipy`, `ortools`,
 Events and summaries are utilizing [pandas][pandas] for data structures and analysis. New metrics can reuse already computed values from depending metrics.
 
 <a name="Metrics"></a>
-### Metrics
+## Metrics
 
 **py-motmetrics** implements the following metrics. The metrics have been aligned with what is reported by [MOTChallenge][MOTChallenge] benchmarks.
 
@@ -74,9 +72,9 @@ id_global_assignment| `dict` ID measures: Global min-cost assignment for ID measures.
 
 
 <a name="MOTChallengeCompatibility"></a>
-### MOTChallenge compatibility
+## MOTChallenge compatibility
 
-**py-motmetrics** produces results compatible with popular [MOTChallenge][MOTChallenge] benchmarks. Below are two results taken from MOTChallenge [Matlab devkit][devkit] corresponding to the results of the CEM tracker on the training set of the 2015 MOT 2DMark.
+**py-motmetrics** produces results compatible with popular [MOTChallenge][MOTChallenge] benchmarks [(*1)](#asterixcompare). Below are two results taken from MOTChallenge [Matlab devkit][devkit] corresponding to the results of the CEM tracker on the training set of the 2015 MOT 2DMark.
 
 ```
@@ -98,7 +96,7 @@ TUD-Campus 55.8% 73.0% 45.1% 58.2% 94.1% 8 1 6 1 13 150 7 7 52.6% 0.
 TUD-Stadtmitte 64.5% 82.0% 53.1% 60.9% 94.0% 10 5 4 1 45 452 7 6 56.4% 0.346
 ```
 
-Besides naming conventions, the only obvious differences are
+<a name="asterixcompare"></a>(*1) Besides naming conventions, the only obvious differences are
 - Metric `FAR` is missing. This metric is given implicitly and can be recovered by `FalsePos / Frames * 100`.
 - Metric `MOTP` seems to be off. To convert, compute `(1 - MOTP) * 100`. [MOTChallenge][MOTChallenge] benchmarks compute `MOTP` as a percentage, while **py-motmetrics** sticks to the original definition of average distance over number of assigned objects [[1]](#References).
 
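In code, the two conversions are one line each. A minimal sketch, assuming `summary` is a pandas DataFrame produced by the library's metrics host and carries the standard metric keys (`num_false_positives`, `num_frames`, `motp`):

```python
# Sketch: map py-motmetrics summary columns onto MOTChallenge conventions.
far = summary['num_false_positives'] / summary['num_frames'] * 100  # recovered FAR
motp_pct = (1.0 - summary['motp']) * 100                            # MOTP as a percentage
```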
@@ -111,25 +109,31 @@ For MOT16/17, you can run
 python -m motmetrics.apps.evaluateTracking --help
 ```
 
-### Installation
+## Installation
+
+To install the latest development version of **py-motmetrics** (usually a bit more recent than PyPi below)
+
+```
+pip install git+https://github.com/cheind/py-motmetrics.git
+```
 
-#### PyPi and development installs
 
+### Install via PyPi
 To install **py-motmetrics** use `pip`
 
 ```
 pip install motmetrics
 ```
 
-Python 3.5/3.6 and numpy, pandas and scipy is required. If no binary packages are available for your platform and building source packages fails, you might want to try a distribution like Conda (see below) to install dependencies.
+Python 3.5/3.6/3.9 and numpy, pandas and scipy are required. If no binary packages are available for your platform and building source packages fails, you might want to try a distribution like Conda (see below) to install dependencies.
 
 Alternatively for developing, clone or fork this repository and install in editing mode.
 
 ```
 pip install -e <path/to/setup.py>
 ```
 
-#### Conda
+### Install via Conda
 In case you are using Conda, a simple way to run **py-motmetrics** is to create a virtual environment with all the necessary dependencies
 
 ```
@@ -153,9 +157,9 @@ pip install .
 pytest
 ```
 
-### Usage
+## Usage
 
-#### Populating the accumulator
+### Populating the accumulator
 
 ```python
 import motmetrics as mm
@@ -167,31 +171,31 @@ acc = mm.MOTAccumulator(auto_id=True)
 # Call update once per frame. For now, assume distances between
 # frame objects / hypotheses are given.
 acc.update(
-    ['a', 'b'],                 # Ground truth objects in this frame
+    [1, 2],                     # Ground truth objects in this frame
     [1, 2, 3],                  # Detector hypotheses in this frame
     [
-        [0.1, np.nan, 0.3],     # Distances from object 'a' to hypotheses 1, 2, 3
-        [0.5, 0.2, 0.3]         # Distances from object 'b' to hypotheses 1, 2, 3
+        [0.1, np.nan, 0.3],     # Distances from object 1 to hypotheses 1, 2, 3
+        [0.5, 0.2, 0.3]         # Distances from object 2 to hypotheses 1, 2, 3
     ]
 )
 ```
 
-The code above updates an event accumulator with data from a single frame. Here we assume that pairwise object / hypothesis distances have already been computed. Note `np.nan` inside the distance matrix. It signals that `a` cannot be paired with hypothesis `2`. To inspect the current event history simple print the events associated with the accumulator.
+The code above updates an event accumulator with data from a single frame. Here we assume that pairwise object / hypothesis distances have already been computed. Note `np.nan` inside the distance matrix. It signals that object `1` cannot be paired with hypothesis `2`. To inspect the current event history, simply print the events associated with the accumulator.
 
 ```python
 print(acc.events) # a pandas DataFrame containing all events
 
 """
                 Type  OId  HId    D
 FrameId Event
-0       0        RAW    a    1  0.1
-        1        RAW    a    2  NaN
-        2        RAW    a    3  0.3
-        3        RAW    b    1  0.5
-        4        RAW    b    2  0.2
-        5        RAW    b    3  0.3
-        6      MATCH    a    1  0.1
-        7      MATCH    b    2  0.2
+0       0        RAW    1    1  0.1
+        1        RAW    1    2  NaN
+        2        RAW    1    3  0.3
+        3        RAW    2    1  0.5
+        4        RAW    2    2  0.2
+        5        RAW    2    3  0.3
+        6      MATCH    1    1  0.1
+        7      MATCH    2    2  0.2
         8         FP  NaN    3  NaN
 """
 ```
@@ -204,19 +208,19 @@ print(acc.mot_events) # a pandas DataFrame containing MOT only events
 """
                 Type  OId  HId    D
 FrameId Event
-0       6      MATCH    a    1  0.1
-        7      MATCH    b    2  0.2
+0       6      MATCH    1    1  0.1
+        7      MATCH    2    2  0.2
         8         FP  NaN    3  NaN
 """
 ```
 
-Meaning object `a` was matched to hypothesis `1` with distance 0.1. Similarily, `b` was matched to `2` with distance 0.2. Hypothesis `3` could not be matched to any remaining object and generated a false positive (FP). Possible assignments are computed by minimizing the total assignment distance (Kuhn-Munkres algorithm).
+Meaning object `1` was matched to hypothesis `1` with distance 0.1. Similarly, object `2` was matched to hypothesis `2` with distance 0.2. Hypothesis `3` could not be matched to any remaining object and generated a false positive (FP). Possible assignments are computed by minimizing the total assignment distance (Kuhn-Munkres algorithm).
 
 Continuing from above
 
 ```python
 frameid = acc.update(
-    ['a', 'b'],
+    [1, 2],
     [1],
     [
         [0.2],
@@ -228,16 +232,16 @@ print(acc.mot_events.loc[frameid])
 """
       Type  OId  HId    D
 Event
-2    MATCH    a    1  0.2
-3     MISS    b  NaN  NaN
+2    MATCH    1    1  0.2
+3     MISS    2  NaN  NaN
 """
 ```
 
-While `a` was matched, `b` couldn't be matched because no hypotheses are left to pair with.
+While object `1` was matched, object `2` couldn't be matched because no hypotheses were left to pair it with.
 
 ```python
 frameid = acc.update(
-    ['a', 'b'],
+    [1, 2],
     [1, 3],
     [
         [0.6, 0.2],
@@ -249,14 +253,14 @@ print(acc.mot_events.loc[frameid])
 """
       Type  OId  HId    D
 Event
-4    MATCH    a    1  0.6
-5   SWITCH    b    3  0.6
+4    MATCH    1    1  0.6
+5   SWITCH    2    3  0.6
 """
 ```
 
-`b` is now tracked by hypothesis `3` leading to a track switch. Note, although a pairing `(a, 3)` with cost less than 0.6 is possible, the algorithm prefers prefers to continue track assignments from past frames which is a property of MOT metrics.
+Object `2` is now tracked by hypothesis `3`, leading to a track switch. Note, although a pairing `(1, 3)` with cost less than 0.6 is possible, the algorithm prefers to continue track assignments from past frames, which is a property of MOT metrics.
 
-#### Computing metrics
+### Computing metrics
 Once the accumulator has been populated you can compute and display metrics. Continuing the example from above
 
 ```python
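The body of the code block opened by the fence above is unchanged by this commit, so the diff omits it. For orientation, a minimal sketch of the metrics step, assuming the `motmetrics.metrics` API and its registered metric names:

```python
import motmetrics as mm

# Continuing with the populated accumulator `acc` from the example above.
mh = mm.metrics.create()
summary = mh.compute(acc, metrics=['num_frames', 'mota', 'motp'], name='acc')
print(summary)
```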
@@ -350,10 +354,10 @@ OVERALL 80.0% 80.0% 80.0% 80.0% 80.0% 4 2 2 0 2 2 1 1 50.0% 0.275
 """
 ```
 
-#### Computing distances
+### Computing distances
 Up until this point we assumed the pairwise object/hypothesis distances to be known. Usually this is not the case. You are mostly given either rectangles or points (centroids) of related objects. To compute a distance matrix from them you can use the `motmetrics.distances` module as shown below.
 
-##### Euclidean norm squared on points
+#### Euclidean norm squared on points
 
 ```python
 # Object related points
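The hunk stops at the opening lines of this example; the call it builds toward appears as context in the next hunk header. A filled-in sketch, with illustrative point coordinates (not taken verbatim from the README):

```python
import numpy as np
import motmetrics as mm

# Object and hypothesis points; coordinates are illustrative.
o = np.array([
    [1.0, 2.0],
    [2.0, 2.0],
    [3.0, 2.0],
])
h = np.array([
    [0.0, 0.0],
    [1.0, 1.0],
])
# Pairwise squared Euclidean distances; entries above max_d2 become NaN
# and are treated as impossible pairings.
C = mm.distances.norm2squared_matrix(o, h, max_d2=5.)
print(C)
```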
@@ -378,7 +382,7 @@ C = mm.distances.norm2squared_matrix(o, h, max_d2=5.)
 """
 ```
 
-##### Intersection over union norm for 2D rectangles
+#### Intersection over union norm for 2D rectangles
 ```python
 a = np.array([
     [0, 0, 1, 2],    # Format X, Y, Width, Height
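This example is likewise cut short by the hunk boundary. A filled-in sketch with illustrative rectangles; `iou_matrix` computes `1 - IoU` per pair and replaces distances above `max_iou` with `NaN`:

```python
import numpy as np
import motmetrics as mm

a = np.array([
    [0, 0, 1, 2],      # Format X, Y, Width, Height
])
b = np.array([
    [0, 0, 1, 2],      # identical rectangle -> distance 0
    [0.2, 0, 1, 2],    # strong overlap -> finite distance
    [2, 2, 1, 1],      # no overlap -> distance NaN
])
# 1 - IoU per pair; any distance above max_iou is set to NaN
# and treated as an impossible pairing.
C = mm.distances.iou_matrix(a, b, max_iou=0.5)
print(C)
```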
@@ -399,7 +403,7 @@ mm.distances.iou_matrix(a, b, max_iou=0.5)
 ```
 
 <a name="SolverBackends"></a>
-#### Solver backends
+### Solver backends
 For large datasets solving the minimum cost assignment becomes the dominant runtime part. **py-motmetrics** therefore supports these solvers out of the box
 - `lapsolver` - https://github.com/cheind/py-lapsolver
 - `lapjv` - https://github.com/gatagat/lap
@@ -422,7 +426,7 @@ with lap.set_default_solver(mysolver):
     ...
 ```
 
-### Running tests
+## Running tests
 **py-motmetrics** uses the pytest framework. To run the tests, simply `cd` into the source directory and run `pytest`.
 
 <a name="References"></a>
@@ -434,31 +438,31 @@ EURASIP Journal on Image and Video Processing 2008.1 (2008): 1-10.
 Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009.
 4. Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking. E. Ristani, F. Solera, R. S. Zou, R. Cucchiara and C. Tomasi. ECCV 2016 Workshop on Benchmarking Multi-Target Tracking.
 
-### Docker
+## Docker
 
-#### Update ground truth and test data:
+### Update ground truth and test data:
 /data/train directory should contain MOT 2D 2015 Ground Truth files.
 /data/test directory should contain your results.
 
 You can check usage and directory listing at
 https://github.com/cheind/py-motmetrics/blob/master/motmetrics/apps/eval_motchallenge.py
 
-#### Build Image
+### Build Image
 docker build -t desired-image-name -f Dockerfile .
 
-#### Run Image
+### Run Image
 docker run desired-image-name
 
 (credits to [christosavg](https://github.com/christosavg))
 
-### License
+## License
 
 ```
 MIT License
 
-Copyright (c) 2017-2020 Christoph Heindl
+Copyright (c) 2017-2022 Christoph Heindl
 Copyright (c) 2018 Toka
-Copyright (c) 2019-2020 Jack Valmadre
+Copyright (c) 2019-2022 Jack Valmadre
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

0 commit comments

Comments
 (0)