Here is an overview of the important TrackLab classes:

1. `video_metadatas`: contains one row of information per video (e.g. fps, width, height, etc.).
2. `image_metadatas`: contains one row of information per image (e.g. frame_id, video_id, etc.).
3. `detections_gt`: contains one row of information per ground-truth detection (e.g. frame_id, video_id, bbox_ltwh, track_id, etc.).
- **[TrackerState](tracklab/datastruct/tracker_state.py)**: Core class that contains all the information about the current state of the tracker. All modules in the tracking pipeline update the `tracker_state` sequentially. The `tracker_state` contains one key dataframe:
  1. `detections_pred`: contains one row of information per predicted detection (e.g. frame_id, video_id, bbox_ltwh, track_id, reid embedding, etc.).
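The dataframes above behave like standard pandas `DataFrame`s, so they can be inspected and aggregated with the usual pandas tools. A minimal sketch with made-up rows, whose columns only loosely follow the schema described above:

```python
import pandas as pd

# Hypothetical miniature versions of the TrackLab dataframes described above.
# Column names follow the README (frame_id, video_id, bbox_ltwh, track_id);
# the exact schema in TrackLab may differ.
image_metadatas = pd.DataFrame({
    "id": [0, 1],
    "video_id": [0, 0],
    "frame_id": [0, 1],
})

detections_pred = pd.DataFrame({
    "image_id": [0, 0, 1],
    "video_id": [0, 0, 0],
    "bbox_ltwh": [[10, 20, 50, 100], [200, 40, 60, 120], [12, 22, 50, 100]],
    "track_id": [1, 2, 1],
})

# One row per detection: e.g. count detections per frame.
dets_per_frame = detections_pred.groupby("image_id").size()
print(dets_per_frame.to_dict())  # {0: 2, 1: 1}
```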
- **[TrackingEngine](tracklab/engine/engine.py)**: This class is responsible for executing the entire tracking pipeline on the dataset. It loops over all videos of the dataset and calls all modules defined in the pipeline sequentially. The exact execution order (e.g. online/offline/...) is defined by the `TrackingEngine` subclass.
  - Example: **[OfflineTrackingEngine](tracklab/engine/offline.py)**. The offline tracking engine runs the modules one after another to speed up inference by leveraging large batch sizes and maximum GPU utilization. For instance, `YoloV11` is first applied to an entire video by batching multiple images, then the re-identification model is applied to all detections in the video, and so on.
- **[Pipeline](tracklab/pipeline/module.py)**: Defines the order in which modules are executed by the `TrackingEngine`. If a `tracker_state` is loaded from disk, modules that should not be executed again must be removed.
  - Example: `[bbox_detector, reid, track]`.
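The order in the pipeline list is exactly the order in which the engine invokes the modules. A toy sketch of that contract (not the real TrackLab API; names and the `run` signature are illustrative):

```python
# Each module consumes the state left by the previous one, mirroring how
# the pipeline [bbox_detector, reid, track] is executed sequentially.
class DummyModule:
    def __init__(self, name):
        self.name = name

    def run(self, state):
        state.append(self.name)  # a real module would update the tracker state
        return state

pipeline = [DummyModule(n) for n in ["bbox_detector", "reid", "track"]]

state = []
for module in pipeline:
    state = module.run(state)

print(state)  # ['bbox_detector', 'reid', 'track']
```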
- **[VideoLevelModule](tracklab/pipeline/videolevel_module.py)**: Abstract class to implement when adding a new tracking module that operates on all frames simultaneously. Can be used to implement offline tracking strategies, tracklet-level voting mechanisms, etc.
  - Example: [VotingTrackletJerseyNumber](tracklab/wrappers/tracklet_agg/majority_vote_api.py). Performs majority voting within each tracklet to compute a consistent tracklet-level attribute (an attribute can be, for instance, the result of a detection-level classification task).
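The tracklet-level voting idea can be sketched in a few lines; the rows and field names below are invented for illustration:

```python
from collections import Counter

# Per-detection predictions are aggregated into one consistent value per
# tracklet, as a module like VotingTrackletJerseyNumber would do.
detections = [
    {"track_id": 1, "jersey_number": "10"},
    {"track_id": 1, "jersey_number": "10"},
    {"track_id": 1, "jersey_number": "16"},  # a misread that the vote overrides
    {"track_id": 2, "jersey_number": "7"},
]

votes = {}
for det in detections:
    votes.setdefault(det["track_id"], Counter())[det["jersey_number"]] += 1

# Keep the most frequent prediction per tracklet.
tracklet_jersey = {tid: c.most_common(1)[0][0] for tid, c in votes.items()}
print(tracklet_jersey)  # {1: '10', 2: '7'}
```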
- **[ImageLevelModule](tracklab/pipeline/imagelevel_module.py)**: Abstract class to implement when adding a new tracking module that operates on a single frame. Can be used to implement online tracking strategies, pose/segmentation/bbox detectors, etc.
  - Example 1: [YOLOv11](tracklab/wrappers/bbox_detector/yolo_ultralytics_api.py). Performs object detection on each image with [YOLOv11](https://github.com/ultralytics/ultralytics). Creates a new row (i.e. detection) within `detections_pred`.
  - Example 2: [StrongSORT](tracklab/wrappers/track/strong_sort_api.py). Performs online tracking with [StrongSORT](https://github.com/dyhBUPT/StrongSORT). Creates a new `track_id` column for each detection within `detections_pred`.
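For intuition, the core of a frame-level online association step can be reduced to matching new detections to existing tracks, for example by bounding-box IoU. This is a deliberately naive sketch: real trackers such as StrongSORT also use appearance embeddings and motion models, and the data below is made up.

```python
def iou(a, b):
    # Boxes as (left, top, width, height), matching the bbox_ltwh convention.
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

# track_id -> last known bbox, and the detections of the new frame.
prev = {1: (10, 20, 50, 100), 2: (200, 40, 60, 120)}
new_boxes = [(12, 22, 50, 100), (198, 41, 60, 120)]

# Greedily assign each new detection to the best-overlapping track.
assignments = [max(prev, key=lambda tid: iou(prev[tid], box)) for box in new_boxes]
print(assignments)  # [1, 2]
```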
- **[DetectionLevelModule](tracklab/pipeline/detectionlevel_module.py)**: Abstract class to implement when adding a new tracking module that operates on a single detection. Can be used to implement pose estimation for top-down strategies, re-identification, attribute recognition, etc.
  - Example 1: [RTMPose](tracklab/wrappers/pose_estimator/rtmlib_api.py). Performs pose estimation on each detection with [RTMPose](https://github.com/Tau-J/rtmlib).
  - Example 2: [BPBReId](tracklab/wrappers/reid/bpbreid_api.py). Performs person re-identification on each detection with [BPBReID](https://github.com/VlSomers/bpbreid). Creates a new `embedding` column within `detections_pred`.
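A sketch of how such an `embedding` column is typically consumed downstream: comparing a query detection to known tracklets by cosine similarity. The 3-dimensional vectors are toy values; real reid embeddings have hundreds of dimensions.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

query = [1.0, 0.0, 0.0]
gallery = {"track_1": [0.9, 0.1, 0.0], "track_2": [0.0, 1.0, 0.0]}

# The query detection is matched to the most similar tracklet embedding.
best = max(gallery, key=lambda tid: cosine(query, gallery[tid]))
print(best)  # track_1
```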
- **[Callback](tracklab/callbacks/callback.py)**: Implement this class to add a callback that is triggered at a specific point during the tracking process, e.g. when dataset/video/module processing starts/ends.
  - Example: [VisualizationEngine](tracklab/visualization/visualization_engine.py). Implements `on_video_loop_end` to save each video's tracking results as an .mp4 or a list of .jpg files.
- **[Evaluator](tracklab/pipeline/evaluator.py)**: Implement this class to add a new evaluation metric, such as MOTA, HOTA, IDF1 or any other (non-tracking-related) metric.
  - Example: [TrackEvalEvaluator](tracklab/wrappers/eval/trackeval_evaluator.py). Evaluates the performance of a tracker using the official [TrackEval library](https://github.com/JonathonLuiten/TrackEval).
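The callback mechanism can be sketched as a set of hooks fired at fixed points of the tracking loop. Only the `on_video_loop_end` hook name is taken from the example above; the rest of the class is an illustrative assumption, not the actual TrackLab API.

```python
class Callback:
    # No-op defaults so subclasses override only the hooks they need.
    def on_video_loop_start(self, video_id): pass
    def on_video_loop_end(self, video_id): pass

class SaveVisualization(Callback):
    def __init__(self):
        self.saved = []

    def on_video_loop_end(self, video_id):
        self.saved.append(video_id)  # a real callback would write an .mp4 here

callbacks = [SaveVisualization()]
for video_id in ["video_0", "video_1"]:
    for cb in callbacks:
        cb.on_video_loop_start(video_id)
    # ... run all pipeline modules on the video ...
    for cb in callbacks:
        cb.on_video_loop_end(video_id)

print(callbacks[0].saved)  # ['video_0', 'video_1']
```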
### Execution Flow
Here is an overview of what happens when you run TrackLab:
[tracklab/main.py](tracklab/main.py) is usually invoked through the root [main.py](main.py) file with the command `python main.py`.

Within [tracklab/main.py](tracklab/main.py), all modules are first instantiated.

Any tracking module (e.g. the re-identification model) can then be trained on the tracking training set by calling the `train` method of the corresponding module.
Tracking is then performed on the validation or test set (depending on the configuration) via the `TrackingEngine.run()` function.

For each video in the evaluated set, the `TrackingEngine` calls the `run` method of each module (e.g. detector, re-identifier, tracker, ...) sequentially.

The `TrackingEngine` is responsible for batching the input data (e.g. images, detections, ...) before calling the `run` method of each module with the correct input data.

After a module has been called with a batch of input data, the `TrackingEngine` updates the `TrackerState` object with the module outputs.

At the end of the tracking process, the `TrackerState` object contains the tracking results of each video.

Visualizations (e.g. `.mp4` result videos) are generated during the `TrackingEngine.run()` call, after a video has been tracked and before the next video is processed.

Finally, evaluation is performed via the `evaluator.run()` function once the `TrackingEngine.run()` call is completed, i.e. after all videos have been processed.
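The execution flow above can be condensed into a toy loop. All names and signatures are illustrative, not the actual TrackLab API: the point is only the nesting (videos outside, modules inside) and the state update after each module call.

```python
def run_engine(videos, pipeline, tracker_state):
    # Loop over all videos, calling each pipeline module sequentially.
    for video in videos:
        for module in pipeline:
            batch = tracker_state.setdefault(video, [])
            outputs = module(video, batch)   # e.g. detect, embed, track
            tracker_state[video] = outputs   # engine updates the state
    return tracker_state

# Stand-in modules: a "detector" that emits detections and a "tracker"
# that annotates them.
detector = lambda video, batch: [f"{video}_det_{i}" for i in range(2)]
tracker = lambda video, batch: [d + "_tracked" for d in batch]

state = run_engine(["v0"], [detector, tracker], {})
print(state)  # {'v0': ['v0_det_0_tracked', 'v0_det_1_tracked']}
```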
## 🧐 Tutorials

### Dump and load the tracker state to save computation time

When developing a new module, it is often useful to dump the tracker state to disk to save computation time and avoid running the other modules several times.
Here is how to do it:

1. First, save the tracker state by using the corresponding configuration in the `config.yaml` file:

   ```yaml
   defaults:
     - state: save

   # ...

   state:
     save_file: "states/${experiment_name}.pklz" # 'null' to disable saving. This is the save path for the tracker_state object that contains all module outputs (bboxes, reid embeddings, jersey numbers, roles, teams, etc.)
     load_file: null
   ```
2. Run TrackLab. The tracker state will be saved in the experiment folder as a `.pklz` file.
3. Modify the `load_file` key in `config.yaml` to point to the tracker state file that has just been created (`load_file: "..."`).
4. In `config.yaml`, remove from the pipeline all modules that should not be executed again. For instance, if you want to reuse the detections and reid embeddings from the saved tracker state, remove the `bbox_detector` and `reid` modules from the pipeline. Use `pipeline: []` if no module should be run again.
   ```yaml
   defaults:
     - state: save

   # ...

   pipeline:
     - track

   # ...

   state:
     save_file: null # 'null' to disable saving. This is the save path for the tracker_state object that contains all module outputs (bboxes, reid embeddings, jersey numbers, roles, teams, etc.)
     load_file: "states/${experiment_name}.pklz"
   ```
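As an aside, if the `.pklz` extension denotes a gzip-compressed pickle (an assumption, not verified against the TrackLab source), a save/load round-trip of a state-like object looks like this:

```python
import gzip
import os
import pickle
import tempfile

# Toy stand-in for a tracker state; the real object holds full dataframes.
state = {"detections_pred": [{"frame_id": 0, "track_id": 1}]}

path = os.path.join(tempfile.mkdtemp(), "state.pklz")
with gzip.open(path, "wb") as f:
    pickle.dump(state, f)
with gzip.open(path, "rb") as f:
    loaded = pickle.load(f)

print(loaded == state)  # True
```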