Commit e601bc3

v2.1.0: Extract bugspot core library, add detection-only mode and track composites
- Created bugspot: standalone detection/tracking core (opencv, numpy, scipy): https://github.com/orlandocloss/bugspot
- inference.py now imports from bugspot (single source of truth)
- Removed detector.py and tracker.py from bplusplus (live in bugspot)
- Added classify=False for detection-only mode (NaN classification)
- Added track_composites=True for temporal trail images
- Consolidated inference docs in README and notebook
1 parent 371e15d commit e601bc3

File tree

7 files changed (+248, −917 lines)

CHANGELOG.md

Lines changed: 13 additions & 0 deletions

@@ -4,6 +4,19 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [2.1.0] - 2025-02-18
+
+### Added
+- **[BugSpot](https://github.com/orlandocloss/bugspot) core library**: Extracted motion detection, tracking, and path topology into a standalone package (opencv + numpy + scipy only, no ML frameworks)
+- **Detection-only mode**: `classify=False` skips model loading and outputs NaN for classification fields
+- **Track composite images**: `track_composites=True` generates per-track temporal trail images (lighten blend on darkened background)
+
+### Changed
+- **Inference now depends on bugspot** for detection, tracking, topology analysis, crop extraction, and composite rendering
+- Removed `detector.py` and `tracker.py` from bplusplus; single source of truth is in bugspot
+- Consolidated the inference documentation in the README and notebook into one section
+- `video_path` and `output_dir` are now the first two parameters in `inference()`
+
 ## [2.0.5] - 2025-02-04
 
 ### Added
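The "lighten blend on darkened background" used for the new track composites can be sketched with numpy alone. This is an illustrative sketch, not BugSpot's actual renderer; the function name and the `(x, y, crop)` detection format are assumptions:

```python
import numpy as np

def composite_trail(background, detections, darken=0.4):
    """Paste each detection of a track onto a darkened copy of a
    background frame using a lighten blend (per-pixel max), so the
    insect's path appears as a bright temporal trail.

    detections: list of (x, y, crop), crop being an HxWx3 uint8 array.
    """
    canvas = (background.astype(np.float32) * darken).astype(np.uint8)
    for x, y, crop in detections:
        h, w = crop.shape[:2]
        region = canvas[y:y + h, x:x + w]
        # lighten blend: keep the brighter of the two pixels
        canvas[y:y + h, x:x + w] = np.maximum(region, crop)
    return canvas
```

Bounds clamping and the per-frame crop extraction are omitted for brevity; the point is only the blend itself.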

README.md

Lines changed: 26 additions & 29 deletions

@@ -18,6 +18,7 @@ Using the `Bplusplus` library, this pipeline automates the entire machine learning
 - **Intelligent Data Preparation**: Uses a pre-trained model to automatically find, crop, and resize insects from raw images, ensuring high-quality training data.
 - **Hierarchical Classification**: Trains a model to identify insects at three taxonomic levels: **family, genus, and species**.
 - **Video Inference & Tracking**: Processes video files to detect, classify, and track individual insects over time, providing aggregated predictions.
+
 ## Pipeline Overview
 
 The process is broken down into five main steps, all detailed in the `full_pipeline.ipynb` notebook:

@@ -132,49 +133,45 @@ results = bplusplus.validate(
 ```
 
 #### Step 5: Run Inference on Video
-Process a video file to detect, classify, and track insects using motion-based detection. The pipeline uses background subtraction (GMM) to detect moving insects, tracks them across frames, and classifies confirmed tracks.
 
-**Note:** The species list and taxonomy are automatically loaded from the model checkpoint, so you don't need to provide them again.
+Processes a video through a multi-phase pipeline: motion-based detection (GMM), Hungarian tracking, path topology confirmation, and hierarchical classification. Detection and tracking are powered by [BugSpot](bugspot/), a lightweight core that runs on any platform, including edge devices.
 
-**Output files generated in `output_dir`:**
-- `{video}_annotated.mp4` - Video showing confirmed tracks with classifications
-- `{video}_debug.mp4` - Debug video with motion mask and all detections
-- `{video}_results.csv` - Aggregated results per confirmed track
-- `{video}_detections.csv` - Frame-by-frame detection data
+The species list is automatically loaded from the model checkpoint.
 
 ```python
-VIDEO_INPUT_PATH = Path("my_video.mp4")
-OUTPUT_DIR = Path("./output")
 HIERARCHICAL_MODEL_PATH = TRAINED_MODEL_DIR / "best_multitask.pt"
 
 results = bplusplus.inference(
+    video_path="my_video.mp4",
+    output_dir="./output",
     hierarchical_model_path=HIERARCHICAL_MODEL_PATH,
-    video_path=VIDEO_INPUT_PATH,
-    output_dir=OUTPUT_DIR,
-    # species_list=names,  # Optional: override species from checkpoint
-    fps=None,             # None = process all frames
-    backbone="resnet50",  # Must match training
-    save_video=True,      # Set to False to skip video rendering (only CSV output)
-    img_size=60,          # Must match training
+    backbone="resnet50",          # Must match training
+    img_size=60,                  # Must match training
+    # --- Optional ---
+    # species_list=names,         # Override species from checkpoint
+    # fps=None,                   # None = all frames, or set target FPS
+    # config="config.yaml",       # Custom detection parameters (YAML/JSON)
+    # classify=False,             # Detection only, NaN for classification
+    # save_video=True,            # Annotated + debug videos
+    # crops=False,                # Save crop per detection per track
+    # track_composites=False,     # Composite image per track (temporal trail)
 )
 
-print(f"Detected {results['tracks']} tracks ({results['confirmed_tracks']} confirmed)")
+print(f"Confirmed: {results['confirmed_tracks']} / {results['tracks']} tracks")
 ```
 
-**Note:** Set `save_video=False` to skip generating the annotated and debug videos, which speeds up processing when you only need the CSV detection data.
-
-**Custom Detection Configuration:**
-
-For advanced control over detection parameters, provide a YAML config file:
+**Output files:**
 
-```python
-results = bplusplus.inference(
-    ...,
-    config="detection_config.yaml"
-)
-```
+| File | Description | Flag |
+|------|-------------|------|
+| `{video}_results.csv` | Aggregated results per confirmed track | Always |
+| `{video}_detections.csv` | Frame-by-frame detections | Always |
+| `{video}_annotated.mp4` | Video with detection boxes and paths | `save_video=True` |
+| `{video}_debug.mp4` | Side-by-side with GMM motion mask | `save_video=True` |
+| `{video}_crops/` | Crop images per track | `crops=True` |
+| `{video}_composites/` | Composite images per track | `track_composites=True` |
 
-Download a template config from the [releases page](https://github.com/Tvenver/Bplusplus/releases).
+**Detection configuration** can be customized via a YAML/JSON file passed as `config=`. Download a template from the [releases page](https://github.com/Tvenver/Bplusplus/releases).
 
 <details>
 <summary><b>Full Configuration Parameters</b> (click to expand)</summary>
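As one illustration of the README's `config=` option, a custom config might override a handful of the documented detection parameters. The parameter names and defaults below come from the Full Configuration Parameters table; the flat key/value file layout is an assumption about the template's structure:

```yaml
# detection_config.yaml (assumed layout); pass via config="detection_config.yaml"
gmm_history: 500           # frames used to build the background model
gmm_var_threshold: 16      # variance threshold for foreground detection
min_area: 200              # minimum detection area (px^2)
max_area: 40000            # maximum detection area (px^2)
min_displacement: 50       # minimum net movement to confirm a track (px)
max_lost_frames: 45        # frames before a lost track is deleted
```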

notebooks/full_pipeline.ipynb

Lines changed: 14 additions & 62 deletions

@@ -279,17 +279,9 @@
 "source": [
 "## Step 5: Run Video Inference\n",
 "\n",
-"Runs motion-based insect detection and hierarchical classification on video files. Detects moving insects using background subtraction (GMM), tracks them across frames, classifies each detection, and aggregates predictions per track.\n",
+"Processes a video through a multi-phase pipeline: motion-based detection (GMM), Hungarian tracking, path topology confirmation, and hierarchical classification. Detection and tracking are powered by [BugSpot](../bugspot/), a lightweight core that runs on any platform.\n",
 "\n",
-"**Note:** The species list and taxonomy are automatically loaded from the model checkpoint, so you don't need to provide them again.\n",
-"\n",
-"**Output files generated:**\n",
-"- `{video}_annotated.mp4` - Video with detection boxes and track paths (if `save_video=True`)\n",
-"- `{video}_debug.mp4` - Side-by-side view with GMM motion mask (if `save_video=True`)\n",
-"- `{video}_results.csv` - Aggregated results per track\n",
-"- `{video}_detections.csv` - Frame-by-frame detections\n",
-"\n",
-"**Tip:** Set `save_video=False` to skip video rendering and only generate CSV output (faster processing)."
+"The species list is automatically loaded from the model checkpoint. All detection parameters can be customized via `config=` (YAML/JSON file). See [`detection_config.yaml`](../detection_config.yaml) for all parameters and defaults."
 ]
 },
 {

@@ -299,69 +291,29 @@
 "outputs": [],
 "source": [
 "results = bplusplus.inference(\n",
-"    hierarchical_model_path=RESNET_MULTITASK_WEIGHTS,\n",
 "    video_path=\"./10.mp4\",\n",
 "    output_dir=\"./output\",\n",
-"    # species_list=names,  # Optional: override species from checkpoint\n",
-"    fps=None,  # None = all frames\n",
-"    backbone=\"resnet50\",  # Must match training\n",
-"    save_video=True,  # Set to False to skip video rendering (only CSV output)\n",
-"    img_size=60,  # Must match training\n",
+"    hierarchical_model_path=RESNET_MULTITASK_WEIGHTS,\n",
+"    backbone=\"resnet50\",  # Must match training\n",
+"    img_size=60,  # Must match training\n",
+"    # --- Optional ---\n",
+"    # species_list=names,  # Override species from checkpoint\n",
+"    # fps=None,  # None = all frames, or set target FPS\n",
+"    # config=\"config.yaml\",  # Custom detection parameters (YAML/JSON)\n",
+"    # classify=False,  # Detection only, NaN for classification\n",
+"    # save_video=True,  # Annotated + debug videos\n",
+"    # crops=False,  # Save crop per detection per track\n",
+"    # track_composites=False,  # Composite image per track (temporal trail)\n",
 ")\n",
 "\n",
-"print(f\"Detected {results['tracks']} tracks ({results['confirmed_tracks']} confirmed)\")"
+"print(f\"Confirmed: {results['confirmed_tracks']} / {results['tracks']} tracks\")"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Custom Detection Configuration\n",
-"\n",
-"The inference uses motion-based detection with configurable parameters for filtering detections. You can customize these by providing a YAML or JSON config file.\n",
-"\n",
-"Download a template config from: https://github.com/Tvenver/Bplusplus/releases/download/weights/detection_config.yaml\n",
-"\n",
-"```python\n",
-"results = bplusplus.inference(\n",
-"    ...,\n",
-"    config=\"detection_config.yaml\"\n",
-")\n",
-"```\n",
-"\n",
-"#### Full Configuration Parameters\n",
 "\n",
-"| Parameter | Default | Description |\n",
-"|-----------|---------|-------------|\n",
-"| **GMM Background Subtractor** | | *Motion detection model* |\n",
-"| `gmm_history` | 500 | Frames to build background model |\n",
-"| `gmm_var_threshold` | 16 | Variance threshold for foreground detection |\n",
-"| **Morphological Filtering** | | *Noise removal* |\n",
-"| `morph_kernel_size` | 3 | Morphological kernel size (NxN) |\n",
-"| **Cohesiveness** | | *Filters scattered motion (plants) vs compact motion (insects)* |\n",
-"| `min_largest_blob_ratio` | 0.80 | Min ratio of largest blob to total motion |\n",
-"| `max_num_blobs` | 5 | Max separate blobs allowed in detection |\n",
-"| `min_motion_ratio` | 0.15 | Min ratio of motion pixels to bbox area |\n",
-"| **Shape** | | *Filters by contour properties* |\n",
-"| `min_area` | 200 | Min detection area (px²) |\n",
-"| `max_area` | 40000 | Max detection area (px²) |\n",
-"| `min_density` | 3.0 | Min area/perimeter ratio |\n",
-"| `min_solidity` | 0.55 | Min convex hull fill ratio |\n",
-"| **Tracking** | | *Controls track behavior* |\n",
-"| `min_displacement` | 50 | Min net movement for confirmation (px) |\n",
-"| `min_path_points` | 10 | Min points before path analysis |\n",
-"| `max_frame_jump` | 100 | Max jump between frames (px) |\n",
-"| `max_lost_frames` | 45 | Frames before lost track deleted (e.g., 45 @ 30fps = 1.5s) |\n",
-"| `max_area_change_ratio` | 3.0 | Max area change ratio between frames |\n",
-"| **Tracker Matching** | | *Hungarian algorithm cost function* |\n",
-"| `tracker_w_dist` | 0.6 | Weight for distance cost (0-1) |\n",
-"| `tracker_w_area` | 0.4 | Weight for area cost (0-1) |\n",
-"| `tracker_cost_threshold` | 0.3 | Max cost for valid match (0-1) |\n",
-"| **Path Topology** | | *Confirms insect-like movement patterns* |\n",
-"| `max_revisit_ratio` | 0.30 | Max ratio of revisited positions |\n",
-"| `min_progression_ratio` | 0.70 | Min forward progression |\n",
-"| `max_directional_variance` | 0.90 | Max heading variance |\n",
-"| `revisit_radius` | 50 | Radius (px) for revisit detection |\n",
 "\n"
 ]
 }
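The configuration table above gives Hungarian-matching weights (`tracker_w_dist` 0.6, `tracker_w_area` 0.4, `tracker_cost_threshold` 0.3), which suggests a weighted cost matrix solved by the Hungarian algorithm. A minimal sketch using `scipy.optimize.linear_sum_assignment`; the cost normalization and the `match_tracks` helper are illustrative assumptions, not bugspot's actual code:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian solver

W_DIST, W_AREA, COST_THRESHOLD = 0.6, 0.4, 0.3  # defaults from the table

def match_tracks(tracks, detections, max_dist=100.0):
    """Match tracks to detections by minimizing a weighted sum of
    centroid distance and relative area change.

    tracks, detections: sequences of (cx, cy, area).
    Returns (track_idx, det_idx) pairs whose cost is under the threshold.
    """
    tracks = np.asarray(tracks, dtype=float)
    detections = np.asarray(detections, dtype=float)
    # distance cost: centroid distance normalized by max_dist, clipped to [0, 1]
    d = np.linalg.norm(tracks[:, None, :2] - detections[None, :, :2], axis=2)
    dist_cost = np.clip(d / max_dist, 0.0, 1.0)
    # area cost: relative blob-area change, clipped to [0, 1]
    a_t = tracks[:, 2][:, None]
    a_d = detections[:, 2][None, :]
    area_cost = np.clip(np.abs(a_t - a_d) / np.maximum(a_t, a_d), 0.0, 1.0)
    cost = W_DIST * dist_cost + W_AREA * area_cost
    rows, cols = linear_sum_assignment(cost)  # globally optimal assignment
    return [(int(r), int(c)) for r, c in zip(rows, cols)
            if cost[r, c] < COST_THRESHOLD]
```

The solver returns the globally cost-minimal pairing; the threshold then rejects implausible assignments so unmatched detections can spawn new tracks instead.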

pyproject.toml

Lines changed: 2 additions & 1 deletion

@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "bplusplus"
-version = "2.0.5"
+version = "2.1.0"
 description = "A simple method to create AI models for biodiversity, with collect and prepare pipeline"
 authors = ["Titus Venverloo <tvenver@mit.edu>", "Deniz Aydemir <deniz@aydemir.us>", "Orlando Closs <orlandocloss@pm.me>", "Ase Hatveit <aase@mit.edu>"]
 license = "MIT"

@@ -14,6 +14,7 @@ ultralytics = "8.3.173"
 pyyaml = "6.0.1"
 tqdm = "4.66.4"
 prettytable = "3.7.0"
+bugspot = {git = "https://github.com/orlandocloss/bugspot.git"}
 # Pillow with platform-specific compatibility
 pillow = [
     # Windows - stable version
