Commit ec20ff3

Merge pull request #319 from JdeRobot/docs-update
Updating Docs
2 parents e9ea02e + ece1f81

11 files changed: +329 −29 lines

README.md

Lines changed: 36 additions & 7 deletions

````diff
@@ -9,14 +9,15 @@
 *DetectionMetrics* is a toolkit designed to unify and streamline the evaluation of perception models across different frameworks and datasets. Looking for our published ***DetectionMetrics v1***? Check out all the [relevant links](#v1) below.
 
-Now, we're excited to introduce ***DetectionMetrics v2***! While retaining the flexibility of our previous release, *DetectionMetrics* has been redesigned with an expanded focus on image and LiDAR segmentation. As we move forward, *v2* will be the actively maintained version, featuring continued updates and enhancements to keep pace with evolving AI and computer vision technologies.
+Now, we're excited to introduce ***DetectionMetrics v2***! While retaining the flexibility of our previous release, *DetectionMetrics* has been redesigned with an expanded focus on image and LiDAR segmentation, and now includes **image object detection** capabilities. As we move forward, *v2* will be the actively maintained version, featuring continued updates and enhancements to keep pace with evolving AI and computer vision technologies.
 
 <table style='font-size:100%; margin: auto;'>
 <tr>
 <th>&#128187; <a href="https://github.com/JdeRobot/DetectionMetrics">Code</a></th>
 <th>&#128295; <a href="https://jderobot.github.io/DetectionMetrics/v2/installation">Installation</a></th>
 <th>&#129513; <a href="https://jderobot.github.io/DetectionMetrics/v2/compatibility">Compatibility</a></th>
 <th>&#128214; <a href="https://jderobot.github.io/DetectionMetrics/py_docs/_build/html/index.html">Docs</a></th>
+<th>&#128187; <a href="https://jderobot.github.io/DetectionMetrics/v2/gui">GUI</a></th>
 </tr>
 </table>

@@ -45,8 +46,8 @@ Now, we're excited to introduce ***DetectionMetrics v2***! While retaining the f
 <tr>
 <td>Object detection</td>
 <td>Image</td>
-<td>Check <a href="https://jderobot.github.io/DetectionMetrics/v1"><i>DetectionMetrics v1</i></a></td>
-<td>Check <a href="https://jderobot.github.io/DetectionMetrics/v1"><i>DetectionMetrics v1</i></a></td>
+<td>COCO, custom formats</td>
+<td>PyTorch</td>
 </tr>
 </tbody>
 </table>

@@ -94,16 +95,44 @@ Install your deep learning framework of preference in your environment. We have
 If you are using LiDAR, Open3D currently requires `torch==2.2*`.
 
 # Usage
-As of now, *DetectionMetrics* can either be used as a Python library or as a command-line application.
+DetectionMetrics can be used in three ways: through the **interactive GUI** (detection only), as a **Python library**, or via the **command-line interface** (segmentation and detection).
 
-### Library
+## Interactive GUI
+The easiest way to get started with DetectionMetrics is through the GUI (detection tasks only):
+
+```bash
+# From the project root directory
+streamlit run app.py
+```
+
+The GUI provides:
+- **Dataset Viewer**: Browse and visualize your datasets
+- **Inference**: Run real-time inference on images
+- **Evaluator**: Perform comprehensive model evaluation
+
+For detailed GUI documentation, see our [GUI guide](https://jderobot.github.io/DetectionMetrics/v2/gui).
+
+## Library
 
 🧑‍🏫️ [Image Segmentation Tutorial](https://github.com/JdeRobot/DetectionMetrics/blob/master/examples/tutorial_image_segmentation.ipynb)
 
+🧑‍🏫️ [Image Detection Tutorial](https://github.com/JdeRobot/DetectionMetrics/blob/master/examples/tutorial_image_detection.ipynb)
+
 You can check the `examples` directory for further inspiration. If you are using *poetry*, you can run the scripts provided either by activating the created environment using `poetry shell` or directly running `poetry run python examples/<some_python_script.py>`.
 
-### Command-line interface
-DetectionMetrics currently provides a CLI with two commands, `dm_evaluate` and `dm_batch`. Thanks to the configuration in the `pyproject.toml` file, we can simply run `poetry install` from the root directory and use them without explicitly invoking the Python files. More details are provided in [DetectionMetrics website](https://jderobot.github.io/DetectionMetrics/v2/usage/#command-line-interface).
+## Command-line interface
+DetectionMetrics provides a CLI with two commands, `dm_evaluate` and `dm_batch`. Thanks to the configuration in the `pyproject.toml` file, we can simply run `poetry install` from the root directory and use them without explicitly invoking the Python files. More details are provided in [DetectionMetrics website](https://jderobot.github.io/DetectionMetrics/v2/usage/#command-line-interface).
+
+### Example Usage
+**Segmentation:**
+```bash
+dm_evaluate segmentation image --model_format torch --model /path/to/model.pt --model_ontology /path/to/ontology.json --model_cfg /path/to/cfg.json --dataset_format rellis3d --dataset_dir /path/to/dataset --dataset_ontology /path/to/ontology.json --out_fname /path/to/results.csv
+```
+
+**Detection:**
+```bash
+dm_evaluate detection image --model_format torch --model /path/to/model.pt --model_ontology /path/to/ontology.json --model_cfg /path/to/cfg.json --dataset_format coco --dataset_dir /path/to/coco/dataset --out_fname /path/to/results.csv
+```
 
 <h1 id="v1">DetectionMetrics v1</h1>
````

detectionmetrics/cli/__init__.py

Lines changed: 17 additions & 0 deletions

````diff
@@ -83,6 +83,10 @@ def get_dataset(
         if labels_dir is None:
             raise ValueError("--labels_dir is required for 'rugd' format")
 
+    elif dataset_format == "coco":
+        if dataset_dir is None:
+            raise ValueError("--dataset_dir is required for 'coco' format")
+
     else:
         raise ValueError(f"Dataset format not supported: {dataset_format}")

@@ -116,6 +120,19 @@ def get_dataset(
             "labels_dir": labels_dir,
             "ontology_fname": ontology,
         }
+    elif dataset_format == "coco":
+        # For COCO, we need to construct the annotation file path and image directory
+        # Assuming standard COCO structure: dataset_dir/annotations/instances_split.json and dataset_dir/images/split/
+        if len(split) > 1:
+            raise ValueError("COCO format currently supports only one split at a time")
+        split_name = split[0]
+        annotation_file = f"{dataset_dir}/annotations/instances_{split_name}2017.json"
+        image_dir = f"{dataset_dir}/images/{split_name}2017"
+        dataset_args = {
+            "annotation_file": annotation_file,
+            "image_dir": image_dir,
+            "split": split_name,
+        }
     else:
         raise ValueError(f"Dataset format not supported: {dataset_format}")
````
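The COCO branch above derives both paths from a single `--dataset_dir` following the standard 2017 layout. A minimal standalone sketch of that convention (`build_coco_paths` is a hypothetical helper written for illustration, not part of DetectionMetrics' API):

```python
# Sketch of the COCO path convention used in the diff above.
# build_coco_paths is a hypothetical helper, not DetectionMetrics' own code.

def build_coco_paths(dataset_dir: str, split: list) -> dict:
    """Derive annotation file and image directory from a standard COCO layout:
    dataset_dir/annotations/instances_<split>2017.json and
    dataset_dir/images/<split>2017/."""
    if len(split) > 1:
        raise ValueError("COCO format currently supports only one split at a time")
    split_name = split[0]
    return {
        "annotation_file": f"{dataset_dir}/annotations/instances_{split_name}2017.json",
        "image_dir": f"{dataset_dir}/images/{split_name}2017",
        "split": split_name,
    }

args = build_coco_paths("/data/coco", ["val"])
```

For example, `build_coco_paths("/data/coco", ["val"])` points at `/data/coco/annotations/instances_val2017.json`, which matches how `torchvision`-style COCO loaders expect the dataset to be arranged.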

detectionmetrics/cli/evaluate.py

Lines changed: 3 additions & 3 deletions

````diff
@@ -17,7 +17,7 @@ def parse_split(ctx, param, value):
 
 
 @click.command(name="evaluate", help="Evaluate model on dataset")
-@click.argument("task", type=click.Choice(["segmentation"], case_sensitive=False))
+@click.argument("task", type=click.Choice(["segmentation", "detection"], case_sensitive=False))
 @click.argument(
     "input_type", type=click.Choice(["image", "lidar"], case_sensitive=False)
 )

@@ -53,7 +53,7 @@ def parse_split(ctx, param, value):
 @click.option(
     "--dataset_format",
     type=click.Choice(
-        ["gaia", "rellis3d", "goose", "generic", "rugd"], case_sensitive=False
+        ["gaia", "rellis3d", "goose", "generic", "rugd", "coco"], case_sensitive=False
     ),
     show_default=True,
     default="gaia",

@@ -67,7 +67,7 @@ def parse_split(ctx, param, value):
 @click.option(
     "--dataset_dir",
     type=click.Path(exists=True, file_okay=False, dir_okay=True),
-    help="Dataset directory (used for 'Rellis3D' and 'Wildscenes' formats)",
+    help="Dataset directory (used for 'Rellis3D', 'Wildscenes', and 'COCO' formats)",
 )
 @click.option(
     "--split_dir",
````

detectionmetrics/models/__init__.py

Lines changed: 7 additions & 0 deletions

````diff
@@ -11,6 +11,13 @@
 except ImportError:
     print("Torch not available")
 
+try:
+    from detectionmetrics.models.torch_detection import TorchImageDetectionModel
+
+    REGISTRY["torch_image_detection"] = TorchImageDetectionModel
+except ImportError:
+    print("Torch detection not available")
+
 try:
     from detectionmetrics.models.tensorflow import TensorflowImageSegmentationModel
````
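The try/except registration above is the optional-dependency registry pattern: each backend is registered only if its imports succeed, so a missing framework degrades gracefully instead of breaking the whole package. A stdlib-only sketch of the idea (the keys and modules below are illustrative stand-ins, not DetectionMetrics' own):

```python
# Minimal sketch of the optional-import registry pattern used above.
# Module and class names here are illustrative, not DetectionMetrics' own.
import importlib

REGISTRY = {}


def try_register(key: str, module_name: str, class_name: str) -> None:
    """Register a class under `key` only if its backing module imports cleanly."""
    try:
        module = importlib.import_module(module_name)
        REGISTRY[key] = getattr(module, class_name)
    except ImportError:
        print(f"{key} not available")


# Stdlib modules stand in for optional deep-learning backends here.
try_register("json_model", "json", "JSONDecoder")     # import succeeds: registered
try_register("missing_model", "no_such_module", "X")  # import fails: skipped
```

Callers then look models up by key (e.g. `"torch_image_detection"`) and can report a clear error for backends that were never registered.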

docs/_config.yml

Lines changed: 3 additions & 0 deletions

````diff
@@ -277,6 +277,9 @@ compress_html:
 
 # Collections
 collections:
+  pages:
+    output: true
+    permalink: /:path/
   portfolio:
     output: true
     permalink: /:collection/:path/
````

docs/_data/navigation.yml

Lines changed: 7 additions & 3 deletions

````diff
@@ -67,12 +67,16 @@ main_v2:
         url: /v2/compatibility/#image-semantic-segmentation
       - title: "LiDAR semantic segmentation"
         url: /v2/compatibility/#lidar-semantic-segmentation
-      - title: "Object detection"
-        url: /v2/compatibility/#object-detection
+      - title: "Image object detection"
+        url: /v2/compatibility/#image-object-detection
   - title: Usage
     url: /v2/usage
     children:
+      - title: "Interactive GUI"
+        url: /v2/usage/#interactive-gui
       - title: "Library"
         url: /v2/usage/#library
       - title: "Command-line interface"
-        url: /v2/usage/#command-line-interface
+        url: /v2/usage/#command-line-interface
+  - title: GUI
+    url: /v2/gui
````

docs/_pages/home.md

Lines changed: 7 additions & 6 deletions

````diff
@@ -15,14 +15,15 @@ excerpt:
 # What is DetectionMetrics?
 *DetectionMetrics* is a toolkit designed to unify and streamline the evaluation of perception models across different frameworks and datasets. Looking for our published ***DetectionMetrics v1***? Check out all the [relevant links](#v1) below.
 
-Now, we're excited to introduce ***DetectionMetrics v2***! While retaining the flexibility of our previous release, *DetectionMetrics* has been redesigned with an expanded focus expanded focus on image and LiDAR segmentation. As we move forward, *v2* will be the actively maintained version, featuring continued updates and enhancements to keep pace with evolving AI and computer vision technologies.
+Now, we're excited to introduce ***DetectionMetrics v2***! While retaining the flexibility of our previous release, *DetectionMetrics* has been redesigned with an expanded focus on image and LiDAR segmentation, and now includes **image object detection** capabilities with an interactive GUI. As we move forward, *v2* will be the actively maintained version, featuring continued updates and enhancements to keep pace with evolving AI and computer vision technologies.
 
 <table class='centered-table'>
 <tr>
 <th>&#128187; <a href="https://github.com/JdeRobot/DetectionMetrics">Code</a></th>
-<th>&#128295; <a href="https://jderobot.github.io/DetectionMetrics/v2/installation">Installation</a></th>
-<th>&#129513; <a href="https://jderobot.github.io/DetectionMetrics/v2/compatibility">Compatibility</a></th>
-<th>&#128214; <a href="https://jderobot.github.io/DetectionMetrics/py_docs/_build/html/index.html">Docs</a></th>
+<th>&#128295; <a href="/v2/installation/">Installation</a></th>
+<th>&#129513; <a href="/v2/compatibility/">Compatibility</a></th>
+<th>&#128214; <a href="/py_docs/_build/html/index.html">Docs</a></th>
+<th>&#128421; <a href="/v2/gui/">GUI</a></th>
 </tr>
 </table>

@@ -53,8 +54,8 @@ Now, we're excited to introduce ***DetectionMetrics v2***! While retaining the f
 <tr>
 <td>Object detection</td>
 <td>Image</td>
-<td>Check <a href="https://jderobot.github.io/DetectionMetrics/v1"><i>DetectionMetrics v1</i></a></td>
-<td>Check <a href="https://jderobot.github.io/DetectionMetrics/v1"><i>DetectionMetrics v1</i></a></td>
+<td>COCO</td>
+<td>PyTorch</td>
 </tr>
 </tbody>
 </table>
````

docs/_pages/v2/compatibility.md

Lines changed: 38 additions & 2 deletions

````diff
@@ -91,5 +91,41 @@ sidebar:
 - Computational cost:
   - Number of parameters, average inference time, model size
 
-## Object detection
-Coming soon.
+## Image object detection
+- Datasets:
+  - **[COCO](https://cocodataset.org/)**: Standard COCO format with JSON annotations and image directory structure
+- Models:
+  - **PyTorch ([TorchScript](https://pytorch.org/docs/stable/jit.html) compiled format and native modules)**:
+    - Input shape: `(batch, channels, height, width)`
+    - Output shape: `(batch, num_detections, 6)` where each detection contains `[x1, y1, x2, y2, confidence, class_id]`
+    - JSON configuration file format:
+
+```json
+{
+    "normalization": {
+        "mean": [<r>, <g>, <b>],
+        "std": [<r>, <g>, <b>]
+    },
+    "resize": { # optional
+        "width": <px>,
+        "height": <px>
+    },
+    "confidence_threshold": <float>,
+    "nms_threshold": <float>,
+    "max_detections_per_image": <int>,
+    "batch_size": <n>,
+    "device": "<cpu|cuda|mps>",
+    "evaluation_step": <int> # for live progress updates during evaluation
+}
+```
+- Metrics:
+  - Mean Average Precision (mAP), including COCO-style mAP@[0.5:0.95:0.05]
+  - Area Under the Precision-Recall Curve (AUC-PR)
+  - Precision, Recall, F1-Score
+  - Per-class metrics and confusion matrices
+- Computational cost:
+  - Number of parameters, average inference time, model size
+- GUI Support:
+  - Real-time inference visualization
+  - Interactive dataset browsing
+  - Progress tracking during evaluation
````
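The `(batch, num_detections, 6)` output layout documented above can be decoded in a few lines of NumPy. A sketch under the assumption that rows follow the documented `[x1, y1, x2, y2, confidence, class_id]` order and that `confidence_threshold` comes from the JSON configuration; `decode_detections` is illustrative, not part of the toolkit:

```python
# Sketch: splitting a (batch, num_detections, 6) detection tensor into
# boxes / scores / class ids and applying the configured confidence
# threshold. decode_detections is illustrative, not a toolkit function.
import numpy as np


def decode_detections(preds: np.ndarray, confidence_threshold: float):
    """preds: (batch, N, 6) rows of [x1, y1, x2, y2, confidence, class_id]."""
    results = []
    for image_preds in preds:                     # iterate over the batch
        keep = image_preds[:, 4] >= confidence_threshold
        kept = image_preds[keep]
        results.append({
            "boxes": kept[:, :4],                 # (M, 4) corner coordinates
            "scores": kept[:, 4],                 # (M,) confidences
            "class_ids": kept[:, 5].astype(int),  # (M,) integer labels
        })
    return results


batch = np.array([[
    [10.0, 20.0, 50.0, 80.0, 0.90, 3],
    [ 0.0,  0.0,  5.0,  5.0, 0.10, 1],  # below threshold: dropped
]])
decoded = decode_detections(batch, confidence_threshold=0.5)
```

Per-image dictionaries in this shape are also the form that metrics such as mAP and AUC-PR are typically computed from, one score/label pair per surviving box.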
