diff --git a/examples/models/yolo12/.gitignore b/examples/models/yolo12/.gitignore
new file mode 100644
index 00000000000..02deda29710
--- /dev/null
+++ b/examples/models/yolo12/.gitignore
@@ -0,0 +1,3 @@
+*.pt
+*.pte
+*.ptd
diff --git a/examples/models/yolo12/README.md b/examples/models/yolo12/README.md
index 2260afa5dde..1a54f1a4a16 100644
--- a/examples/models/yolo12/README.md
+++ b/examples/models/yolo12/README.md
@@ -1,10 +1,11 @@
# YOLO12 Detection C++ Inference with ExecuTorch
-This example demonstrates how to perform inference of [Ultralytics YOLO12 family](https://docs.ultralytics.com/models/yolo12/) detection models in C++ leveraging the Executorch backends:
+This example demonstrates how to run inference with [YOLO12 family](https://docs.ultralytics.com/models/yolo12/) detection models in C++ using the ExecuTorch backends:
+
- [OpenVINO](../../../backends/openvino/README.md)
- [XNNPACK](../../../backends/xnnpack/README.md)
-# Performance Evaluation
+## Performance Evaluation
| CPU | Model | Backend | Device | Precision | Average Latency, ms |
|--------------------------------|---------|----------|--------|-----------|---------------------|
@@ -17,8 +18,7 @@ This example demonstrates how to perform inference of [Ultralytics YOLO12 family
| Intel(R) Core(TM) Ultra 7 155H | yolo12s | xnnpack | CPU | FP32 | 169.36 |
| Intel(R) Core(TM) Ultra 7 155H | yolo12l | xnnpack | CPU | FP32 | 436.876 |
-
-# Instructions
+## Instructions
### Step 1: Install ExecuTorch
@@ -31,35 +31,36 @@ To install ExecuTorch, follow this [guide](https://pytorch.org/executorch/stable
### Step 3: Install the demo requirements
-
Python demo requirements:
+
```bash
python -m pip install -r examples/models/yolo12/requirements.txt
```
-Demo infenrece dependency - OpenCV library:
+Demo inference dependency - the OpenCV library:
https://opencv.org/get-started/
-
-
-### Step 4: Export the Yolo12 model to the ExecuTorch
+
+### Step 4: Export the YOLO12 model to ExecuTorch
OpenVINO:
+
```bash
python export_and_validate.py --model_name yolo12s --input_dims=[1920,1080] --backend openvino --device CPU
```
OpenVINO quantized model:
+
```bash
python export_and_validate.py --model_name yolo12s --input_dims=[1920,1080] --backend openvino --quantize --video_input /path/to/calibration/video --device CPU
```
XNNPACK:
+
```bash
python export_and_validate.py --model_name yolo12s --input_dims=[1920,1080] --backend xnnpack
```
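The `--input_dims` value above is passed as a bracketed pair. A minimal sketch of parsing such an argument (illustrative only — `parse_input_dims` is a hypothetical helper, not the script's actual implementation):

```python
import ast

def parse_input_dims(arg: str) -> tuple[int, int]:
    """Parse a '[width,height]' style CLI value into a pair of ints."""
    dims = ast.literal_eval(arg)
    if not (isinstance(dims, (list, tuple)) and len(dims) == 2):
        raise ValueError(f"expected two dimensions, got {arg!r}")
    return int(dims[0]), int(dims[1])

print(parse_input_dims("[1920,1080]"))  # -> (1920, 1080)
```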
-> **_NOTE:_** Quantization for XNNPACK backend is WIP. Please refere to https://github.com/pytorch/executorch/issues/11523 for more details.
+> **_NOTE:_** Quantization for the XNNPACK backend is WIP. Please refer to https://github.com/pytorch/executorch/issues/11523 for more details.
-Exported model could be validated using the `--validate` key:
+The exported model can be validated using the `--validate` flag:
@@ -70,8 +71,8 @@ python export_and_validate.py --model_name yolo12s --backend ... --validate data
A list of available datasets and instructions on how to use a custom dataset can be found [here](https://docs.ultralytics.com/datasets/detect/).
Validation only supports the default `--input_dims`; please do not specify this parameter when using the `--validate` flag.
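`export_and_validate.py` uses a default stride of 32 for padding, and YOLO-style detectors generally expect input dimensions that are multiples of the stride. A hedged sketch of that rounding (`round_up_to_stride` is an illustrative helper, not part of the script):

```python
def round_up_to_stride(dim: int, stride: int = 32) -> int:
    """Round a spatial dimension up to the nearest multiple of the stride."""
    return ((dim + stride - 1) // stride) * stride

# e.g. for a 1920x1080 input, the height is padded up to the next multiple of 32
print(round_up_to_stride(1920), round_up_to_stride(1080))  # -> 1920 1088
```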
-
-To get a full parameters description please use the following command:
+To get a full description of the parameters, use the following command:
+
```bash
python export_and_validate.py --help
```
@@ -103,11 +104,11 @@ make -j$(nproc)
```
-To get a full parameters description please use the following command:
+To get a full description of the parameters, use the following command:
-```
+
+```bash
./build/Yolo12DetectionDemo --help
```
+## Credits
-# Credits:
-
-Ultralytics examples: https://github.com/ultralytics/ultralytics/tree/main/examples
+Ultralytics examples: https://github.com/ultralytics/ultralytics/tree/main/examples
diff --git a/examples/models/yolo12/export_and_validate.py b/examples/models/yolo12/export_and_validate.py
index e2349fb6434..ccd0db76d7d 100644
--- a/examples/models/yolo12/export_and_validate.py
+++ b/examples/models/yolo12/export_and_validate.py
@@ -35,7 +35,7 @@
from ultralytics.data.utils import check_det_dataset
from ultralytics.engine.validator import BaseValidator as Validator
-from ultralytics.utils.torch_utils import de_parallel
+from ultralytics.utils.torch_utils import unwrap_model
class CV2VideoIter:
@@ -293,7 +293,7 @@ def _prepare_validation(
stride = 32 # default stride
validator.stride = stride # used in get_dataloader() for padding
validator.data = check_det_dataset(dataset_yaml_path)
- validator.init_metrics(de_parallel(model))
+ validator.init_metrics(unwrap_model(model))
data_loader = validator.get_dataloader(
validator.data.get(validator.args.split), validator.args.batch