Merged
3 changes: 3 additions & 0 deletions examples/models/yolo12/.gitignore
@@ -0,0 +1,3 @@
*.pt
*.pte
*.ptd
31 changes: 16 additions & 15 deletions examples/models/yolo12/README.md
@@ -1,10 +1,11 @@
# YOLO12 Detection C++ Inference with ExecuTorch

This example demonstrates how to perform inference of [Ultralytics YOLO12 family](https://docs.ultralytics.com/models/yolo12/) detection models in C++ leveraging the Executorch backends:
This example demonstrates how to perform inference of [YOLO12 family](https://docs.ultralytics.com/models/yolo12/) detection models in C++ leveraging the ExecuTorch backends:

- [OpenVINO](../../../backends/openvino/README.md)
- [XNNPACK](../../../backends/xnnpack/README.md)

# Performance Evaluation
## Performance Evaluation

| CPU | Model | Backend | Device | Precision | Average Latency, ms |
|--------------------------------|---------|----------|--------|-----------|---------------------|
@@ -17,8 +18,7 @@ This example demonstrates how to perform inference of [Ultralytics YOLO12 family
| Intel(R) Core(TM) Ultra 7 155H | yolo12s | xnnpack | CPU | FP32 | 169.36 |
| Intel(R) Core(TM) Ultra 7 155H | yolo12l | xnnpack | CPU | FP32 | 436.876 |


# Instructions
## Instructions

### Step 1: Install ExecuTorch

@@ -31,35 +31,36 @@ To install ExecuTorch, follow this [guide](https://pytorch.org/executorch/stable

### Step 3: Install the demo requirements


Python demo requirements:

```bash
python -m pip install -r examples/models/yolo12/requirements.txt
```

Demo inference dependency - OpenCV library:
https://opencv.org/get-started/


### Step 4: Export the Yolo12 model to the ExecuTorch
<https://opencv.org/get-started/>

### Step 4: Export the YOLO12 model to the ExecuTorch

OpenVINO:

```bash
python export_and_validate.py --model_name yolo12s --input_dims=[1920,1080] --backend openvino --device CPU
```

OpenVINO quantized model:

```bash
python export_and_validate.py --model_name yolo12s --input_dims=[1920,1080] --backend openvino --quantize --video_input /path/to/calibration/video --device CPU
```

XNNPACK:

```bash
python export_and_validate.py --model_name yolo12s --input_dims=[1920,1080] --backend xnnpack
```

> **_NOTE:_** Quantization for XNNPACK backend is WIP. Please refere to https://github.com/pytorch/executorch/issues/11523 for more details.
> **_NOTE:_** Quantization for the XNNPACK backend is WIP. Please refer to <https://github.com/pytorch/executorch/issues/11523> for more details.
The exported model can be validated using the `--validate` flag:

@@ -70,8 +71,8 @@ python export_and_validate.py --model_name yolo12s --backend ... --validate data
A list of available datasets and instructions on how to use a custom dataset can be found [here](https://docs.ultralytics.com/datasets/detect/).
Validation only supports the default `--input_dims`; please do not specify this parameter when using the `--validate` flag.
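
Validation reports Ultralytics' standard detection metrics (mAP), which are built on the intersection-over-union (IoU) of predicted and ground-truth boxes. As an illustration only (a minimal sketch, not code from this repository), the IoU of two boxes in the xyxy convention can be computed as:

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes.

    Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2 (the xyxy
    convention used for detection outputs).
    """
    # Overlap rectangle (degenerates to zero area if the boxes are disjoint).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7 ≈ 0.1429
```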


To get a full description of the parameters, please use the following command:

```bash
python export_and_validate.py --help
```
@@ -103,11 +104,11 @@ make -j$(nproc)
```

To get a full description of the parameters, please use the following command:
```

```bash
./build/Yolo12DetectionDemo --help
```

## Credits

# Credits:

Ultralytics examples: https://github.com/ultralytics/ultralytics/tree/main/examples
Ultralytics examples: <https://github.com/ultralytics/ultralytics/tree/main/examples>
4 changes: 2 additions & 2 deletions examples/models/yolo12/export_and_validate.py
@@ -35,7 +35,7 @@

from ultralytics.data.utils import check_det_dataset
from ultralytics.engine.validator import BaseValidator as Validator
from ultralytics.utils.torch_utils import de_parallel
from ultralytics.utils.torch_utils import unwrap_model


class CV2VideoIter:
@@ -293,7 +293,7 @@ def _prepare_validation(
stride = 32 # default stride
validator.stride = stride # used in get_dataloader() for padding
validator.data = check_det_dataset(dataset_yaml_path)
validator.init_metrics(de_parallel(model))
validator.init_metrics(unwrap_model(model))

data_loader = validator.get_dataloader(
validator.data.get(validator.args.split), validator.args.batch