docs/source/build-run-openvino.md

OpenVINO backend supports the following hardware:

- Intel discrete GPUs
- Intel NPUs

For more information on the supported hardware, please refer to the [OpenVINO System Requirements](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino/system-requirements.html) page.

## Instructions for Building OpenVINO Backend

### Prerequisites

Before you begin, ensure you have OpenVINO installed and configured on your system:
Note: The OpenVINO backend is not yet supported with the current OpenVINO release packages. It is recommended to build from source. The instructions for using OpenVINO release packages will be added soon.
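
For a source build or an archive install of OpenVINO, "configured" typically means sourcing the environment script so the toolchain can find the OpenVINO libraries. This is a sketch only; the install prefix below is an example, not a path this guide prescribes:

```shell
# Make OpenVINO libraries and headers visible to this shell session.
# /opt/intel/openvino is the conventional archive location -- adjust it
# to wherever your OpenVINO build or install actually lives.
source /opt/intel/openvino/setupvars.sh
```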

### Setup

Follow the steps below to set up your build environment:

3. Navigate to `scripts/` directory.

4. **Build OpenVINO Backend**: Once the prerequisites are in place, run the `openvino_build.sh` script to start the build process. The OpenVINO backend will be built under `cmake-out/backends/openvino/` as `libopenvino_backend.a`.

```bash
./openvino_build.sh
```

## Build Instructions for Examples

### AOT step

Refer to the [README.md](../../examples/openvino/README.md) in the `executorch/examples/openvino` folder for detailed instructions on exporting deep learning models from various model suites (TIMM, Torchvision, Hugging Face) to the OpenVINO backend using ExecuTorch. Users can dynamically specify the model, input shape, and target device.


Below is an example to export a ResNet50 model from the Torchvision model suite for the CPU device with an input shape of `[1, 3, 256, 256]`. The exported model will be saved as `resnet50.pte` in the current directory.

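
The export command itself is not captured here, so the following is an illustrative sketch only: the script name `export.py` is an assumption, and the flag names are taken from an earlier revision of this page and may have changed. Check the examples README for the actual entry point and interface.

```shell
# Hypothetical export invocation -- script name and flags are assumptions;
# see executorch/examples/openvino/README.md for the real interface.
python export.py \
  --suite torchvision \
  --model resnet50 \
  --input_shape "[1, 3, 256, 256]" \
  --device CPU
```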
### Build C++ OpenVINO Examples
After building the OpenVINO backend following the [instructions](#setup) above, the executable will be saved in `<executorch_root>/cmake-out/backends/openvino/`.
The executable requires a model file (the `.pte` file generated in the AOT step) and the number of inference executions.
#### Example Usage
Run inference with a given model for 10 executions:
```bash
./openvino_executor_runner \
    --model_path=model.pte \
    --num_executions=10
```
## Support
If you encounter any issues while reproducing this tutorial, please file a GitHub issue on the ExecuTorch repo and use the `#openvino` tag.