Commit dc2c72e

Merge pull request #36 from cavusmustafa/updates_for_second_review

Updates for second review

2 parents: 82866db + 15178ce

File tree: 2 files changed (+13, -71 lines)

backends/openvino/README.md

Lines changed: 1 addition & 2 deletions
@@ -42,8 +42,6 @@ executorch
 
 Before you begin, ensure you have openvino installed and configured on your system:
 
-## TODO: Add instructions for support with OpenVINO release package
-
 ```bash
 git clone https://github.com/openvinotoolkit/openvino.git
 cd openvino && git checkout releases/2025/1
@@ -57,6 +55,7 @@ cmake --install build --prefix <your_preferred_install_location>
 cd <your_preferred_install_location>
 source setupvars.sh
 ```
+Note: The OpenVINO backend is not yet supported with the current OpenVINO release packages. It is recommended to build from source. The instructions for using OpenVINO release packages will be added soon.
 
 ### Setup
 
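
The README keeps the source-build flow: clone OpenVINO, check out `releases/2025/1`, install to a prefix, and `source setupvars.sh` (the same flow appears in `docs/source/build-run-openvino.md` below). As a quick sanity check that the environment is actually configured, a sketch like the one below may help; `INTEL_OPENVINO_DIR` is exported by `setupvars.sh`, while the Python check assumes the OpenVINO Python bindings were built and installed, which a C++-only source build does not guarantee.

```bash
# Hedged sanity check for the install steps shown in the diff above.
# <your_preferred_install_location> is the prefix passed to `cmake --install`.
cd <your_preferred_install_location>
source setupvars.sh

# setupvars.sh exports INTEL_OPENVINO_DIR on success.
echo "OpenVINO environment: ${INTEL_OPENVINO_DIR:-not set}"

# Optional: only meaningful if the Python bindings are installed.
python3 -c "import openvino as ov; print(ov.get_version())"
```
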
docs/source/build-run-openvino.md

Lines changed: 12 additions & 69 deletions
@@ -31,13 +31,14 @@ OpenVINO backend supports the following hardware:
 - Intel discrete GPUs
 - Intel NPUs
 
+For more information on the supported hardware, please refer to the [OpenVINO System Requirements](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino/system-requirements.html) page.
+
 ## Instructions for Building OpenVINO Backend
 
 ### Prerequisites
 
 Before you begin, ensure you have openvino installed and configured on your system:
 
-#### TODO: Add instructions for support with OpenVINO release package
 
 ```bash
 git clone https://github.com/openvinotoolkit/openvino.git
@@ -52,6 +53,7 @@ cmake --install build --prefix <your_preferred_install_location>
 cd <your_preferred_install_location>
 source setupvars.sh
 ```
+Note: The OpenVINO backend is not yet supported with the current OpenVINO release packages. It is recommended to build from source. The instructions for using OpenVINO release packages will be added soon.
 
 ### Setup
 
@@ -67,7 +69,7 @@ Follow the steps below to setup your build environment:
 
 3. Navigate to `scripts/` directory.
 
-4. **Build OpenVINO Backend**: Once the prerequisites are in place, run the `openvino_build.sh` script to start the build process; the OpenVINO backend will be built under `cmake-openvino-out/backends/openvino/` as `libopenvino_backend.so`
+4. **Build OpenVINO Backend**: Once the prerequisites are in place, run the `openvino_build.sh` script to start the build process; the OpenVINO backend will be built under `cmake-out/backends/openvino/` as `libopenvino_backend.a`
 
 ```bash
 ./openvino_build.sh
@@ -76,94 +78,35 @@ Follow the steps below to setup your build environment:
 ## Build Instructions for Examples
 
 ### AOT step:
-Refer to the [README.md](../../examples/openvino/aot/README.md) in the `executorch/examples/openvino/aot` folder for detailed instructions on exporting deep learning models from various model suites (TIMM, Torchvision, Hugging Face) to the OpenVINO backend using ExecuTorch. Users can dynamically specify the model, input shape, and target device.
+Refer to the [README.md](../../examples/openvino/README.md) in the `executorch/examples/openvino` folder for detailed instructions on exporting deep learning models from various model suites (TIMM, Torchvision, Hugging Face) to the OpenVINO backend using ExecuTorch. Users can dynamically specify the model, input shape, and target device.
 
 Below is an example to export a ResNet50 model from the Torchvision model suite for the CPU device with an input shape of `[1, 3, 256, 256]`
 
 ```bash
-cd executorch/examples/openvino/aot
-python aot_openvino_compiler.py --suite torchvision --model resnet50 --input_shape "(1, 3, 256, 256)" --device CPU
+cd executorch/examples/openvino
+python aot_optimize_and_infer.py --export --suite torchvision --model resnet50 --input_shape "(1, 3, 256, 256)" --device CPU
 ```
 The exported model will be saved as 'resnet50.pte' in the current directory.
 
-#### **Arguments**
-- **`--suite`** (required):
-  Specifies the model suite to use.
-  Supported values:
-  - `timm` (e.g., VGG16, ResNet50)
-  - `torchvision` (e.g., resnet18, mobilenet_v2)
-  - `huggingface` (e.g., bert-base-uncased)
-
-- **`--model`** (required):
-  Name of the model to export.
-  Examples:
-  - For `timm`: `vgg16`, `resnet50`
-  - For `torchvision`: `resnet18`, `mobilenet_v2`
-  - For `huggingface`: `bert-base-uncased`, `distilbert-base-uncased`
-
-- **`--input_shape`** (required):
-  Input shape for the model. Provide this as a **list** or **tuple**.
-  Examples:
-  - `[1, 3, 224, 224]` (Zsh users: wrap in quotes)
-  - `(1, 3, 224, 224)`
-
-- **`--device`** (optional):
-  Target device for the compiled model. Default is `CPU`.
-  Examples: `CPU`, `GPU`
-
 ### Build C++ OpenVINO Examples
-Build the backend and the examples by executing the script:
-```bash
-./openvino_build_example.sh
-```
-The executable is saved in `<executorch_root>/cmake-openvino-out/examples/openvino/`
-
-Now, run the example using the executable generated in the above step. The executable requires a model file (the `.pte` file generated in the AOT step), the number of inference iterations, and optional input/output paths.
-
-#### Command Syntax:
-
-```
-cd ../../cmake-openvino-out/examples/openvino
 
-./openvino_executor_runner \
-    --model_path=<path_to_model> \
-    --num_iter=<iterations> \
-    [--input_list_path=<path_to_input_list>] \
-    [--output_folder_path=<path_to_output_folder>]
-```
-#### Command-Line Arguments
+After building the OpenVINO backend following the [instructions](#setup) above, the executable will be saved in `<executorch_root>/cmake-out/backends/openvino/`.
 
-- `--model_path`: (Required) Path to the model serialized in `.pte` format.
-- `--num_iter`: (Optional) Number of times to run inference (default: 1).
-- `--input_list_path`: (Optional) Path to a file containing the list of raw input tensor files.
-- `--output_folder_path`: (Optional) Path to a folder where output tensor files will be saved.
+The executable requires a model file (the `.pte` file generated in the AOT step) and the number of inference executions.
 
 #### Example Usage
 
-Run inference with a given model for 10 iterations and save outputs:
-
-```
-./openvino_executor_runner \
-    --model_path=model.pte \
-    --num_iter=10 \
-    --output_folder_path=outputs/
-```
-
-Run inference with an input tensor file:
+Run inference with a given model for 10 executions:
 
 ```
 ./openvino_executor_runner \
     --model_path=model.pte \
-    --num_iter=5 \
-    --input_list_path=input_list.txt \
-    --output_folder_path=outputs/
+    --num_executions=10
 ```
 
-## Supported model list
 
-### TODO
 
-## FAQ
+## Support
 
 If you encounter any issues while reproducing the tutorial, please file a GitHub
 issue on the ExecuTorch repo and use the `#openvino` tag.
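
Taken together, the updated document now describes a shorter end-to-end flow: build the backend, export a model ahead of time, then run it with the executor runner. The sketch below strings the documented commands together; the script names, flags, and output locations come from the diff above, while the location of the `scripts/` directory and the relative model path are illustrative assumptions.

```bash
# Illustrative end-to-end flow assembled from the updated docs; not a
# verbatim recipe. <executorch_root> is a placeholder for your checkout.

# 1. Build the OpenVINO backend; per the diff, the library is produced at
#    cmake-out/backends/openvino/libopenvino_backend.a.
cd <executorch_root>/backends/openvino/scripts   # assumed location of scripts/
./openvino_build.sh

# 2. AOT step: export ResNet50 (Torchvision) for CPU; writes resnet50.pte
#    into the current directory.
cd <executorch_root>/examples/openvino
python aot_optimize_and_infer.py --export --suite torchvision \
    --model resnet50 --input_shape "(1, 3, 256, 256)" --device CPU

# 3. Run the exported model for 10 inference executions with the runner
#    built in step 1 (path per the updated docs).
cd <executorch_root>/cmake-out/backends/openvino
./openvino_executor_runner \
    --model_path=../../examples/openvino/resnet50.pte \
    --num_executions=10
```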
