backends/openvino/README.md
For more information on the supported hardware, please refer to the OpenVINO System Requirements.

```
executorch
├── backends
│   └── openvino
│       ├── quantizer
│       │   ├── observers
│       │   │   └── nncf_observers.py
│       │   ├── __init__.py
│       │   └── quantizer.py
│       ├── runtime
│       │   ├── OpenvinoBackend.cpp
│       │   └── OpenvinoBackend.h
```
Before you begin, ensure you have OpenVINO installed and configured on your system.

### Use OpenVINO from Release Packages

1. Download the OpenVINO release package from [here](https://docs.openvino.ai/2025/get-started/install-openvino.html). Make sure to select your configuration and click on **OpenVINO Archives** under the distribution section to download the appropriate archive for your platform.

2. Extract the release package from the archive and set the environment variables.

   ```bash
   tar -zxf openvino_toolkit_<your_release_configuration>.tgz
   cd openvino_toolkit_<your_release_configuration>
   source setupvars.sh
   ```

### Build OpenVINO from Source

```bash
git clone https://github.com/openvinotoolkit/openvino.git
cd openvino
git submodule update --init --recursive
sudo ./install_build_dependencies.sh
mkdir build && cd build
# ... configure, build, and install OpenVINO into your preferred location, then:
cd <your_preferred_install_location>
source setupvars.sh
```
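As an optional sanity check (assuming the OpenVINO Python bindings are also installed in your environment), you can confirm that the environment variables are set correctly by listing the devices OpenVINO can see:

```bash
python -c "from openvino import Core; print(Core().available_devices)"
```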
For more information about OpenVINO build, refer to the [OpenVINO Build Instructions](https://github.com/openvinotoolkit/openvino/blob/master/docs/dev/build_linux.md).
### Setup

Follow the steps below to set up your build environment:
1. **Create a Virtual Environment**

   - Create a virtual environment and activate it by executing the commands below.

   ```bash
   python -m venv env
   source env/bin/activate
   ```
2. **Clone ExecuTorch Repository from Github**

   - Clone the ExecuTorch repository by executing the command below (the default upstream URL is assumed here; check out a specific branch or tag if your workflow requires one).
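   ```bash
   # Clone the upstream ExecuTorch sources and enter the repository root
   git clone https://github.com/pytorch/executorch.git
   cd executorch
   ```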
3. **Build ExecuTorch with OpenVINO Backend**

   - Ensure that you are inside the `executorch/backends/openvino/scripts` directory. The following command builds and installs ExecuTorch with the OpenVINO backend, and also compiles the C++ runtime libraries and binaries into `<executorch_root>/cmake-out` for quick inference testing.

   ```bash
   ./openvino_build.sh
   ```

   - Optionally, the `openvino_build.sh` script can be used to build the Python package or the C++ libraries/binaries separately, as described below.
   **Build OpenVINO Backend Python Package with Pybindings**: To build and install the OpenVINO backend Python package with Python bindings, run the `openvino_build.sh` script with the `--enable_python` argument, as shown in the command below. This compiles and installs the ExecuTorch Python package with the OpenVINO backend into your Python environment, and also enables the Python bindings required to execute the OpenVINO backend tests and the `aot_optimize_and_infer.py` script inside the `executorch/examples/openvino` folder.

   ```bash
   ./openvino_build.sh --enable_python
   ```
   **Build C++ Runtime Libraries for OpenVINO Backend**: Run the `openvino_build.sh` script with the `--cpp_runtime` flag to build the C++ runtime libraries, as shown in the command below. The compiled library files and binaries can be found in the `<executorch_root>/cmake-out` directory. The binary located at `<executorch_root>/cmake-out/executor_runner` can be used to run inference with vision models.

   ```bash
   ./openvino_build.sh --cpp_runtime
   ```
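   For example, a model that has been exported and lowered for the OpenVINO backend can be run with the executor runner; `model.pte` below is a placeholder for your exported file:

   ```bash
   ./cmake-out/executor_runner --model_path=model.pte
   ```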
   **Build C++ Llama Runner**: First, ensure the C++ runtime libraries are built by following the earlier instructions. Then, run the `openvino_build.sh` script with the `--llama_runner` flag to compile the Llama runner, as shown in the command below. This enables executing inference with models exported using `export_llama`. The resulting binary is located at `<executorch_root>/cmake-out/examples/models/llama/llama_main`.

   ```bash
   ./openvino_build.sh --llama_runner
   ```
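   As a usage sketch, assuming a Llama model exported with `export_llama` to `llama.pte` and its matching tokenizer file (both file names are placeholders):

   ```bash
   ./cmake-out/examples/models/llama/llama_main \
       --model_path=llama.pte \
       --tokenizer_path=tokenizer.model \
       --prompt="What is the capital of France?"
   ```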
For more information about ExecuTorch environment setup, refer to the [Environment Setup](https://pytorch.org/executorch/main/getting-started-setup#environment-setup) guide.