OpenVINO™ Execution Provider for ONNXRuntime 5.0
Description:
The OpenVINO™ Execution Provider for ONNXRuntime v5.0 release is based on the latest OpenVINO™ 2023.0 release and the ONNXRuntime 1.15 release.
For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html
Announcements:
- OpenVINO™ version upgraded to 2023.0.0. This provides functional bug fixes and capability changes over the previous 2022.3.0 release.
- This release supports ONNXRuntime 1.15 with the latest OpenVINO™ 2023.0 release.
- Hassle-free experience for OVEP Python developers on the Windows platform: a simple pip install is all that is required on Windows now.
- Complete full-model support for Stable Diffusion with dynamic shapes on CPU/GPU.
- Improved first-inference latency (FIL) with a custom OpenVINO API for model loading.
- Model caching is now generic across all accelerators. Kernel caching is enabled for partially supported models.
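As a sketch of the generic model caching noted above: the OpenVINO Execution Provider exposes a `cache_dir` provider option that points the provider at a cache directory. The snippet below only builds the provider configuration; the directory name and device value are illustrative placeholders, and the session creation (which requires onnxruntime-openvino and a model file) is shown in comments.

```python
# Sketch: enabling OpenVINO EP caching via the cache_dir provider option.
# "ov_cache" and "GPU_FP32" are illustrative values, not required ones.
providers = ["OpenVINOExecutionProvider"]
provider_options = [{
    "device_type": "GPU_FP32",  # target accelerator; caching is generic across devices
    "cache_dir": "ov_cache",    # directory where the provider writes its cache
}]

# Session creation (requires onnxruntime-openvino and a model file):
# import onnxruntime as ort
# sess = ort.InferenceSession("model.onnx", providers=providers,
#                             provider_options=provider_options)
```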
Please refer to the OpenVINO™ Execution Provider for ONNXRuntime build instructions for information on system prerequisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino
Samples:
https://github.com/microsoft/onnxruntime-inference-examples
Python Package:
https://pypi.org/project/onnxruntime-openvino/
Installation and usage instructions on Windows:
pip install onnxruntime-openvino
pip install openvino
Then add these two lines to your application code:
import onnxruntime.tools.add_openvino_win_libs as utils
utils.add_openvino_libs_to_path()
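Putting the two lines above into context, a minimal application might look like the following sketch. The model path and input name are placeholder assumptions, and the imports are kept inside the helper so it can be defined without the packages present:

```python
# Sketch of a minimal Windows application using the two setup lines above.
def create_openvino_session(model_path):
    """Create an InferenceSession backed by the OpenVINO Execution Provider."""
    # Local imports: requires onnxruntime-openvino and openvino pip packages.
    import onnxruntime as ort
    import onnxruntime.tools.add_openvino_win_libs as utils

    utils.add_openvino_libs_to_path()  # make the pip-installed OpenVINO DLLs discoverable
    return ort.InferenceSession(model_path,
                                providers=["OpenVINOExecutionProvider"])

# Usage (paths and input name are placeholders):
#   sess = create_openvino_session("model.onnx")
#   outputs = sess.run(None, {"input": input_tensor})
```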
ONNXRuntime API usage:
Please refer to the link below for Python/C++ APIs:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options
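The configuration options linked above are passed per provider when a session is created. A hedged sketch of wiring them up from Python follows; the option names `device_type` and `num_of_threads` are taken from the linked documentation, while the helper function and the values shown are illustrative assumptions:

```python
# Sketch: building OVEP provider options (names per the linked docs; values illustrative).
def openvino_provider_config(device_type="CPU_FP32", num_threads=8):
    """Return (providers, provider_options) lists for ort.InferenceSession."""
    return (
        ["OpenVINOExecutionProvider"],
        # Provider option values are passed as strings.
        [{"device_type": device_type, "num_of_threads": str(num_threads)}],
    )

# Usage (requires onnxruntime-openvino and a model file; path is a placeholder):
# import onnxruntime as ort
# providers, opts = openvino_provider_config("GPU_FP16")
# sess = ort.InferenceSession("model.onnx", providers=providers,
#                             provider_options=opts)
```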