
OpenVINO EP v4.3 Release for ONNX Runtime & OpenVINO 2022.3


@preetha-intel released this 03 Apr 11:58

Description:
OpenVINO™ Execution Provider for ONNXRuntime v4.3 release, based on the latest OpenVINO™ 2022.3 release.

For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html

Announcements:

  • OpenVINO™ version upgraded to 2022.3.0, which provides functional bug fixes and capability changes over the previous 2022.2.0 release.
  • This release supports ONNXRuntime with the latest OpenVINO™ 2022.3 release.
  • Improved first-inference latency for the ONNXRuntime OpenVINO™ Execution Provider.
  • Model caching, along with kernel caching, is enabled for GPU.
  • Minor bug fixes and code refactoring.
  • Migrated to the OpenVINO™ 2.0 APIs; removed support for the OpenVINO™ 1.0 APIs (v2021.3 and v2021.4).
  • Backward compatibility support for older OpenVINO™ versions (OV 2022.1 and OV 2022.2) is available.
  • The model caching options use_compile_network and blob_dump_path are replaced with a single cache_dir option in the session creation API.
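As a minimal sketch of the caching change above, the snippet below builds provider options that pass the single cache_dir option at session creation. The device_type value, model path, and cache directory here are illustrative placeholders; running create_session requires the onnxruntime-openvino package from PyPI.

```python
# Hedged sketch: configuring the OpenVINO Execution Provider with the
# single cache_dir option that replaces use_compile_network and
# blob_dump_path. Paths and device_type below are example values.

def openvino_provider_config(cache_dir):
    """Build the provider_options list for the OpenVINO EP."""
    return [{
        "device_type": "CPU_FP32",  # example target device
        "cache_dir": cache_dir,     # single caching option in this release
    }]

def create_session(model_path, cache_dir):
    """Create an ONNXRuntime session using the OpenVINO EP.

    Requires the onnxruntime-openvino package to be installed.
    """
    import onnxruntime as ort
    return ort.InferenceSession(
        model_path,
        providers=["OpenVINOExecutionProvider"],
        provider_options=openvino_provider_config(cache_dir),
    )
```

For example, create_session("model.onnx", "/tmp/ov_cache") would compile the model through OpenVINO™ and store the compiled blobs under the given cache directory for faster subsequent session creation.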

Build steps:
Please refer to the OpenVINO™ Execution Provider for ONNXRuntime build instructions for information on system prerequisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino

Samples:
https://github.com/microsoft/onnxruntime-inference-examples

Python Package:
https://pypi.org/project/onnxruntime-openvino/

ONNXRuntime APIs usage:
Please refer to the link below for Python/C++ APIs:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options