OpenVINO EP v4.2 Release for ONNX Runtime & OpenVINO 2022.2

@sfatimar sfatimar released this 04 Oct 03:07
· 6732 commits to master since this release
6c63c1c

Description:
OpenVINO™ Execution Provider For ONNXRuntime v4.2 Release based on the latest OpenVINO™ 2022.2 Release.

For the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html

Announcements:

  • OpenVINO™ version upgraded to 2022.2.0, bringing functional bug fixes and capability changes over the previous 2022.1.0 release.
    This release supports ONNXRuntime with the latest OpenVINO™ 2022.2 release.

  • NEW: Preview support for Intel’s discrete graphics cards, Intel® Data Center GPU Flex Series and Intel® Arc™ GPU, for DL inferencing workloads in intelligent-cloud, edge, and media-analytics deployments.

  • NEW: Support for Intel 13th Gen Core Processor for desktop (code-named Raptor Lake).

  • Exhaustive coverage of ONNX operator unit tests and Python tests for the GPU plugin.

  • Support for INT8 QDQ models generated with NNCF.

  • Backward compatibility support for older OpenVINO™ versions (OV 2022.1, OV 2021.4) is available.

New features added:
CPU FP16: Support for the CPU FP16 precision type.
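The new precision type is selected through the OpenVINO EP's `device_type` provider option. A minimal sketch, assuming the option string is `CPU_FP16` (by analogy with the existing `CPU_FP32` and `GPU_FP16` values); the session call is left commented so the snippet stands alone without the `onnxruntime-openvino` wheel:

```python
# Sketch: request CPU FP16 precision via the OpenVINO EP's device_type
# option. "CPU_FP16" is assumed by analogy with "CPU_FP32"/"GPU_FP16";
# check the configuration-options page for the exact string.
openvino_options = {"device_type": "CPU_FP16"}
providers = [("OpenVINOExecutionProvider", openvino_options)]

# With the onnxruntime-openvino wheel installed, a session would be built as:
# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=providers)
```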

Build steps:
Please refer to the OpenVINO™ Execution Provider For ONNXRuntime build instructions for information on system prerequisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino
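As a concrete illustration, a from-source Linux build that enables the OpenVINO EP typically sources the OpenVINO environment and passes `--use_openvino` with a device argument to ONNXRuntime's build script. Treat the exact paths and flags below as assumptions and confirm them against the build page above:

```shell
# Hypothetical invocation -- verify the install path and flags against the
# linked build docs for your platform and OpenVINO 2022.2 install location.
source /opt/intel/openvino_2022/setupvars.sh
./build.sh --config RelWithDebInfo --use_openvino CPU_FP32 --build_shared_lib --parallel
```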

Samples:
https://github.com/microsoft/onnxruntime-inference-examples

ONNXRuntime API usage:
Please refer to the link below for Python/C++ APIs:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options
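For instance, from Python the EP is selected at session creation through the `providers` argument. The sketch below degrades to the stock CPU provider when the `onnxruntime-openvino` wheel is absent; `model.onnx` and the input name are placeholders, and option names should be checked against the configuration-options page above:

```python
# Sketch: pick the OpenVINO EP when available, otherwise fall back to the
# default CPU provider. The OpenVINO branch requires the
# onnxruntime-openvino wheel.
try:
    import onnxruntime as ort
    available = ort.get_available_providers()
except ImportError:  # environment without any onnxruntime installed
    available = []

if "OpenVINOExecutionProvider" in available:
    providers = [("OpenVINOExecutionProvider", {"device_type": "CPU_FP32"})]
else:
    providers = ["CPUExecutionProvider"]

# session = ort.InferenceSession("model.onnx", providers=providers)
# outputs = session.run(None, {"input": input_array})
```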