6 changes: 3 additions & 3 deletions c_cxx/OpenVINO_EP/Windows/CMakeLists.txt
@@ -23,9 +23,9 @@ link_directories("${ONNXRUNTIME_ROOTDIR}/lib")
if(OPENCV_ROOTDIR)
set(OPENCV_FOUND true)
set(OPENCV_INCLUDE_DIRS "${OPENCV_ROOTDIR}/include")
set(OPENCV_LIBDIR "${OPENCV_ROOTDIR}/lib")
file(GLOB OPENCV_DEBUG_LIBRARIES ${OPENCV_LIBDIR}/opencv_imgcodecs*d.lib ${OPENCV_LIBDIR}/opencv_dnn*d.lib ${OPENCV_LIBDIR}/opencv_core*d.lib ${OPENCV_LIBDIR}/opencv_imgproc*d.lib)
file(GLOB OPENCV_RELEASE_LIBRARIES ${OPENCV_LIBDIR}/opencv_imgcodecs*.lib ${OPENCV_LIBDIR}/opencv_dnn*.lib ${OPENCV_LIBDIR}/opencv_core*.lib ${OPENCV_LIBDIR}/opencv_imgproc*.lib)
set(OPENCV_LIBDIR "${OPENCV_ROOTDIR}/x64/vc16/lib")
file(GLOB OPENCV_DEBUG_LIBRARIES ${OPENCV_LIBDIR}/opencv_world470d.lib)
file(GLOB OPENCV_RELEASE_LIBRARIES ${OPENCV_LIBDIR}/opencv_world470.lib)
list(FILTER OPENCV_RELEASE_LIBRARIES EXCLUDE REGEX ".*d\\.lib")
endif()

44 changes: 40 additions & 4 deletions c_cxx/OpenVINO_EP/Windows/README.md
@@ -14,6 +14,14 @@

## How to build

## Prerequisites
1. [The Intel<sup>®</sup> Distribution of OpenVINO toolkit](https://docs.openvinotoolkit.org/latest/index.html)
2. OpenCV, used for image loading and pre-processing.
3. OpenCL, required only for the IO buffer sample (squeezenet_cpp_app_io.cpp).
4. Any sample image to use as input.
5. The latest version of the [SqueezeNet](https://github.com/onnx/models/tree/master/vision/classification/squeezenet) model, downloaded from the [ONNX Model Zoo](https://github.com/onnx/models), from which this example was adapted.
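The sample feeds the input image to SqueezeNet as a normalized `(1, 3, 224, 224)` tensor. As an illustrative sketch (not the sample's actual code, and the exact resize method and normalization constants may differ), typical ImageNet-style pre-processing looks like this:

```python
import numpy as np

# Standard ImageNet normalization constants (an assumption; check the
# sample app for the values it actually uses).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8):
    """Resize (nearest-neighbour), scale to [0, 1], normalize, and
    reorder HWC -> NCHW for a 224x224 ONNX classification model."""
    h, w, _ = image_hwc_uint8.shape
    rows = np.arange(224) * h // 224
    cols = np.arange(224) * w // 224
    resized = image_hwc_uint8[rows][:, cols].astype(np.float32) / 255.0
    normalized = (resized - MEAN) / STD
    return normalized.transpose(2, 0, 1)[np.newaxis, ...]  # (1, 3, 224, 224)

dummy = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
print(preprocess(dummy).shape)  # (1, 3, 224, 224)
```

In the real sample OpenCV performs the image decoding and resizing; this numpy-only version just shows the tensor layout the model expects.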

#### Build ONNX Runtime
Open x64 Native Tools Command Prompt for VS 2019.
To run the sample with the IO buffer optimization feature, make sure you set the OpenCL paths. For example, if you are setting the paths from an OpenVINO source build folder, they will look like:
@@ -51,7 +59,7 @@ Choose required opencv path. Skip the opencv flag if you don't want to build squ
To get the squeezenet sample with IO buffer feature enabled, pass opencl paths as well:
```bat
mkdir build && cd build
cmake .. -A x64 -T host=x64 -Donnxruntime_USE_OPENVINO=ON -DONNXRUNTIME_ROOTDIR=c:\dev\ort_install -DOPENCV_ROOTDIR="path\to\opencv -DOPENCL_LIB=path\to\openvino\folder\bin\intel64\Release\ -DOPENCL_INCLUDE=path\to\openvino\folder\thirdparty\ocl\clhpp_headers\include"
cmake .. -A x64 -T host=x64 -Donnxruntime_USE_OPENVINO=ON -DONNXRUNTIME_ROOTDIR=c:\dev\ort_install -DOPENCV_ROOTDIR="path\to\opencv" -DOPENCL_LIB=path\to\openvino\folder\bin\intel64\Release\ -DOPENCL_INCLUDE="path\to\openvino\folder\thirdparty\ocl\clhpp_headers\include;path\to\openvino\folder\thirdparty\ocl\cl_headers"
```

**Note:**
@@ -63,10 +71,38 @@ If you are using the opencv from openvino package, below are the paths:
For the squeezenet IO buffer sample:
Make sure you are creating the opencl context for the right GPU device in a multi-GPU environment.

Build samples using msbuild either for Debug or Release configuration.
Build the samples using msbuild for the Debug configuration. For the Release configuration, replace Debug with Release.

```bat
msbuild onnxruntime_samples.sln /p:Configuration=Debug|Release
msbuild onnxruntime_samples.sln /p:Configuration=Debug
```

To run the samples make sure you source openvino variables using setupvars.bat. Also add opencv dll paths to $PATH.
To run the samples, make sure you source the OpenVINO variables using setupvars.bat.

To run the samples, download and extract OpenCV from [the OpenCV 4.7.0 release](https://github.com/opencv/opencv/releases/download/4.7.0/opencv-4.7.0-windows.exe). Then copy the OpenCV DLL (opencv_world470.dll, located at "path\to\opencv\build\x64\vc16\bin") next to the application exe file (the release DLL for a Release build, the debug DLL for a Debug build).

#### Run the sample

- To run the general sample (using the Intel OpenVINO EP)
```
run_squeezenet.exe --use_openvino <path_to_onnx_model> <path_to_sample_image> <path_to_labels_file>
```
Example:
```
run_squeezenet.exe --use_openvino squeezenet1.1-7.onnx demo.jpeg synset.txt
```
(using the default CPU execution provider)
```
run_squeezenet.exe --use_cpu <path_to_onnx_model> <path_to_sample_image> <path_to_labels_file>
```
- To run the sample with the IO buffer optimization feature
```
run_squeezenet.exe <path_to_onnx_model> <path_to_sample_image> <path_to_labels_file>
```
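The classification output printed by run_squeezenet.exe comes from applying softmax to the model's raw logits and looking up the top scores in the labels file. A hypothetical numpy sketch of that post-processing step (the sample's own C++ implementation may differ in detail):

```python
import numpy as np

def top_k(logits, labels, k=5):
    """Softmax the logits and return the k most probable (label, prob) pairs."""
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    probs = exp / exp.sum()
    order = probs.argsort()[::-1][:k]
    return [(labels[i], float(probs[i])) for i in order]

# Stand-in labels; the real sample reads them from synset.txt.
labels = ["tabby cat", "tiger cat", "goldfish"]
print(top_k(np.array([2.0, 1.0, 0.1]), labels, k=2))
```

The highest-scoring entries correspond to the class names the sample prints for the input image.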

## References:

[OpenVINO Execution Provider](https://www.intel.com/content/www/us/en/artificial-intelligence/posts/faster-inferencing-with-one-line-of-code.html)

[Other ONNXRT Reference Samples](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx)
10 changes: 3 additions & 7 deletions python/OpenVINO_EP/tiny_yolo_v2_object_detection/README.md
@@ -9,18 +9,14 @@ The source code for this sample is available [here](https://github.com/microsoft
# How to build

## Prerequisites
1. For Windows, [The Intel<sup>®</sup> Distribution of OpenVINO™ toolkit](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_windows_header.html#doxid-openvino-docs-install-guides-installing-openvino-windows-header).
Please select Install OpenVINO™ from PyPI.
```
pip3 install openvino
```
1. For Windows, [download the OpenVINO package](https://storage.openvinotoolkit.org/repositories/openvino/packages), select the appropriate OpenVINO version and Windows OS, and extract it. Run the setupvars.bat file in the root directory of the extracted OpenVINO package to add the OpenVINO libraries to PATH.
2. Download the latest version of the [tinyYOLOv2](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/tiny-yolov2) model from the [ONNX Model Zoo](https://github.com/onnx/models), from which this sample was adapted.

## Install ONNX Runtime for OpenVINO™ Execution Provider
Please install the onnxruntime-openvino python package from [here](https://pypi.org/project/onnxruntime-openvino/). The package for Linux contains prebuilt OpenVINO libraries with ABI 0.
```
pip3 install onnxruntime-openvino==1.11.0
pip3 install onnxruntime-openvino
```
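After installing the wheel, a quick sanity check (an assumption for convenience, not part of the sample) is to confirm that the OpenVINO Execution Provider registered itself with ONNX Runtime; the helper below falls back gracefully when onnxruntime is not importable:

```python
def openvino_ep_available():
    """Return True if onnxruntime is installed and exposes the OpenVINO EP."""
    try:
        import onnxruntime as ort
    except ImportError:
        return False
    return "OpenVINOExecutionProvider" in ort.get_available_providers()

print(openvino_ep_available())
```

If this prints False with the package installed, re-check that setupvars.bat was run in the same shell.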

## Optional Build steps for ONNX Runtime
@@ -68,7 +64,7 @@ Just press the letter 'q' or Ctrl+C if on Windows

## References:

[Download OpenVINO™ Eexecution Provider Latest pip wheels from here](https://pypi.org/project/onnxruntime-openvino/1.11.0/)
[Download the latest OpenVINO™ Execution Provider pip wheels from here](https://pypi.org/project/onnxruntime-openvino/)

[OpenVINO™ Execution Provider](https://www.intel.com/content/www/us/en/artificial-intelligence/posts/faster-inferencing-with-one-line-of-code.html)

Expand Down
@@ -58,7 +58,7 @@
"!pip -q install opencv-python==4.5.5.64\n",
"!pip -q install scipy==1.7.3\n",
"!pip -q install typing-extensions==4.1.1\n",
"!pip -q install onnxruntime-openvino==1.11.0"
"!pip -q install onnxruntime-openvino"
]
},
{
@@ -646,4 +646,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}
10 changes: 3 additions & 7 deletions python/OpenVINO_EP/yolov4_object_detection/README.md
@@ -16,17 +16,13 @@ The source code for this sample is available [here](https://github.com/microsoft
# How to build

## Prerequisites
1. For Windows, [The Intel<sup>®</sup> Distribution of OpenVINO™ toolkit](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_windows_header.html#doxid-openvino-docs-install-guides-installing-openvino-windows-header).
Please select Install OpenVINO™ from PyPI.
```
pip3 install openvino
```
1. For Windows, [download the OpenVINO package](https://storage.openvinotoolkit.org/repositories/openvino/packages), select the appropriate version and Windows OS, and extract it. Run the setupvars.bat file in the root directory of the extracted OpenVINO package to add the OpenVINO libraries to PATH.
2. Download the latest version of the [YOLOv4](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/yolov4) model from the [ONNX Model Zoo](https://github.com/onnx/models).

## Install ONNX Runtime for OpenVINO™ Execution Provider
Please install the onnxruntime-openvino python package from [here](https://pypi.org/project/onnxruntime-openvino/). The package for Linux contains prebuilt OpenVINO libraries with ABI 0.
```
pip3 install onnxruntime-openvino==1.11.0
pip3 install onnxruntime-openvino
```
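A script using this package typically asks ONNX Runtime to prefer the OpenVINO EP and fall back to CPU when it is unavailable. A hedged sketch of that provider-selection logic (the helper name is illustrative, not part of the sample; only the provider strings are real ONNX Runtime identifiers):

```python
def choose_providers(available):
    """Prefer the OpenVINO EP, keep CPU as fallback, never return empty."""
    preferred = ["OpenVINOExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]

print(choose_providers(["CPUExecutionProvider"]))
# With onnxruntime installed, you would pass ort.get_available_providers()
# and hand the result to ort.InferenceSession(model_path, providers=...).
```

This keeps the sample runnable on machines without OpenVINO, at the cost of slower CPU-only inference.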

## Optional Build steps for ONNX Runtime
@@ -99,7 +95,7 @@ Just press the letter 'q' or Ctrl+C if on Windows

## References:

[Download OpenVINO™ Execution Provider Latest pip wheels from here](https://pypi.org/project/onnxruntime-openvino/1.11.0/)
[Download OpenVINO™ Execution Provider Latest pip wheels from here](https://pypi.org/project/onnxruntime-openvino/)

[OpenVINO™ Execution Provider](https://www.intel.com/content/www/us/en/artificial-intelligence/posts/faster-inferencing-with-one-line-of-code.html)

@@ -64,7 +64,7 @@
"!pip -q install opencv-python==4.5.5.64\n",
"!pip -q install scipy==1.7.3\n",
"!pip -q install typing-extensions==4.1.1\n",
"!pip -q install onnxruntime-openvino==1.11.0"
"!pip -q install onnxruntime-openvino"
]
},
{
@@ -976,4 +976,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}