8 changes: 7 additions & 1 deletion c_cxx/OpenVINO_EP/Linux/squeezenet_classification/README.md
@@ -4,7 +4,9 @@

2. The sample involves presenting an image to the ONNX Runtime (RT), which uses the OpenVINO Execution Provider for ONNX RT to run inference on various Intel hardware devices such as Intel CPU, GPU, VPU and more. The sample uses OpenCV for image processing and the ONNX Runtime OpenVINO EP for inference. After the sample image is inferred, the terminal outputs the predicted label classes in order of their confidence (a minimal session-setup sketch is shown after this list).

The source code for this sample is available [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/OpenVINO_EP/Linux/squeezenet_classification).
The source code for this sample is available [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/OpenVINO_EP/Linux/squeezenet_classification/squeezenet_cpp_app.cpp).

3. There is one more sample [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/OpenVINO_EP/Linux/squeezenet_classification/squeezenet_cpp_app_io.cpp) with IO Buffer optimization enabled. With the IO Buffer interfaces we can avoid memory copy overhead when plugging OpenVINO™ inference into an existing GPU pipeline. They also enable OpenCL kernels to participate in the pipeline as native buffer consumers or producers of the OpenVINO™ inference. Refer [here](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_GPU_RemoteTensor_API.html) for more details. This sample is for GPUs only.
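
The samples above all set up the session the same way. The following is a minimal sketch of that flow, not the samples' exact code; the model path, device string, and input/output names are placeholders:

```cpp
// Minimal sketch: create an ONNX Runtime session with the OpenVINO EP and run one inference.
// Model path, device_type, and input/output names below are placeholders.
#include <onnxruntime_cxx_api.h>
#include <vector>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "squeezenet");
  Ort::SessionOptions session_options;

  OrtOpenVINOProviderOptions ov_options{};      // zero-initialize all fields
  ov_options.device_type = "CPU_FP32";          // e.g. "GPU_FP32" to target an Intel GPU
  session_options.AppendExecutionProvider_OpenVINO(ov_options);

  Ort::Session session(env, "squeezenet1.1-7.onnx", session_options);

  // Preprocessed image data (e.g. produced with OpenCV) goes here.
  std::vector<float> input(1 * 3 * 224 * 224, 0.0f);
  std::vector<int64_t> shape{1, 3, 224, 224};
  Ort::MemoryInfo mem_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
  Ort::Value input_tensor = Ort::Value::CreateTensor<float>(
      mem_info, input.data(), input.size(), shape.data(), shape.size());

  const char* input_names[] = {"data"};                             // placeholder input name
  const char* output_names[] = {"squeezenet0_flatten0_reshape0"};   // placeholder output name
  auto outputs = session.Run(Ort::RunOptions{nullptr}, input_names, &input_tensor, 1,
                             output_names, 1);
  // outputs[0] holds the class scores; sort them to print the top labels.
  return 0;
}
```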

# How to build

@@ -65,6 +67,10 @@ export OPENCL_INCS=path/to/your/directory/openvino/thirdparty/ocl/clhpp_headers/
If you are using the OpenCV bundled with the OpenVINO package, the paths are as follows:
* For latest version (2022.1.0), run download_opencv.sh in /path/to/openvino/extras/script and the opencv folder will be downloaded at /path/to/openvino/extras.
* For older openvino version, opencv folder is available at openvino directory itself.
* The current CMake files are set up for the OpenCV folders that ship with the OpenVINO packages. Please make sure to update the OpenCV paths according to your custom builds.

For the squeezenet IO buffer sample:
Make sure you create the OpenCL context for the correct GPU device in a multi-GPU environment (see the sketch below).
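
A minimal sketch of picking a specific GPU and building its `cl::Context` with the OpenCL C++ bindings (the headers pointed to by `OPENCL_INCS` above); the device index and helper name are only illustrative:

```cpp
// Sketch only: enumerate GPU devices across platforms and build a cl::Context
// bound to the one you want to use for inference.
#define CL_HPP_TARGET_OPENCL_VERSION 200
#include <CL/cl2.hpp>
#include <iostream>
#include <stdexcept>
#include <vector>

cl::Context makeContextForGpu(std::size_t gpu_index) {
  std::vector<cl::Platform> platforms;
  cl::Platform::get(&platforms);
  for (const auto& platform : platforms) {
    std::vector<cl::Device> devices;
    platform.getDevices(CL_DEVICE_TYPE_GPU, &devices);      // empty if this platform has no GPU
    if (gpu_index < devices.size()) {
      std::cout << "Using GPU: " << devices[gpu_index].getInfo<CL_DEVICE_NAME>() << "\n";
      return cl::Context(devices[gpu_index]);               // context bound to that device only
    }
    gpu_index -= devices.size();
  }
  throw std::runtime_error("Requested GPU device not found");
}
```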

4. Run the sample

11 changes: 11 additions & 0 deletions c_cxx/OpenVINO_EP/Windows/CMakeLists.txt
@@ -11,6 +11,8 @@ string(APPEND CMAKE_CXX_FLAGS " /W4")
option(onnxruntime_USE_OPENVINO "Build with OpenVINO support" OFF)
option(OPENCV_ROOTDIR "OpenCV root dir")
option(ONNXRUNTIME_ROOTDIR "onnxruntime root dir")
option(OPENCL_LIB "OpenCL lib dir")
option(OPENCL_INCLUDE "OpenCL header dir")

if(NOT ONNXRUNTIME_ROOTDIR)
set(ONNXRUNTIME_ROOTDIR "C:/Program Files (x86)/onnxruntime")
@@ -27,11 +29,20 @@ if(OPENCV_ROOTDIR)
list(FILTER OPENCV_RELEASE_LIBRARIES EXCLUDE REGEX ".*d\\.lib")
endif()

if(OPENCL_LIB AND OPENCL_INCLUDE)
set(OPENCL_FOUND true)
endif()

if(onnxruntime_USE_OPENVINO)
add_definitions(-DUSE_OPENVINO)
endif()

if(OPENCV_FOUND)
add_subdirectory(squeezenet_classification)
endif()

if(OPENCL_FOUND)
add_subdirectory(squeezenet_classification_io_buffer)
endif()

add_subdirectory(model-explorer)
32 changes: 29 additions & 3 deletions c_cxx/OpenVINO_EP/Windows/README.md
@@ -1,12 +1,28 @@
# Windows C++ sample with OVEP:

1. model-explorer
2. Squeezenet classification

This sample application demonstrates how to use components of the experimental C++ API to query for model inputs/outputs and how to run inference using the OpenVINO Execution Provider for ONNXRT on a model (a minimal query sketch is shown after this list). The source code for this sample is available [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/OpenVINO_EP/Windows/model-explorer).

2. Squeezenet classification sample

The sample involves presenting an image to the ONNX Runtime (RT), which uses the OpenVINO Execution Provider for ONNXRT to run inference on various Intel hardware devices like Intel CPU, GPU, VPU and more. The sample uses OpenCV for image processing and ONNX Runtime OpenVINO EP for inference. After the sample image is inferred, the terminal will output the predicted label classes in order of their confidence. The source code for this sample is available [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/OpenVINO_EP/Windows/squeezenet_classification).

3. Squeezenet classification sample with IO Buffer feature

This sample performs the same classification, but with IO Buffer optimization enabled. With the IO Buffer interfaces we can avoid memory copy overhead when plugging OpenVINO™ inference into an existing GPU pipeline. They also enable OpenCL kernels to participate in the pipeline as native buffer consumers or producers of the OpenVINO™ inference. Refer [here](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_GPU_RemoteTensor_API.html) for more details. This sample is for GPUs only. The source code for this sample is available [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/OpenVINO_EP/Windows/squeezenet_classification_io_buffer).
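
To illustrate what the model-explorer sample does, here is a minimal sketch (not the sample's actual code) that lists a model's inputs and outputs with the ORT C++ API. `GetInputNameAllocated`/`GetOutputNameAllocated` assume ORT 1.13 or newer; older releases expose `GetInputName`/`GetOutputName` instead:

```cpp
// Sketch only: print each input/output name and shape of an already-created session.
#include <onnxruntime_cxx_api.h>
#include <iostream>

void ExploreModel(Ort::Session& session) {
  Ort::AllocatorWithDefaultOptions allocator;

  for (size_t i = 0; i < session.GetInputCount(); ++i) {
    auto name = session.GetInputNameAllocated(i, allocator);   // ORT >= 1.13
    auto shape = session.GetInputTypeInfo(i).GetTensorTypeAndShapeInfo().GetShape();
    std::cout << "Input  " << i << ": " << name.get() << " [";
    for (auto d : shape) std::cout << " " << d;                // -1 marks a dynamic dimension
    std::cout << " ]\n";
  }
  for (size_t i = 0; i < session.GetOutputCount(); ++i) {
    auto name = session.GetOutputNameAllocated(i, allocator);
    auto shape = session.GetOutputTypeInfo(i).GetTensorTypeAndShapeInfo().GetShape();
    std::cout << "Output " << i << ": " << name.get() << " [";
    for (auto d : shape) std::cout << " " << d;
    std::cout << " ]\n";
  }
}
```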

## How to build

#### Build ONNX Runtime
Open x64 Native Tools Command Prompt for VS 2019.
To build and run the sample with the IO Buffer optimization feature, make sure you set the OpenCL paths first. For example, if you are taking them from an OpenVINO source build folder, the paths look like this:

```
set OPENCL_LIBS=\path\to\openvino\folder\bin\intel64\Release\OpenCL.lib
set OPENCL_INCS=\path\to\openvino\folder\thirdparty\ocl\clhpp_headers\include
```

```
build.bat --config RelWithDebInfo --use_openvino CPU_FP32 --build_shared_lib --parallel --cmake_extra_defines CMAKE_INSTALL_PREFIX=c:\dev\ort_install --skip_tests
```
@@ -32,15 +48,25 @@ cmake .. -A x64 -T host=x64 -Donnxruntime_USE_OPENVINO=ON -DONNXRUNTIME_ROOTDIR=
```
Choose the required OpenCV path. Skip the OpenCV flag if you don't want to build the squeezenet sample.

To build the squeezenet sample with the IO buffer feature enabled, pass the OpenCL paths as well:
```bat
mkdir build && cd build
cmake .. -A x64 -T host=x64 -Donnxruntime_USE_OPENVINO=ON -DONNXRUNTIME_ROOTDIR=c:\dev\ort_install -DOPENCV_ROOTDIR="path\to\opencv" -DOPENCL_LIB="path\to\openvino\folder\bin\intel64\Release" -DOPENCL_INCLUDE="path\to\openvino\folder\thirdparty\ocl\clhpp_headers\include"
```

**Note:**
If you are using the OpenCV bundled with the OpenVINO package, the paths are as follows:
* For latest version (2022.1.0), run download_opencv.ps1 in \path\to\openvino\extras\script and the opencv folder will be downloaded at \path\to\openvino\extras.
* For openvino version 2022.1.0, run download_opencv.ps1 in \path\to\openvino\extras\script and the opencv folder will be downloaded at \path\to\openvino\extras.
* For older openvino version, opencv folder is available at openvino directory itself.
* The current CMake files are set up for the OpenCV folders that ship with the OpenVINO packages. Please make sure to update the OpenCV paths according to your custom builds.

For the squeezenet IO buffer sample:
Make sure you create the OpenCL context for the correct GPU device in a multi-GPU environment (see the sketch below).
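
A minimal sketch of handing an existing OpenCL context to the OpenVINO EP is shown below. It assumes an ORT build whose `OrtOpenVINOProviderOptions` exposes the `context` field (recent 1.x releases do) and that the `cl::Context` was created for the intended GPU:

```cpp
// Sketch only: share an application-owned cl_context with the OpenVINO EP (GPU, IO buffer path).
#define CL_HPP_TARGET_OPENCL_VERSION 200
#include <CL/cl2.hpp>
#include <onnxruntime_cxx_api.h>

Ort::Session MakeSessionOnGpu(Ort::Env& env, const ORTCHAR_T* model_path,
                              const cl::Context& ocl_context) {
  Ort::SessionOptions session_options;

  OrtOpenVINOProviderOptions ov_options{};                       // zero-initialize all fields
  ov_options.device_type = "GPU_FP32";                           // IO buffer feature is GPU-only
  ov_options.context = static_cast<void*>(ocl_context.get());    // share your OpenCL context
  session_options.AppendExecutionProvider_OpenVINO(ov_options);

  return Ort::Session(env, model_path, session_options);
}
```

Buffers created on that same context can then be consumed by the inference without extra host copies, as described in the remote tensor documentation linked above.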

Build the samples using msbuild for either the Debug or the Release configuration:

```bat
REM Use /p:Configuration=Debug for a debug build.
msbuild onnxruntime_samples.sln /p:Configuration=Release
```

To run the samples make sure you source openvino variables using setupvars.bat.
To run the samples make sure you source the OpenVINO variables using setupvars.bat. Also add the OpenCV DLL paths to PATH.
@@ -21,6 +21,16 @@ Portions of this software are copyright of their respective authors and released
#include <string>
#include <vector>
#include <stdexcept> // To use runtime_error
#include <Windows.h>
#include <psapi.h>

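// Returns the peak working set size (physical-memory high-water mark) of this process in bytes, or 0 on failure.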
std::size_t GetPeakWorkingSetSize() {
PROCESS_MEMORY_COUNTERS pmc;
if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
return pmc.PeakWorkingSetSize;
}
return 0;
}

template <typename T>
T vectorProduct(const std::vector<T>& v)
@@ -387,5 +397,7 @@ int main(int argc, char* argv[])
std::cout << "Minimum Inference Latency: "
<< std::chrono::duration_cast<std::chrono::milliseconds>(end - begin).count() / static_cast<float>(numTests)
<< " ms" << std::endl;
size_t mem_size = GetPeakWorkingSetSize();
std::cout << "Peak working set size: " << mem_size << " bytes" << std::endl;
return 0;
}
@@ -0,0 +1,30 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.

add_executable(run_squeezenet_io_buffer "squeezenet_cpp_app_io_buffer.cpp")
target_include_directories(run_squeezenet_io_buffer PRIVATE ${OPENCV_INCLUDE_DIRS} ${OPENCL_INCLUDE})
target_link_libraries(run_squeezenet_io_buffer PRIVATE onnxruntime)

if(OPENCV_LIBDIR)
target_link_directories(run_squeezenet_io_buffer PRIVATE ${OPENCV_LIBDIR})
foreach(RelLib DebLib IN ZIP_LISTS OPENCV_RELEASE_LIBRARIES OPENCV_DEBUG_LIBRARIES)
target_link_libraries(run_squeezenet_io_buffer PRIVATE optimized ${RelLib} debug ${DebLib})
endforeach()
endif()

if(OPENCL_LIB)
target_link_directories(run_squeezenet_io_buffer PRIVATE ${OPENCL_LIB})
target_link_libraries(run_squeezenet_io_buffer PRIVATE OpenCL.lib)
endif()

# In the onnxruntime default install path, the required DLLs are in the lib and bin folders
set(DLL_DIRS "${ONNXRUNTIME_ROOTDIR}/lib;${ONNXRUNTIME_ROOTDIR}/bin")
foreach(DLL_DIR IN LISTS DLL_DIRS)
file(GLOB ALL_DLLS ${DLL_DIR}/*.dll)
foreach(ORTDll IN LISTS ALL_DLLS)
add_custom_command(TARGET run_squeezenet_io_buffer POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different
"${ORTDll}"
$<TARGET_FILE_DIR:run_squeezenet_io_buffer>)
endforeach()
endforeach()