
Commit cd1c8ac

sfatimar and mayavijx authored
Windows io buffer sample (microsoft#172)
* Windows IO buffer sample
* Updated readme and created symlink for squeezenet sample label file
* Updated README
* Updated README for IO buffer sample
* ORT API update
* Updated README
* Updated OVEP cpp samples readme files

Co-authored-by: mayavijx <[email protected]>
1 parent 4a56e60 commit cd1c8ac

File tree

8 files changed: +558 −34 lines changed


c_cxx/OpenVINO_EP/Linux/squeezenet_classification/README.md

Lines changed: 7 additions & 1 deletion

```diff
@@ -4,7 +4,9 @@
 
 2. The sample involves presenting an image to the ONNX Runtime (RT), which uses the OpenVINO Execution Provider for ONNX RT to run inference on various Intel hardware devices like Intel CPU, GPU, VPU and more. The sample uses OpenCV for image processing and ONNX Runtime OpenVINO EP for inference. After the sample image is inferred, the terminal will output the predicted label classes in order of their confidence.
 
-The source code for this sample is available [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/OpenVINO_EP/Linux/squeezenet_classification).
+The source code for this sample is available [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/OpenVINO_EP/Linux/squeezenet_classification/squeezenet_cpp_app.cpp).
+
+3. There is one more sample [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/OpenVINO_EP/Linux/squeezenet_classification/squeezenet_cpp_app_io.cpp) with IO Buffer optimization enabled. With IO Buffer interfaces we can avoid any memory copy overhead when plugging OpenVINO™ inference into an existing GPU pipeline. It also enables OpenCL kernels to participate in the pipeline and become native buffer consumers or producers of the OpenVINO™ inference. Refer [here](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_GPU_RemoteTensor_API.html) for more details. This sample is for GPUs only.
 
 # How to build
@@ -65,6 +67,10 @@ export OPENCL_INCS=path/to/your/directory/openvino/thirdparty/ocl/clhpp_headers/
 If you are using the opencv from openvino package, below are the paths:
 * For the latest version (2022.1.0), run download_opencv.sh in /path/to/openvino/extras/script and the opencv folder will be downloaded at /path/to/openvino/extras.
 * For older openvino versions, the opencv folder is available in the openvino directory itself.
+* The current cmake files are adjusted to the opencv folders that come with the openvino packages. Please make sure you update the opencv paths according to your custom builds.
+
+For the squeezenet IO buffer sample:
+Make sure you create the OpenCL context for the right GPU device in a multi-GPU environment.
 
 4. Run the sample
```
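That last note deserves a concrete illustration. OpenCL enumerates devices per platform in no guaranteed order, so on a machine with both an integrated and a discrete GPU the first device found is not necessarily the one the sample should run on. Below is a minimal sketch, not part of the sample, of listing every visible GPU and creating the context on one chosen by index; the helper name `CreateContextForGpu` and the `wanted_index` parameter are illustrative only.

```cpp
#include <CL/cl.h>
#include <cstdio>
#include <vector>

// Enumerate all GPU devices across every OpenCL platform, print their names,
// and create a context on the device at position wanted_index.
cl_context CreateContextForGpu(size_t wanted_index) {
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, nullptr, &num_platforms);
    std::vector<cl_platform_id> platforms(num_platforms);
    clGetPlatformIDs(num_platforms, platforms.data(), nullptr);

    std::vector<cl_device_id> gpus;
    for (cl_platform_id platform : platforms) {
        cl_uint n = 0;
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 0, nullptr, &n) != CL_SUCCESS)
            continue;  // this platform exposes no GPU devices
        std::vector<cl_device_id> devices(n);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, n, devices.data(), nullptr);
        gpus.insert(gpus.end(), devices.begin(), devices.end());
    }

    for (size_t i = 0; i < gpus.size(); ++i) {
        char name[256] = {};
        clGetDeviceInfo(gpus[i], CL_DEVICE_NAME, sizeof(name), name, nullptr);
        std::printf("GPU %zu: %s\n", i, name);
    }
    if (wanted_index >= gpus.size()) return nullptr;

    cl_int err = CL_SUCCESS;
    cl_context ctx = clCreateContext(nullptr, 1, &gpus[wanted_index],
                                     nullptr, nullptr, &err);
    return (err == CL_SUCCESS) ? ctx : nullptr;
}
```

The context returned here would then be the one handed to the OpenVINO EP when the session is created.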

c_cxx/OpenVINO_EP/Windows/CMakeLists.txt

Lines changed: 11 additions & 0 deletions

```diff
@@ -11,6 +11,8 @@ string(APPEND CMAKE_CXX_FLAGS " /W4")
 option(onnxruntime_USE_OPENVINO "Build with OpenVINO support" OFF)
 option(OPENCV_ROOTDIR "OpenCV root dir")
 option(ONNXRUNTIME_ROOTDIR "onnxruntime root dir")
+option(OPENCL_LIB "OpenCL lib dir")
+option(OPENCL_INCLUDE "OpenCL header dir")
 
 if(NOT ONNXRUNTIME_ROOTDIR)
 set(ONNXRUNTIME_ROOTDIR "C:/Program Files (x86)/onnxruntime")
@@ -27,11 +29,20 @@ if(OPENCV_ROOTDIR)
 list(FILTER OPENCV_RELEASE_LIBRARIES EXCLUDE REGEX ".*d\\.lib")
 endif()
 
+if(OPENCL_LIB AND OPENCL_INCLUDE)
+set(OPENCL_FOUND true)
+endif()
+
 if(onnxruntime_USE_OPENVINO)
 add_definitions(-DUSE_OPENVINO)
 endif()
 
 if(OPENCV_FOUND)
 add_subdirectory(squeezenet_classification)
 endif()
+
+if(OPENCL_FOUND)
+add_subdirectory(squeezenet_classification_io_buffer)
+endif()
+
 add_subdirectory(model-explorer)
```

c_cxx/OpenVINO_EP/Windows/README.md

Lines changed: 29 additions & 3 deletions

````diff
@@ -1,12 +1,28 @@
 # Windows C++ sample with OVEP:
 
 1. model-explorer
-2. Squeezenet classification
+
+This sample application demonstrates how to use components of the experimental C++ API to query for model inputs/outputs and how to run inference using the OpenVINO Execution Provider for ONNXRT on a model. The source code for this sample is available [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/OpenVINO_EP/Windows/model-explorer).
+
+2. Squeezenet classification sample
+
+The sample involves presenting an image to the ONNX Runtime (RT), which uses the OpenVINO Execution Provider for ONNXRT to run inference on various Intel hardware devices like Intel CPU, GPU, VPU and more. The sample uses OpenCV for image processing and ONNX Runtime OpenVINO EP for inference. After the sample image is inferred, the terminal will output the predicted label classes in order of their confidence. The source code for this sample is available [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/OpenVINO_EP/Windows/squeezenet_classification).
+
+3. Squeezenet classification sample with IO Buffer feature
+
+This sample does the same, but with IO Buffer optimization enabled. With IO Buffer interfaces we can avoid any memory copy overhead when plugging OpenVINO™ inference into an existing GPU pipeline. It also enables OpenCL kernels to participate in the pipeline and become native buffer consumers or producers of the OpenVINO™ inference. Refer [here](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_GPU_RemoteTensor_API.html) for more details. This sample is for GPUs only. The source code for this sample is available [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/OpenVINO_EP/Windows/squeezenet_classification_io_buffer).
 
 ## How to build
 
 #### Build ONNX Runtime
 Open x64 Native Tools Command Prompt for VS 2019.
+For running the sample with the IO Buffer optimization feature, make sure you set the OpenCL paths. For example, if you are setting them from an openvino source build folder, the paths will look like:
+
+```
+set OPENCL_LIBS=\path\to\openvino\folder\bin\intel64\Release\OpenCL.lib
+set OPENCL_INCS=\path\to\openvino\folder\thirdparty\ocl\clhpp_headers\include
+```
+
 ```
 build.bat --config RelWithDebInfo --use_openvino CPU_FP32 --build_shared_lib --parallel --cmake_extra_defines CMAKE_INSTALL_PREFIX=c:\dev\ort_install --skip_tests
 ```
@@ -32,15 +48,25 @@ cmake .. -A x64 -T host=x64 -Donnxruntime_USE_OPENVINO=ON -DONNXRUNTIME_ROOTDIR=
 ```
 Choose the required opencv path. Skip the opencv flag if you don't want to build the squeezenet sample.
 
+To get the squeezenet sample with the IO buffer feature enabled, pass the opencl paths as well:
+```bat
+mkdir build && cd build
+cmake .. -A x64 -T host=x64 -Donnxruntime_USE_OPENVINO=ON -DONNXRUNTIME_ROOTDIR=c:\dev\ort_install -DOPENCV_ROOTDIR="path\to\opencv" -DOPENCL_LIB="path\to\openvino\folder\bin\intel64\Release" -DOPENCL_INCLUDE="path\to\openvino\folder\thirdparty\ocl\clhpp_headers\include"
+```
+
 **Note:**
 If you are using the opencv from openvino package, below are the paths:
-* For latest version (2022.1.0), run download_opencv.ps1 in \path\to\openvino\extras\script and the opencv folder will be downloaded at \path\to\openvino\extras.
+* For openvino version 2022.1.0, run download_opencv.ps1 in \path\to\openvino\extras\script and the opencv folder will be downloaded at \path\to\openvino\extras.
 * For older openvino versions, the opencv folder is available in the openvino directory itself.
+* The current cmake files are adjusted to the opencv folders that come with the openvino packages. Please make sure you update the opencv paths according to your custom builds.
+
+For the squeezenet IO buffer sample:
+Make sure you create the OpenCL context for the right GPU device in a multi-GPU environment.
 
 Build samples using msbuild either for Debug or Release configuration.
 
 ```bat
 msbuild onnxruntime_samples.sln /p:Configuration=Debug|Release
 ```
 
-To run the samples make sure you source openvino variables using setupvars.bat.
+To run the samples, make sure you source the openvino variables using setupvars.bat. Also add the opencv dll paths to $PATH.
````
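For a sense of what the model-explorer item above refers to, here is a minimal stand-alone sketch using the public `onnxruntime_cxx_api.h` to query a model's inputs; the model filename is a placeholder, and `GetInputNameAllocated` is the current C++ API (older releases exposed `GetInputName` instead).

```cpp
#include <onnxruntime_cxx_api.h>
#include <iostream>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "explorer");
    Ort::Session session(env, L"squeezenet1.1-7.onnx", Ort::SessionOptions{});
    Ort::AllocatorWithDefaultOptions allocator;

    // Print every input's name and shape, the core of the model-explorer idea.
    for (size_t i = 0; i < session.GetInputCount(); ++i) {
        auto name = session.GetInputNameAllocated(i, allocator);
        auto shape = session.GetInputTypeInfo(i)
                         .GetTensorTypeAndShapeInfo()
                         .GetShape();  // -1 marks a dynamic dimension
        std::cout << "input " << i << ": " << name.get() << " [";
        for (size_t d = 0; d < shape.size(); ++d)
            std::cout << (d ? "," : "") << shape[d];
        std::cout << "]\n";
    }
    return 0;
}
```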

c_cxx/OpenVINO_EP/Windows/squeezenet_classification/squeezenet_cpp_app.cpp

Lines changed: 12 additions & 0 deletions

```diff
@@ -21,6 +21,16 @@ Portions of this software are copyright of their respective authors and released
 #include <string>
 #include <vector>
 #include <stdexcept> // To use runtime_error
+#include <Windows.h>
+#include <psapi.h>
+
+std::size_t GetPeakWorkingSetSize() {
+  PROCESS_MEMORY_COUNTERS pmc;
+  if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
+    return pmc.PeakWorkingSetSize;
+  }
+  return 0;
+}
 
 template <typename T>
 T vectorProduct(const std::vector<T>& v)
@@ -387,5 +397,7 @@ int main(int argc, char* argv[])
   std::cout << "Minimum Inference Latency: "
             << std::chrono::duration_cast<std::chrono::milliseconds>(end - begin).count() / static_cast<float>(numTests)
             << " ms" << std::endl;
+  size_t mem_size = GetPeakWorkingSetSize();
+  std::cout << "Peak working set size: " << mem_size << " bytes" << std::endl;
   return 0;
 }
```
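One build note on the helper introduced above: `GetProcessMemoryInfo` is declared in `psapi.h`, and depending on the SDK's `PSAPI_VERSION` it either resolves to `K32GetProcessMemoryInfo` in kernel32 or needs `psapi.lib` on the linker line. A minimal stand-alone sketch of the same measurement:

```cpp
#include <Windows.h>
#include <psapi.h>
#include <cstdio>

int main() {
    // Peak working set = the largest amount of physical memory the process
    // has used so far; the sample prints this after its inference loop.
    PROCESS_MEMORY_COUNTERS pmc{};
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
        std::printf("Peak working set: %zu bytes\n",
                    static_cast<size_t>(pmc.PeakWorkingSetSize));
    }
    return 0;
}
```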
c_cxx/OpenVINO_EP/Windows/squeezenet_classification_io_buffer/CMakeLists.txt (new file)

Lines changed: 30 additions & 0 deletions

```diff
@@ -0,0 +1,30 @@
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License.
+
+add_executable(run_squeezenet_io_buffer "squeezenet_cpp_app_io_buffer.cpp")
+target_include_directories(run_squeezenet_io_buffer PRIVATE ${OPENCV_INCLUDE_DIRS} ${OPENCL_INCLUDE})
+target_link_libraries(run_squeezenet_io_buffer PRIVATE onnxruntime)
+
+if(OPENCV_LIBDIR)
+  target_link_directories(run_squeezenet_io_buffer PRIVATE ${OPENCV_LIBDIR})
+  foreach(RelLib DebLib IN ZIP_LISTS OPENCV_RELEASE_LIBRARIES OPENCV_DEBUG_LIBRARIES)
+    target_link_libraries(run_squeezenet_io_buffer PRIVATE optimized ${RelLib} debug ${DebLib})
+  endforeach()
+endif()
+
+if(OPENCL_LIB)
+  target_link_directories(run_squeezenet_io_buffer PRIVATE ${OPENCL_LIB})
+  target_link_libraries(run_squeezenet_io_buffer PRIVATE OpenCL.lib)
+endif()
+
+# In the onnxruntime default install path, the required dlls are in the lib and bin folders
+set(DLL_DIRS "${ONNXRUNTIME_ROOTDIR}/lib;${ONNXRUNTIME_ROOTDIR}/bin")
+foreach(DLL_DIR IN LISTS DLL_DIRS)
+  file(GLOB ALL_DLLS ${DLL_DIR}/*.dll)
+  foreach(ORTDll IN LISTS ALL_DLLS)
+    add_custom_command(TARGET run_squeezenet_io_buffer POST_BUILD
+      COMMAND ${CMAKE_COMMAND} -E copy_if_different
+      "${ORTDll}"
+      $<TARGET_FILE_DIR:run_squeezenet_io_buffer>)
+  endforeach()
+endforeach()
```
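For orientation, here is a rough sketch of the binding flow this new target builds toward: a tensor is created directly over an existing OpenCL buffer and bound through `Ort::IoBinding`, so inference consumes and produces device memory without host-side copies. `Ort::IoBinding`, `Ort::MemoryInfo`, and `Ort::Value::CreateTensor` are documented ORT C++ API; the allocator name `"OpenVINO_GPU"`, the tensor names, and the overall wiring are assumptions to verify against `squeezenet_cpp_app_io_buffer.cpp`.

```cpp
#include <onnxruntime_cxx_api.h>
#include <CL/cl.h>
#include <array>

// Sketch only: bind pre-existing OpenCL buffers as the session's input and
// output so Run() reads and writes GPU memory directly.
void RunOnClBuffers(Ort::Session& session,
                    cl_mem input_buf, size_t input_bytes,
                    cl_mem output_buf, size_t output_bytes) {
    // ASSUMPTION: the allocator name the OpenVINO EP registers for shared
    // OpenCL memory; check the sample source for the exact string.
    Ort::MemoryInfo gpu_info("OpenVINO_GPU", OrtDeviceAllocator, 0, OrtMemTypeDefault);

    std::array<int64_t, 4> in_shape{1, 3, 224, 224};  // SqueezeNet input layout
    std::array<int64_t, 2> out_shape{1, 1000};        // 1000 class scores

    Ort::Value input = Ort::Value::CreateTensor(
        gpu_info, static_cast<void*>(input_buf), input_bytes,
        in_shape.data(), in_shape.size(), ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT);
    Ort::Value output = Ort::Value::CreateTensor(
        gpu_info, static_cast<void*>(output_buf), output_bytes,
        out_shape.data(), out_shape.size(), ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT);

    Ort::IoBinding binding(session);
    binding.BindInput("data", input);  // "data" is SqueezeNet v1.1's input name
    binding.BindOutput("squeezenet0_flatten0_reshape0", output);  // assumed output name
    session.Run(Ort::RunOptions{nullptr}, binding);
}
```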
