Commit 2c14609

Sahar/ovep 1.230 sample update (#541)
* [OVEP] Updated CXX sample
* [OVEP] Update python sample

Co-authored-by: jatinwadhwa921 <[email protected]>
1 parent 0c0e20d commit 2c14609

13 files changed (+175, -136 lines)

c_cxx/OpenVINO_EP/Windows/CMakeLists.txt

Lines changed: 2 additions & 6 deletions
```diff
@@ -24,8 +24,8 @@ if(OPENCV_ROOTDIR)
   set(OPENCV_FOUND true)
   set(OPENCV_INCLUDE_DIRS "${OPENCV_ROOTDIR}/include")
   set(OPENCV_LIBDIR "${OPENCV_ROOTDIR}/x64/vc16/lib")
-  file(GLOB OPENCV_DEBUG_LIBRARIES ${OPENCV_LIBDIR}/opencv_world470d.lib)
-  file(GLOB OPENCV_RELEASE_LIBRARIES ${OPENCV_LIBDIR}/opencv_world470.lib)
+  file(GLOB OPENCV_DEBUG_LIBRARIES "${OPENCV_LIBDIR}/opencv_world*d.lib")
+  file(GLOB OPENCV_RELEASE_LIBRARIES "${OPENCV_LIBDIR}/opencv_world*.lib")
   list(FILTER OPENCV_RELEASE_LIBRARIES EXCLUDE REGEX ".*d\\.lib")
 endif()

@@ -41,8 +41,4 @@ if(OPENCV_FOUND)
   add_subdirectory(squeezenet_classification)
 endif()

-if(OPENCL_FOUND)
-  add_subdirectory(squeezenet_classification_io_buffer)
-endif()
-
 add_subdirectory(model-explorer)
```

c_cxx/OpenVINO_EP/Windows/README.md

Lines changed: 32 additions & 48 deletions
````diff
@@ -2,34 +2,43 @@
 
 1. model-explorer
 
-This sample application demonstrates how to use components of the experimental C++ API to query for model inputs/outputs and how to run inferrence using OpenVINO Execution Provider for ONNXRT on a model. The source code for this sample is available [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/OpenVINO_EP/Windows/model-explorer).
+This sample application demonstrates how to use the **ONNX Runtime C++ API** with the OpenVINO Execution Provider (OVEP).
+It loads an ONNX model, inspects the input/output node names and shapes, creates random input data, and runs inference.
+The sample is useful for exploring model structure and verifying end-to-end execution with OVEP. [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/OpenVINO_EP/Windows/model-explorer).
 
 2. Squeezenet classification sample
 
-The sample involves presenting an image to the ONNX Runtime (RT), which uses the OpenVINO Execution Provider for ONNXRT to run inference on various Intel hardware devices like Intel CPU, GPU, VPU and more. The sample uses OpenCV for image processing and ONNX Runtime OpenVINO EP for inference. After the sample image is inferred, the terminal will output the predicted label classes in order of their confidence. The source code for this sample is available [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/OpenVINO_EP/Windows/squeezenet_classification).
-
-3. Squeezenet classification sample with IO Buffer feature
-
-This sample is also doing the same process but with IO Buffer optimization enabled. With IO Buffer interfaces we can avoid any memory copy overhead when plugging OpenVINO™ inference into an existing GPU pipeline. It also enables OpenCL kernels to participate in the pipeline to become native buffer consumers or producers of the OpenVINO™ inference. Refer [here](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_GPU_RemoteTensor_API.html) for more details. This sample is for GPUs only. The source code for this sample is available [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/OpenVINO_EP/Windows/squeezenet_classification_io_buffer).
+The sample involves presenting an image to the ONNX Runtime (RT), which uses the OpenVINO Execution Provider for ONNXRT to run inference on various Intel hardware devices like Intel CPU, GPU and NPU. The sample uses OpenCV for image processing and ONNX Runtime OpenVINO EP for inference. After the sample image is inferred, the terminal will output the predicted label classes in order of their confidence. The source code for this sample is available [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/OpenVINO_EP/Windows/squeezenet_classification).
 
 ## How to build
 
 ## Prerequisites
-1. [The Intel<sup>®</sup> Distribution of OpenVINO toolkit](https://docs.openvinotoolkit.org/latest/index.html)
-2. Use opencv
-3. Use opencl for IO buffer sample (squeezenet_cpp_app_io.cpp).
-4. Use any sample image as input to the sample.
-5. Download the latest Squeezenet model from the ONNX Model Zoo.
-This example was adapted from [ONNX Model Zoo](https://github.com/onnx/models).Download the latest version of the [Squeezenet](https://github.com/onnx/models/tree/master/validated/vision/classification/squeezenet) model from here.
+1. [The Intel<sup>®</sup> Distribution of OpenVINO toolkit](https://docs.openvino.ai/2025/get-started/install-openvino.html)
+2. Use opencv [OpenCV](https://opencv.org/releases/)
+3. Use any sample image as input to the sample.
+4. Download the latest Squeezenet model from the ONNX Model Zoo.
+This example was adapted from [ONNX Model Zoo](https://github.com/onnx/models). Download the latest version of the [Squeezenet](https://github.com/onnx/models/tree/master/vision/classification/squeezenet) model from here.
 
-#### Build ONNX Runtime
-Open x64 Native Tools Command Prompt for VS 2019.
-For running the sample with IO Buffer optimization feature, make sure you set the OpenCL paths. For example if you are setting the path from openvino source build folder, the paths will be like:
+## Build ONNX Runtime with OpenVINO on Windows
 
+Make sure you open **x64 Native Tools Command Prompt for VS 2019** before running the following steps.
+
+### 1. Download OpenVINO package
+Download the OpenVINO archive package from the official repository:
+[OpenVINO Archive Packages](https://storage.openvinotoolkit.org/repositories/openvino/packages)
+
+Extract the downloaded archive to a directory (e.g., `C:\openvino`).
+
+---
+
+### 2. Set up OpenVINO environment
+After extracting, run the following command to set up environment variables:
+
+```cmd
+"C:\openvino\setupvars.bat"
 ```
-set OPENCL_LIBS=\path\to\openvino\folder\bin\intel64\Release\OpenCL.lib
-set OPENCL_INCS=\path\to\openvino\folder\thirdparty\ocl\clhpp_headers\include
-```
+
+### 3. Build ONNX Runtime
 
 ```
 build.bat --config RelWithDebInfo --use_openvino CPU --build_shared_lib --parallel --cmake_extra_defines CMAKE_INSTALL_PREFIX=c:\dev\ort_install --skip_tests

@@ -43,44 +52,23 @@ cd build\Windows\RelWithDebInfo
 msbuild INSTALL.vcxproj /p:Configuration=RelWithDebInfo
 ```
 
-#### Build the samples
+### Build the samples
 
-Open x64 Native Tools Command Prompt for VS 2019, Git clone the sample repo.
+Open x64 Native Tools Command Prompt for VS 2022, Git clone the sample repo.
 ```
 git clone https://github.com/microsoft/onnxruntime-inference-examples.git
 ```
 Change your current directory to c_cxx\OpenVINO_EP\Windows, then run
+
 ```bat
 mkdir build && cd build
 cmake .. -A x64 -T host=x64 -Donnxruntime_USE_OPENVINO=ON -DONNXRUNTIME_ROOTDIR=c:\dev\ort_install -DOPENCV_ROOTDIR="path\to\opencv"
 ```
 Choose required opencv path. Skip the opencv flag if you don't want to build squeezenet sample.
 
-To get the squeezenet sample with IO buffer feature enabled, pass opencl paths as well:
-```bat
-mkdir build && cd build
-cmake .. -A x64 -T host=x64 -Donnxruntime_USE_OPENVINO=ON -DONNXRUNTIME_ROOTDIR=c:\dev\ort_install -DOPENCV_ROOTDIR="path\to\opencv" -DOPENCL_LIB=path\to\openvino\folder\bin\intel64\Release\ -DOPENCL_INCLUDE="path\to\openvino\folder\thirdparty\ocl\clhpp_headers\include;path\to\openvino\folder\thirdparty\ocl\cl_headers"
-```
-
-**Note:**
-If you are using the opencv from openvino package, below are the paths:
-* For openvino version 2022.1.0, run download_opencv.ps1 in \path\to\openvino\extras\script and the opencv folder will be downloaded at \path\to\openvino\extras.
-* For older openvino version, opencv folder is available at openvino directory itself.
-* The current cmake files are adjusted with the opencv folders coming along with openvino packages. Plase make sure you are updating the opencv paths according to your custom builds.
-
-For the squeezenet IO buffer sample:
-Make sure you are creating the opencl context for the right GPU device in a multi-GPU environment.
-
-Build samples using msbuild for Debug configuration. For Release configuration replace Debug with Release.
-
-```bat
-msbuild onnxruntime_samples.sln /p:Configuration=Debug
-```
-
+### Note
 To run the samples make sure you source openvino variables using setupvars.bat.
 
-To run the samples download and install(extract) OpenCV from: [download OpenCV](https://github.com/opencv/opencv/releases/download/4.7.0/opencv-4.7.0-windows.exe). Also copy OpenCV dll (opencv_world470.dll which is located at: "path\to\opencv\build\x64\vc16\bin") to the location of the application exe file(Release dll for release build) and (opencv_world470d.dll which is located at:"path\to\opencv\build\x64\vc16\bin") to the location of the application exe file (debug dll for debug build).
-
 #### Run the sample
 
 - To Run the general sample

@@ -96,13 +84,9 @@ To run the samples download and install(extract) OpenCV from: [download OpenCV](
 ```
 run_squeezenet.exe --use_cpu <path_to_onnx_model> <path_to_sample_image> <path_to_labels_file>
 ```
-- To Run the sample for IO Buffer Optimization feature
-```
-run_squeezenet.exe <path_to_onnx_model> <path_to_sample_image> <path_to_labels_file>
-```
 
 ## References:
 
-[OpenVINO Execution Provider](https://www.intel.com/content/www/us/en/artificial-intelligence/posts/faster-inferencing-with-one-line-of-code.html)
+[OpenVINO Execution Provider](https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html)
 
 [Other ONNXRT Reference Samples](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx)
````
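Editor's note: the updated README describes appending OVEP and choosing an Intel device, but leaves the actual API call to the samples. As a rough, hedged illustration (not part of this commit), a minimal program using the options-map API introduced by this change might look like the following; the model path and device name are placeholders, and an ONNX Runtime build with `--use_openvino` is assumed.

```cpp
// Minimal sketch: create a session with the OpenVINO Execution Provider appended
// via the V2 (options-map) API. Not from the commit; paths/devices are placeholders.
#include <onnxruntime_cxx_api.h>
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "ovep-device-select");
  Ort::SessionOptions session_options;

  // Provider options are passed as string key/value pairs.
  std::unordered_map<std::string, std::string> ov_options;
  ov_options["device_type"] = "CPU";  // or "GPU" / "NPU", per the README
  session_options.AppendExecutionProvider_OpenVINO_V2(ov_options);

  // Placeholder model path; any ONNX model such as squeezenet works here.
  Ort::Session session(env, ORT_TSTR("squeezenet1.1-7.onnx"), session_options);
  std::cout << "Inputs: " << session.GetInputCount()
            << ", outputs: " << session.GetOutputCount() << std::endl;
  return 0;
}
```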

c_cxx/OpenVINO_EP/Windows/model-explorer/model-explorer.cpp

Lines changed: 68 additions & 25 deletions
```diff
@@ -25,7 +25,7 @@
 #include <iostream>
 #include <sstream>
 #include <vector>
-#include <experimental_onnxruntime_cxx_api.h>
+#include <onnxruntime_cxx_api.h>
 
 // pretty prints a shape dimension vector
 std::string print_shape(const std::vector<int64_t>& v) {

@@ -64,59 +64,102 @@ int main(int argc, char** argv) {
   //Appending OpenVINO Execution Provider API
 #ifdef USE_OPENVINO
   // Using OPENVINO backend
-  OrtOpenVINOProviderOptions options;
-  options.device_type = "CPU";
-  std::cout << "OpenVINO device type is set to: " << options.device_type << std::endl;
-  session_options.AppendExecutionProvider_OpenVINO(options);
+  std::unordered_map<std::string, std::string> options;
+  options["device_type"] = "CPU";
+  std::cout << "OpenVINO device type is set to: " << options["device_type"] << std::endl;
+  session_options.AppendExecutionProvider_OpenVINO_V2(options);
 #endif
-  Ort::Experimental::Session session = Ort::Experimental::Session(env, model_file, session_options); // access experimental components via the Experimental namespace
-
-  // print name/shape of inputs
-  std::vector<std::string> input_names = session.GetInputNames();
-  std::vector<std::vector<int64_t> > input_shapes = session.GetInputShapes();
-  cout << "Input Node Name/Shape (" << input_names.size() << "):" << endl;
-  for (size_t i = 0; i < input_names.size(); i++) {
+  Ort::Session session(env, model_file.c_str(), session_options);
+  Ort::AllocatorWithDefaultOptions allocator;
+
+  size_t num_input_nodes = session.GetInputCount();
+  std::vector<std::string> input_names;
+  std::vector<std::vector<int64_t>> input_shapes;
+
+  cout << "Input Node Name/Shape (" << num_input_nodes << "):" << endl;
+  for (size_t i = 0; i < num_input_nodes; i++) {
+    // Get input name
+    auto input_name = session.GetInputNameAllocated(i, allocator);
+    input_names.push_back(std::string(input_name.get()));
+
+    // Get input shape
+    Ort::TypeInfo input_type_info = session.GetInputTypeInfo(i);
+    auto input_tensor_info = input_type_info.GetTensorTypeAndShapeInfo();
+    std::vector<int64_t> input_dims = input_tensor_info.GetShape();
+    input_shapes.push_back(input_dims);
+
     cout << "\t" << input_names[i] << " : " << print_shape(input_shapes[i]) << endl;
+
   }
 
-  // print name/shape of outputs
-  std::vector<std::string> output_names = session.GetOutputNames();
-  std::vector<std::vector<int64_t> > output_shapes = session.GetOutputShapes();
-  cout << "Output Node Name/Shape (" << output_names.size() << "):" << endl;
-  for (size_t i = 0; i < output_names.size(); i++) {
+  size_t num_output_nodes = session.GetOutputCount();
+  std::vector<std::string> output_names;
+  std::vector<std::vector<int64_t>> output_shapes;
+
+  cout << "Output Node Name/Shape (" << num_output_nodes << "):" << endl;
+  for (size_t i = 0; i < num_output_nodes; i++) {
+    // Get output name
+    auto output_name = session.GetOutputNameAllocated(i, allocator);
+    output_names.push_back(std::string(output_name.get()));
+
+    // Get output shape
+    Ort::TypeInfo output_type_info = session.GetOutputTypeInfo(i);
+    auto output_tensor_info = output_type_info.GetTensorTypeAndShapeInfo();
+    std::vector<int64_t> output_dims = output_tensor_info.GetShape();
+    output_shapes.push_back(output_dims);
+
     cout << "\t" << output_names[i] << " : " << print_shape(output_shapes[i]) << endl;
+
   }
-
-  // Assume model has 1 input node and 1 output node.
+
   assert(input_names.size() == 1 && output_names.size() == 1);
 
   // Create a single Ort tensor of random numbers
   auto input_shape = input_shapes[0];
   int total_number_elements = calculate_product(input_shape);
   std::vector<float> input_tensor_values(total_number_elements);
-  std::generate(input_tensor_values.begin(), input_tensor_values.end(), [&] { return rand() % 255; }); // generate random numbers in the range [0, 255]
+  std::generate(input_tensor_values.begin(), input_tensor_values.end(), [&] { return rand() % 255; });
+
+  // Create input tensor
+  Ort::MemoryInfo memory_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
   std::vector<Ort::Value> input_tensors;
-  input_tensors.push_back(Ort::Experimental::Value::CreateTensor<float>(input_tensor_values.data(), input_tensor_values.size(), input_shape));
+  input_tensors.push_back(Ort::Value::CreateTensor<float>(memory_info, input_tensor_values.data(),
+                                                          input_tensor_values.size(), input_shape.data(),
+                                                          input_shape.size()));
 
   // double-check the dimensions of the input tensor
   assert(input_tensors[0].IsTensor() &&
          input_tensors[0].GetTensorTypeAndShapeInfo().GetShape() == input_shape);
   cout << "\ninput_tensor shape: " << print_shape(input_tensors[0].GetTensorTypeAndShapeInfo().GetShape()) << endl;
 
+  // Create input/output name arrays for Run()
+  std::vector<const char*> input_names_char(input_names.size(), nullptr);
+  std::vector<const char*> output_names_char(output_names.size(), nullptr);
+
+  for (size_t i = 0; i < input_names.size(); i++) {
+    input_names_char[i] = input_names[i].c_str();
+  }
+  for (size_t i = 0; i < output_names.size(); i++) {
+    output_names_char[i] = output_names[i].c_str();
+  }
+
   // pass data through model
   cout << "Running model...";
   try {
-    auto output_tensors = session.Run(session.GetInputNames(), input_tensors, session.GetOutputNames());
+    auto output_tensors = session.Run(Ort::RunOptions{nullptr}, input_names_char.data(),
+                                      input_tensors.data(), input_names_char.size(),
+                                      output_names_char.data(), output_names_char.size());
     cout << "done" << endl;
 
     // double-check the dimensions of the output tensors
-    // NOTE: the number of output tensors is equal to the number of output nodes specifed in the Run() call
-    assert(output_tensors.size() == session.GetOutputNames().size() &&
-           output_tensors[0].IsTensor());
+    assert(output_tensors.size() == output_names.size() && output_tensors[0].IsTensor());
     cout << "output_tensor_shape: " << print_shape(output_tensors[0].GetTensorTypeAndShapeInfo().GetShape()) << endl;
 
   } catch (const Ort::Exception& exception) {
     cout << "ERROR running model inference: " << exception.what() << endl;
     exit(-1);
   }
+
+  return 0;
+
 }
```
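Editor's note: for readers migrating their own code off `experimental_onnxruntime_cxx_api.h`, the diff above condenses to the following self-contained sketch (not part of the commit). The model path is a placeholder, and a single input/output with a static shape is assumed; the OVEP append from the earlier sketch could be added before session creation.

```cpp
// Sketch of the non-experimental C++ API flow: open a session, read node names
// and shapes, build a random CPU tensor, and run inference. Assumptions: placeholder
// model path, one input, one output, static input shape, default CPU provider.
#include <onnxruntime_cxx_api.h>
#include <cstdlib>
#include <iostream>
#include <string>
#include <vector>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "model-explorer-sketch");
  Ort::SessionOptions session_options;
  Ort::Session session(env, ORT_TSTR("model.onnx"), session_options);
  Ort::AllocatorWithDefaultOptions allocator;

  // Names come back as allocator-owned strings; copy them before the temporaries expire.
  std::string input_name(session.GetInputNameAllocated(0, allocator).get());
  std::string output_name(session.GetOutputNameAllocated(0, allocator).get());
  std::vector<int64_t> input_shape =
      session.GetInputTypeInfo(0).GetTensorTypeAndShapeInfo().GetShape();

  // Fill a float buffer with random values and wrap it in a tensor over CPU memory.
  size_t element_count = 1;
  for (int64_t d : input_shape) element_count *= static_cast<size_t>(d);
  std::vector<float> values(element_count);
  for (float& v : values) v = static_cast<float>(std::rand() % 255);
  Ort::MemoryInfo memory_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
  Ort::Value input_tensor = Ort::Value::CreateTensor<float>(
      memory_info, values.data(), values.size(), input_shape.data(), input_shape.size());

  // Run() takes raw const char* arrays of node names plus the tensors.
  const char* input_name_ptrs[] = {input_name.c_str()};
  const char* output_name_ptrs[] = {output_name.c_str()};
  auto outputs = session.Run(Ort::RunOptions{nullptr}, input_name_ptrs, &input_tensor, 1,
                             output_name_ptrs, 1);
  std::cout << "Produced " << outputs.size() << " output tensor(s)" << std::endl;
  return 0;
}
```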

c_cxx/OpenVINO_EP/Windows/squeezenet_classification/CMakeLists.txt

Lines changed: 6 additions & 5 deletions
```diff
@@ -13,13 +13,14 @@ if(OPENCV_LIBDIR)
 endif()
 
 #In onnxruntime deafault install path, the required dlls are in lib and bin folders
-set(DLL_DIRS "${ONNXRUNTIME_ROOTDIR}/lib;${ONNXRUNTIME_ROOTDIR}/bin")
+set(DLL_DIRS "${ONNXRUNTIME_ROOTDIR}/lib;${ONNXRUNTIME_ROOTDIR}/bin;${OPENCV_ROOTDIR}/x64/vc16/bin")
+
 foreach(DLL_DIR IN LISTS DLL_DIRS)
   file(GLOB ALL_DLLS ${DLL_DIR}/*.dll)
-  foreach(ORTDll IN LISTS ALL_DLLS)
+  foreach(DLLFile IN LISTS ALL_DLLS)
     add_custom_command(TARGET run_squeezenet POST_BUILD
-      COMMAND ${CMAKE_COMMAND} -E copy_if_different
-      "${ORTDll}"
-      $<TARGET_FILE_DIR:run_squeezenet>)
+      COMMAND ${CMAKE_COMMAND} -E copy_if_different
+      "${DLLFile}"
+      $<TARGET_FILE_DIR:run_squeezenet>)
   endforeach()
 endforeach()
```

c_cxx/OpenVINO_EP/Windows/squeezenet_classification/squeezenet_cpp_app.cpp

Lines changed: 10 additions & 4 deletions
```diff
@@ -218,10 +218,16 @@ int main(int argc, char* argv[])
   //Appending OpenVINO Execution Provider API
   if (useOPENVINO) {
     // Using OPENVINO backend
-    OrtOpenVINOProviderOptions options;
-    options.device_type = "CPU";
-    std::cout << "OpenVINO device type is set to: " << options.device_type << std::endl;
-    sessionOptions.AppendExecutionProvider_OpenVINO(options);
+    std::unordered_map<std::string, std::string> options;
+    options["device_type"] = "CPU";
+    std::string config = R"({
+      "CPU": {
+        "INFERENCE_NUM_THREADS": "1"
+      }
+    })";
+    options["load_config"] = config;
+    std::cout << "OpenVINO device type is set to: " << options["device_type"] << std::endl;
+    sessionOptions.AppendExecutionProvider_OpenVINO_V2(options);
   }
 
   // Sets graph optimization level
```
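Editor's note: the new `load_config` provider option above passes an OpenVINO runtime configuration as a JSON string, with one section per device and property values given as strings. A hedged sketch of the same pattern factored into a helper follows; the function name and the thread-count value are illustrative, not from the commit, and other OpenVINO properties would presumably be passed the same way.

```cpp
// Sketch: append OVEP with a device choice plus an inline "load_config" JSON,
// mirroring the updated squeezenet sample. Values here are placeholders.
#include <onnxruntime_cxx_api.h>
#include <string>
#include <unordered_map>

void append_openvino_ep(Ort::SessionOptions& session_options) {
  std::unordered_map<std::string, std::string> options;
  options["device_type"] = "CPU";

  // One JSON section per device; each holds OpenVINO property name/value pairs,
  // all encoded as strings. "1" limits the CPU plugin to a single inference thread.
  options["load_config"] = R"({
    "CPU": { "INFERENCE_NUM_THREADS": "1" }
  })";

  session_options.AppendExecutionProvider_OpenVINO_V2(options);
}
```

Keeping device selection and OpenVINO tuning in this one options map leaves the rest of the sample's session setup unchanged.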

python/OpenVINO_EP/tiny_yolo_v2_object_detection/README.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -45,7 +45,7 @@ python3 tiny_yolov2_obj_detection_sample.py --h
 ```
 ## Running the ONNXRuntime OpenVINO™ Execution Provider sample
 ```bash
-python3 tiny_yolov2_obj_detection_sample.py --video face-demographics-walking-and-pause.mp4 --model tinyyolov2.onnx --device CPU_FP32
+python3 tiny_yolov2_obj_detection_sample.py --video face-demographics-walking-and-pause.mp4 --model tinyyolov2.onnx --device CPU
 ```
 
 ## To stop the sample from running
````
