13 changes: 10 additions & 3 deletions c_cxx/OpenVINO_EP/Linux/squeezenet_classification/README.md
@@ -44,9 +44,12 @@ export OPENCL_INCS=path/to/your/directory/openvino/thirdparty/ocl/clhpp_headers/

- For general sample
```
g++ -o run_squeezenet squeezenet_cpp_app.cpp -I ../../../include/onnxruntime/core/session/ -I /opt/intel/openvino_2021.4.752/opencv/include/ -I /opt/intel/openvino_2021.4.752/opencv/lib/ -L ./ -lonnxruntime_providers_openvino -lonnxruntime_providers_shared -lonnxruntime -L /opt/intel/openvino_2021.4.752/opencv/lib/ -lopencv_imgcodecs -lopencv_dnn -lopencv_core -lopencv_imgproc
g++ -o run_squeezenet squeezenet_cpp_app.cpp -I ../../../include/onnxruntime/core/session/ -I /path/to/opencv/include/ -I /path/to/opencv/lib/ -L ./ -lonnxruntime_providers_openvino -lonnxruntime_providers_shared -lonnxruntime -L /path/to/opencv/lib/ -lopencv_imgcodecs -lopencv_dnn -lopencv_core -lopencv_imgproc
```
Note: This build command is using the opencv location from OpenVINO 2021.4.2 Release Installation. You can use any version of OpenVINO and change the location path accordingly.
**Note:**
If you are using the OpenCV from the OpenVINO package, the paths are as follows:
* For the latest version (2022.1.0), run download_opencv.sh in /path/to/openvino/extras/script and OpenCV will be downloaded to /path/to/openvino/extras.
* For older OpenVINO versions, OpenCV is available in the OpenVINO directory itself.
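The compile command above can also be driven by a shell variable instead of a literal path, which keeps the line readable when you switch OpenVINO/OpenCV versions. A minimal sketch, assuming `OPENCV_ROOT` (an illustrative name, not used elsewhere in the sample) points at your OpenCV install:
```
export OPENCV_ROOT=/path/to/opencv
g++ -o run_squeezenet squeezenet_cpp_app.cpp -I ../../../include/onnxruntime/core/session/ -I "$OPENCV_ROOT/include/" -L ./ -lonnxruntime_providers_openvino -lonnxruntime_providers_shared -lonnxruntime -L "$OPENCV_ROOT/lib/" -lopencv_imgcodecs -lopencv_dnn -lopencv_core -lopencv_imgproc
```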

- For the sample using IO Buffer Optimization feature
Set the OpenCL lib and headers path. For example if you are setting the path from openvino source build folder, the paths will be like:
@@ -56,8 +59,12 @@ export OPENCL_INCS=path/to/your/directory/openvino/thirdparty/ocl/clhpp_headers/
```
Now based on the above path, compile command will be:
```
g++ -o run_squeezenet squeezenet_cpp_app_io.cpp -I ../../../include/onnxruntime/core/session/ -I $OPENCL_INCS -I $OPENCL_INCS/../../cl_headers/ -I /opt/intel/openvino_2021.4.752/opencv/include/ -I /opt/intel/openvino_2021.4.752/opencv/lib/ -L ./ -lonnxruntime_providers_openvino -lonnxruntime_providers_shared -lonnxruntime -L /opt/intel/openvino_2021.4.752/opencv/lib/ -lopencv_imgcodecs -lopencv_dnn -lopencv_core -lopencv_imgproc -L $OPENCL_LIBS -lOpenCL
g++ -o run_squeezenet squeezenet_cpp_app_io.cpp -I ../../../include/onnxruntime/core/session/ -I $OPENCL_INCS -I $OPENCL_INCS/../../cl_headers/ -I /path/to/opencv/include/ -I /path/to/opencv/lib/ -L ./ -lonnxruntime_providers_openvino -lonnxruntime_providers_shared -lonnxruntime -L /path/to/opencv/lib/ -lopencv_imgcodecs -lopencv_dnn -lopencv_core -lopencv_imgproc -L $OPENCL_LIBS -lOpenCL
```
**Note:**
If you are using the OpenCV from the OpenVINO package, the paths are as follows:
* For the latest version (2022.1.0), run download_opencv.sh in /path/to/openvino/extras/script and OpenCV will be downloaded to /path/to/openvino/extras.
* For older OpenVINO versions, OpenCV is available in the OpenVINO directory itself.

4. Run the sample

@@ -260,7 +260,7 @@ int main(int argc, char* argv[])
// step 2: Resize the image.
cv::Mat resizedImageBGR, resizedImageRGB, resizedImage, preprocessedImage;
cv::resize(imageBGR, resizedImageBGR,
cv::Size(inputDims.at(2), inputDims.at(3)),
cv::Size(inputDims.at(3), inputDims.at(2)),
cv::InterpolationFlags::INTER_CUBIC);

// step 3: Convert the image to HWC RGB UINT8 format.
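The `cv::Size` argument swap in this hunk (repeated in the other squeezenet sources touched by this PR) is the functional fix in these files: the model input dims are NCHW, so height is `inputDims.at(2)` and width is `inputDims.at(3)`, while `cv::Size` takes `(width, height)`. A minimal stand-alone sketch of the corrected mapping; `resizeToModelInput` is an illustrative helper, not part of the sample:
```cpp
#include <opencv2/imgproc.hpp>
#include <cstdint>
#include <vector>

// Minimal sketch, assuming inputDims = {N, C, H, W} as read from the ONNX model input.
// cv::Size takes (width, height), so width comes from index 3 and height from index 2.
cv::Mat resizeToModelInput(const cv::Mat& imageBGR, const std::vector<int64_t>& inputDims)
{
    cv::Mat resized;
    cv::resize(imageBGR, resized,
               cv::Size(static_cast<int>(inputDims.at(3)),   // width  = W
                        static_cast<int>(inputDims.at(2))),  // height = H
               0, 0, cv::InterpolationFlags::INTER_CUBIC);
    return resized;
}
```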
@@ -290,7 +290,7 @@ int main(int argc, char* argv[])
// step 2: Resize the image.
cv::Mat resizedImageBGR, resizedImageRGB, resizedImage, preprocessedImage;
cv::resize(imageBGR, resizedImageBGR,
cv::Size(inputDims.at(2), inputDims.at(3)),
cv::Size(inputDims.at(3), inputDims.at(2)),
cv::InterpolationFlags::INTER_CUBIC);

// step 3: Convert the image to HWC RGB UINT8 format.
7 changes: 6 additions & 1 deletion c_cxx/OpenVINO_EP/Windows/README.md
@@ -28,9 +28,14 @@ git clone https://github.com/microsoft/onnxruntime-inference-examples.git
Change your current directory to c_cxx\OpenVINO_EP\Windows, then run
```bat
mkdir build && cd build
cmake .. -A x64 -T host=x64 -Donnxruntime_USE_OPENVINO=ON -DONNXRUNTIME_ROOTDIR=c:\dev\ort_install -DOPENCV_ROOTDIR="C:\Program Files (x86)\Intel\openvino_2021.4.752\opencv"
cmake .. -A x64 -T host=x64 -Donnxruntime_USE_OPENVINO=ON -DONNXRUNTIME_ROOTDIR=c:\dev\ort_install -DOPENCV_ROOTDIR="path\to\opencv"
```
Choose the required OpenCV path. Skip the OpenCV flag if you don't want to build the squeezenet sample.
**Note:**
If you are using the OpenCV from the OpenVINO package, the paths are as follows:
* For the latest version (2022.1.0), run download_opencv.ps1 in \path\to\openvino\extras\script and OpenCV will be downloaded to \path\to\openvino\extras.

  > Review comment: download_opencv.ps1 should be download_opencv.sh

* For older OpenVINO versions, OpenCV is available in the OpenVINO directory itself.
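For example, if OpenCV was fetched by the OpenVINO extras script, the invocation could look like the following; the exact extras directory layout depends on your OpenVINO install, so the OpenCV path is only a placeholder:
```bat
rem Hedged sketch: OPENCV_ROOTDIR pointing at an OpenVINO-downloaded OpenCV (placeholder path).
cmake .. -A x64 -T host=x64 -Donnxruntime_USE_OPENVINO=ON -DONNXRUNTIME_ROOTDIR=c:\dev\ort_install -DOPENCV_ROOTDIR="C:\path\to\openvino\extras\opencv"
```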

Build samples using msbuild either for Debug or Release configuration.

```bat
@@ -265,7 +265,7 @@ int main(int argc, char* argv[])
// step 2: Resize the image.
cv::Mat resizedImageBGR, resizedImageRGB, resizedImage, preprocessedImage;
cv::resize(imageBGR, resizedImageBGR,
cv::Size(inputDims.at(2), inputDims.at(3)),
cv::Size(inputDims.at(3), inputDims.at(2)),
cv::InterpolationFlags::INTER_CUBIC);

// step 3: Convert the image to HWC RGB UINT8 format.
20 changes: 11 additions & 9 deletions c_sharp/OpenVINO_EP/yolov3_object_detection/Program.cs
@@ -81,16 +81,19 @@ static void Main(string[] args)

//Preprocessing image
Tensor<float> input = new DenseTensor<float>(new[] { 1, 3, h, w });
for (int y = 0; y < clone.Height; y++)
clone.ProcessPixelRows(accessor =>
{
Span<Rgb24> pixelSpan = clone.GetPixelRowSpan(y);
for (int x = 0; x < clone.Width; x++)
for (int y = 0; y < accessor.Height; y++)
{
input[0, 0, y, x] = pixelSpan[x].B / 255f;
input[0, 1, y, x] = pixelSpan[x].G / 255f;
input[0, 2, y, x] = pixelSpan[x].R / 255f;
Span<Rgb24> pixelSpan = accessor.GetRowSpan(y);
for (int x = 0; x < pixelSpan.Length; x++)
{
input[0, 0, y, x] = pixelSpan[x].B / 255f;
input[0, 1, y, x] = pixelSpan[x].G / 255f;
input[0, 2, y, x] = pixelSpan[x].R / 255f;
}
}
}
});

//Get the Image Shape
var image_shape = new DenseTensor<float>(new[] { 1, 2 });
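The reworked pixel loop in this hunk tracks an ImageSharp API change: `Image.GetPixelRowSpan` was removed in ImageSharp 2.x, and row access now goes through `ProcessPixelRows`, whose accessor exposes `GetRowSpan` only inside the callback. A stand-alone sketch of the same pattern (the file name and output buffer layout are illustrative, not taken from the sample):
```csharp
using System;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;

// Minimal sketch: normalize an RGB image into a CHW float buffer with ImageSharp 2.x.
using Image<Rgb24> image = Image.Load<Rgb24>("input.jpg"); // illustrative path
float[] chw = new float[3 * image.Height * image.Width];
image.ProcessPixelRows(accessor =>
{
    for (int y = 0; y < accessor.Height; y++)
    {
        Span<Rgb24> row = accessor.GetRowSpan(y); // only valid inside this callback
        for (int x = 0; x < row.Length; x++)
        {
            chw[(0 * image.Height + y) * image.Width + x] = row[x].B / 255f;
            chw[(1 * image.Height + y) * image.Width + x] = row[x].G / 255f;
            chw[(2 * image.Height + y) * image.Width + x] = row[x].R / 255f;
        }
    }
});
```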
@@ -145,8 +148,7 @@ static void Main(string[] args)

// Put boxes, labels and confidence on image and save for viewing
using var outputImage = File.OpenWrite(outImageFilePath);
// Using FreeMono font for Linux and Arial for others
Font font = (RuntimeInformation.IsOSPlatform(OSPlatform.Linux)) ? SystemFonts.CreateFont("FreeMono", 16) : SystemFonts.CreateFont("Arial", 16);
Font font = SystemFonts.CreateFont("Arial", 16);
foreach (var p in predictions)
{
imageOrg.Mutate(x =>
53 changes: 34 additions & 19 deletions c_sharp/OpenVINO_EP/yolov3_object_detection/README.md
@@ -9,7 +9,7 @@ The source code for this sample is available [here](https://github.com/microsoft
# How to build

## Prerequisites
1. Install [.NET 5.0](https://dotnet.microsoft.com/download/dotnet/5.0) or higher for your OS (Mac, Windows or Linux).
1. Install [.NET 5.0](https://dotnet.microsoft.com/download/dotnet/5.0) or higher and download the NuGet CLI for your OS (Mac, Windows or Linux). Refer [here](https://onnxruntime.ai/docs/build/inferencing.html#prerequisites-1) for details.
2. [The Intel<sup>®</sup> Distribution of OpenVINO toolkit](https://docs.openvinotoolkit.org/latest/index.html)
3. Use any sample Image as input to the sample.
4. Download the latest YOLOv3 model from the ONNX Model Zoo.
@@ -24,37 +24,52 @@ The source code for this sample is available [here](https://github.com/microsoft
[Documentation](https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html)

To build nuget packages of onnxruntime with openvino flavour
```
./build.sh --config Release --use_openvino MYRIAD_FP16 --build_shared_lib --build_nuget
```
```
./build.sh --config Release --use_openvino MYRIAD_FP16 --build_shared_lib --build_nuget
```
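`MYRIAD_FP16` targets an Intel Myriad VPU; other OpenVINO device flags can be substituted in the same command. For example, a CPU-targeted build (available flags depend on your ONNX Runtime version, so check the OpenVINO EP build docs):
```
./build.sh --config Release --use_openvino CPU_FP32 --build_shared_lib --build_nuget
```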
## Build the sample C# Application
1. Create a new console project
```
dotnet new console
```
```
dotnet new console
```
Replace the sample scripts with the one [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_sharp/OpenVINO_EP/yolov3_object_detection)
2. Install Nuget Packages of Onnxruntime and [ImageSharp](https://www.nuget.org/packages/SixLabors.ImageSharp)
1. Open the Visual C# Project file (.csproj) using VS19.
2. Right click on project, navigate to manage Nuget Packages.
3. Install SixLabors.ImageSharp, SixLabors.Core, SixLabors.Fonts and SixLabors.ImageSharp.Drawing Packages from nuget.org.
4. Install Microsoft.ML.OnnxRuntime.Managed and Microsoft.ML.OnnxRuntime.Openvino from your build directory nuget-artifacts.

* Using Visual Studio
1. Open the Visual C# Project file (.csproj) using VS19.
2. Right click on project, navigate to manage Nuget Packages.
3. Install SixLabors.ImageSharp, SixLabors.Core, SixLabors.Fonts and SixLabors.ImageSharp.Drawing Packages from nuget.org.
4. Install Microsoft.ML.OnnxRuntime.Managed and Microsoft.ML.OnnxRuntime.Openvino from your build directory nuget-artifacts.
* Using cmd
```
mkdir [source-folder]
cd [console-project-folder]
dotnet add package SixLabors.ImageSharp
dotnet add package SixLabors.Core
dotnet add package SixLabors.Fonts
dotnet add package SixLabors.ImageSharp.Drawing
```
Add Microsoft.ML.OnnxRuntime.Managed and Microsoft.ML.OnnxRuntime.Openvino packages.
```
nuget add [path-to-nupkg] -Source [source-path]
dotnet add package [nuget-package-name] -v [package-version] -s [source-path]
```
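As a concrete illustration of the two commands above (the .nupkg name, version, and local source directory are placeholders; substitute the artifacts your build produced under nuget-artifacts):
```
nuget add nuget-artifacts/Microsoft.ML.OnnxRuntime.Openvino.1.12.0.nupkg -Source ./local-nuget-source
dotnet add package Microsoft.ML.OnnxRuntime.Openvino -v 1.12.0 -s ./local-nuget-source
```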
3. Compile the sample
```
dotnet build
```
```
dotnet build
```

4. Run the sample
```
dotnet run [path-to-model] [path-to-image] [path-to-output-image]
```
```
dotnet run [path-to-model] [path-to-image] [path-to-output-image]
```
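For example (all three file names are illustrative; point them at your actual model, input image, and desired output path):
```
dotnet run yolov3-10.onnx sample_input.jpg sample_output.jpg
```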

## References:

[OpenVINO Execution Provider](https://www.intel.com/content/www/us/en/artificial-intelligence/posts/faster-inferencing-with-one-line-of-code.html)

[Get started with ORT for C#](https://onnxruntime.ai/docs/get-started/with-csharp.html)

[fasterrcnn_csharp](https://onnxruntime.ai/docs/tutorials/fasterrcnn_csharp.html))
[fasterrcnn_csharp](https://onnxruntime.ai/docs/tutorials/fasterrcnn_csharp.html)

[resnet50_csharp](https://onnxruntime.ai/docs/tutorials/resnet50_csharp.html)