diff --git a/c_cxx/OpenVINO_EP/Linux/squeezenet_classification/README.md b/c_cxx/OpenVINO_EP/Linux/squeezenet_classification/README.md
index ec4dbc1bf..6fdcc1555 100644
--- a/c_cxx/OpenVINO_EP/Linux/squeezenet_classification/README.md
+++ b/c_cxx/OpenVINO_EP/Linux/squeezenet_classification/README.md
@@ -44,9 +44,12 @@ export OPENCL_INCS=path/to/your/directory/openvino/thirdparty/ocl/clhpp_headers/
 - For general sample
 ```
-g++ -o run_squeezenet squeezenet_cpp_app.cpp -I ../../../include/onnxruntime/core/session/ -I /opt/intel/openvino_2021.4.752/opencv/include/ -I /opt/intel/openvino_2021.4.752/opencv/lib/ -L ./ -lonnxruntime_providers_openvino -lonnxruntime_providers_shared -lonnxruntime -L /opt/intel/openvino_2021.4.752/opencv/lib/ -lopencv_imgcodecs -lopencv_dnn -lopencv_core -lopencv_imgproc
+g++ -o run_squeezenet squeezenet_cpp_app.cpp -I ../../../include/onnxruntime/core/session/ -I /path/to/opencv/include/ -I /path/to/opencv/lib/ -L ./ -lonnxruntime_providers_openvino -lonnxruntime_providers_shared -lonnxruntime -L /path/to/opencv/lib/ -lopencv_imgcodecs -lopencv_dnn -lopencv_core -lopencv_imgproc
 ```
-Note: This build command is using the opencv location from OpenVINO 2021.4.2 Release Installation. You can use any version of OpenVINO and change the location path accordingly.
+**Note:**
+If you are using the OpenCV bundled with the OpenVINO package, the paths are as follows:
+* For the latest version (2022.1.0), run download_opencv.sh in /path/to/openvino/extras/script; the opencv folder will be downloaded to /path/to/openvino/extras.
+* For older OpenVINO versions, the opencv folder is available in the openvino directory itself.
 
 - For the sample using IO Buffer Optimization feature
 Set the OpenCL lib and headers path. For example if you are setting the path from openvino source build folder, the paths will be like:
@@ -56,8 +59,12 @@ export OPENCL_INCS=path/to/your/directory/openvino/thirdparty/ocl/clhpp_headers/
 ```
 Now based on the above path, compile command will be:
 ```
-g++ -o run_squeezenet squeezenet_cpp_app_io.cpp -I ../../../include/onnxruntime/core/session/ -I $OPENCL_INCS -I $OPENCL_INCS/../../cl_headers/ -I /opt/intel/openvino_2021.4.752/opencv/include/ -I /opt/intel/openvino_2021.4.752/opencv/lib/ -L ./ -lonnxruntime_providers_openvino -lonnxruntime_providers_shared -lonnxruntime -L /opt/intel/openvino_2021.4.752/opencv/lib/ -lopencv_imgcodecs -lopencv_dnn -lopencv_core -lopencv_imgproc -L $OPENCL_LIBS -lOpenCL
+g++ -o run_squeezenet squeezenet_cpp_app_io.cpp -I ../../../include/onnxruntime/core/session/ -I $OPENCL_INCS -I $OPENCL_INCS/../../cl_headers/ -I /path/to/opencv/include/ -I /path/to/opencv/lib/ -L ./ -lonnxruntime_providers_openvino -lonnxruntime_providers_shared -lonnxruntime -L /path/to/opencv/lib/ -lopencv_imgcodecs -lopencv_dnn -lopencv_core -lopencv_imgproc -L $OPENCL_LIBS -lOpenCL
 ```
+**Note:**
+If you are using the OpenCV bundled with the OpenVINO package, the paths are as follows:
+* For the latest version (2022.1.0), run download_opencv.sh in /path/to/openvino/extras/script; the opencv folder will be downloaded to /path/to/openvino/extras.
+* For older OpenVINO versions, the opencv folder is available in the openvino directory itself.
 
 4. Run the sample
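For concreteness, this is what the general-sample compile command looks like once OpenCV has been fetched by download_opencv.sh as described in the note above. The extras/opencv layout is an assumption; verify the include/ and lib/ folders the script actually produces on your machine:

```sh
# Hypothetical paths: assumes download_opencv.sh placed OpenCV under
# <openvino-root>/extras/opencv with include/ and lib/ subfolders.
OPENCV_ROOT=/path/to/openvino/extras/opencv
g++ -o run_squeezenet squeezenet_cpp_app.cpp \
    -I ../../../include/onnxruntime/core/session/ \
    -I "$OPENCV_ROOT/include/" \
    -L ./ -lonnxruntime_providers_openvino -lonnxruntime_providers_shared -lonnxruntime \
    -L "$OPENCV_ROOT/lib/" \
    -lopencv_imgcodecs -lopencv_dnn -lopencv_core -lopencv_imgproc
```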
diff --git a/c_cxx/OpenVINO_EP/Linux/squeezenet_classification/squeezenet_cpp_app.cpp b/c_cxx/OpenVINO_EP/Linux/squeezenet_classification/squeezenet_cpp_app.cpp
index f553d16fb..7e0786260 100644
--- a/c_cxx/OpenVINO_EP/Linux/squeezenet_classification/squeezenet_cpp_app.cpp
+++ b/c_cxx/OpenVINO_EP/Linux/squeezenet_classification/squeezenet_cpp_app.cpp
@@ -260,7 +260,7 @@ int main(int argc, char* argv[])
     // step 2: Resize the image.
     cv::Mat resizedImageBGR, resizedImageRGB, resizedImage, preprocessedImage;
     cv::resize(imageBGR, resizedImageBGR,
-               cv::Size(inputDims.at(2), inputDims.at(3)),
+               cv::Size(inputDims.at(3), inputDims.at(2)),
                cv::InterpolationFlags::INTER_CUBIC);
 
     // step 3: Convert the image to HWC RGB UINT8 format.
diff --git a/c_cxx/OpenVINO_EP/Linux/squeezenet_classification/squeezenet_cpp_app_io.cpp b/c_cxx/OpenVINO_EP/Linux/squeezenet_classification/squeezenet_cpp_app_io.cpp
index 5e649548a..c0642d03f 100644
--- a/c_cxx/OpenVINO_EP/Linux/squeezenet_classification/squeezenet_cpp_app_io.cpp
+++ b/c_cxx/OpenVINO_EP/Linux/squeezenet_classification/squeezenet_cpp_app_io.cpp
@@ -290,7 +290,7 @@ int main(int argc, char* argv[])
     // step 2: Resize the image.
     cv::Mat resizedImageBGR, resizedImageRGB, resizedImage, preprocessedImage;
     cv::resize(imageBGR, resizedImageBGR,
-               cv::Size(inputDims.at(2), inputDims.at(3)),
+               cv::Size(inputDims.at(3), inputDims.at(2)),
                cv::InterpolationFlags::INTER_CUBIC);
 
     // step 3: Convert the image to HWC RGB UINT8 format.
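The rationale for the argument swap above (and for the identical Windows change below): ONNX image models report their input shape in NCHW order, so inputDims is {batch, channels, height, width}, while cv::Size takes (width, height). A minimal sketch, with the 224x224 SqueezeNet-style dimensions assumed for illustration:

```cpp
#include <opencv2/core.hpp>
#include <cstdint>
#include <vector>

int main() {
    // NCHW layout as reported by the ONNX model: {batch, channels, height, width}.
    std::vector<int64_t> inputDims = {1, 3, 224, 224}; // dims assumed for illustration
    // cv::Size is (width, height): width is index 3, height is index 2.
    // The old order at(2), at(3) only happened to work because H == W here;
    // for a non-square input it would silently transpose the resize target.
    cv::Size target(static_cast<int>(inputDims.at(3)),   // width  = W
                    static_cast<int>(inputDims.at(2)));  // height = H
    return target.width == 224 ? 0 : 1;
}
```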
diff --git a/c_cxx/OpenVINO_EP/Windows/README.md b/c_cxx/OpenVINO_EP/Windows/README.md
index c347f93cc..6c96a54fd 100644
--- a/c_cxx/OpenVINO_EP/Windows/README.md
+++ b/c_cxx/OpenVINO_EP/Windows/README.md
@@ -8,7 +8,7 @@
 #### Build ONNX Runtime
 Open x64 Native Tools Command Prompt for VS 2019.
 ```
-build.bat --config RelWithDebInfo --use_openvino CPU_FP32 --build_shared_lib --parallel --cmake_extra_defines CMAKE_INSTALL_PREFIX=c:\dev\ort_install
+build.bat --config RelWithDebInfo --use_openvino CPU_FP32 --build_shared_lib --parallel --cmake_extra_defines CMAKE_INSTALL_PREFIX=c:\dev\ort_install --skip_tests
 ```
 By default products of the build on Windows go to build\Windows\config folder. In the case above it would be build\Windows\RelWithDebInfo.
@@ -28,9 +28,15 @@ git clone https://github.com/microsoft/onnxruntime-inference-examples.git
 Change your current directory to c_cxx\OpenVINO_EP\Windows, then run
 ```bat
 mkdir build && cd build
-cmake .. -A x64 -T host=x64 -Donnxruntime_USE_OPENVINO=ON -DONNXRUNTIME_ROOTDIR=c:\dev\ort_install -DOPENCV_ROOTDIR="C:\Program Files (x86)\Intel\openvino_2021.4.752\opencv"
+cmake .. -A x64 -T host=x64 -Donnxruntime_USE_OPENVINO=ON -DONNXRUNTIME_ROOTDIR=c:\dev\ort_install -DOPENCV_ROOTDIR="path\to\opencv"
 ```
 Choose required opencv path. Skip the opencv flag if you don't want to build squeezenet sample.
+
+**Note:**
+If you are using the OpenCV bundled with the OpenVINO package, the paths are as follows:
+* For the latest version (2022.1.0), run download_opencv.ps1 in \path\to\openvino\extras\script; the opencv folder will be downloaded to \path\to\openvino\extras.
+* For older OpenVINO versions, the opencv folder is available in the openvino directory itself.
+
 Build samples using msbuild either for Debug or Release configuration.
 ```bat
diff --git a/c_cxx/OpenVINO_EP/Windows/squeezenet_classification/squeezenet_cpp_app.cpp b/c_cxx/OpenVINO_EP/Windows/squeezenet_classification/squeezenet_cpp_app.cpp
index 1c3023d64..fe1a70a66 100644
--- a/c_cxx/OpenVINO_EP/Windows/squeezenet_classification/squeezenet_cpp_app.cpp
+++ b/c_cxx/OpenVINO_EP/Windows/squeezenet_classification/squeezenet_cpp_app.cpp
@@ -265,7 +265,7 @@ int main(int argc, char* argv[])
     // step 2: Resize the image.
     cv::Mat resizedImageBGR, resizedImageRGB, resizedImage, preprocessedImage;
     cv::resize(imageBGR, resizedImageBGR,
-               cv::Size(inputDims.at(2), inputDims.at(3)),
+               cv::Size(inputDims.at(3), inputDims.at(2)),
                cv::InterpolationFlags::INTER_CUBIC);
 
     // step 3: Convert the image to HWC RGB UINT8 format.
diff --git a/c_sharp/OpenVINO_EP/yolov3_object_detection/Program.cs b/c_sharp/OpenVINO_EP/yolov3_object_detection/Program.cs
index 30bc46e5c..8ac757e1c 100644
--- a/c_sharp/OpenVINO_EP/yolov3_object_detection/Program.cs
+++ b/c_sharp/OpenVINO_EP/yolov3_object_detection/Program.cs
@@ -81,16 +81,19 @@ static void Main(string[] args)
 
             //Preprocessing image
             Tensor<float> input = new DenseTensor<float>(new[] { 1, 3, h, w });
-            for (int y = 0; y < clone.Height; y++)
+            clone.ProcessPixelRows(accessor =>
             {
-                Span<Rgb24> pixelSpan = clone.GetPixelRowSpan(y);
-                for (int x = 0; x < clone.Width; x++)
+                for (int y = 0; y < accessor.Height; y++)
                 {
-                    input[0, 0, y, x] = pixelSpan[x].B / 255f;
-                    input[0, 1, y, x] = pixelSpan[x].G / 255f;
-                    input[0, 2, y, x] = pixelSpan[x].R / 255f;
+                    Span<Rgb24> pixelSpan = accessor.GetRowSpan(y);
+                    for (int x = 0; x < pixelSpan.Length; x++)
+                    {
+                        input[0, 0, y, x] = pixelSpan[x].B / 255f;
+                        input[0, 1, y, x] = pixelSpan[x].G / 255f;
+                        input[0, 2, y, x] = pixelSpan[x].R / 255f;
+                    }
                 }
-            }
+            });
 
             //Get the Image Shape
             var image_shape = new DenseTensor<float>(new[] { 1, 2 });
@@ -145,8 +148,7 @@ static void Main(string[] args)
 
             // Put boxes, labels and confidence on image and save for viewing
             using var outputImage = File.OpenWrite(outImageFilePath);
-            // Using FreeMono font for Linux and Arial for others
-            Font font = (RuntimeInformation.IsOSPlatform(OSPlatform.Linux)) ? SystemFonts.CreateFont("FreeMono", 16) : SystemFonts.CreateFont("Arial", 16);
+            Font font = SystemFonts.CreateFont("Arial", 16);
             foreach (var p in predictions)
             {
                 imageOrg.Mutate(x =>
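The Program.cs rewrite above tracks ImageSharp's API change: Image.GetPixelRowSpan was removed in ImageSharp 2.x, and ProcessPixelRows is the replacement for row-wise pixel access. A self-contained sketch of the new pattern follows; the image size is assumed for illustration, and Rgb24 matches the pixel access used in the diff:

```csharp
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using Microsoft.ML.OnnxRuntime.Tensors;

class PreprocessSketch
{
    static void Main()
    {
        using var image = new Image<Rgb24>(416, 416); // size assumed for illustration
        var input = new DenseTensor<float>(new[] { 1, 3, image.Height, image.Width });

        // ProcessPixelRows hands out row spans through an accessor callback
        // instead of exposing spans on the Image object directly.
        image.ProcessPixelRows(accessor =>
        {
            for (int y = 0; y < accessor.Height; y++)
            {
                System.Span<Rgb24> row = accessor.GetRowSpan(y);
                for (int x = 0; x < row.Length; x++)
                {
                    // BGR channel order, normalized to [0, 1], as in the sample.
                    input[0, 0, y, x] = row[x].B / 255f;
                    input[0, 1, y, x] = row[x].G / 255f;
                    input[0, 2, y, x] = row[x].R / 255f;
                }
            }
        });
    }
}
```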
diff --git a/c_sharp/OpenVINO_EP/yolov3_object_detection/README.md b/c_sharp/OpenVINO_EP/yolov3_object_detection/README.md
index 1a73e7fb6..a73d2d2ee 100644
--- a/c_sharp/OpenVINO_EP/yolov3_object_detection/README.md
+++ b/c_sharp/OpenVINO_EP/yolov3_object_detection/README.md
@@ -9,7 +9,7 @@ The source code for this sample is available [here](https://github.com/microsoft
 # How to build
 ## Prerequisites
-1. Install [.NET 5.0](https://dotnet.microsoft.com/download/dotnet/5.0) or higher for your OS (Mac, Windows or Linux).
+1. Install [.NET 5.0](https://dotnet.microsoft.com/download/dotnet/5.0) or higher and download NuGet for your OS (Mac, Windows or Linux); see the prerequisites [here](https://onnxruntime.ai/docs/build/inferencing.html#prerequisites-1).
 2. [The Intel® Distribution of OpenVINO toolkit](https://docs.openvinotoolkit.org/latest/index.html)
 3. Use any sample Image as input to the sample.
 4. Download the latest YOLOv3 model from the ONNX Model Zoo.
@@ -24,29 +24,44 @@ The source code for this sample is available [here](https://github.com/microsoft
 [Documentation](https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html)
 
 To build nuget packages of onnxruntime with openvino flavour
-```
-./build.sh --config Release --use_openvino MYRIAD_FP16 --build_shared_lib --build_nuget
-```
+   ```
+   ./build.sh --config Release --use_openvino MYRIAD_FP16 --build_shared_lib --build_nuget
+   ```
 
 ## Build the sample C# Application
 1. Create a new console project
-```
-dotnet new console
-```
+   ```
+   dotnet new console
+   ```
+   Replace the generated sample script with the one [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_sharp/OpenVINO_EP/yolov3_object_detection).
 2. Install Nuget Packages of Onnxruntime and [ImageSharp](https://www.nuget.org/packages/SixLabors.ImageSharp)
-   1. Open the Visual C# Project file (.csproj) using VS19.
-   2. Right click on project, navigate to manage Nuget Packages.
-   3. Install SixLabors.ImageSharp, SixLabors.Core, SixLabors.Fonts and SixLabors.ImageSharp.Drawing Packages from nuget.org.
-   4. Install Microsoft.ML.OnnxRuntime.Managed and Microsoft.ML.OnnxRuntime.Openvino from your build directory nuget-artifacts.
+   * Using Visual Studio
+     1. Open the Visual C# Project file (.csproj) using VS19.
+     2. Right-click on the project and navigate to Manage NuGet Packages.
+     3. Install the SixLabors.ImageSharp, SixLabors.Core, SixLabors.Fonts and SixLabors.ImageSharp.Drawing packages from nuget.org.
+     4. Install Microsoft.ML.OnnxRuntime.Managed and Microsoft.ML.OnnxRuntime.Openvino from the nuget-artifacts folder of your build directory.
+   * Using cmd
+     ```
+     mkdir [source-folder]
+     cd [console-project-folder]
+     dotnet add package SixLabors.ImageSharp
+     dotnet add package SixLabors.Core
+     dotnet add package SixLabors.Fonts
+     dotnet add package SixLabors.ImageSharp.Drawing
+     ```
+     Add the Microsoft.ML.OnnxRuntime.Managed and Microsoft.ML.OnnxRuntime.Openvino packages from a local source (see the worked example after this diff):
+     ```
+     nuget add [path-to-nupkg] -Source [source-path]
+     dotnet add package [nuget-package-name] -v [package-version] -s [source-path]
+     ```
 3. Compile the sample
-```
-dotnet build
-```
+   ```
+   dotnet build
+   ```
 4. Run the sample
-```
-dotnet run [path-to-model] [path-to-image] [path-to-output-image]
-```
+   ```
+   dotnet run [path-to-model] [path-to-image] [path-to-output-image]
+   ```
 
 ## References:
@@ -54,7 +69,7 @@ dotnet run [path-to-model] [path-to-image] [path-to-output-image]
 
 [Get started with ORT for C#](https://onnxruntime.ai/docs/get-started/with-csharp.html)
 
-[fasterrcnn_csharp](https://onnxruntime.ai/docs/tutorials/fasterrcnn_csharp.html))
+[fasterrcnn_csharp](https://onnxruntime.ai/docs/tutorials/fasterrcnn_csharp.html)
 
 [resnet50_csharp](https://onnxruntime.ai/docs/tutorials/resnet50_csharp.html)
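As a worked instance of the local-feed commands referenced in the yolov3 README hunk above: the nupkg path, feed directory, and version number below are all hypothetical and must be replaced with the values from your own build.

```sh
# Hypothetical paths and version, shown only to illustrate the command shapes.
nuget add ./build/Linux/Release/nuget-artifacts/Microsoft.ML.OnnxRuntime.Openvino.1.11.0.nupkg -Source ~/local-nuget-feed
dotnet add package Microsoft.ML.OnnxRuntime.Openvino -v 1.11.0 -s ~/local-nuget-feed
```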