
Commit 829c4f6 (1 parent: d831efe)

E#-88506 Extend G-API classification benchmark to work with different inference backends (#3853)

* Add gapi execution provider selection
* Make compiled with OCV 4.9
* Add GAPI inference backend selection, apply some review comments
* Add checks on backend arguments
* Apply review comments, add MACRO guards for different backends employment
* Fix -backend help message
* Fix compilation bug with provider template
* Fix possible 'unusable params' warning
* Update demos/classification_benchmark_demo/cpp_gapi/README.md
* Fix comments, add improved OpenCV version check
* Add mean/scale into README.md
* Update demos/classification_benchmark_demo/cpp_gapi/README.md (suggestion applied)
* Add missing ONNX mean/scale processing
* Decrease IE and ONNX in opencv support versions requirements

Co-authored-by: Zlobin Vladimir <[email protected]>

File tree: 15 files changed, +482 −20 lines

demos/classification_benchmark_demo/cpp_gapi/README.md

Lines changed: 7 additions & 0 deletions

````diff
@@ -145,6 +145,9 @@ Options:
     -no_show                  Optional. Disable showing of processed images.
     -time "<integer>"         Optional. Time in seconds to execute program. Default is -1 (infinite time).
     -u                        Optional. List of monitors to show initially.
+    -backend <string>         Optional. Specify an inference backend. The list of available backends depends on the OpenCV version. Default value is IE. See README.md for details.
+    -mean_values              Optional. Normalize input by subtracting the mean values per channel. Example: "255.0 255.0 255.0"
+    -scale_values             Optional. Divide input by scale values per channel. Division is applied after mean values subtraction. Example: "255.0 255.0 255.0"
 ```
 
 The number of `InferRequest`s is specified by the -nireq flag. Each `InferRequest` acts as a "buffer": it waits in a queue before being filled with images and sent for inference, and after the inference completes it waits in a queue until its results are processed. Increasing the number of `InferRequest`s usually increases performance, because multiple `InferRequest`s can then be processed simultaneously if the device supports parallelization. However, a big number of `InferRequest`s increases latency because each image still needs to wait in a queue.
@@ -161,6 +164,10 @@ For example, use the following command-line command to run the application:
     -u CDM
 ```
 
+To let the demo find the ONNX libraries, copy `onnxruntime_providers_openvino.dll` from the `lib` dir and `onnxruntime.dll` and `onnxruntime_providers_shared.dll` from the `bin` dir of the onnxruntime install to the folder with the demo executable.
+
+The inference backend is specified by the -backend flag. Depending on the OpenCV version, different backends are available: `IE` starting from OpenCV 4.2.0, `ONNX` from 4.5.1 and `OV` from 4.8. The possible device value for `ONNX/DML/<device>` can be found by running `set OPENCV_LOG_LEVEL=INFO` and then running the demo with `-backend ONNX/DML/`; search the log for `Available DirectML adapters:`. For `-backend ONNX/OV/<device>` the device is the same as the one selected for `--use_openvino` while compiling `onnxruntime`.
+
 ## Demo Output
 
 The demo uses OpenCV to display the resulting image grid with classification results presented as a text above images. The demo reports:
````

demos/classification_benchmark_demo/cpp_gapi/classification_benchmark_demo_gapi.hpp

Lines changed: 16 additions & 1 deletion

```diff
@@ -8,6 +8,9 @@
 
 #include <gflags/gflags.h>
 
+#include <utils_gapi/backend_description.hpp>
+#include <utils/args_helper.hpp>
+
 static const char help_message[] = "Print a usage message.";
 static const char image_message[] = "Required. Path to a folder with images or path to an image file.";
 static const char model_message[] = "Required. Path to an .xml file with a trained model.";
@@ -26,7 +29,13 @@ static const char no_show_message[] = "Optional. Disable showing of processed im
 static const char execution_time_message[] = "Optional. Time in seconds to execute program. "
                                              "Default is -1 (infinite time).";
 static const char utilization_monitors_message[] = "Optional. List of monitors to show initially.";
-
+static const std::string backend_message_str("Optional. Specify an inference backend. The list of available backends: " +
+                                             merge(getSupportedInferenceBackends(), ",") + ". Default value is IE. See README.md for details");
+static const char *backend_message = backend_message_str.c_str();
+static const char mean_values_message[] =
+    "Optional. Normalize input by subtracting the mean values per channel. Example: \"255.0 255.0 255.0\"";
+static const char scale_values_message[] = "Optional. Divide input by scale values per channel. Division is applied "
+                                           "after mean values subtraction. Example: \"255.0 255.0 255.0\"";
 
 DEFINE_bool(h, false, help_message);
 DEFINE_string(i, "", image_message);
@@ -42,6 +51,9 @@ DEFINE_string(res, "1280x720", image_grid_resolution_message);
 DEFINE_bool(no_show, false, no_show_message);
 DEFINE_uint32(time, std::numeric_limits<gflags::uint32>::max(), execution_time_message);
 DEFINE_string(u, "", utilization_monitors_message);
+DEFINE_string(backend, "IE", backend_message);
+DEFINE_string(mean_values, "", mean_values_message);
+DEFINE_string(scale_values, "", scale_values_message);
 
 /**
  * \brief This function shows a help message
@@ -66,4 +78,7 @@ static void showUsage() {
     std::cout << "    -no_show                  " << no_show_message << std::endl;
     std::cout << "    -time \"<integer>\"         " << execution_time_message << std::endl;
     std::cout << "    -u                        " << utilization_monitors_message << std::endl;
+    std::cout << "    -backend                  " << backend_message << std::endl;
+    std::cout << "    -mean_values              " << mean_values_message << std::endl;
+    std::cout << "    -scale_values             " << scale_values_message << std::endl;
 }
```

demos/classification_benchmark_demo/cpp_gapi/main.cpp

Lines changed: 21 additions & 17 deletions

```diff
@@ -18,8 +18,6 @@
 #include <opencv2/gapi/gcomputation.hpp>
 #include <opencv2/gapi/gmat.hpp>
 #include <opencv2/gapi/gproto.hpp>
-#include <opencv2/gapi/infer.hpp>
-#include <opencv2/gapi/infer/ie.hpp>
 #include <opencv2/gapi/util/optional.hpp>
 
 
@@ -36,11 +34,15 @@
 #include <utils/slog.hpp>
 #include <utils_gapi/kernel_package.hpp>
 #include <utils_gapi/stream_source.hpp>
+#include <utils_gapi/backend_builder.hpp>
 
 #include "classification_benchmark_demo_gapi.hpp"
 #include "custom_kernels.hpp"
 #include <models/classification_model.h>
 
+namespace nets {
+G_API_NET(Classification, <cv::GMat(cv::GMat)>, "classification");
+}
 
 namespace util {
 bool ParseAndCheckCommandLine(int argc, char* argv[]) {
@@ -61,11 +63,16 @@ bool ParseAndCheckCommandLine(int argc, char* argv[]) {
     return true;
 }
 
-}  // namespace util
-
-namespace nets {
-G_API_NET(Classification, <cv::GMat(cv::GMat)>, "classification");
+inference_backends_t ParseInferenceBackends(const std::string &str, char sep = ',') {
+    inference_backends_t backends;
+    std::stringstream params_list(str);
+    std::string line;
+    while (std::getline(params_list, line, sep)) {
+        backends.push(BackendDescription::parseFromArgs(line));
+    }
+    return backends;
 }
+}  // namespace util
 
 int main(int argc, char* argv[]) {
     try {
@@ -133,19 +140,16 @@ int main(int argc, char* argv[]) {
         });
 
         /** Configure network **/
+        auto nets = cv::gapi::networks();
         auto config = ConfigFactory::getUserConfig(FLAGS_d, FLAGS_nireq, FLAGS_nstreams, FLAGS_nthreads);
-        // clang-format off
-        const auto net =
-            cv::gapi::ie::Params<nets::Classification>{
-                FLAGS_m,  // path to topology IR
-                fileNameNoExt(FLAGS_m) + ".bin",  // path to weights
-                FLAGS_d  // device specifier
-            }.cfgNumRequests(config.maxAsyncRequests)
-             .pluginConfig(config.getLegacyConfig());
-        // clang-format on
-
+        inference_backends_t backends = util::ParseInferenceBackends(FLAGS_backend);
+        nets += create_execution_network<nets::Classification>(FLAGS_m,
+                                                               BackendsConfig{config,
+                                                                              FLAGS_mean_values,
+                                                                              FLAGS_scale_values},
+                                                               backends);
         auto pipeline = comp.compileStreaming(cv::compile_args(custom::kernels(),
-                                                               cv::gapi::networks(net),
+                                                               nets,
                                                                cv::gapi::streaming::queue_capacity{1}));
 
         /** Output container for result **/
```
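The new `util::ParseInferenceBackends` above splits the comma-separated `-backend` value into a FIFO of backend descriptions, and `BackendDescription::parseFromArgs` further splits each entry on `/` (e.g. `ONNX/DML/<device>`). A standalone sketch of that two-level parse, using a simplified stand-in for `BackendDescription` (the names below are illustrative, not the demo's actual helpers):

```cpp
#include <cassert>
#include <queue>
#include <sstream>
#include <string>
#include <vector>

// Simplified stand-in for BackendDescription from backend_description.hpp.
struct BackendDescription {
    std::string name;                     // e.g. "ONNX"
    std::vector<std::string> properties;  // e.g. {"DML", "GPU"}
};

// Split "ONNX/DML/GPU" into a backend name and trailing properties.
static BackendDescription parseOne(const std::string &arg, char sep = '/') {
    std::stringstream ss(arg);
    std::string token;
    std::vector<std::string> parts;
    while (std::getline(ss, token, sep)) {
        parts.push_back(token);
    }
    BackendDescription desc;
    desc.name = parts.empty() ? "" : parts.front();
    desc.properties.assign(parts.begin() + (parts.empty() ? 0 : 1), parts.end());
    return desc;
}

// Split a comma-separated -backend value into a FIFO of backends,
// mirroring util::ParseInferenceBackends in main.cpp.
std::queue<BackendDescription> parseBackends(const std::string &str, char sep = ',') {
    std::queue<BackendDescription> backends;
    std::stringstream list(str);
    std::string line;
    while (std::getline(list, line, sep)) {
        backends.push(parseOne(line));
    }
    return backends;
}
```

A queue (rather than a vector) fits here because the builder consumes the front backend first and passes the remainder down as execution-provider options.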

demos/common/cpp/utils/include/utils/args_helper.hpp

Lines changed: 3 additions & 0 deletions

```diff
@@ -32,6 +32,9 @@ void readInputFilesArguments(std::vector<std::string>& files, const std::string&
 void parseInputFilesArguments(std::vector<std::string>& files);
 
 std::vector<std::string> split(const std::string& s, char delim);
+void split(const std::string& s, char delim, std::vector<float> &out);
+std::string merge(std::initializer_list<std::string> list, const char *delim);
+std::string merge(const std::vector<std::string> &list, const char *delim);
 
 std::vector<std::string> parseDevices(const std::string& device_string);
```

demos/common/cpp/utils/include/utils/config_factory.h

Lines changed: 1 addition & 1 deletion

```diff
@@ -31,7 +31,7 @@ struct ModelConfig {
     ov::AnyMap compiledModelConfig;
 
     std::set<std::string> getDevices();
-    std::map<std::string, std::string> getLegacyConfig();
+    std::map<std::string, std::string> getLegacyConfig() const;
 
 protected:
     std::set<std::string> devices;
```

demos/common/cpp/utils/src/args_helper.cpp

Lines changed: 32 additions & 0 deletions

```diff
@@ -17,6 +17,7 @@
 #include <map>
 
 #include <algorithm>
+#include <iterator>
 #include <cctype>
 #include <sstream>
 
@@ -77,6 +78,37 @@ std::vector<std::string> split(const std::string& s, char delim) {
     return result;
 }
 
+void split(const std::string& s, char delim, std::vector<float> &out) {
+    std::stringstream ss(s);
+    std::string item;
+
+    while (getline(ss, item, delim)) {
+        try {
+            out.push_back(std::stof(item));
+        } catch (...) {
+            throw std::runtime_error("cannot split the string: \"" + s + "\" onto floats");
+        }
+    }
+}
+
+template <class It>
+static std::string merge_impl(It begin, It end, const char* delim) {
+    std::stringstream ss;
+    std::copy(begin, end, std::ostream_iterator<std::string>(ss, delim));
+    std::string result = ss.str();
+    if (!result.empty()) {
+        result.resize(result.size() - strlen(delim));
+    }
+    return result;
+}
+std::string merge(std::initializer_list<std::string> list, const char* delim) {
+    return merge_impl(list.begin(), list.end(), delim);
+}
+
+std::string merge(const std::vector<std::string> &list, const char *delim) {
+    return merge_impl(list.begin(), list.end(), delim);
+}
+
 std::vector<std::string> parseDevices(const std::string& device_string) {
     const std::string::size_type colon_position = device_string.find(":");
     if (colon_position != std::string::npos) {
```
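The new helpers come as a pair: the `split` overload parses a delimited string of floats (used for `-mean_values`/`-scale_values`), and `merge` joins strings with a delimiter, trimming the trailing copy that `ostream_iterator` leaves behind. A self-contained sketch of the same pair (function names changed slightly so the sketch does not clash with the real headers):

```cpp
#include <cassert>
#include <cstring>
#include <iterator>
#include <sstream>
#include <stdexcept>
#include <string>
#include <vector>

// Parse a delimited string such as "255.0 255.0 255.0" into floats,
// mirroring the new split() overload in args_helper.cpp.
void splitFloats(const std::string &s, char delim, std::vector<float> &out) {
    std::stringstream ss(s);
    std::string item;
    while (std::getline(ss, item, delim)) {
        try {
            out.push_back(std::stof(item));
        } catch (...) {
            throw std::runtime_error("cannot split the string: \"" + s + "\" into floats");
        }
    }
}

// Join strings with a delimiter; ostream_iterator appends the delimiter
// after every element, so the trailing copy is trimmed at the end.
std::string mergeStrings(const std::vector<std::string> &list, const char *delim) {
    std::stringstream ss;
    std::copy(list.begin(), list.end(), std::ostream_iterator<std::string>(ss, delim));
    std::string result = ss.str();
    if (!result.empty()) {
        result.resize(result.size() - std::strlen(delim));
    }
    return result;
}
```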

demos/common/cpp/utils/src/config_factory.cpp

Lines changed: 1 addition & 1 deletion

```diff
@@ -93,7 +93,7 @@ ModelConfig ConfigFactory::getCommonConfig(const std::string& flags_d, uint32_t
     return config;
 }
 
-std::map<std::string, std::string> ModelConfig::getLegacyConfig() {
+std::map<std::string, std::string> ModelConfig::getLegacyConfig() const {
     std::map<std::string, std::string> config;
     for (const auto& item : compiledModelConfig) {
         config[item.first] = item.second.as<std::string>();
```
New file (the scraped page omits its name; main.cpp includes it as `<utils_gapi/backend_builder.hpp>`)

Lines changed: 49 additions & 0 deletions

```diff
@@ -0,0 +1,49 @@
+// Copyright (C) 2021-2023 Intel Corporation
+// SPDX-License-Identifier: Apache-2.0
+//
+
+#pragma once
+
+#include <functional>
+#include <list>
+#include <map>
+#include <stdexcept>
+#include <string>
+
+#include "ie_backend.hpp"
+#include "onnx_backend.hpp"
+#include "ov_backend.hpp"
+
+template<class ExecNetwork>
+cv::gapi::GNetPackage create_execution_network(const std::string &model_path,
+                                               const BackendsConfig &config,
+                                               const inference_backends_t &backends = inference_backends_t{}) {
+    if (backends.empty()) {
+        throw std::runtime_error("No G-API backend specified.\nPlease select a backend from the list: " +
+                                 merge(getSupportedInferenceBackends(), ", "));
+    }
+    static const std::map<std::string,
+                          std::function<cv::gapi::GNetPackage(const std::string &,
+                                                              const BackendsConfig &,
+                                                              const inference_backends_t &)>
+                         > maps {
+#ifdef GAPI_IE_BACKEND
+        {"IE", &BackendApplicator<ExecNetwork, cv::gapi::ie::Params>::apply}
+#endif
+#ifdef GAPI_ONNX_BACKEND
+        , {"ONNX", &BackendApplicator<ExecNetwork, cv::gapi::onnx::Params>::apply}
+#endif
+#ifdef GAPI_OV_BACKEND
+        , {"OV", &BackendApplicator<ExecNetwork, cv::gapi::ov::Params>::apply}
+#endif
+    };
+
+    const BackendDescription &backend = backends.front();
+    const auto it = maps.find(backend.name);
+    if (it == maps.end()) {
+        throw std::runtime_error("Cannot apply unknown G-API backend: " + backend.name +
+                                 "\nPlease, check on available backend list: " +
+                                 merge(getSupportedInferenceBackends(), ","));
+    }
+    return it->second(model_path, config, backends);
+}
```
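`create_execution_network` dispatches on the front backend's name through a static map from name to factory function, with each entry compiled in only when its feature macro is defined. The dispatch pattern can be illustrated in isolation (the `NetPackage` type and the factory lambdas below are stand-ins, not the real G-API types):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <stdexcept>
#include <string>

// Stand-in result type; the real code returns cv::gapi::GNetPackage.
struct NetPackage {
    std::string configured_by;
};

// Dispatch a backend name to its factory, as create_execution_network does.
// Unknown names produce a runtime_error, mirroring the real error path.
NetPackage buildNetwork(const std::string &backend, const std::string &model_path) {
    static const std::map<std::string,
                          std::function<NetPackage(const std::string &)>> factories{
        {"IE",   [](const std::string &m) { return NetPackage{"IE:" + m}; }},
        {"ONNX", [](const std::string &m) { return NetPackage{"ONNX:" + m}; }},
        {"OV",   [](const std::string &m) { return NetPackage{"OV:" + m}; }},
    };
    const auto it = factories.find(backend);
    if (it == factories.end()) {
        throw std::runtime_error("Cannot apply unknown G-API backend: " + backend);
    }
    return it->second(model_path);
}
```

Keeping the table `static const` means it is built once, and the `#ifdef` guards in the real header simply drop entries for backends the linked OpenCV build cannot provide.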
New file (the scraped page omits its name; the demo's gflags header includes it as `<utils_gapi/backend_description.hpp>`)

Lines changed: 43 additions & 0 deletions

```diff
@@ -0,0 +1,43 @@
+// Copyright (C) 2021-2023 Intel Corporation
+// SPDX-License-Identifier: Apache-2.0
+//
+
+#pragma once
+
+#include <queue>
+#include <string>
+#include <vector>
+
+#include <opencv2/gapi/infer.hpp>
+#include <utils/config_factory.h>
+
+#include "utils_gapi/gapi_features.hpp"
+
+struct BackendDescription {
+    template<class It>
+    BackendDescription(const std::string &name, It begin, It end) :
+        name(name), properties(begin, end) {}
+
+    static BackendDescription parseFromArgs(const std::string &arg, char sep = '/');
+
+    std::string name;
+    std::vector<std::string> properties;
+};
+
+struct BackendsConfig : ModelConfig {
+    BackendsConfig(const ModelConfig &src,
+                   const std::string &mean_values = "",
+                   const std::string &scale_values = "");
+    std::string mean_values;
+    std::string scale_values;
+};
+
+using inference_backends_t = std::queue<BackendDescription>;
+
+std::initializer_list<std::string> getSupportedInferenceBackends();
+
+template<class ExecNetwork,
+         template <class> class Params>
+struct BackendApplicator {
+    static cv::gapi::GNetPackage apply(const std::string&, const BackendsConfig &, const inference_backends_t &);
+};
```
New file (the scraped page omits its name; the header above includes it as `"utils_gapi/gapi_features.hpp"`)

Lines changed: 20 additions & 0 deletions

```diff
@@ -0,0 +1,20 @@
+#pragma once
+
+#include <opencv2/core/version.hpp>
+#include <opencv2/gapi/core.hpp>
+
+#if CV_VERSION_MAJOR > 4 || (CV_VERSION_MAJOR == 4 && CV_VERSION_MINOR >= 2)
+#define GAPI_IE_BACKEND
+#endif
+
+#if CV_VERSION_MAJOR > 4 || (CV_VERSION_MAJOR == 4 && CV_VERSION_MINOR > 5)
+#define GAPI_ONNX_BACKEND
+#endif
+
+#if CV_VERSION_MAJOR > 4 || (CV_VERSION_MAJOR == 4 && CV_VERSION_MINOR >= 8)
+#define GAPI_OV_BACKEND
+#endif
+
+#if CV_VERSION_MAJOR > 4 || (CV_VERSION_MAJOR == 4 && CV_VERSION_MINOR > 8)
+#define GAPI_ONNX_BACKEND_EP_EXTENSION
+#endif
```
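These guards gate each backend on the OpenCV version: per the macros (which see only major/minor), IE needs 4.2+, ONNX needs 4.6+ (the README cites 4.5.1, which a major/minor check cannot distinguish), OV needs 4.8+, and the ONNX execution-provider extension needs 4.9+. The same gating expressed as testable predicates over a (major, minor) pair, assuming the macro logic above:

```cpp
#include <cassert>

// Mirror of the version gates in the feature header, as constexpr predicates
// over (major, minor) instead of preprocessor checks on CV_VERSION_*.
constexpr bool hasIeBackend(int major, int minor) {
    return major > 4 || (major == 4 && minor >= 2);
}
constexpr bool hasOnnxBackend(int major, int minor) {
    return major > 4 || (major == 4 && minor > 5);
}
constexpr bool hasOvBackend(int major, int minor) {
    return major > 4 || (major == 4 && minor >= 8);
}
constexpr bool hasOnnxEpExtension(int major, int minor) {
    return major > 4 || (major == 4 && minor > 8);
}
```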
