Replies: 4 comments 4 replies
-
This is the code for setting up my ONNX Runtime inference environment:

```cpp
cv::Mat det1, det2;
// Note: interpolation is the 6th argument of cv::resize; passing cv::INTER_AREA
// as the 4th argument (fx) silently leaves the default interpolation in place.
cv::resize(input, det1, cv::Size(256, 256), 0, 0, cv::INTER_AREA);
det1.convertTo(det1, CV_32FC3);
InferencerOfEfficientAD::preProcess(det1, det2);
// Creates a 4-dimensional NCHW blob from the image
cv::Mat blob = cv::dnn::blobFromImage(det2, 1., cv::Size(256, 256), cv::Scalar(0, 0, 0), false, true);

auto memory_info = Ort::MemoryInfo::CreateCpu(OrtAllocatorType::OrtArenaAllocator, OrtMemType::OrtMemTypeDefault);
std::vector<Ort::Value> input_tensors;
input_tensors.push_back(Ort::Value::CreateTensor<float>(
    memory_info, blob.ptr<float>(), blob.total(), input_dims.data(), input_dims.size()));

const char* ch_in = "input";
const char* const* p_in = &ch_in;
const char* ch_out = "output";
const char* const* p_out = &ch_out;

std::vector<Ort::Value> output_tensors;
try {
    output_tensors = session->Run(Ort::RunOptions{ nullptr }, p_in, input_tensors.data(), 1, p_out, 1);
}
catch (const Ort::Exception& e) {
    std::cerr << "ONNX Runtime error: " << e.what() << std::endl;
}
```
-
Experts, please help me, this is really important to me.
-
Hello. I'm not familiar with the ONNX engine and OpenCV in C++, but could you please also share the […]? It is important that the input values for EfficientAD are scaled to the interval [0, 1] (if you didn't change the […]). I don't think any of these things is really the reason the inference doesn't work, but they should be fixed to get valid results once you figure out the problem. But maybe it works if you use:

```cpp
cv::Mat blob = cv::dnn::blobFromImage(det1, 1.0 / 255, cv::Size(256, 256), cv::Scalar(0, 0, 0), true, true);
```
-
@B1SH0PP, not sure if it would help, but @ashwinvaidya17 created an OpenVINO C++ inference example; you can find the inference capabilities here. This might give you some ideas?
-
I tried to run the `efficient.onnx` file exported from the Anomalib library on C++ ONNX Runtime, but encountered a model structure issue during execution. I'm not sure if this is due to a problem with how I wrote the ONNX Runtime code or if it's caused by the network structure.