Error while using ORT with OpenVINO EP #25758
-
I am using the Intel.ML.OnnxRuntime.OpenVino NuGet package for OpenVINO support on an Intel UHD Graphics 630, ... and have this C# code to enable the OpenVINO EP...
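Roughly, the setup looks like the sketch below (simplified for this question; my exact options are omitted, and the device string is just what I target):

```csharp
using Microsoft.ML.OnnxRuntime;

var options = new SessionOptions();
// Hand the graph to the OpenVINO EP; "GPU" targets the Intel iGPU.
options.AppendExecutionProvider_OpenVINO("GPU");

using var session = new InferenceSession("model.onnx", options);
```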
No other setup or installation has been done. When I run the following ONNX file: llmware/tiny-llama-chat-onnx/model.onnx (from Hugging Face), this error occurs:

[OpenVINO-EP] Output names mismatch between OpenVINO and ONNX

The same model runs fine on CPU. Questions:
Full warnings/error log:

```
2025-08-15 08:52:57.7334262 [W:onnxruntime:CSharpOnnxRuntime, openvino_provider_factory.cc:240 onnxruntime::openvino_ep::ParseProviderInfo::<lambda_2>::operator ()] Empty OV Config Map passed. Skipping load_config option parsing.
2025-08-15 08:53:00.6611546 [W:onnxruntime:CSharpOnnxRuntime, openvino_provider_factory.cc:240 onnxruntime::openvino_ep::ParseProviderInfo::<lambda_2>::operator ()] Empty OV Config Map passed. Skipping load_config option parsing.
2025-08-15 08:53:01.7632736 [W:onnxruntime:, session_state.cc:1280 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2025-08-15 08:53:01.7752132 [W:onnxruntime:, session_state.cc:1282 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
2025-08-15 08:53:06.0933579 [E:onnxruntime:, sequential_executor.cc:572 onnxruntime::ExecuteKernel] Non-zero status code returned while running OpenVINO-EP-subgraph_1 node. Name:'OpenVINOExecutionProvider_OpenVINO-EP-subgraph_1_0' Status Message: C:\Users\Administrator\Documents\jatin\nuget_122_latest\onnxruntime\onnxruntime\core\providers\openvino\backend_utils.cc:216 struct Ort::detail::ValueImpl<struct Ort::detail::Unowned > __cdecl onnxruntime::openvino_ep::backend_utils::GetOutputTensor(struct Ort::KernelContext &,class std::basic_string<char,struct std::char_traits,class std::allocator >,const class std::unordered_map<class std::basic_string<char,struct std::char_traits,class std::allocator >,unsigned int,struct std::hash<class std::basic_string<char,struct std::char_traits,class std::allocator > >,struct std::equal_to<class std::basic_string<char,struct std::char_traits,class std::allocator > >,class std::allocator<struct std::pair<class std::basic_string<char,struct std::char_traits,class std::allocator > const ,unsigned int> > > &,class std::shared_ptr) [OpenVINO-EP] Output names mismatch between OpenVINO and ONNX
```
-
@alishanawer, thanks for the detailed answer.
-
The error you’re seeing comes from a mismatch between the ONNX model outputs and what the OpenVINO Execution Provider (EP) can parse. A few important points:
1. Model format support
The Intel.ML.OnnxRuntime.OpenVino NuGet package does not support all ONNX models out of the box. The LLaMA model you're trying to run (llmware/tiny-llama-chat-onnx) is quantized and uses ops that are not yet fully supported by the OpenVINO EP. That's why you get:

[OpenVINO-EP] Output names mismatch between OpenVINO and ONNX

Best compatibility is with standard FP32 or FP16 ONNX models. INT8 is partially supported (mainly for CNNs), but INT4/Q4 quantized models are not.
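If you want to confirm what is and isn't running on the OpenVINO EP (your log already warns that some nodes fell back to CPU), verbose session logging prints the per-node assignments, and you can also dump the ONNX-side output names the EP has to match. A minimal C# sketch, assuming the standard Microsoft.ML.OnnxRuntime API; the "GPU" device string is illustrative:

```csharp
using System;
using Microsoft.ML.OnnxRuntime;

var options = new SessionOptions
{
    // Verbose logging prints per-node EP assignments (non-minimal builds only,
    // as the warning in your log notes).
    LogSeverityLevel = OrtLoggingLevel.ORT_LOGGING_LEVEL_VERBOSE
};
options.AppendExecutionProvider_OpenVINO("GPU"); // device string is an assumption

using var session = new InferenceSession("model.onnx", options);

// The output names declared by the ONNX model; these are what the
// OpenVINO EP failed to reconcile in your error.
foreach (var name in session.OutputMetadata.Keys)
    Console.WriteLine(name);
```

Comparing that list against what the OpenVINO-compiled subgraph exposes is the quickest way to see whether the mismatch comes from unsupported quantized ops being split out of the subgraph.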
2. GPU vs CPU on UHD 630