Intel One DNN #20078
Unanswered
heflinstephenraj-sa-14411 asked this question in EP Q&A
Replies: 0 comments
Hi,
I am trying to run the sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 model in ONNX Runtime Java using Intel's DNNL execution provider. I have exported the model to ONNX and built ONNX Runtime with the --use_dnnl flag. However, when I attempt to run inference with the model, I encounter the following error:
Here is the snippet for loading the model:
The same model works fine with the ONNX Runtime CPU execution provider.
Am I missing anything here? Please assist.
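For reference, here is a minimal sketch of how the DNNL execution provider is typically registered in ONNX Runtime Java via `OrtSession.SessionOptions` (the model path and class name below are placeholders, not the asker's actual code; this assumes a build produced with `--use_dnnl`, since the stock Maven artifact does not include the DNNL provider):

```java
import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtException;
import ai.onnxruntime.OrtSession;

public class DnnlSessionSketch {
    public static void main(String[] args) throws OrtException {
        OrtEnvironment env = OrtEnvironment.getEnvironment();
        try (OrtSession.SessionOptions opts = new OrtSession.SessionOptions()) {
            // Register the DNNL execution provider before creating the session.
            // The boolean flag controls whether DNNL uses the arena allocator.
            // This call throws if the runtime was not built with --use_dnnl.
            opts.addDnnl(true);
            // "model.onnx" is a placeholder path for the exported model.
            try (OrtSession session = env.createSession("model.onnx", opts)) {
                System.out.println("Inputs: " + session.getInputNames());
            }
        }
    }
}
```

If the provider registration itself is what fails, it is worth confirming that the Java bindings (`onnxruntime.jar` and the native libraries) in use at runtime come from the same `--use_dnnl` build, rather than a CPU-only package pulled in from a dependency.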