Replies: 3 comments 22 replies
- I am working on adding support for ONNX export (with and without Caffe2): #4120 and #4153. Hopefully they get accepted soon.
- If you are still stuck on this, you can use the export_model.py script in the tools/deploy/ folder of detectron2.
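For reference, here is a minimal sketch of what that script does with its tracing export method, written against detectron2's Python API. The config choice, dummy input, and opset version are illustrative, and the details vary by detectron2 version, so check export_model.py itself for the exact logic:

```python
import torch
from detectron2 import model_zoo
from detectron2.export import TracingAdapter

# Load a pretrained Mask R-CNN from the model zoo (illustrative choice).
model = model_zoo.get("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml", trained=True)
model.eval()

# detectron2 models consume a list of dicts; skip the post-processing step,
# which is not traceable, so the graph can be exported end to end.
def inference(model, inputs):
    instances = model.inference(inputs, do_postprocess=False)[0]
    return [{"instances": instances}]

image = torch.rand(3, 800, 800)  # placeholder CHW float tensor; use a real image
adapter = TracingAdapter(model, [{"image": image}], inference)

# Export the flattened, traceable wrapper to ONNX.
torch.onnx.export(adapter, (image,), "model.onnx", opset_version=16)
```

The exported graph then takes a single CHW image tensor and returns flattened instance fields (boxes, scores, classes, masks) rather than detectron2's Instances object.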
- Hello, have you solved your problem? I am facing the same problem: #4881.
-
Question
I'm trying to export a detectron2 model to ONNX format and run inference on the exported file (model.onnx) with onnxruntime.
To do so, I tried to export the well-known instance segmentation model provided by detectron2 (from its model_zoo). I succeeded in getting the model.onnx file (though I'm not sure I did it correctly), but I can't use it for inference. Can anyone tell me what I'm doing wrong?
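Roughly, this is the kind of inference I'm attempting (a minimal sketch; the input name and shape here are assumptions on my part, and the real ones can be read back from the session):

```python
import numpy as np
import onnxruntime as ort

# Load the exported model; CPUExecutionProvider keeps the example portable.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect what the exported graph actually expects -- the input name and
# shape depend on how the model was exported.
for inp in sess.get_inputs():
    print(inp.name, inp.shape, inp.type)

# Placeholder input: a CHW float32 image tensor with no batch dimension,
# matching detectron2's tracing export. Replace with a real preprocessed image.
image = np.random.rand(3, 800, 800).astype(np.float32)

outputs = sess.run(None, {sess.get_inputs()[0].name: image})
print([o.shape for o in outputs])
```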
Here is a shared Google Colab notebook containing my code: https://colab.research.google.com/drive/11BB-iufQCmwHoRzS4iBFwH7SqasDOik8?usp=sharing
Note 1: To modify the notebook, you'll have to make a copy of it in your Drive.
Note 2: I raised this issue on the onnx GitHub page, and their answer was:
And here is the error message (you can find it in the notebook, but I wanted to add it here in case someone is searching for a solution to the same problem):
Thank you in advance!