Replies: 1 comment
-
I am working on adding support for ONNX export (with and without Caffe2): #4120 and #4153. Hopefully they get accepted soon.
-
I trained a RetinaNet model using an existing config file from Detectron2 with the following command:
python train.py --output_path ./retina --yaml COCO-Detection/retinanet_R_101_FPN_3x.yaml --resume --max_iter 20000 --lr_rate 1e-5 --batch_size 16 --iou 0.4
I would like to know how to use caffe2_export.py to export the trained PyTorch weights (.pth) to an ONNX file. What command should I use, and do I need to pass any arguments?
If this is not the right way to export to ONNX, could you suggest another method?
Thanks
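For reference, here is a minimal sketch of one way such an export can be done through detectron2's Python API rather than by calling caffe2_export.py (an internal module) directly. It assumes a detectron2 version that provides detectron2.export.Caffe2Tracer; the checkpoint path ./retina/model_final.pth and the dummy input size are assumptions based on the --output_path above, so adjust them to your actual training output.

# Minimal sketch: export a trained detectron2 RetinaNet checkpoint to ONNX via Caffe2Tracer.
# Assumptions: checkpoint path, output path, and dummy image size are placeholders.
import torch
import onnx

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.modeling import build_model
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.export import Caffe2Tracer

# Build the same config that was used for training.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/retinanet_R_101_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = "./retina/model_final.pth"  # assumed checkpoint name under --output_path
cfg.MODEL.DEVICE = "cpu"                        # trace on CPU; switch to "cuda" if available

# Build the model and load the trained weights.
model = build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
model.eval()

# Caffe2Tracer traces the model on one sample input in detectron2's standard
# inference format: a list of dicts, each with an "image" tensor of shape (C, H, W).
# A dummy image is usually enough for tracing; a real sample image is safer.
dummy_image = torch.zeros(3, 800, 800, dtype=torch.float32)
inputs = [{"image": dummy_image, "height": 800, "width": 800}]

tracer = Caffe2Tracer(cfg, model, inputs)
onnx_model = tracer.export_onnx()  # returns an onnx.ModelProto (may contain caffe2 custom ops)
onnx.save(onnx_model, "./retina/retinanet_R_101_FPN_3x.onnx")

Note that the exported graph may rely on caffe2 custom ops, so it is not guaranteed to run in a generic ONNX runtime. Depending on your detectron2 version, there may also be a command-line export script under tools/deploy/ that wraps the same tracing path.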