Issue loading converted YOLOv5 ONNX model on Ambarella CV25 #1053
Replies: 3 comments 3 replies
-

This automatically generated reply is a friendly reminder. Answers to your questions will most often come from the community, from developers like yourself. You will, from time to time, find that Axis employees answer some of the questions, but this is not a guarantee. Think of the discussion forum as a complement to other support channels, not a replacement for any of them. If your question remains unanswered for a period of time, please revisit it to see whether it can be improved by following the guidelines listed in the Axis support guidelines.
-

Hi @sg-sugiyama, the "Test your model" step can be used for testing your model before packaging it in the ACAP application. Additional information based on my understanding:

Ambarella CVflow Model Deployment: Key Details

Ambarella's SoCs, like the CV25 or CV28, use a proprietary CVflow hardware accelerator. Unlike general-purpose platforms, deploying neural networks here requires offline compilation into a native binary format, with tools and workflows provided only through Ambarella's official toolchain.

1. Offline Compilation – No Runtime Interpreter
Unlike TensorFlow Lite or ONNX Runtime, Ambarella devices do not support on-device interpretation or just-in-time model loading. What this means:
2. Model Format Requirements
3. Limitations
4. You may need to apply the patch for YOLOv5-based models, as described in the YOLOv5 guide.
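Since CVflow requires fully static input shapes, one quick sanity check before running `graph_surgery.py` is to validate the shape string passed via `-isrc` (e.g. `is:1,3,640,640`). The helper below is a minimal, hypothetical sketch and is not part of the Ambarella toolchain; the function name and behavior are my own:

```python
def check_static_shape(shape_str: str) -> tuple:
    """Parse a CVflow-style shape string like "1,3,640,640" and
    verify every dimension is a concrete positive integer.
    Raises ValueError on dynamic dims such as "-1" or "N"."""
    dims = []
    for part in shape_str.split(","):
        part = part.strip()
        # dynamic/symbolic dims ("N", "-1", "?") are not digits
        if not part.isdigit() or int(part) <= 0:
            raise ValueError(f"dynamic or invalid dimension: {part!r}")
        dims.append(int(part))
    return tuple(dims)

print(check_static_shape("1,3,640,640"))  # → (1, 3, 640, 640)
```

A shape string that still contains a dynamic batch dimension (common in YOLOv5 ONNX exports with `--dynamic`) would fail this check, which is exactly the kind of model the offline compiler rejects.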
-

Feel free to re-open it, or share the solution/alternative/comment if already resolved 👁️🗨️





-
Hi, I'm working on deploying a YOLOv5-based model to an Ambarella CV25 board using cvtool. The ONNX model was successfully converted and a .bin file was generated, but loading the model on the device gives the following error. I'm attaching the conversion command and part of the log. Could someone help me figure out what might be going wrong?
Are there specific recommended settings or restrictions for exporting YOLOv5 models for CV25 compatibility?

1. Model graph summary before surgery
```
onnx_print_graph_summary.py -p my_model.onnx
```
2. Graph surgery
```
graph_surgery.py onnx -m my_model.onnx -isrc "i:images|is:1,3,640,640" -t CVFlow
```
3. Model graph summary after surgery
```
onnx_print_graph_summary.py -p my_model_modified.onnx
```
4. Generate image list for int8 calibration
```
gen_image_list.py -f images/ -o ./dra_image_bin/image_list.txt -ns -e jpg -c RGB -d int8 -r 640,640 -bf dra_image_bin/ -bo dra_image_bin/dra_bin_list.txt
```
5. Parse the ONNX model
```
onnxparser.py -m my_model_modified.onnx -isrc "i:images=dra_image_bin/dra_bin_list.txt|idf:0,0,0,0|is:1,3,640,640|iq" -o my_model -of ./outputs/ -on output0
```
6. Compile the parsed model
```
vas -auto -show-progress outputs/my_model.vas
```
7. Generate the binary
```
cavalry_gen -V 2.1.6 -d ./vas_output/ -f my_model.bin
```
Thank you in advance for your support!
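One thing worth double-checking in the calibration step: `gen_image_list.py -r 640,640` resizes calibration images, while YOLOv5's standard preprocessing letterboxes them (aspect-ratio-preserving resize plus gray padding). If calibration preprocessing differs from inference preprocessing, the int8 activation statistics can be skewed. A minimal sketch of the letterbox arithmetic (pure Python, my own illustration, not a toolchain script):

```python
def letterbox_params(w: int, h: int, target: int = 640):
    """Compute YOLOv5-style letterbox geometry for a w x h image.
    Returns (new_w, new_h, pad_left, pad_top): the aspect-preserving
    resize size plus the padding that centers it in a target square."""
    scale = min(target / w, target / h)      # shrink to fit the square
    new_w, new_h = round(w * scale), round(h * scale)
    pad_w, pad_h = target - new_w, target - new_h
    # YOLOv5 splits padding evenly and fills with gray (value 114)
    return new_w, new_h, pad_w // 2, pad_h // 2

print(letterbox_params(1920, 1080))  # → (640, 360, 0, 140)
```

Generating calibration images that are letterboxed this way, rather than stretched to 640x640, keeps the quantizer's input distribution consistent with what the model will see at runtime.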