Guide how to create and use YOLO-NAS ONNX models with Frigate 0.17 TensorRT Image (GeForce GPU) #21858
Rick-Hard89 started this conversation in Show and tell.
Comment:
Thank you so so much......
Comment:
The AWS S3 link to the weights is bad (403). Any advice on where to get that file now? I see a ton of YOLO weight files on huggingface.co, but I don't see anything I could confidently say is the same file. Specifically, I can't find a YOLO file labeled with both NAS and COCO. Would any of these work?
This is a simple walkthrough for using a custom YOLO-NAS ONNX model with Frigate 0.17.0-beta2-tensorrt.
I'm using a docker-compose install on Debian 12; it might work with other versions, but I haven't tested them.
Example container config (compose.yaml):
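The original compose file didn't survive extraction. A minimal sketch, assuming the 0.17.0-beta2 TensorRT image from the title and a host with the NVIDIA container runtime; the volume paths and port are placeholders you'd adjust to your setup:

```yaml
# Hypothetical compose.yaml; image tag matches the guide title,
# paths and port mapping are assumptions.
services:
  frigate:
    container_name: frigate
    image: ghcr.io/blakeblackshear/frigate:0.17.0-beta2-tensorrt
    restart: unless-stopped
    shm_size: "512mb"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    volumes:
      - ./config:/config
      - ./storage:/media/frigate
    ports:
      - "8971:8971"
```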
Detector and model configuration (config.yaml):
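The detector/model snippet is also missing. A sketch of the relevant config.yaml sections, assuming the ONNX detector with a 320x320 YOLO-NAS export; the `path` and `labelmap_path` are assumptions that must match where you actually place the files:

```yaml
# Hypothetical config.yaml fragment; adjust width/height/path
# if you export 480 or 640.
detectors:
  onnx:
    type: onnx

model:
  model_type: yolonas
  width: 320
  height: 320
  input_pixel_format: bgr
  input_tensor: nchw
  path: /config/model_cache/yolo_nas_s.onnx
  labelmap_path: /config/labelmaps/coco-80.txt
```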
Adjust width/height/path if you export 480 or 640.
Prepare model directory and fix ownership:
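The exact commands aren't shown. A sketch, assuming the compose file maps `./config` into the container as `/config`; the directory names are assumptions:

```shell
# Hypothetical paths, assuming ./config is the Frigate config volume.
mkdir -p ./config/model_cache ./config/labelmaps
# Make sure the container user can read the files; if you know the UID
# the container runs as, chown instead (e.g. sudo chown -R 1000:1000 ./config).
chmod -R a+rX ./config
```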
Environment setup:
Install the packages in the following order, or paste everything at once:
You will see red dependency warnings from pip; ignore them.
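The package list itself didn't survive extraction. A sketch of the setup, assuming a virtualenv and the super-gradients package (which provides YOLO-NAS and pulls in torch/onnx; it is the likely source of the red dependency warnings):

```shell
# Hypothetical environment setup; the exact package list and order
# are assumptions.
python3 -m venv yolo-export
. yolo-export/bin/activate
pip install --upgrade pip
pip install super-gradients
```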
Download YOLO-NAS weights
Then create the Python script that will convert the model:
(Adjust width/height/name if you export 480 or 640.)
Paste:
Run the script and wait for it to print "Done".
Your exported ONNX file should be in:
Then create a folder called labelmaps and, inside it, a file called coco-80.txt with the following list:
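The list itself didn't survive extraction. Assuming the model was trained on standard COCO-2017, the usual 80-class list (one name per line, in this order) is:

```
person
bicycle
car
motorcycle
airplane
bus
train
truck
boat
traffic light
fire hydrant
stop sign
parking meter
bench
bird
cat
dog
horse
sheep
cow
elephant
bear
zebra
giraffe
backpack
umbrella
handbag
tie
suitcase
frisbee
skis
snowboard
sports ball
kite
baseball bat
baseball glove
skateboard
surfboard
tennis racket
bottle
wine glass
cup
fork
knife
spoon
bowl
banana
apple
sandwich
orange
broccoli
carrot
hot dog
pizza
donut
cake
chair
couch
potted plant
bed
dining table
toilet
tv
laptop
mouse
remote
keyboard
cell phone
microwave
oven
toaster
sink
refrigerator
book
clock
vase
scissors
teddy bear
hair drier
toothbrush
```

If your checkpoint used different class names or ordering, match those instead.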
Start Frigate
Detector inference speed (GeForce RTX 3050), tested with 4x 4K and 4x 1080p cameras:
320 = ~10ms
480 = ~13ms
640 = ~18ms