Deploy YOLOv8 on NVIDIA Jetson using TensorRT and DeepStream SDK | Seeed Studio Wiki #198
Replies: 4 comments 3 replies
-
How do I build an INT8 engine for a YOLOv8 model with imgsz=3040, for use in a DeepStream Python app on Jetson? And how much does accuracy drop when switching from FP32 to INT8?
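For reference, one possible route (not necessarily the wiki's DeepStream workflow) is to export the checkpoint with the Ultralytics Python API, which runs INT8 calibration during export. In the sketch below, the model path, the calibration dataset YAML, and imgsz=3040 are assumptions taken from the question; an engine at that resolution may exhaust memory on smaller Jetson modules. As for accuracy, the FP32-to-INT8 drop depends heavily on the calibration images; it is often small for well-calibrated detectors, but the only reliable answer is to validate both engines on your own data.

```python
# Sketch only: one possible way to produce an INT8 TensorRT engine for YOLOv8
# outside of DeepStream, using the Ultralytics export API. The wiki's DeepStream
# flow (custom parser + config_infer_primary.txt) may build the engine
# differently; adapt the paths and calibration dataset to your project.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")            # your trained .pt checkpoint

# int8=True enables INT8 calibration during export; `data` points at a dataset
# YAML whose images are used for calibration. imgsz=3040 is taken from the
# question -- an engine this large may not fit on smaller Jetson modules.
model.export(
    format="engine",                  # TensorRT engine (.engine)
    imgsz=3040,
    int8=True,
    data="calibration_data.yaml",     # hypothetical calibration dataset YAML
    device=0,
)
```

The resulting .engine file can then be referenced from the nvinfer configuration (model-engine-file), or the engine can instead be built by DeepStream itself with network-mode=1 for INT8. Keep in mind that TensorRT engines are specific to the device and TensorRT version they were built on.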
-
Welcome to our product documentation platform! We value user feedback and contributions, and we are grateful for any help improving the documentation. If you find errors in the product documentation, or you have additional ideas and insights about using our products, please contribute. If you need technical support, you can email [email protected]. If you want to discuss a product, we recommend the Seeed Studio Forum and the Seeed Studio Discord channel. The Discussions channel here on GitHub is a relatively new addition and may not receive much attention, but we hope you find it helpful with the help of others. We clean it up on a monthly basis.
-
Hello, I have a question; I'm pretty new to machine vision and to using Jetson devices in general. Would it be feasible to run YOLOv8 on a Jetson Nano for a real-time project such as a drone catching a thrown ball in mid-air?
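Whether a Jetson Nano can keep up is ultimately an empirical question: catching a thrown ball requires the whole perception-plus-control loop, not just inference, to fit the time budget. A rough latency check like the sketch below (assuming the Ultralytics package; the engine filename is a placeholder for your own exported model) gives a first indication of the achievable frame rate on the device.

```python
# Rough end-to-end latency check for a YOLOv8 model on a Jetson (sketch only).
# Assumes the Ultralytics package; "yolov8n.engine" is a hypothetical
# TensorRT export -- substitute your own model file.
import time
import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8n.engine", task="detect")      # or "yolov8n.pt" before exporting
frame = np.zeros((720, 1280, 3), dtype=np.uint8)   # dummy camera frame

# Warm-up, then time repeated inferences.
for _ in range(5):
    model(frame, verbose=False)

n = 50
start = time.perf_counter()
for _ in range(n):
    model(frame, verbose=False)
elapsed = time.perf_counter() - start

print(f"mean latency: {1000 * elapsed / n:.1f} ms  (~{n / elapsed:.1f} FPS)")
```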
-
Hi all! In the "INT8 Calibration" chapter, isn't there a missing step for creating the calibration table file?
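For context, the calibration table (cache) is normally written by the calibrator when TensorRT first builds the INT8 engine rather than created by hand; if the engine is built inside DeepStream, the table is typically generated on that first run. The sketch below shows, in standalone TensorRT Python terms, where that file comes from; the class name, file names, and data layout are illustrative assumptions, not the wiki's exact steps.

```python
# Sketch only: how a TensorRT INT8 calibration table (cache) is typically
# produced. When the engine is built with an IInt8EntropyCalibrator2 attached,
# TensorRT calls get_batch() over the calibration images and then calls
# write_calibration_cache() -- the file written there is the "calibration table".
# Names such as "calib.table" are placeholders.
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
import pycuda.driver as cuda


class EntropyCalibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, batches, cache_file="calib.table"):
        # `batches` is assumed to be a list of preprocessed NCHW float32 arrays.
        trt.IInt8EntropyCalibrator2.__init__(self)
        self.batch_iter = iter(batches)
        self.cache_file = cache_file
        first = batches[0]
        self.device_input = cuda.mem_alloc(first.nbytes)
        self.batch_size = first.shape[0]

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        try:
            batch = next(self.batch_iter)
        except StopIteration:
            return None                       # no more data: calibration done
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]

    def read_calibration_cache(self):
        try:
            with open(self.cache_file, "rb") as f:
                return f.read()               # reuse an existing table if present
        except FileNotFoundError:
            return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)                    # this file is the calibration table
```

When building the engine, such a calibrator is attached via config.set_flag(trt.BuilderFlag.INT8) and config.int8_calibrator; DeepStream's nvinfer then references the resulting table through its int8-calib-file setting.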
-
Deploy YOLOv8 on NVIDIA Jetson using TensorRT and DeepStream SDK | Seeed Studio Wiki
https://wiki.seeedstudio.com/YOLOv8-DeepStream-TRT-Jetson/