Deploy YOLOv8 with TensorRT and DeepStream SDK | Seeed Studio Wiki #640
Replies: 6 comments
-
Hello, I have this error at the end that I can't resolve: "CUDA Failure: CUDA Driver version is insufficient for CUDA Runtime version in file yoloplugins.cpp. Aborted (core dumped)". Astrid Duban
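That error usually means the CUDA runtime the plugin library was built against is newer than what the installed Jetson driver supports, i.e. the DeepStream build and the JetPack/L4T image do not match. A minimal sketch (not from the wiki; it assumes libcudart.so can be loaded by name on the JetPack image, otherwise use the fully versioned .so name) to print the two versions the message compares:

```python
# Hedged sketch: query the CUDA versions behind "driver version is insufficient
# for CUDA runtime version". Assumes libcudart.so is resolvable by this name.
import ctypes

cudart = ctypes.CDLL("libcudart.so")

driver = ctypes.c_int()
runtime = ctypes.c_int()

# Both calls return a cudaError_t; 0 means cudaSuccess.
assert cudart.cudaDriverGetVersion(ctypes.byref(driver)) == 0
assert cudart.cudaRuntimeGetVersion(ctypes.byref(runtime)) == 0

# Versions are encoded as 1000 * major + 10 * minor (e.g. 11040 -> 11.4).
print(f"CUDA driver supports: {driver.value // 1000}.{driver.value % 1000 // 10}")
print(f"CUDA runtime linked:  {runtime.value // 1000}.{runtime.value % 1000 // 10}")
```

If the driver number printed is lower than the runtime number, rebuilding the plugin on the device, or installing the DeepStream release that matches the installed JetPack, is the usual fix.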
-
Hello, can I know the JetPack version and the DeepStream version you have installed?
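For reference, a small sketch (not from the thread; both file paths are assumptions based on stock JetPack and DeepStream installs) that collects the version information asked for here:

```python
# Print the L4T/JetPack release and the DeepStream version.
# Paths are assumptions for a standard JetPack + DeepStream install.
from pathlib import Path

print("L4T release:", Path("/etc/nv_tegra_release").read_text().splitlines()[0])
print("DeepStream:", Path("/opt/nvidia/deepstream/deepstream/version").read_text().strip())
```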
-
Can we use it with DeepStream 6.4.0? I am getting an error when I use it with YOLOv7.
-
What about calibration for INT8?
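For INT8 the engine build needs a calibration step: TensorRT runs a set of representative images through the network and records activation ranges in a calibration cache, which the DeepStream config can then reuse. Below is a hedged sketch of an entropy calibrator with the TensorRT Python API; the image directory, input shape, cache file name and the preprocess() helper are assumptions, not part of the wiki:

```python
# Hedged sketch of an INT8 entropy calibrator; preprocess() is a hypothetical
# helper that resizes/normalizes one image to the given CHW shape.
import os
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt


class YoloEntropyCalibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, image_dir, cache_file="calib.cache", batch_size=1,
                 input_shape=(3, 640, 640)):
        super().__init__()
        self.cache_file = cache_file
        self.batch_size = batch_size
        self.input_shape = input_shape
        self.images = [os.path.join(image_dir, f) for f in sorted(os.listdir(image_dir))]
        self.index = 0
        # One device buffer large enough for a whole batch of preprocessed images.
        self.device_input = cuda.mem_alloc(
            batch_size * int(np.prod(input_shape)) * np.float32().nbytes)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        if self.index + self.batch_size > len(self.images):
            return None  # Tells TensorRT the calibration data is exhausted.
        batch = np.stack(
            [preprocess(p, self.input_shape)  # assumed helper, not shown here
             for p in self.images[self.index:self.index + self.batch_size]]
        ).astype(np.float32)
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        self.index += self.batch_size
        return [int(self.device_input)]

    def read_calibration_cache(self):
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()
        return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```

The calibrator is typically attached to the builder config with config.set_flag(trt.BuilderFlag.INT8) and config.int8_calibrator = calibrator before building; the cache it writes is what a DeepStream nvinfer config references as int8-calib-file.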
-
I have a reComputer J3011 (Jetson Orin), and it has TensorRT version 10.3.0. My script is laid out like this:

    import cv2

    # Assuming you have the HostDeviceMem, allocate_buffers, do_inference from your original script

    # Load the TensorRT engine from the .engine file
    def load_engine(trt_logger, engine_path: str) -> trt.ICudaEngine:
        ...

    def allocate_buffers(engine: trt.ICudaEngine):
        ...

    # Function to process a frame from the video and perform inference
    def process_frame(frame: np.ndarray, context, engine, inputs, outputs, bindings, stream):
        ...

    # Video processing loop
    def run_inference_on_video(engine_path: str, video_path: str):
        ...

    # Run the inference on a video
    if __name__ == "__main__":
        ...

The traceback points to:

    File "/home/eric/1.py", line 106, in

Can someone go through my script, try to run it on their Jetson device, and tell me a solution?
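Without the full script it is hard to say what line 106 hits, but one common cause is that TensorRT 10 dropped the old binding-index API (get_binding_*, execute_async_v2) that many YOLOv8 example scripts use. Below is a minimal sketch, not the poster's code, of loading an .engine file and running one inference with the TensorRT 10 name-based tensor API; it assumes a static-shape engine and leaves preprocessing as a placeholder:

```python
# Minimal TensorRT 10 inference sketch; paths and preprocessing are placeholders.
import numpy as np
import pycuda.autoinit  # noqa: F401
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)


def load_engine(engine_path: str) -> trt.ICudaEngine:
    with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())


def infer(engine: trt.ICudaEngine, frame: np.ndarray) -> list:
    # `frame` is assumed to already be preprocessed to the engine's input shape.
    context = engine.create_execution_context()
    stream = cuda.Stream()
    host, device, outputs = {}, {}, []

    # TensorRT 10 addresses tensors by name instead of binding indices.
    for i in range(engine.num_io_tensors):
        name = engine.get_tensor_name(i)
        shape = engine.get_tensor_shape(name)
        dtype = np.dtype(trt.nptype(engine.get_tensor_dtype(name)))
        host[name] = cuda.pagelocked_empty(trt.volume(shape), dtype)
        device[name] = cuda.mem_alloc(host[name].nbytes)
        context.set_tensor_address(name, int(device[name]))
        if engine.get_tensor_mode(name) == trt.TensorIOMode.INPUT:
            np.copyto(host[name], frame.ravel().astype(dtype))
            cuda.memcpy_htod_async(device[name], host[name], stream)
        else:
            outputs.append(name)

    context.execute_async_v3(stream_handle=stream.handle)
    for name in outputs:
        cuda.memcpy_dtoh_async(host[name], device[name], stream)
    stream.synchronize()
    return [host[name] for name in outputs]
```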
-
Very practical and well-documented guide! Deploying YOLOv8 on NVIDIA Jetson using TensorRT and DeepStream makes real-time inference highly efficient, especially for edge AI applications. The integration of YOLOv8 and DeepStream not only boosts performance but also enables scalable video analytics with minimal latency. It's great to see how powerful and lightweight object detection can become with this setup.
-
Deploy YOLOv8 with TensorRT and DeepStream SDK | Seeed Studio Wiki: https://wiki.seeedstudio.com/YOLOv8-DeepStream-TRT-Jetson/