guides/deepstream-nvidia-jetson/ #14233
Replies: 26 comments 105 replies
- I have just done everything as described, but I can't run inference: the app exits with "Quitting".
- Is the ultralytics package necessary for following this guide, or is the .pt file alone sufficient for exporting with DeepStream-Yolo?
- Hello everyone, I'm trying to run deepstream-app with an FP16 YOLOv8 model. I first followed https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/YOLOv8.md but couldn't find an FP16 option. I then exported a custom model with model.export(half=True, ...), but deepstream-app runs without drawing any bounding boxes (I did set network-mode=2 in the config file). Has anyone successfully used an FP16 YOLO model with DeepStream?
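For reference, in the DeepStream-Yolo workflow FP16 is normally selected at engine-build time by the nvinfer configuration rather than by exporting the .pt in half precision. A minimal sketch of the relevant keys (file names are placeholders):

```ini
# config_infer_primary.txt (sketch; model file names are placeholders)
[property]
onnx-file=yolov8s.onnx
model-engine-file=model_b1_gpu0_fp16.engine
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
```

With network-mode=2, TensorRT builds the FP16 engine itself, so the ONNX export can stay in FP32.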
- Hello everyone, I have done everything as described and it works fine. My problem is running RTSP streaming: I created a stream with VLC on my local Windows machine and modified the two source arguments (type=4 and the URI), but I get a black screen and an error.
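For context, an RTSP source in deepstream-app is typically configured like the sketch below; the URI is a placeholder and must be reachable from the Jetson (a black screen often means the stream URI or codec cannot be opened):

```ini
# deepstream_app_config.txt (sketch; the URI is a placeholder)
[source0]
enable=1
# type 4 = RTSP
type=4
uri=rtsp://<windows-host-ip>:8554/stream
num-sources=1
```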
- Hi, I also have a separate venv with YOLOv8 for training on a custom dataset. My question is: can I simply update this to YOLO11 and then move my trained .pt file to the DeepStream installation? Will this work?
- Hi, I'm using the Jetson Orin Nano 8GB Developer Kit, and I followed the steps from the video and documentation to convert the YOLO11 model to TensorRT. The model builds successfully, but I get a segmentation fault (core dumped) when trying to run it.
- Will this guide work for every YOLO model, including YOLOv10, or is it exclusive to YOLO11?
- Do you have an implementation of ByteTrack with DeepStream?
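ByteTrack is not shipped with DeepStream, but the built-in nvtracker element can be enabled in the app config as an alternative; a sketch, assuming a standard DeepStream install layout:

```ini
# deepstream_app_config.txt (sketch; library/config paths follow a typical install)
[tracker]
enable=1
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
```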
- DeepStream 7.1: negotiation errors and best practices for a USB webcam in Docker. I'm trying to transition from the example pipelines to a live USB webcam stream in DeepStream 7.1, but I'm struggling with negotiation errors when modifying configurations or building a custom Python pipeline. I've successfully run other pipelines, so I'm confident my hardware and passthroughs are working; however, integrating capture, display, and inference into a single working pipeline has been a challenge. I was thinking of using shared memory for a one-to-many model approach, but I'm open to suggestions; I want to first get a USB webcam working as a known reference point. Any guidance would be greatly appreciated!
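As a known-good reference point, a USB webcam can often be brought up with a gst-launch pipeline sketch like the one below. This is only a sketch: the device path, caps, and sink are assumptions to be matched against your camera's `v4l2-ctl --list-formats-ext` output and your container's available elements.

```bash
# Sketch only: verify the camera's caps with `v4l2-ctl --list-formats-ext` first.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  'video/x-raw, format=YUY2, width=640, height=480, framerate=30/1' ! \
  videoconvert ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=NV12' ! \
  mux.sink_0 nvstreammux name=mux batch-size=1 width=640 height=480 ! \
  nvinfer config-file-path=config_infer_primary.txt ! \
  nvvideoconvert ! nvdsosd ! nv3dsink
```

Negotiation errors usually come from a caps mismatch between v4l2src and the camera, which is why pinning explicit caps right after the source is a useful first step.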
- So Glenn,
- Do we first have to set up our NVIDIA Jetson device with Ultralytics YOLO11 before following this guide?
- I mean, first do this: https://docs.ultralytics.com/guides/nvidia-jetson/
- Hi, I followed the tutorial step by step and can run inference on the sample video file. However, all detected objects are classified as "person", even the cars. I checked the labels.txt file and its path in config.txt, and I also set num-detected-classes to 80 in config.txt. My hardware is a Jetson Xavier NX running JetPack 5.0.2 and DeepStream 6.1. Omitting the --simplify option during conversion didn't help. Can you elaborate on what may be the reason for this misclassification?
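When every box gets the first label, the usual suspects are the nvinfer label and parser settings; a sketch of the keys to double-check, as used in the DeepStream-Yolo config layout (paths are placeholders):

```ini
# config_infer_primary.txt (sketch; paths are placeholders)
[property]
labelfile-path=labels.txt
num-detected-classes=80
# The custom parser must match the exported model's output layout
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```

A stale serialized engine can also cause this; rebuilding the engine after changing the model or config is worth trying.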
- Do you have examples of models loaded on a Triton server? Also, what if the source is a live source? Do you have GStreamer pipeline examples?
- Hello. Can we use two YOLO11 models in one DeepStream task? I'm running on an NVIDIA Jetson Orin Nano, but I couldn't find an option to add a second YOLO11 model in the deepstream_app_config.txt file.
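deepstream-app supports additional inference engines via secondary GIE groups in the app config; a sketch, where the config file names are placeholders and the second model needs its own nvinfer config with a distinct gie-unique-id:

```ini
# deepstream_app_config.txt (sketch; config file names are placeholders)
[primary-gie]
enable=1
config-file=config_infer_primary_model_a.txt

[secondary-gie0]
enable=1
# Run on the output of the primary GIE (its gie-unique-id)
operate-on-gie-id=1
gie-unique-id=2
config-file=config_infer_secondary_model_b.txt
```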
- Hi, I've trained a custom classification model from yolo11n-cls.pt. However, when I try to export it with export_yolo11.py, I encounter the following error: running python3 export_yolo11.py -w /home/tns/Downloads/final-best-classifier-train1.pt raises AttributeError: 'tuple' object has no attribute 'transpose' inside the forward method of export_yolo11.py on line 31. This suggests that the input x at that point is a tuple, not a tensor, and therefore doesn't have a transpose method. Could you please advise on how to resolve this?
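Export wrappers written for detection heads assume forward() returns a single tensor, while classification models can return a tuple, which is what triggers the AttributeError. A generic, hypothetical guard, shown here in plain Python, illustrates the usual fix of unwrapping the first element before calling tensor methods:

```python
def unwrap_output(x):
    """Return the primary output when a model's forward() yields a tuple/list.

    Detection-style export wrappers often assume a single tensor; some heads
    return (main, aux) instead, so take the first element in that case.
    """
    if isinstance(x, (tuple, list)):
        return x[0]
    return x


# Illustration with plain objects standing in for tensors:
print(unwrap_output("tensor"))          # tensor
print(unwrap_output(("logits", "aux")))  # logits
```

Whether the rest of the detection-oriented export (transpose, bbox decoding) is meaningful for a classification head is a separate question; this only shows where the tuple would be unwrapped.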
-
|
this step ->> error. anything I can do? |
Beta Was this translation helpful? Give feedback.
- I'm currently working with the YOLO11 Oriented Bounding Box (OBB) model exported to TensorRT format, deploying on an NVIDIA Jetson device (AGX Orin). While the model works in standard inference pipelines (.pt), I'm facing challenges integrating it into a DeepStream-based application to process live RTSP video streams. There seems to be no existing reference implementation or sample Python code demonstrating how to use the OBB variant with DeepStream. Specifically, I'm looking for guidance on integrating the YOLO11 OBB TensorRT model with DeepStream, preferably via the Python bindings. Any support, documentation, or examples from the team or community would be greatly appreciated.
- Hello, I want to use the .engine model created by DeepStream 7.1 from C++ or Python. What should I do? Every instruction I find targets previous versions.
- Do you know how to restrict detections to a ROI only in DeepStream?
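DeepStream's nvdsanalytics plugin supports per-stream ROI filtering via its own config file; a sketch of the relevant groups (the resolution and polygon coordinates are placeholders):

```ini
# config_nvdsanalytics.txt (sketch; polygon coordinates are placeholders)
[property]
enable=1
config-width=1920
config-height=1080

[roi-filtering-stream-0]
enable=1
# Semicolon-separated x;y pairs defining the polygon
roi-RF=100;100;800;100;800;600;100;600
# 0 = report objects inside the ROI
inverse-roi=0
class-id=-1
```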
- Hello, a DeepStream clarity question, please. I notice the NVIDIA deepstream_python_apps sample applications use a Python file plus one or more configuration text files, while the DeepStream-Yolo setup in this guide uses two configuration text files. Thank you for your help and time.
- Title: Guidance needed: where to modify code to add custom violation logic and Modbus output in the DeepStream-Yolo pipeline. Hello DeepStream-Yolo team, first of all, thank you for the excellent DeepStream-Yolo integration repository. I was able to successfully deploy a YOLO26 model using DeepStream on a Jetson Orin Nano (JetPack 6.1), and inference is working correctly. My use case: I am building an industrial safety system, so I need to integrate custom logic after the detection results. My question: in this DeepStream-Yolo setup, in which file or location should I add my custom logic? Goal: after detection, evaluate the violation and send a Modbus 0/1 signal in real time. I would appreciate guidance on the recommended DeepStream architecture for this. Thank you again for your great work 🙏
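The usual pattern in the DeepStream Python bindings is a pad probe (e.g. on the OSD sink pad) that walks the frame's object metadata and feeds a decision function; the probe and Modbus transport depend on pyds/pymodbus and are not shown here. A minimal, hypothetical sketch of the decision function itself, operating on plain (class_name, confidence) detections; the class names and threshold are assumptions to adapt to your model's labels:

```python
# Hypothetical violation classes and threshold; adapt to your model's labels.
VIOLATION_CLASSES = {"no_helmet", "no_vest"}
CONF_THRESHOLD = 0.5


def evaluate_violation(detections):
    """Return 1 (Modbus coil ON) if any violation class is detected above
    the confidence threshold, else 0.

    `detections` is a list of (class_name, confidence) tuples, e.g. as
    collected from the object metadata inside a pad-probe callback.
    """
    for class_name, confidence in detections:
        if class_name in VIOLATION_CLASSES and confidence >= CONF_THRESHOLD:
            return 1
    return 0


print(evaluate_violation([("person", 0.9), ("no_helmet", 0.8)]))  # 1
print(evaluate_violation([("person", 0.9)]))                      # 0
```

Keeping the decision logic in pure Python like this makes it testable independently of the pipeline, with the pad probe reduced to metadata extraction plus one Modbus write.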
- Hello
- Hello, thank you for all your help with my issues. I have a USB camera with the format below and deepstream-app fails to run. How do I get DeepStream to accept, or convert, this format for display? Setup: DeepStream 7.1 / JetPack 6.2.1, with a [source0] group in the app config; the camera format was listed with v4l2-ctl -d /dev/video0 --list-formats-ext (ioctl: VIDIOC_ENUM_FMT). Running GST_DEBUG=3 deepstream-app -c deepstream_app_config_yolo26_1usbcam.txt prints "Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.", deserializes the engine (/home/shamsee/DeepStream-Yolo/model_b1_gpu0_fp32.engine), reports "Pipeline ready" and "Pipeline running", then fails with "ERROR from src_elem: Internal data stream error." and exits after "Received EOS. Exiting ... Quitting".
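deepstream-app's V4L2 camera source has keys for requesting a capture format; a sketch of a [source0] group whose values are placeholders to be matched against a format the camera actually advertises in the v4l2-ctl output:

```ini
# deepstream_app_config (sketch; match values to `v4l2-ctl --list-formats-ext`)
[source0]
enable=1
# type 1 = camera (V4L2)
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0
```

"Internal data stream error" from the source element often means the requested width/height/framerate combination is not one the camera offers.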
- Below is the result from DeepStream: **PERF: FPS 0 (Avg). But I can get around 85 FPS using Ultralytics inference. Why is that? What is the point of using DeepStream then, or am I doing something wrong?
-
guides/deepstream-nvidia-jetson/
Learn how to deploy Ultralytics YOLOv8 on NVIDIA Jetson devices using TensorRT and DeepStream SDK. Explore performance benchmarks and maximize AI capabilities.
https://docs.ultralytics.com/guides/deepstream-nvidia-jetson/