> **5. Submission of Contributions.** Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
Contributors must sign off each commit by adding a `Signed-off-by: ...` line to commit messages to certify that they have the right to submit the code they are contributing to the project according to the [Developer Certificate of Origin (DCO)](https://developercertificate.org/).
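For contributors unfamiliar with sign-off, `git commit -s` appends the required trailer automatically. A minimal demonstration in a throwaway repository (the name and email below are placeholders):

```shell
# Create a throwaway repo and make a signed-off empty commit.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.name="Dev" -c user.email="dev@example.com" \
    commit --allow-empty -q -s -m "demo change"
# The commit message now ends with the DCO trailer:
git log -1 --format=%B
```

The `-s` flag takes the `Signed-off-by:` identity from your configured `user.name` and `user.email`.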
This repository provides a GPU-accelerated package for object detection based on [DetectNet](https://developer.nvidia.com/blog/detectnet-deep-neural-network-object-detection-digits/). Using a trained deep-learning model and a monocular camera, the `isaac_ros_detectnet` package can detect objects of interest in an image and provide bounding boxes. DetectNet is similar to other popular object detection models such as YOLOV3, FasterRCNN, SSD, and others while being efficient with multiple object classes in large images.
### Isaac ROS NITROS Acceleration
This package is powered by [NVIDIA Isaac Transport for ROS (NITROS)](https://developer.nvidia.com/blog/improve-perception-performance-for-ros-2-applications-with-nvidia-isaac-transport-for-ros/), which leverages type adaptation and negotiation to optimize message formats and dramatically accelerate communication between participating nodes.
### Performance
The performance results of benchmarking the prepared pipelines in this package on supported platforms are below:
| Pipeline | AGX Orin | AGX Xavier | x86_64 w/ RTX 3060 Ti |
| -------- | -------- | ---------- | --------------------- |
> **Note:** These numbers are reported with default parameter values found in [params.yaml](./isaac_ros_detectnet/config/params.yaml).
These data have been collected per the methodology described [here](https://github.com/NVIDIA-ISAAC-ROS/.github/blob/main/profile/performance-summary.md#methodology).
### ROS2 Graph Configuration
To run the DetectNet object detection inference, the following ROS2 nodes should be set up and running:
1. **Isaac ROS DNN Image Encoder**: This will take an image message and convert it to a tensor ([`TensorList`](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/isaac_ros_tensor_list_interfaces/msg/TensorList.msg)) that can be processed by the network.
2. **Isaac ROS DNN Inference - Triton**: This will execute the DetectNet network and take as input the tensor from the DNN Image Encoder.
> **Note:** The [Isaac ROS TensorRT](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference/tree/main/isaac_ros_tensor_rt) package is not able to perform inference with DetectNet models at this time.
The output will be a TensorList message containing the encoded detections. Use the parameters `model_name` and `model_repository_paths` to point to the model folder and set the model name. The `.plan` file should be located at `$model_repository_path/$model_name/1/model.plan`.
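For example, with a hypothetical `model_name` of `peoplenet` and a model repository at `/tmp/models` (both illustrative values, not package defaults), the expected layout can be sketched as:

```shell
# Build the directory layout Triton expects for a version-1 model.
mkdir -p /tmp/models/peoplenet/1
# Placeholder for the TensorRT engine file generated for your model:
touch /tmp/models/peoplenet/1/model.plan
```

The `1` directory is the model version; Triton loads `model.plan` from the highest available version by default.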
3. **Isaac ROS DetectNet Decoder**: This node will take the TensorList with encoded detections as input, and output `Detection2DArray` messages for each frame. See the following section for the parameters.
- [Quickstart](#quickstart)
- [Next Steps](#next-steps)
  - [Try More Examples](#try-more-examples)
  - [Use Different Models](#use-different-models)
  - [Customize your Dev Environment](#customize-your-dev-environment)
- [Package Reference](#package-reference)
  - [`isaac_ros_detectnet`](#isaac_ros_detectnet)
- [Updates](#updates)
## Latest Update
Update 2022-10-19: Updated OSS licensing

Update 2022-08-31: Update to use [NVIDIA Isaac Transport for ROS (NITROS)](https://developer.nvidia.com/blog/improve-perception-performance-for-ros-2-applications-with-nvidia-isaac-transport-for-ros/) and to be compatible with JetPack 5.0.2
## Supported Platforms
This package is designed and tested to be compatible with ROS2 Humble running on [Jetson](https://developer.nvidia.com/embedded-computing) or an x86_64 system with an NVIDIA GPU.
> **Note**: Versions of ROS2 earlier than Humble are **not** supported. This package depends on specific ROS2 implementation features that were only introduced beginning with the Humble release.
| Platform | Hardware | Software | Notes |
| -------- | -------- | -------- | ----- |
| Jetson | [Jetson Orin](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/)<br/>[Jetson Xavier](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/) | [JetPack 5.0.2](https://developer.nvidia.com/embedded/jetpack) | For best performance, ensure that [power settings](https://docs.nvidia.com/jetson/archives/r34.1/DeveloperGuide/text/SD/PlatformPowerAndPerformance.html) are configured appropriately. |
To simplify development, we strongly recommend leveraging the Isaac ROS Dev Docker images by following [these steps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/docs/dev-env-setup.md). This will streamline your development environment setup with the correct versions of dependencies on both Jetson and x86_64 platforms.
> **Note:** All Isaac ROS Quickstarts, tutorials, and examples have been designed with the Isaac ROS Docker images as a prerequisite.
## Quickstart
1. Set up your development environment by following the instructions [here](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/docs/dev-env-setup.md).
2. Clone this repository and its dependencies under `~/workspaces/isaac_ros-dev/src`.
5. Inside the container, build and source the workspace:
```bash
cd /workspaces/isaac_ros-dev && \
colcon build --symlink-install && \
source install/setup.bash
```
6. (Optional) Run tests to verify complete and correct installation:
```bash
colcon test --executor sequential
```
7. Run the quickstart setup script, which will download the [PeopleNet Model](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/peoplenet) from NVIDIA GPU Cloud (NGC):
```bash
cd /workspaces/isaac_ros-dev/src/isaac_ros_object_detection/isaac_ros_detectnet && \
```
9. Visualize and validate the output of the package in the `rqt_image_view` window. After about a minute, your output should look like this:

## Next Steps
### Try More Examples
To continue your exploration, check out the following suggested examples:
- [Tutorial with Isaac Sim](docs/tutorial-isaac-sim.md)
- [Tutorial with Custom Model](docs/tutorial-custom-model.md)

### Use Different Models

Click [here](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference/blob/main/docs/model-preparation.md) for more information about how to use NGC models.
This package only supports models based on the `Detectnet_v2` architecture. Some of the [supported DetectNet models](https://catalog.ngc.nvidia.com/?filters=&orderBy=scoreDESC&query=DetectNet) from NGC:
| [DashCamNet](https://ngc.nvidia.com/catalog/models/nvidia:tao:dashcamnet) | Identify objects from a moving object |
| [FaceDetectIR](https://ngc.nvidia.com/catalog/models/nvidia:tao:facedetectir) | Detect faces in a dark environment with IR camera |
### Customize your Dev Environment
To customize your development environment, reference [this guide](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/docs/modify-dockerfile.md).
| ROS Topic | Type | Description |
| --------- | ---- | ----------- |
| `tensor_sub` | [isaac_ros_tensor_list_interfaces/TensorList](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/isaac_ros_tensor_list_interfaces/msg/TensorList.msg) | The tensor that represents the inferred aligned bounding boxes. |
For solutions to problems with Isaac ROS, please check [here](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/docs/troubleshooting.md).
### Deep Learning Troubleshooting
For solutions to problems with using DNN models, please check [here](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference/blob/main/docs/troubleshooting.md).
This tutorial walks you through how to use a different [DetectNet Model](https://catalog.ngc.nvidia.com/models?filters=&orderBy=dateModifiedDESC&query=detectnet) with [isaac_ros_detectnet](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_object_detection) for object detection.
## Tutorial Walkthrough
1. Complete the [Quickstart section](../README.md#quickstart) in the main README.
2. Choose one of the DetectNet models [listed here](https://catalog.ngc.nvidia.com/models?filters=&orderBy=dateModifiedDESC&query=detectnet&page=0&pageSize=25).
3. Create a config file. Use `resources/quickstart_config.pbtxt` as a template. The datatype can be found in the overview tab of the model page. The `input/dims` should be the size of the raw input images; it can differ between versions of the same model. The `output/dims` dimensions can be calculated as `round(input_dims/max_batch_size)`. Place this config file in the `isaac_ros_detectnet/resources` directory. You can find more information about the config file [here](https://github.com/NVIDIA-AI-IOT/tao-toolkit-triton-apps/blob/main/docs/configuring_the_client.md#configuring-the-detectnet_v2-model-entry-in-the-model-repository).
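Since the config structure can be hard to infer from prose alone, here is a minimal sketch of a DetectNet_v2 Triton model entry. All values below (model name, input/output layer names, dims, batch size) are illustrative assumptions modeled on the TAO Triton examples, not values shipped with this package; take the real values from your model's NGC page and from `resources/quickstart_config.pbtxt`.

```pbtxt
# Hypothetical DetectNet_v2 entry; replace every value with your model's.
name: "peoplenet"
platform: "tensorrt_plan"
max_batch_size: 16
input [
  {
    name: "input_1"
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 544, 960 ]
  }
]
output [
  {
    name: "output_cov/Sigmoid"
    data_type: TYPE_FP32
    dims: [ 3, 34, 60 ]
  },
  {
    name: "output_bbox/BiasAdd"
    data_type: TYPE_FP32
    dims: [ 12, 34, 60 ]
  }
]
```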
4. Run the following command with the required input parameters:
```bash
cd /workspaces/isaac_ros-dev/src/isaac_ros_object_detection/isaac_ros_detectnet && \
```
`--model-link`: Get the wget link to the specific model version under the file browser tab on the model page. Click the download button on the top right and select WGET. This copies the command to your clipboard; paste it into a text editor and extract only the hyperlink. e.g. `https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/deployable_quantized_v2.5/zip`
`--model-file-name`: The name of the `.etlt` file found in the file browser tab of the model page. e.g. `resnet34_peoplenet_int8.etlt`
`--height`: height dimension of the input image, e.g. `632`
`--width`: width dimension of the input image, e.g. `1200`
`--config-file`: relative path to the config file mentioned in step 3, e.g. `isaac_ros_detectnet/resources/peoplenet_config.pbtxt`
`--precision`: type/precision of the model found in the overview tab of the model page, e.g. `int8`
`--output-layers`: output layers separated by commas, found in the txt file in the file browser tab of the model page, e.g. `output_cov/Sigmoid,output_bbox/BiasAdd`
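Putting the flags above together, a hypothetical invocation might look like the following. The script path `./scripts/setup_model.sh` and the exact flag spellings are assumptions inferred from the parameter descriptions above, not taken verbatim from the repository; check the script shipped with the package before running.

```bash
# Hypothetical invocation; verify the script name and flags locally.
./scripts/setup_model.sh \
  --model-link https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/deployable_quantized_v2.5/zip \
  --model-file-name resnet34_peoplenet_int8.etlt \
  --height 632 \
  --width 1200 \
  --config-file isaac_ros_detectnet/resources/peoplenet_config.pbtxt \
  --precision int8 \
  --output-layers output_cov/Sigmoid,output_bbox/BiasAdd
```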
5. Replace lines 32 and 33 in [isaac_ros_detectnet.launch.py](../isaac_ros_detectnet/launch/isaac_ros_detectnet.launch.py#L32-33) with the input image dimensions.