Commit f5dbd1c

Isaac ROS 0.20.0 (DP2)

1 parent 3859996

54 files changed: +826 / -11390 lines

CONTRIBUTING.md

Lines changed: 14 additions & 0 deletions

```diff
@@ -0,0 +1,14 @@
+# Isaac ROS Contribution Rules
+
+Any contribution that you make to this repository will
+be under the Apache 2 License, as dictated by that
+[license](http://www.apache.org/licenses/LICENSE-2.0.html):
+
+> **5. Submission of Contributions.** Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
+
+Contributors must sign off each commit by adding a `Signed-off-by: ...`
+line to commit messages to certify that they have the right to submit
+the code they are contributing to the project according to the
+[Developer Certificate of Origin (DCO)](https://developercertificate.org/).
+
+[//]: # (202201002)
```
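The sign-off requirement added above can be satisfied with git's built-in `--signoff` flag; a minimal sketch (the commit message is illustrative):

```shell
# -s / --signoff appends a "Signed-off-by: Name <email>" trailer
# built from your git config (user.name / user.email).
git commit -s -m "Add DNN image encoder test"

# Verify the trailer on the most recent commit:
git log -1 --format=%B | grep "Signed-off-by:"
```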

LICENSE

Lines changed: 201 additions & 65 deletions (large diff not rendered)

README.md

Lines changed: 114 additions & 57 deletions (large diff not rendered)

docs/model-preparation.md

Lines changed: 9 additions & 4 deletions

````diff
@@ -1,6 +1,7 @@
 # Preparing Deep Learning Models for Isaac ROS
 
 ## Obtaining a Pre-trained Model from NGC
+
 The NVIDIA GPU Cloud hosts a [catalog](https://catalog.ngc.nvidia.com/models) of Deep Learning pre-trained models that are available for your development.
 
 1. Use the **Search Bar** to find a pre-trained model that you are interested in working with.
@@ -15,6 +16,7 @@ The NVIDIA GPU Cloud hosts a [catalog](https://catalog.ngc.nvidia.com/models) of
 5. **Paste** the copied command into a terminal to download the model in the current working directory.
 
 ## Using `tao-converter` to decrypt the Encrypted TLT Model (`.etlt`) Format
+
 As discussed above, models distributed with the `.etlt` file extension are encrypted and must be decrypted before use via NVIDIA's [`tao-converter`](https://developer.nvidia.com/tao-toolkit-get-started).
 
 `tao-converter` is already included in the Docker images available as part of the standard [Isaac ROS Development Environment](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/docs/dev-env-setup.md).
@@ -26,22 +28,25 @@ The per-platform installation paths are described below:
 | x86_64 | `/opt/nvidia/tao/tao-converter-x86-tensorrt8.0/tao-converter` | **`/opt/nvidia/tao/tao-converter`** |
 | Jetson(aarch64) | `/opt/nvidia/tao/jp5` | **`/opt/nvidia/tao/tao-converter`** |
 
-
 ### Converting `.etlt` to a TensorRT Engine Plan
+
 Here are some examples for generating the TensorRT engine file using `tao-converter`. In this example, we will use the [`PeopleSemSegnet Shuffleseg` model](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/peoplesemsegnet/files?version=deployable_shuffleseg_unet_v1.0):
 
-#### Generate an engine file for the `fp16` data type:
+#### Generate an engine file for the `fp16` data type
+
 ```bash
 mkdir -p /workspaces/isaac_ros-dev/models && \
 /opt/nvidia/tao/tao-converter -k tlt_encode -d 3,544,960 -p input_2:0,1x3x544x960,1x3x544x960,1x3x544x960 -t fp16 -e /workspaces/isaac_ros-dev/models/peoplesemsegnet_shuffleseg.engine -o argmax_1 peoplesemsegnet_shuffleseg_etlt.etlt
 ```
+
 > **Note:** The specific values used in the command above are retrieved from the **PeopleSemSegnet** page under the **Overview** tab. The model input node name and output node name can be found in `peoplesemsegnet_shuffleseg_cache.txt` from `File Browser`. The output file is specified using the `-e` option. The tool needs write permission to the output directory.
 >
 > A detailed explanation of the input parameters is available [here](https://docs.nvidia.com/tao/tao-toolkit/text/tensorrt.html#running-the-tao-converter).
 
-#### Generate an engine file for the data type `int8`:
-
+#### Generate an engine file for the data type `int8`
+
 Create the models directory:
+
 ```bash
 mkdir -p /workspaces/isaac_ros-dev/models
 ```
````

docs/tensorrt-and-triton-info.md

Lines changed: 18 additions & 14 deletions

```diff
@@ -1,41 +1,45 @@
 # Isaac ROS Triton and TensorRT Nodes for DNN Inference
 
-NVIDIA's Isaac ROS suite of packages provides two separate nodes for performing DNN inference: Triton and TensorRT.
+NVIDIA's Isaac ROS suite of packages provides two separate nodes for performing DNN inference: Triton and TensorRT.
 
 Our benchmarks show comparable performance and inference speed with both nodes, so a decision should be based on other characteristics of the overall model being deployed.
 
 ## NVIDIA Triton
-The NVIDIA Triton Inference Server is an [open-source inference serving software](https://developer.nvidia.com/nvidia-triton-inference-server) that provides a uniform interface for deploying AI models. Crucially, Triton supports a wide array of compute devices like NVIDIA GPUs and both x86 and ARM CPUs, and also operates with all major frameworks such as TensorFlow, TensorRT, and PyTorch.
+
+The NVIDIA Triton Inference Server is an [open-source inference serving software](https://developer.nvidia.com/nvidia-triton-inference-server) that provides a uniform interface for deploying AI models. Crucially, Triton supports a wide array of compute devices like NVIDIA GPUs and both x86 and ARM CPUs, and also operates with all major frameworks such as TensorFlow, TensorRT, and PyTorch.
 
 Because Triton can take advantage of additional compute devices beyond just the GPU, Triton can be a better choice in situations where there is GPU resource contention from other model inference or processing tasks. However, in order to provide for this flexibility, Triton requires the creation of a model repository and additional configuration files before deployment.
 
 ## NVIDIA TensorRT
-NVIDIA TensorRT is a specific CUDA-based, on-GPU inference framework that performs a number of optimizations to deliver extremely performant model execution. TensorRT only supports ONNX and TensorRT Engine Plans, providing less flexibility than Triton but also requiring less initial configuration.
+
+NVIDIA TensorRT is a specific CUDA-based, on-GPU inference framework that performs a number of optimizations to deliver extremely performant model execution. TensorRT only supports ONNX and TensorRT Engine Plans, providing less flexibility than Triton but also requiring less initial configuration.
 
 ## Using either Triton or TensorRT Nodes
-Both nodes use the Isaac ROS [Tensor List message](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/isaac_ros_tensor_list_interfaces/msg/TensorList.msg) for input data and output inference result.
+
+Both nodes use the Isaac ROS [Tensor List message](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/isaac_ros_tensor_list_interfaces/msg/TensorList.msg) for input data and output inference result.
 
 Users can either prepare a custom model or download pre-trained models from NGC as described [here](./model-preparation.md#obtaining-a-pre-trained-model-from-ngc). Models should be converted to the TensorRT Engine File format using the `tao-converter` tool as described [here](./model-preparation.md#using-tao-converter-to-decrypt-the-encrypted-tlt-model-etlt-format).
 
-> **Note:** While the TensorRT node can automatically convert ONNX plans to the TensorRT Engine Plan format if configured to use a `.onnx` file, this conversion step will considerably extend the node's per-launch initial setup time.
->
+> **Note:** While the TensorRT node can automatically convert ONNX plans to the TensorRT Engine Plan format if configured to use a `.onnx` file, this conversion step will considerably extend the node's per-launch initial setup time.
+>
 > As a result, we recommend converting any ONNX models to TensorRT Engine Plans first, and configuring the TensorRT node to use the Engine Plan instead.
 
-
 ## Pre- and Post-Processing Nodes
-In order to be a useful component of a ROS graph, both Isaac ROS Triton and TensorRT inference nodes will require application-specific `pre-processor` (`encoder`) and `post-processor` (`decoder`) nodes to handle type conversion and other necessary steps.
 
-A `pre-processor` node should take in a ROS2 message, perform the pre-processing steps dictated by the model, and then convert the data into an Isaac ROS Tensor List message. For example, a `pre-processor` node could resize an image, normalize it, and then convert it into a Tensor List.
+In order to be a useful component of a ROS graph, both Isaac ROS Triton and TensorRT inference nodes will require application-specific `pre-processor` (`encoder`) and `post-processor` (`decoder`) nodes to handle type conversion and other necessary steps.
+
+A `pre-processor` node should take in a ROS2 message, perform the pre-processing steps dictated by the model, and then convert the data into an Isaac ROS Tensor List message. For example, a `pre-processor` node could resize an image, normalize it, and then convert it into a Tensor List.
 
-A `post-processor` node should be used to convert the Isaac ROS Tensor List output of the model inference into a useful ROS2 message. For example, a `post-processor` node may perform argmax to identify the class label from a classification problem.
+A `post-processor` node should be used to convert the Isaac ROS Tensor List output of the model inference into a useful ROS2 message. For example, a `post-processor` node may perform argmax to identify the class label from a classification problem.
 
-<div align="center">
+<div align="center">
 
-![Using TensorRT or Triton](../resources/pipeline.png "Using TensorRT or Triton")
+![Using TensorRT or Triton](../resources/pipeline.png "Using TensorRT or Triton")
 
 </div>
 
 ## Further Reading
-For more documentation on Triton, see [here](https://developer.nvidia.com/nvidia-triton-inference-server).
 
-For more documentation on TensorRT, see [here](https://developer.nvidia.com/tensorrt).
+For more documentation on Triton, see [here](https://developer.nvidia.com/nvidia-triton-inference-server).
+
+For more documentation on TensorRT, see [here](https://developer.nvidia.com/tensorrt).
```
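The pre-/post-processing pattern described in the file above (resize and normalize into a tensor, then argmax the inference output) can be sketched conceptually in NumPy. This is not the Isaac ROS API; the function names and the nearest-neighbor resample are illustrative only:

```python
import numpy as np

def preprocess(image: np.ndarray, height: int, width: int) -> np.ndarray:
    """Conceptual 'encoder' step: resize (crude nearest-neighbor resample),
    normalize to [0, 1], and reorder HWC -> NCHW tensor layout."""
    h, w, _ = image.shape
    rows = np.arange(height) * h // height     # source row per output row
    cols = np.arange(width) * w // width       # source col per output col
    resized = image[rows][:, cols]
    normalized = resized.astype(np.float32) / 255.0
    return normalized.transpose(2, 0, 1)[np.newaxis]   # shape (1, 3, H, W)

def postprocess(logits: np.ndarray) -> np.ndarray:
    """Conceptual 'decoder' step: argmax over the class channel of
    segmentation logits (1, C, H, W) to get a per-pixel label map."""
    return np.argmax(logits, axis=1)

image = np.zeros((720, 1280, 3), dtype=np.uint8)
tensor = preprocess(image, 544, 960)
print(tensor.shape)    # (1, 3, 544, 960)
labels = postprocess(np.random.rand(1, 4, 544, 960))
print(labels.shape)    # (1, 544, 960)
```

In the real graph, these steps live in dedicated encoder/decoder nodes that consume and produce Tensor List messages rather than raw arrays.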

docs/troubleshooting.md

Lines changed: 6 additions & 1 deletion

````diff
@@ -1,9 +1,12 @@
 # DNN Inference Troubleshooting
+
 ## Seeing operation failed followed by the process dying
+
 One cause of this issue is when the GPU being used does not have enough memory to run the model. For example, DOPE may require up to 6GB of VRAM to operate, depending on the application.
 
 ### Symptom
-```
+
+```log
 [component_container_mt-1] 2022-06-27 08:35:37.518 ERROR extensions/tensor_ops/Reshape.cpp@71: reshape tensor failed.
 [component_container_mt-1] 2022-06-27 08:35:37.518 ERROR extensions/tensor_ops/TensorOperator.cpp@151: operation failed.
 [component_container_mt-1] 2022-06-27 08:35:37.518 ERROR gxf/std/entity_executor.cpp@200: Entity with 102 not found!
@@ -14,5 +17,7 @@ One cause of this issue is when the GPU being used does not have enough memory t
 [component_container_mt-1] what(): [NitrosPublisher] Vault ("vault/vault", eid=102) was stopped. The graph may have been terminated due to an error.
 [ERROR] [component_container_mt-1]: process has died [pid 13378, exit code -6, cmd '/opt/ros/humble/install/lib/rclcpp_components/component_container_mt --ros-args -r __node:=dope_container -r __ns:=/'].
 ```
+
 ### Solution
+
 Try using the Isaac ROS TensorRT node or the Isaac ROS Triton node with the TensorRT backend instead. Otherwise, a discrete GPU with more VRAM may be required.
````

isaac_ros_dnn_encoders/CMakeLists.txt

Lines changed: 15 additions & 10 deletions

```diff
@@ -1,10 +1,19 @@
-# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
+# SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
+# Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 #
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# SPDX-License-Identifier: Apache-2.0
 
 cmake_minimum_required(VERSION 3.8)
 project(isaac_ros_dnn_encoders LANGUAGES C CXX)
@@ -58,10 +67,6 @@ install(TARGETS dnn_image_encoder_node
 
 if(BUILD_TESTING)
   find_package(ament_lint_auto REQUIRED)
-
-  # Ignore copyright notices since we use custom NVIDIA Isaac ROS Software License
-  set(ament_cmake_copyright_FOUND TRUE)
-
   ament_lint_auto_find_test_dependencies()
 
   find_package(launch_testing_ament_cmake REQUIRED)
```

isaac_ros_dnn_encoders/config/dnn_image_encoder_node.yaml

Lines changed: 15 additions & 6 deletions

```diff
@@ -1,11 +1,20 @@
 %YAML 1.2
-# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
+# SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
+# Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 #
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# SPDX-License-Identifier: Apache-2.0
 ---
 name: global
 components:
```

isaac_ros_dnn_encoders/config/namespace_injector_rule.yaml

Lines changed: 15 additions & 6 deletions

```diff
@@ -1,11 +1,20 @@
 %YAML 1.2
-# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
+# SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
+# Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 #
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# SPDX-License-Identifier: Apache-2.0
 ---
 name: DNN Image Encoder Namespace Injector Rule
 operation: namespace_injector
```

isaac_ros_dnn_encoders/include/isaac_ros_dnn_encoders/dnn_image_encoder_node.hpp

Lines changed: 16 additions & 9 deletions

```diff
@@ -1,12 +1,19 @@
-/**
- * Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
- *
- * NVIDIA CORPORATION and its licensors retain all intellectual property
- * and proprietary rights in and to this software, related documentation
- * and any modifications thereto. Any use, reproduction, disclosure or
- * distribution of this software and related documentation without an express
- * license agreement from NVIDIA CORPORATION is strictly prohibited.
- */
+// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
+// Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+// SPDX-License-Identifier: Apache-2.0
 
 #ifndef ISAAC_ROS_DNN_ENCODERS__DNN_IMAGE_ENCODER_NODE_HPP_
 #define ISAAC_ROS_DNN_ENCODERS__DNN_IMAGE_ENCODER_NODE_HPP_
```
