Commit 2a9ad05

Merge pull request #5 from NVIDIA-ISAAC-ROS/release-dp
Isaac ROS 0.10.0 (DP)
2 parents 8e984fc + a6ad6e5

23 files changed: +894, -616 lines

.gitattributes

Lines changed: 5 additions & 7 deletions

```diff
@@ -8,19 +8,17 @@
 *.gz filter=lfs diff=lfs merge=lfs -text
 *.tar filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
+
 # Documents
 *.pdf filter=lfs diff=lfs merge=lfs -text
-# Numpy data
-*.npy filter=lfs diff=lfs merge=lfs -text
-# Debian package
-*.deb filter=lfs diff=lfs merge=lfs -text
 
 # Shared libraries
 *.so filter=lfs diff=lfs merge=lfs -text
 *.so.* filter=lfs diff=lfs merge=lfs -text
 
-# PCD files
-*.pcd filter=lfs diff=lfs merge=lfs -text
+# ROS Bags
+**/resources/**/*.db3 filter=lfs diff=lfs merge=lfs -text
+**/resources/**/*.yaml filter=lfs diff=lfs merge=lfs -text
 
-# Model files
+# DNN Model files
 *.onnx filter=lfs diff=lfs merge=lfs -text
```
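
It can help to preview which files the new ROS-bag patterns will actually route to LFS before committing. A minimal sketch, run from the repository root, that approximates gitattributes globbing with Python's recursive glob (the two matching dialects are close here but not identical, so treat the output as a sanity check only):

```python
# Preview files matched by the newly added LFS patterns.
# glob's '**' (with recursive=True) behaves like gitattributes' '**'
# for these particular patterns: zero or more directory levels.
import glob

patterns = ['**/resources/**/*.db3', '**/resources/**/*.yaml']
for pattern in patterns:
    for path in glob.glob(pattern, recursive=True):
        print(f'{pattern} -> {path}')
```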

README.md

Lines changed: 269 additions & 349 deletions
Large diffs are not rendered by default.

docs/centerpose.md

Lines changed: 71 additions & 0 deletions

### Inference on CenterPose using Triton

This tutorial covers using CenterPose with Triton.

> **Warning**: These steps will only work on `x86_64` and **NOT** on `Jetson`.

1. Complete steps 1-5 of the quickstart [here](../README.md#quickstart).
2. Select a CenterPose model from the model collection of the official [CenterPose GitHub](https://github.com/NVlabs/CenterPose) repository, available [here](https://drive.google.com/drive/folders/1QIxcfKepOR4aktOz62p3Qag0Fhm0LVa0). The model is assumed to be downloaded to `~/Downloads` outside the Docker container. This example uses `shoe_resnet_140.pth`, which should be copied into `/tmp/models` inside the Docker container:

   > **Note**: This should be run outside the container.

   ```bash
   cd ~/Downloads && \
   docker cp shoe_resnet_140.pth isaac_ros_dev-x86_64-container:/tmp/models
   ```

   > **Warning**: The models in the root directory of the model collection listed above will *NOT WORK* with our inference nodes because they have custom layers supported by neither TensorRT nor Triton. Make sure to use the PyTorch weights that have the string `resnet` in their file names.

3. Create a models repository with version `1`:

   ```bash
   mkdir -p /tmp/models/centerpose_shoe/1
   ```

4. Create a configuration file for this model at path `/tmp/models/centerpose_shoe/config.pbtxt`. Note that the name has to be the same as the model repository name. Take a look at the example at `isaac_ros_centerpose/test/models/centerpose_shoe/config.pbtxt` and copy it into place:

   ```bash
   cp /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/test/models/centerpose_shoe/config.pbtxt /tmp/models/centerpose_shoe/config.pbtxt
   ```

5. To run the TensorRT engine plan, convert the PyTorch model to ONNX first. Export the model into an ONNX file using the script provided at `/workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/scripts/centerpose_pytorch2onnx.py`:

   ```bash
   python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/scripts/centerpose_pytorch2onnx.py --input /tmp/models/shoe_resnet_140.pth --output /tmp/models/centerpose_shoe/1/model.onnx
   ```

6. To get a TensorRT engine plan file for Triton, export the ONNX model into a TensorRT engine plan file using the built-in TensorRT converter `trtexec`:

   ```bash
   /usr/src/tensorrt/bin/trtexec --onnx=/tmp/models/centerpose_shoe/1/model.onnx --saveEngine=/tmp/models/centerpose_shoe/1/model.plan
   ```

7. Inside the container, build and source the workspace:

   ```bash
   cd /workspaces/isaac_ros-dev && \
   colcon build --symlink-install && \
   source install/setup.bash
   ```

8. Start `isaac_ros_centerpose` using the launch file:

   ```bash
   ros2 launch isaac_ros_centerpose isaac_ros_centerpose.launch.py model_name:=centerpose_shoe model_repository_paths:=['/tmp/models']
   ```

   Then open **another** terminal and enter the Docker container again:

   ```bash
   cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
   ./scripts/run_dev.sh
   ```

   Then play the ROS bag:

   ```bash
   ros2 bag play -l src/isaac_ros_pose_estimation/resources/rosbags/centerpose_rosbag/
   ```

9. Open a **third** terminal and attach to the same container. You should be able to get the poses of the objects in the images through `ros2 topic echo`:

   ```bash
   cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
   ./scripts/run_dev.sh
   ```

   ```bash
   source install/setup.bash && \
   ros2 topic echo /object_poses
   ```

10. Launch `rviz2`. Click on the `Add` button, select `By topic`, and choose `MarkerArray` under `/object_poses`. Set the fixed frame to `centerpose`. You should see a cuboid marker representing the detected object's pose!

<div align="center"><img src="../resources/centerpose_rviz.png" width="600px"/></div>
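
Beyond `ros2 topic echo` in step 9, the marker output can also be consumed programmatically. A minimal rclpy subscriber sketch, assuming `/object_poses` carries `visualization_msgs/MarkerArray` as the rviz2 step above suggests (the node name and printed fields are illustrative, not part of the package):

```python
import rclpy
from rclpy.node import Node
from visualization_msgs.msg import MarkerArray


class CenterPoseListener(Node):
    """Print the position of each cuboid marker published by CenterPose."""

    def __init__(self):
        super().__init__('centerpose_listener')
        self.create_subscription(
            MarkerArray, '/object_poses', self.on_markers, 10)

    def on_markers(self, msg):
        # Each marker's pose holds the detected object's position/orientation.
        for marker in msg.markers:
            p = marker.pose.position
            self.get_logger().info(
                f'marker {marker.id}: x={p.x:.3f} y={p.y:.3f} z={p.z:.3f}')


def main():
    rclpy.init()
    rclpy.spin(CenterPoseListener())


if __name__ == '__main__':
    main()
```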

docs/dope-triton.md

Lines changed: 98 additions & 0 deletions

### Inference on DOPE using Triton

This tutorial shows how to use Triton with different backends.

> **Note**: The DOPE converter script only works on `x86_64`, so the resulting `onnx` model from these steps must be copied to the Jetson.

1. Complete steps 1-6 of the quickstart [here](../README.md#quickstart).
2. Make a directory called `Ketchup` inside `/tmp/models`, which will serve as the model repository. The model will be versioned as `1`, and the downloaded model is placed there:

   ```bash
   mkdir -p /tmp/models/Ketchup/1 && \
   mv /tmp/models/Ketchup.pth /tmp/models/Ketchup/
   ```

3. Now select a backend. The PyTorch and ONNX options **MUST** be run on `x86_64`:
   - To run ONNX models with Triton, export the model into an ONNX file using the script provided at `/workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py`:

     ```bash
     python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format onnx --input /tmp/models/Ketchup/Ketchup.pth --output /tmp/models/Ketchup/1/model.onnx --input_name INPUT__0 --output_name OUTPUT__0
     ```

   - To run `TensorRT Plan` files with Triton, first copy the `onnx` model generated above to the target platform (e.g., a Jetson or an `x86_64` machine). The model is assumed to be copied to `/tmp/models/Ketchup/1/model.onnx` inside the Docker container. Then use `trtexec` to convert the `onnx` model to a `plan` model:

     ```bash
     /usr/src/tensorrt/bin/trtexec --onnx=/tmp/models/Ketchup/1/model.onnx --saveEngine=/tmp/models/Ketchup/1/model.plan
     ```

   - To run a PyTorch model with Triton (**PyTorch inference is supported on the `x86_64` platform only**), the model needs to be saved with `torch.jit.save()`, but the downloaded DOPE model is saved with `torch.save()`. Export the DOPE model using the script provided at `/workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py`:

     ```bash
     python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format pytorch --input /tmp/models/Ketchup/Ketchup.pth --output /tmp/models/Ketchup/1/model.pt
     ```

4. Create a configuration file for this model at path `/tmp/models/Ketchup/config.pbtxt`. Note that the name has to be the same as the model repository name. Depending on the backend selected in the previous step, a slightly different `config.pbtxt` file must be created for `onnxruntime_onnx` (`.onnx` file), `tensorrt_plan` (`.plan` file), or `pytorch_libtorch` (`.pt` file):

   ```
   name: "Ketchup"
   platform: <insert-platform>
   max_batch_size: 0
   input [
     {
       name: "INPUT__0"
       data_type: TYPE_FP32
       dims: [ 1, 3, 480, 640 ]
     }
   ]
   output [
     {
       name: "OUTPUT__0"
       data_type: TYPE_FP32
       dims: [ 1, 25, 60, 80 ]
     }
   ]
   version_policy: {
     specific {
       versions: [ 1 ]
     }
   }
   ```

   The `<insert-platform>` part should be replaced with `onnxruntime_onnx` for `.onnx` files, `tensorrt_plan` for `.plan` files, and `pytorch_libtorch` for `.pt` files.

   > **Note**: The DOPE decoder currently works with the output of a DOPE network that has a fixed input size of 640 x 480, which are the default dimensions set in the script. In order to use input images of other sizes, make sure to crop or resize using ROS 2 nodes from [Isaac ROS Image Pipeline](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_image_pipeline) or similar packages.

   > **Note**: The model name must be `model.onnx`.

5. Rebuild and source `isaac_ros_dope`:

   ```bash
   cd /workspaces/isaac_ros-dev
   colcon build --packages-up-to isaac_ros_dope && source install/setup.bash
   ```

6. Start `isaac_ros_dope` using the launch file:

   ```bash
   ros2 launch isaac_ros_dope isaac_ros_dope_triton.launch.py model_name:=Ketchup model_repository_paths:=['/tmp/models'] input_binding_names:=['INPUT__0'] output_binding_names:=['OUTPUT__0'] object_name:=Ketchup
   ```

   > **Note**: `object_name` should correspond to one of the objects listed in the DOPE configuration file, and the specified model should be a DOPE model trained for that specific object.

7. Open **another** terminal and enter the Docker container again:

   ```bash
   cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
   ./scripts/run_dev.sh
   ```

   Then play the ROS bag:

   ```bash
   ros2 bag play -l src/isaac_ros_pose_estimation/resources/rosbags/dope_rosbag/
   ```

8. Open a **third** terminal and attach to the same container. You should be able to get the poses of the objects in the images through `ros2 topic echo`:

   ```bash
   cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
   ./scripts/run_dev.sh
   ```

   ```bash
   ros2 topic echo /poses
   ```

   > **Note**: We are echoing `/poses` because we remapped the original topic `/dope/pose_array` to `poses` in the launch file.

   Now visualize the pose array in rviz2:

   ```bash
   rviz2
   ```

   Then click on the `Add` button, select `By topic`, and choose `PoseArray` under `/poses`. Finally, change the display to show axes by updating `Shape` to `Axes`, as shown in the screenshot below. Make sure to update the `Fixed Frame` to `camera`.

<div align="center"><img src="../resources/dope_rviz2.png" width="600px"/></div>

> **Note**: For best results, crop/resize input images to the same dimensions your DNN model is expecting.
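
The `torch.save()` versus `torch.jit.save()` distinction in step 3 is worth unpacking: Triton's `pytorch_libtorch` backend loads TorchScript archives, not pickled Python checkpoints. A minimal sketch of what the converter's PyTorch path does, using a hypothetical stand-in module rather than the real DOPE network (the single layer shown merely reproduces the 1x3x480x640-in, 1x25x60x80-out shapes from the `config.pbtxt` above):

```python
import torch
import torch.nn as nn


class TinyDopeStandIn(nn.Module):
    """Hypothetical placeholder; the real network lives in dope_converter.py."""

    def __init__(self):
        super().__init__()
        # 3x480x640 in -> 25x60x80 out, mirroring the config.pbtxt dims.
        self.head = nn.Conv2d(3, 25, kernel_size=8, stride=8)

    def forward(self, x):
        return self.head(x)


model = TinyDopeStandIn().eval()
# A checkpoint written with torch.save() holds pickled Python objects;
# torch.jit.trace() records the forward graph so torch.jit.save() can
# emit a self-contained TorchScript archive that libtorch can load.
example = torch.randn(1, 3, 480, 640)  # DOPE's fixed input size
scripted = torch.jit.trace(model, example)
torch.jit.save(scripted, '/tmp/models/Ketchup/1/model.pt')
```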

isaac_ros_centerpose/CMakeLists.txt

Lines changed: 2 additions & 2 deletions

```diff
@@ -1,4 +1,4 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
+# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
 #
 # NVIDIA CORPORATION and its licensors retain all intellectual property
 # and proprietary rights in and to this software, related documentation
@@ -29,7 +29,7 @@ execute_process(COMMAND uname -m COMMAND tr -d '\n'
 )
 message( STATUS "Architecture: ${ARCHITECTURE}" )
 
-set(CUDA_MIN_VERSION "10.2")
+set(CUDA_MIN_VERSION "11.4")
 
 # Find dependencies
 find_package(ament_cmake REQUIRED)
```

isaac_ros_centerpose/isaac_ros_centerpose/CenterPoseDecoder.py

Lines changed: 12 additions & 9 deletions

```diff
@@ -1,4 +1,4 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
+# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
 #
 # NVIDIA CORPORATION and its licensors retain all intellectual property
 # and proprietary rights in and to this software, related documentation
@@ -9,7 +9,7 @@
 from isaac_ros_centerpose.CenterPoseDecoderUtils import Cuboid3d, CuboidPNPSolver, \
     merge_outputs, nms, object_pose_post_process, tensor_to_numpy_array, \
     topk, topk_channel, transpose_and_gather_feat
-from isaac_ros_nvengine_interfaces.msg import TensorList
+from isaac_ros_tensor_list_interfaces.msg import TensorList
 import numpy as np
 import rclpy
 from rclpy.duration import Duration
@@ -220,12 +220,12 @@ def __init__(self, name='centerpose_decoder_node'):
         self.declare_parameters(
             namespace='',
             parameters=[
-                ('camera_matrix', None),
-                ('original_image_size', None),
-                ('output_field_size', None),
-                ('height', None),
+                ('camera_matrix', rclpy.Parameter.Type.DOUBLE_ARRAY),
+                ('original_image_size', rclpy.Parameter.Type.INTEGER_ARRAY),
+                ('output_field_size', rclpy.Parameter.Type.INTEGER_ARRAY),
+                ('height', rclpy.Parameter.Type.DOUBLE),
                 ('frame_id', 'centerpose'),
-                ('marker_color', None)
+                ('marker_color', rclpy.Parameter.Type.DOUBLE_ARRAY)
             ]
         )
         # Sanity check parameters
@@ -234,8 +234,11 @@ def __init__(self, name='centerpose_decoder_node'):
             'output_field_size', 'height', 'marker_color',
             'frame_id']
         for param_name in param_names:
-            self.params_config[param_name] = self.get_parameter(
-                param_name).value
+            try:
+                self.params_config[param_name] = self.get_parameter(
+                    param_name).value
+            except rclpy.exceptions.ParameterUninitializedException:
+                self.params_config[param_name] = None
 
         if (self.params_config['camera_matrix'] is None) or \
            (len(self.params_config['camera_matrix']) != 9):
```
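
The parameter changes above follow a pattern that newer rclpy releases push toward: parameters declared with an explicit `rclpy.Parameter.Type` but no default raise `ParameterUninitializedException` when read unset, rather than silently returning `None`. A minimal standalone sketch of the same pattern (node and parameter names are illustrative):

```python
import rclpy
from rclpy.exceptions import ParameterUninitializedException
from rclpy.node import Node


class TypedParamDemo(Node):
    def __init__(self):
        super().__init__('typed_param_demo')
        # Typed declarations without defaults: values stay unset until a
        # launch file or parameter YAML provides them.
        self.declare_parameters(
            namespace='',
            parameters=[
                ('camera_matrix', rclpy.Parameter.Type.DOUBLE_ARRAY),
                ('height', rclpy.Parameter.Type.DOUBLE),
            ])
        self.config = {}
        for name in ('camera_matrix', 'height'):
            try:
                self.config[name] = self.get_parameter(name).value
            except ParameterUninitializedException:
                # Preserve the old None-default behavior for later checks.
                self.config[name] = None


def main():
    rclpy.init()
    node = TypedParamDemo()
    node.get_logger().info(str(node.config))
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```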
