### Inference on DOPE using Triton

This tutorial walks through running DOPE inference with Triton using different backends: ONNX Runtime, TensorRT, and PyTorch.

> **Note**: The DOPE converter script only works on `x86_64`, so the resulting `onnx` model produced by these steps must be copied to the Jetson.

1. Complete steps 1-6 of the quickstart [here](../README.md#quickstart).
2. Make a directory called `Ketchup` inside `/tmp/models`, which will serve as the model repository. The model will be versioned as `1`. Move the downloaded model into it:
    ```bash
    mkdir -p /tmp/models/Ketchup/1 && \
    mv /tmp/models/Ketchup.pth /tmp/models/Ketchup/
    ```
3. Now select a backend. The PyTorch and ONNX conversion steps **MUST** be run on `x86_64`:
    - To run ONNX models with Triton, export the model to an ONNX file using the script provided at `/workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py`:
      ```bash
      python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format onnx --input /tmp/models/Ketchup/Ketchup.pth --output /tmp/models/Ketchup/1/model.onnx --input_name INPUT__0 --output_name OUTPUT__0
      ```
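      Optionally, sanity-check the export before moving on. This is a minimal sketch, assuming the `onnx` Python package is available inside the container:
      ```bash
      # Load the exported graph and run the ONNX checker; this raises an error
      # if the model is structurally invalid. (Optional verification step.)
      python3 -c "import onnx; onnx.checker.check_model(onnx.load('/tmp/models/Ketchup/1/model.onnx'))"
      ```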
    - To run `TensorRT Plan` files with Triton, first copy the generated `onnx` model from the step above to the target platform (e.g. a Jetson or an `x86_64` machine). The following command assumes the model has been copied to `/tmp/models/Ketchup/1/model.onnx` inside the Docker container. Use `trtexec` to convert the `onnx` model to a `plan` model:
      ```bash
      /usr/src/tensorrt/bin/trtexec --onnx=/tmp/models/Ketchup/1/model.onnx --saveEngine=/tmp/models/Ketchup/1/model.plan
      ```
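      If the target GPU supports reduced precision, you can optionally add the `--fp16` flag to build a faster engine, usually at a small cost in accuracy:
      ```bash
      # Optional: build an FP16 engine (assumes the target GPU supports FP16)
      /usr/src/tensorrt/bin/trtexec --onnx=/tmp/models/Ketchup/1/model.onnx --saveEngine=/tmp/models/Ketchup/1/model.plan --fp16
      ```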
    - To run a PyTorch model with Triton (**inference on PyTorch models is supported on `x86_64` only**), the model needs to be saved using `torch.jit.save()`, whereas the downloaded DOPE model was saved with `torch.save()`. Convert the DOPE model using the script provided at `/workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py`:
      ```bash
      python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format pytorch --input /tmp/models/Ketchup/Ketchup.pth --output /tmp/models/Ketchup/1/model.pt
      ```
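      You can optionally verify that the TorchScript export loads cleanly. A minimal sketch, assuming PyTorch is installed in the container:
      ```bash
      # torch.jit.load() raises an error if the file is not a valid TorchScript module
      python3 -c "import torch; torch.jit.load('/tmp/models/Ketchup/1/model.pt'); print('TorchScript model loaded OK')"
      ```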
4. Create a configuration file for this model at `/tmp/models/Ketchup/config.pbtxt`. Note that the `name` field must match the model repository directory name. Depending on the backend selected in the previous step, the `platform` field of the `config.pbtxt` file differs: `onnxruntime_onnx` (`.onnx` file), `tensorrt_plan` (`.plan` file) or `pytorch_libtorch` (`.pt` file):
    ```
    name: "Ketchup"
    platform: <insert-platform>
    max_batch_size: 0
    input [
      {
        name: "INPUT__0"
        data_type: TYPE_FP32
        dims: [ 1, 3, 480, 640 ]
      }
    ]
    output [
      {
        name: "OUTPUT__0"
        data_type: TYPE_FP32
        dims: [ 1, 25, 60, 80 ]
      }
    ]
    version_policy: {
      specific {
        versions: [ 1 ]
      }
    }
    ```
    Replace the `<insert-platform>` placeholder with `onnxruntime_onnx` for `.onnx` files, `tensorrt_plan` for `.plan` files, and `pytorch_libtorch` for `.pt` files.
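    For example, with the TensorRT engine generated above, the platform line would read (note that the value is a quoted string in protobuf text format):
    ```
    platform: "tensorrt_plan"
    ```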

    > **Note**: The DOPE decoder currently works with the output of a DOPE network that has a fixed input size of 640 x 480, which are the default dimensions set in the script. To use input images of other sizes, crop or resize them using ROS2 nodes from [Isaac ROS Image Pipeline](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_image_pipeline) or similar packages.

    > **Note**: The model file must be named `model.onnx` for the ONNX backend; the `trtexec` and converter commands above likewise produce `model.plan` and `model.pt` for the other backends.

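    After this step, the model repository should have the following layout (shown here for the TensorRT backend; the file under `1/` varies with the backend you chose):
    ```
    /tmp/models
    └── Ketchup
        ├── Ketchup.pth        # original download; not read by Triton
        ├── config.pbtxt
        └── 1
            └── model.plan     # or model.onnx / model.pt
    ```
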
5. Rebuild and source `isaac_ros_dope`:
    ```bash
    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to isaac_ros_dope && source install/setup.bash
    ```

6. Start `isaac_ros_dope` using the launch file:
    ```bash
    ros2 launch isaac_ros_dope isaac_ros_dope_triton.launch.py model_name:=Ketchup model_repository_paths:=['/tmp/models'] input_binding_names:=['INPUT__0'] output_binding_names:=['OUTPUT__0'] object_name:=Ketchup
    ```

    > **Note**: `object_name` should correspond to one of the objects listed in the DOPE configuration file, and the specified model should be a DOPE model that is trained for that specific object.

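    To confirm the pipeline came up, you can list the running nodes from inside the container (exact node names depend on the launch configuration):
    ```bash
    # Sanity check: the DOPE decoder and Triton nodes should appear in this list
    ros2 node list
    ```
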
7. Open **another** terminal, and enter the Docker container again:
    ```bash
    cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    ```
    Then, play the ROS bag:
    ```bash
    ros2 bag play -l src/isaac_ros_pose_estimation/resources/rosbags/dope_rosbag/
    ```
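    If you are unsure what the bag publishes, you can inspect it first:
    ```bash
    # List the topics and message counts recorded in the bag
    ros2 bag info src/isaac_ros_pose_estimation/resources/rosbags/dope_rosbag/
    ```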
8. Open a **third** terminal and enter the Docker container again. You should be able to get the poses of the objects in the images through `ros2 topic echo`:
    ```bash
    cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    ```
    ```bash
    ros2 topic echo /poses
    ```
    > **Note**: We are echoing `/poses` because we remapped the original topic `/dope/pose_array` to `poses` in the launch file.

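    The echoed messages are of type `geometry_msgs/msg/PoseArray`. The output should look roughly like the sketch below; the numeric values here are illustrative placeholders, not real results:
    ```
    header:
      stamp:
        sec: 0
        nanosec: 0
      frame_id: camera
    poses:
    - position:
        x: 0.0
        y: 0.0
        z: 0.0
      orientation:
        x: 0.0
        y: 0.0
        z: 0.0
        w: 1.0
    ```
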
    Now visualize the pose array in rviz2:
    ```bash
    rviz2
    ```
    Then click the `Add` button, select `By topic`, and choose `PoseArray` under `/poses`. Finally, change the display to show axes by updating `Shape` to `Axes`, as shown in the screenshot below. Make sure to update the `Fixed Frame` to `camera`.

    <div align="center"><img src="../resources/dope_rviz2.png" width="600px"/></div>

    > **Note**: For best results, crop or resize input images to the same dimensions your DNN model expects.