# Preparing Deep Learning Models for Isaac ROS

## Obtaining a Pre-trained Model from NGC
NVIDIA GPU Cloud (NGC) hosts a [catalog](https://catalog.ngc.nvidia.com/models) of pre-trained deep learning models that you can use in your development.

1. Use the **Search Bar** to find a pre-trained model that you are interested in working with.

2. Click on the model's card to view an expanded description, and then click on the **File Browser** tab along the navigation bar.

3. Using the **File Browser**, find a deployable `.etlt` file for the model you are interested in.

   > **Note:** The `.etlt` file extension indicates that the model's pre-trained weights are **encrypted**, which means you must use the `tao-converter` utility to decrypt the model, as described [below](#using-tao-converter-to-decrypt-the-encrypted-tlt-model-etlt-format).

4. Under the **Actions** heading, click on the **...** icon for the file you selected in the previous step, and then click **Copy `wget` command**.
5. **Paste** the copied command into a terminal to download the model into the current working directory, as shown in the example below.
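
As an illustration, the copied command for the `PeopleSemSegnet Shuffleseg` deployable model looks similar to the sketch below. The exact URL comes from NGC's **Copy `wget` command** action; the one shown here is assumed to follow the same pattern as the calibration-cache URL used later in this document.

```bash
# Illustrative only -- always copy the exact command from the model's File Browser on NGC.
wget 'https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplesemsegnet/versions/deployable_shuffleseg_unet_v1.0/files/peoplesemsegnet_shuffleseg_etlt.etlt'
```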
## Using `tao-converter` to Decrypt the Encrypted TLT Model (`.etlt`) Format
As discussed above, models distributed with the `.etlt` file extension are encrypted and must be decrypted before use with NVIDIA's [`tao-converter`](https://developer.nvidia.com/tao-toolkit-get-started).

`tao-converter` is already included in the Docker images available as part of the standard [Isaac ROS Development Environment](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/docs/dev-env-setup.md).

The per-platform installation paths are described below:

| Platform         | Installation Path                                             | Symlink Path                        |
| ---------------- | ------------------------------------------------------------- | ----------------------------------- |
| x86_64           | `/opt/nvidia/tao/tao-converter-x86-tensorrt8.0/tao-converter` | **`/opt/nvidia/tao/tao-converter`** |
| Jetson (aarch64) | `/opt/nvidia/tao/jp5`                                         | **`/opt/nvidia/tao/tao-converter`** |
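
To confirm that the utility is available inside the running development container, a quick sanity check (not an official verification step) is to inspect the symlink path from the table above and print the tool's usage text:

```bash
# Check that the symlink resolves to the platform-specific binary.
ls -l /opt/nvidia/tao/tao-converter

# Print the usage text, which lists options such as -k, -d, -p, -t, -e, and -o.
/opt/nvidia/tao/tao-converter -h
```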
### Converting `.etlt` to a TensorRT Engine Plan
Here are some examples of generating a TensorRT engine file using `tao-converter`. These examples use the [`PeopleSemSegnet Shuffleseg` model](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/peoplesemsegnet/files?version=deployable_shuffleseg_unet_v1.0):

#### Generate an engine file for the `fp16` data type:

```bash
mkdir -p /workspaces/isaac_ros-dev/models && \
  /opt/nvidia/tao/tao-converter -k tlt_encode -d 3,544,960 -p input_2:0,1x3x544x960,1x3x544x960,1x3x544x960 -t fp16 -e /workspaces/isaac_ros-dev/models/peoplesemsegnet_shuffleseg.engine -o argmax_1 peoplesemsegnet_shuffleseg_etlt.etlt
```

> **Note:** The specific values used in the command above are retrieved from the **PeopleSemSegnet** page under the **Overview** tab. The model's input node name and output node name can be found in `peoplesemsegnet_shuffleseg_cache.txt`, available from the **File Browser** tab. The output file is specified using the `-e` option; the tool needs write permission to the output directory.
>
> A detailed explanation of the input parameters is available [here](https://docs.nvidia.com/tao/tao-toolkit/text/tensorrt.html#running-the-tao-converter).
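
Once the engine plan has been written, you can optionally smoke-test it with TensorRT's `trtexec` tool. The path below is an assumption about where `trtexec` lives in the container (the usual TensorRT sample-binaries location); adjust it for your image.

```bash
# Load the generated engine and run a few timed inferences as a smoke test.
# /usr/src/tensorrt/bin is an assumed trtexec location; adjust if needed.
/usr/src/tensorrt/bin/trtexec --loadEngine=/workspaces/isaac_ros-dev/models/peoplesemsegnet_shuffleseg.engine
```

Keep in mind that a TensorRT engine plan is specific to the GPU and TensorRT version it was built with, so regenerate it when either changes.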
#### Generate an engine file for the `int8` data type:

Create the models directory:

```bash
mkdir -p /workspaces/isaac_ros-dev/models
```

Download the calibration cache file:

> **Note:** Check the model's page on NGC for the latest `wget` command.

```bash
wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplesemsegnet/versions/deployable_shuffleseg_unet_v1.0/files/peoplesemsegnet_shuffleseg_cache.txt
```

Generate the `int8` engine file:

```bash
/opt/nvidia/tao/tao-converter -k tlt_encode -d 3,544,960 -p input_2:0,1x3x544x960,1x3x544x960,1x3x544x960 -t int8 -c peoplesemsegnet_shuffleseg_cache.txt -e /workspaces/isaac_ros-dev/models/peoplesemsegnet_shuffleseg.engine -o argmax_1 peoplesemsegnet_shuffleseg_etlt.etlt
```

> **Note**: The calibration cache file (specified using the `-c` option) is required to generate the `int8` engine file. This file is provided in the **File Browser** tab of the model's page on NGC.
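
For convenience, the `int8` steps above can be collected into a single script. This is only a sketch using the same paths, URL, and node names shown in this example; confirm the `wget` command against the model's NGC page before relying on it.

```bash
#!/bin/bash
# Convenience sketch: combines the int8 steps shown above into one script.
# Paths, the NGC URL, and node names are those used in this example; verify them on NGC.
set -e

MODELS_DIR=/workspaces/isaac_ros-dev/models
mkdir -p "${MODELS_DIR}"

# Calibration cache required for the int8 engine (check NGC for the latest command).
wget -N https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplesemsegnet/versions/deployable_shuffleseg_unet_v1.0/files/peoplesemsegnet_shuffleseg_cache.txt

# Build the int8 engine plan from the encrypted .etlt model in the current directory.
/opt/nvidia/tao/tao-converter -k tlt_encode \
  -d 3,544,960 \
  -p input_2:0,1x3x544x960,1x3x544x960,1x3x544x960 \
  -t int8 -c peoplesemsegnet_shuffleseg_cache.txt \
  -e "${MODELS_DIR}/peoplesemsegnet_shuffleseg.engine" \
  -o argmax_1 peoplesemsegnet_shuffleseg_etlt.etlt
```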