docs/tensorrt-and-triton-info.md (3 additions, 3 deletions)

@@ -28,13 +28,13 @@ Users can either prepare a custom model or download pre-trained models from NGC

In order to be a useful component of a ROS graph, both Isaac ROS Triton and TensorRT inference nodes will require application-specific `pre-processor` (`encoder`) and `post-processor` (`decoder`) nodes to handle type conversion and other necessary steps.

-A `pre-processor` node should take in a ROS2 message, perform the pre-processing steps dictated by the model, and then convert the data into an Isaac ROS Tensor List message. For example, a `pre-processor` node could resize an image, normalize it, and then convert it into a Tensor List.
+A `pre-processor` node should take in a ROS 2 message, perform the pre-processing steps dictated by the model, and then convert the data into an Isaac ROS Tensor List message. For example, a `pre-processor` node could resize an image, normalize it, and then convert it into a Tensor List.
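
By way of illustration, here is a minimal rclpy sketch of that encoder pattern. The topic name, the `rgb8` encoding, and the float32 NCHW input layout are assumptions chosen for illustration, and the TensorList interface package and its field names vary between Isaac ROS releases, so the final message construction is left as a comment:

```python
# Hypothetical pre-processor sketch: normalize an image and prepare a flat
# tensor for an Isaac ROS TensorList message (message construction is
# release-dependent and therefore only sketched in comments).
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image


class PreProcessorNode(Node):
    def __init__(self):
        super().__init__('pre_processor')
        # Assumed input topic name; a real graph would remap this.
        self.create_subscription(Image, 'image', self.on_image, 10)

    def on_image(self, msg: Image) -> None:
        # Reinterpret the raw ROS 2 image buffer as HxWx3 uint8 (assumes 'rgb8').
        img = np.frombuffer(msg.data, dtype=np.uint8).reshape(
            msg.height, msg.width, 3)
        # Normalize to [0, 1] and reorder HWC -> NCHW, a common model input
        # layout. A real encoder would also resize to the model's expected
        # input resolution (e.g. with cv2.resize) before this step.
        tensor = (img.astype(np.float32) / 255.0).transpose(2, 0, 1)[np.newaxis]
        # tensor.shape and tensor.tobytes() would then populate the TensorList
        # message consumed by the Triton / TensorRT inference node.


def main():
    rclpy.init()
    rclpy.spin(PreProcessorNode())


if __name__ == '__main__':
    main()
```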

-A `post-processor` node should be used to convert the Isaac ROS Tensor List output of the model inference into a useful ROS2 message. For example, a `post-processor` node may perform argmax to identify the class label from a classification problem.
+A `post-processor` node should be used to convert the Isaac ROS Tensor List output of the model inference into a useful ROS 2 message. For example, a `post-processor` node may perform argmax to identify the class label from a classification problem.
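
A matching sketch of the decoder side, applying argmax to a classification model's raw output. The label set, the output topic, and the float32 logits layout are hypothetical, and the TensorList subscription is omitted for the same interface-versioning reason as above:

```python
# Hypothetical post-processor sketch: decode a classification model's output
# tensor (received as raw bytes in a TensorList message) into a class label.
import numpy as np
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

CLASS_NAMES = ['person', 'bicycle', 'car']  # hypothetical label set


class PostProcessorNode(Node):
    def __init__(self):
        super().__init__('post_processor')
        self.pub = self.create_publisher(String, 'classification', 10)

    def decode(self, tensor_bytes: bytes) -> None:
        # Assume a single float32 logits vector of length len(CLASS_NAMES).
        logits = np.frombuffer(tensor_bytes, dtype=np.float32)
        # argmax picks the highest-scoring class, as described above.
        self.pub.publish(String(data=CLASS_NAMES[int(np.argmax(logits))]))
```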

<div align="center">
-<img src="…">
+<img src="…">

docs/troubleshooting.md (26 additions, 0 deletions)

@@ -21,3 +21,29 @@ One cause of this issue is when the GPU being used does not have enough memory t


### Solution

Try using the Isaac ROS TensorRT node or the Isaac ROS Triton node with the TensorRT backend instead. Otherwise, a discrete GPU with more VRAM may be required.

+
+## Triton fails to create the TensorRT engine and load a model
+
+### Symptom
+
+```
+1: [component_container_mt-1] I0331 05:56:08.213483 11359 tensorrt.cc:5678] TRITONBACKEND_ModelInstanceFinalize: delete instance state
+1: [component_container_mt-1] I0331 05:56:08.213525 11359 tensorrt.cc:5617] TRITONBACKEND_ModelFinalize: delete model state
+1: [component_container_mt-1] E0331 05:56:08.214059 11359 model_lifecycle.cc:596] failed to load 'detectnet' version 1: Internal: unable to create TensorRT engine
+1: [component_container_mt-1] ERROR: infer_trtis_server.cpp:1057 Triton: failed to load model detectnet, triton_err_str:Invalid argument, err_msg:load failed for model 'detectnet': version 1 is at UNAVAILABLE state: Internal: unable to create TensorRT engine;
+```
+
+This error can occur when TensorRT attempts to load an incompatible `model.plan` file. The incompatibility may arise due to a versioning or platform mismatch between the time of plan generation and the time of plan execution.
+
+### Solution
+
+Delete the `model.plan` file that is being passed in as an argument to the Triton node's `model_repository_paths` parameter, and then use the source package's instructions to regenerate the `model.plan` file from the original weights file (often a `.etlt` or `.onnx` file).
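
As a rough sketch of both the diagnosis and the regeneration, assuming a TensorRT 8.x-style Python API and original weights in ONNX form (an `.etlt` file instead requires the converter from the model's source package; all paths here are placeholders), run on the same GPU and TensorRT version that Triton uses:

```python
# Sketch: check whether an existing model.plan deserializes under the current
# TensorRT version/GPU, and rebuild it from ONNX if it does not. Builder
# settings (precision, workspace size) are omitted for brevity.
import tensorrt as trt

PLAN = '/path/to/model_repository/detectnet/1/model.plan'  # placeholder
ONNX = '/path/to/original/model.onnx'                      # placeholder

logger = trt.Logger(trt.Logger.WARNING)

# A None result here reproduces Triton's "unable to create TensorRT engine":
# the serialized plan is not usable by this TensorRT version on this GPU.
with open(PLAN, 'rb') as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())

if engine is None:
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(ONNX, 'rb') as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    plan = builder.build_serialized_network(network,
                                            builder.create_builder_config())
    assert plan is not None, 'engine build failed; check builder log output'
    with open(PLAN, 'wb') as f:
        f.write(plan)  # IHostMemory supports the buffer protocol
```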