ITensor::getDimensions: Error Code 4: API Usage Error failure of TensorRT 10.3 when running model conversion on GPU AGX Orin #4558

@aw632

Description


I exported a model (example: https://huggingface.co/onnx-community/dinov3-vits16-pretrain-lvd1689m-ONNX/tree/main) to ONNX and built a TensorRT engine from it on my RTX 4090 with TRT 10.13. This worked fine and the engine built successfully.

When trying to convert the exact same ONNX file on AGX Orin, I got this error:

[6] Invalid Node - /model/vc/backbone/rope_embed/If
ITensor::getDimensions: Error Code 4: API Usage Error (/model/vc/backbone/rope_embed/If_OutputLayer: IIfConditionalOutputLayer inputs must have the same shape. Shapes are [2] and [1].)

Although the TRT versions differ, my understanding is that 10.13 enforces the same restriction (both branches of an If must produce outputs of the same shape), yet the engine still built successfully on the RTX 4090. The TensorRT version on the AGX Orin is fixed by JetPack, so I cannot upgrade.
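For context, here is an illustrative sketch (plain Python, not TensorRT API code) of the check that produces this error: TensorRT's IIfConditionalOutputLayer requires the then/else branches of an ONNX If node to yield outputs of identical shape, and the log indicates the branches of /model/vc/backbone/rope_embed/If resolve to shapes [2] and [1] under TRT 10.3's shape inference.

```python
# Illustrative sketch only: mimics the shape-equality check that
# IIfConditionalOutputLayer enforces; not actual TensorRT source.

def check_if_branch_shapes(then_shape, else_shape):
    """Raise if the two If-branch output shapes disagree, as TRT does."""
    if then_shape != else_shape:
        raise ValueError(
            "IIfConditionalOutputLayer inputs must have the same shape. "
            f"Shapes are {then_shape} and {else_shape}."
        )
    return then_shape

# Matching branches pass:
check_if_branch_shapes([2], [2])

# The failing case from the log ([2] vs [1]) raises ValueError.
```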

Environment

TensorRT Version: 10.3

NVIDIA GPU: AGX Orin

NVIDIA Driver Version: N/A

CUDA Version: 12.6

CUDNN Version: N/A

Operating System: Jetpack 6

Python Version (if applicable): N/A

Tensorflow Version (if applicable): N/A

PyTorch Version (if applicable): N/A

Baremetal or Container (if so, version): N/A

Relevant Files

Model link: https://huggingface.co/onnx-community/dinov3-vits16-pretrain-lvd1689m-ONNX/tree/main

Steps To Reproduce

Commands or scripts: /usr/src/tensorrt/bin/trtexec --onnx=dinov3.onnx --saveEngine=dinov3.engine
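A workaround sometimes suggested for If-branch shape mismatches (an assumption on my part, not verified against this model) is to constant-fold the ONNX graph first, so data-independent conditionals like this one can be resolved before TensorRT parses the model:

```shell
# Hypothetical workaround: fold constants so the If node is resolved
# at sanitize time, then build the engine from the folded model.
polygraphy surgeon sanitize dinov3.onnx --fold-constants -o dinov3_folded.onnx
/usr/src/tensorrt/bin/trtexec --onnx=dinov3_folded.onnx --saveEngine=dinov3.engine
```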

Have you tried the latest release?: No

Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt): Yes


Labels: Module:Embedded (issues when using TensorRT on embedded platforms)
