
Wrong results from TensorRT 10.0 when running on GPU Tesla T4 #3999

@yflv-yanxia

Description


The output of the TensorRT 10 engine converted from ONNX is incorrect, while the TensorRT 8.6 engine produces correct output. The issue appears to be localized to certain fully connected layers in the TensorRT 10 engine, where the output error suddenly becomes very large. The exact cause is unknown. Please help resolve this issue.

Environment

TensorRT Version: TensorRT 10.0.1

NVIDIA GPU: Tesla T4

NVIDIA Driver Version: 450.36.06

CUDA Version: 11.0

CUDNN Version: 8.0.0

Operating System:

ONNX Opset: 17

Relevant Files

Model link: https://drive.google.com/file/d/1QBbmtdaecWAHzqMdh10QVbdSjTWzleqo/view?usp=sharing

Steps To Reproduce

  1. Convert the ONNX model to a TensorRT 10 engine with `./trtexec --onnx=./test.onnx --device=0 --saveEngine=./test.trtmodel --precisionConstraints=obey`.
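To confirm where the accuracy drop appears, one approach is to dump the outputs of both engines and compare them numerically. The comparison itself can be sketched as below; the numeric values are hypothetical placeholders standing in for real engine dumps (only Python's standard library is assumed):

```python
# Compare two engine output vectors and report the maximum absolute
# and relative error. The values below are hypothetical placeholders,
# not real TensorRT outputs.

def max_errors(reference, candidate):
    """Return (max absolute error, max relative error) between two sequences."""
    abs_err = 0.0
    rel_err = 0.0
    for r, c in zip(reference, candidate):
        diff = abs(r - c)
        abs_err = max(abs_err, diff)
        if r != 0.0:
            rel_err = max(rel_err, diff / abs(r))
    return abs_err, rel_err

# Hypothetical dumps: TensorRT 8.6 treated as the reference, TensorRT 10
# as the candidate, with one element diverging sharply as the issue describes.
trt86_out = [0.12, -1.50, 3.40, 0.07]
trt10_out = [0.12, -1.49, 9.80, 0.07]

abs_err, rel_err = max_errors(trt86_out, trt10_out)
print(f"max abs err = {abs_err:.4f}, max rel err = {rel_err:.4f}")
```

A sudden jump in the per-layer maximum error (rather than gradual accumulation) is consistent with the reported behavior, where the error "suddenly becomes very large" at specific fully connected layers.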

Metadata


Labels

Module:Accuracy (Output mismatch between TensorRT and other frameworks), internal-bug-tracked (Tracked internally, will be fixed in a future release), triaged (Issue has been triaged by maintainers)
