Labels
- Module:Accuracy (Output mismatch between TensorRT and other frameworks)
- internal-bug-tracked (Tracked internally, will be fixed in a future release)
- triaged (Issue has been triaged by maintainers)
Description
The output of the TensorRT 10 engine converted from ONNX is incorrect, while the engine built from the same model with TensorRT 8.6 produces correct results. The error appears to originate in some fully connected layers of the TensorRT 10 engine, where the output error suddenly becomes very large. The exact cause is unknown. Please help resolve this issue.
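A minimal sketch (plain Python, with hypothetical dummy values standing in for the real flattened outputs) of how the "suddenly very large" error at a layer can be quantified when comparing a reference result against the TensorRT engine's result:

```python
def max_abs_error(reference, candidate):
    """Largest element-wise absolute difference between two flat output lists."""
    return max(abs(r - c) for r, c in zip(reference, candidate))

def max_rel_error(reference, candidate, eps=1e-7):
    """Largest element-wise relative difference, guarded against division by zero."""
    return max(abs(r - c) / max(abs(r), eps) for r, c in zip(reference, candidate))

# Hypothetical flattened outputs for one layer: reference (e.g. ONNX Runtime)
# vs. the TensorRT 10 engine. Real values would come from dumping layer outputs.
ref = [0.12, -1.50, 3.20, 0.00]
trt = [0.12, -1.49, 3.90, 0.05]

print(max_abs_error(ref, trt))  # a large jump here flags the suspect layer
print(max_rel_error(ref, trt))
```

Tracking these two metrics layer by layer makes it easy to see exactly where the divergence begins.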
Environment
TensorRT Version: 10.0.1
NVIDIA GPU: Tesla T4
NVIDIA Driver Version: 450.36.06
CUDA Version: 11.0
CUDNN Version: 8.0.0
Operating System:
ONNX Opset: 17
Relevant Files
Model link: https://drive.google.com/file/d/1QBbmtdaecWAHzqMdh10QVbdSjTWzleqo/view?usp=sharing
Steps To Reproduce
- Convert the ONNX model to a TensorRT 10 engine:
  ./trtexec --onnx=./test.onnx --device=0 --saveEngine=./test.trtmodel --precisionConstraints=obey
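Since the 8.6 build of the same model is reported correct, a hedged debugging sketch (assumes Polygraphy from the TensorRT Python tools is installed and `test.onnx` is the model linked above) for narrowing down which layer diverges, by comparing the TensorRT result against ONNX Runtime:

```shell
# Compare the final outputs of the TensorRT engine against ONNX Runtime.
polygraphy run test.onnx --trt --onnxrt

# Mark every layer as an output to locate where the error first becomes large
# (the fully connected layers suspected above).
polygraphy run test.onnx --trt --onnxrt \
    --trt-outputs mark all --onnx-outputs mark all
```

These commands require a GPU and the model file, so they are meant as a local reproduction aid rather than something runnable here.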