Description
There is an inconsistency between the outputs produced by TensorRT and ONNXRuntime for the Deconvolution (ConvTranspose) op when the output shape is specified manually.
ONNX model (single ConvTranspose layer)
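For reference, below is a minimal sketch of how such a model can be built with the ONNX helper API. The tensor shapes, strides, and output_shape values are assumptions chosen for illustration only; the actual values are in the attached model.

```python
# Hypothetical reproduction model: a single ConvTranspose node with a
# manually specified output_shape. Shapes/attributes are illustrative.
import numpy as np
import onnx
from onnx import helper, numpy_helper, TensorProto

# Weight layout for ConvTranspose is (C_in, C_out / group, kH, kW).
weight = numpy_helper.from_array(
    np.random.rand(4, 8, 3, 3).astype(np.float32), name="W")

node = helper.make_node(
    "ConvTranspose",
    inputs=["X", "W"],
    outputs=["Y"],
    strides=[2, 2],
    kernel_shape=[3, 3],
    output_shape=[16, 16],  # manually specified spatial output shape
)

graph = helper.make_graph(
    [node],
    "deconv_repro",
    inputs=[helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 4, 8, 8])],
    outputs=[helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 8, 16, 16])],
    initializer=[weight],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model)
onnx.save(model, "deconv.onnx")  # hypothetical file name
```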
Compare Python script that loads the ONNX model, creates a TensorRT engine for it, then runs both models with a reference input and compares the outputs. The output shape is correct, but the values differ.
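The attached cmp.py is the script to use for reproduction; the sketch below only illustrates the kind of comparison it performs. It assumes the TensorRT 10 Python API, onnxruntime, and the cuda-python package, and reuses the assumed shapes and file name from the model sketch above.

```python
# Rough sketch of a TensorRT vs. ONNXRuntime comparison (not the attached cmp.py).
import numpy as np
import onnxruntime as ort
import tensorrt as trt
from cuda import cudart  # cuda-python

ONNX_PATH = "deconv.onnx"  # hypothetical file name

# Build a TensorRT engine from the ONNX model.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)
parser = trt.OnnxParser(network, logger)
with open(ONNX_PATH, "rb") as f:
    assert parser.parse(f.read()), parser.get_error(0)
engine = trt.Runtime(logger).deserialize_cuda_engine(
    builder.build_serialized_network(network, builder.create_builder_config()))
context = engine.create_execution_context()

# Reference input (shape assumed to match the sketch above).
inp = np.random.rand(1, 4, 8, 8).astype(np.float32)

# ONNXRuntime reference run.
sess = ort.InferenceSession(ONNX_PATH, providers=["CPUExecutionProvider"])
ort_out = sess.run(None, {sess.get_inputs()[0].name: inp})[0]

# TensorRT run using the TensorRT 10 tensor-address API
# (assumes a single input and a single output tensor, in that order).
in_name = engine.get_tensor_name(0)
out_name = engine.get_tensor_name(1)
trt_out = np.empty(tuple(context.get_tensor_shape(out_name)), dtype=np.float32)
_, d_in = cudart.cudaMalloc(inp.nbytes)
_, d_out = cudart.cudaMalloc(trt_out.nbytes)
cudart.cudaMemcpy(d_in, inp.ctypes.data, inp.nbytes,
                  cudart.cudaMemcpyKind.cudaMemcpyHostToDevice)
context.set_tensor_address(in_name, d_in)
context.set_tensor_address(out_name, d_out)
_, stream = cudart.cudaStreamCreate()
context.execute_async_v3(stream)
cudart.cudaStreamSynchronize(stream)
cudart.cudaMemcpy(trt_out.ctypes.data, d_out, trt_out.nbytes,
                  cudart.cudaMemcpyKind.cudaMemcpyDeviceToHost)
cudart.cudaFree(d_in)
cudart.cudaFree(d_out)

print("max abs diff:", np.abs(ort_out - trt_out).max())
print("allclose:", np.allclose(ort_out, trt_out, atol=1e-5))
```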
After some investigation it turned out that the current TensorRT output matches ONNXRuntime versions prior to 1.14.0. Probably one of the following ONNXRuntime commits brought the behavior closer to the ONNX standard:
microsoft/onnxruntime@6246662
microsoft/onnxruntime@f96f222
I also checked that the OpenVINO output for this model matches the current ONNXRuntime version.
Environment
TensorRT Version: 10.13.3.9
ONNX-TensorRT Version / Branch: 10.13
GPU Type: GeForce RTX 4090
Nvidia Driver Version: 580.95.05
CUDA Version: 12.9
Operating System + Version: Ubuntu 22.04
Python Version: 3.10.12
Relevant Files
ONNX model
Compare Python script
Steps To Reproduce
Put the attached script and model into the same folder and run:
python3 cmp.py