
Fails to parse valid ONNX model: API Usage Error (node_of_reduce_min_output: at least 1 dimensions are required for input.) #4472

@coffezhou

Description


For the following valid ONNX model,

(Screenshot of the ONNX model graph omitted; see the attachment below.)
it cannot be imported by the ONNX frontend of TensorRT. The following error message is produced:

[05/29/2025-12:16:24] [TRT] [E] ITensor::getDimensions: Error Code 3: API Usage Error (node_of_reduce_min_output: at least 1 dimensions are required for input.)
[05/29/2025-12:16:24] [TRT] [E] In node 3 with name:  and operator: ReduceMin (parseNode): INVALID_NODE: Invalid Node - node_of_reduce_min_output
ITensor::getDimensions: Error Code 3: API Usage Error (node_of_reduce_min_output: at least 1 dimensions are required for input.)
In node 3 with name:  and operator: ReduceMin (parseNode): INVALID_NODE: Invalid Node - node_of_reduce_min_output
ITensor::getDimensions: Error Code 3: API Usage Error (node_of_reduce_min_output: at least 1 dimensions are required for input.)
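
The actual model is in the attachment (testcast.zip, linked below). For readers without it, here is a minimal sketch of the pattern the error message points at: ReduceMin applied to a 0-d (scalar) tensor. This is an assumption reconstructed from the error text, not the exact attached graph, and all node and tensor names are illustrative.

import numpy as np
import onnx
import onnxruntime
from onnx import TensorProto, helper

# Hypothetical reconstruction of the failing pattern: ReduceMin over a
# 0-d (scalar) input, which is what the "at least 1 dimensions are
# required for input" error suggests. Names are illustrative.
node = helper.make_node("ReduceMin", inputs=["x"], outputs=["reduce_min_output"])
graph = helper.make_graph(
    [node],
    "scalar_reduce_min",
    inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [])],  # shape [] = scalar
    outputs=[helper.make_tensor_value_info("reduce_min_output", TensorProto.FLOAT, [])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model)  # structural check: raises if the model is malformed
onnx.save(model, "scalar_reduce_min.onnx")

# Expected to run under onnxruntime (mirroring the report), while
# trt.OnnxParser rejects the same graph with the error above.
sess = onnxruntime.InferenceSession(
    model.SerializeToString(), providers=["CPUExecutionProvider"]
)
print(sess.run(None, {"x": np.array(3.0, dtype=np.float32)}))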

Environment

TensorRT Version: 10.11.0.33

NVIDIA GPU: GeForce RTX 3080

NVIDIA Driver Version: 535.183.01

CUDA Version: 12.2

CUDNN Version: none

Operating System: Ubuntu 20.04

Python Version (if applicable): 3.12.9

Tensorflow Version (if applicable): none

PyTorch Version (if applicable): none

Baremetal or Container (if so, version): none

Steps To Reproduce

This bug can be reproduced with the following code and the model in the attachment. As the script shows, the model executes correctly under onnxruntime before TensorRT's parser is invoked.

import pickle
import sys

import onnx
import onnxruntime
import tensorrt as trt


def test():
    onnx_model = onnx.load("1111.onnx")

    with open("inputs.pkl", "rb") as fp:
        inputs = pickle.load(fp)

    # Step 1: confirm the model executes under onnxruntime.
    try:
        ort_session = onnxruntime.InferenceSession(
            onnx_model.SerializeToString(), providers=["CPUExecutionProvider"]
        )
        ort_output = ort_session.run([], inputs)
    except Exception as e:
        print(e)
        print("This model cannot be executed by onnxruntime!")
        sys.exit(1)

    print("ONNXRuntime:\n", ort_output)

    # --------------------------------------------------------
    # Step 2: feed the same model to TensorRT's ONNX parser, which fails.
    trt_logger = trt.Logger(trt.Logger.WARNING)
    trt.init_libnvinfer_plugins(trt_logger, "")
    builder = trt.Builder(trt_logger)
    network = builder.create_network(
        flags=1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )

    parser = trt.OnnxParser(network, trt_logger)
    with open("1111.onnx", "rb") as model_file:
        if not parser.parse(model_file.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            sys.exit(1)


if __name__ == "__main__":
    test()
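
As a side note, the claim that the model is valid can be checked independently of any runtime via the ONNX checker and shape inference. A minimal sketch, assuming 1111.onnx from the attachment is in the working directory:

import onnx

model = onnx.load("1111.onnx")
onnx.checker.check_model(model)           # raises onnx.checker.ValidationError if malformed
onnx.shape_inference.infer_shapes(model)  # extra sanity check on inferred tensor shapes
print("1111.onnx passes the ONNX checker.")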

testcast.zip

Commands or scripts: see the Python script above.

Have you tried the latest release?: yes

Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt): the model can be executed by onnxruntime.
