
onnxruntime-gpu 1.23 not running on NVIDIA Blackwell with CUDAExecutionProvider #26245

@j-brtl

Description

Describe the issue

When running inference on an NVIDIA RTX PRO 4500 I run into an error when using the CUDAExecutionProvider.
With the TensorrtExecutionProvider the same model runs through fine.
Is there a support issue with the GPU I am using, or might the versions of onnxruntime and CUDA be the problem?

The error I get is:

[ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Mul node. Name:'densenet121_cls_with_preproc/preprocess_densenet/truediv' Status Message: CUDA error cudaErrorNoKernelImageForDevice:no kernel image is available for execution on the device
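For context, cudaErrorNoKernelImageForDevice usually means the loaded CUDA kernels were not built for the GPU's compute capability. A minimal sketch of how I can dump what my environment reports (not part of the original code; the compute_cap query assumes a reasonably recent nvidia-smi):

```python
import subprocess

import onnxruntime as ort

# ONNX Runtime build and the execution providers it ships with.
print("onnxruntime:", ort.__version__)
print("device:", ort.get_device())
print("providers:", ort.get_available_providers())

# GPU name and compute capability as reported by the driver
# (the compute_cap query field assumes a recent nvidia-smi/driver).
print(
    subprocess.run(
        ["nvidia-smi", "--query-gpu=name,compute_cap", "--format=csv"],
        capture_output=True,
        text=True,
    ).stdout
)
```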

To reproduce

Unfortunately I am not able to provide the model or the exact code.
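The setup is roughly the sketch below (model path, input name, and input shape are placeholders; the real model is a DenseNet-121 classifier with a preprocessing subgraph, as the failing node name suggests):

```python
import numpy as np
import onnxruntime as ort

# Placeholder path; the actual model cannot be shared.
MODEL_PATH = "densenet121_cls_with_preproc.onnx"

# Fails with cudaErrorNoKernelImageForDevice on the Mul node:
session = ort.InferenceSession(MODEL_PATH, providers=["CUDAExecutionProvider"])

# Runs fine when TensorRT is used instead:
# session = ort.InferenceSession(MODEL_PATH, providers=["TensorrtExecutionProvider"])

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 224, 224, 3).astype(np.float32)  # placeholder shape
outputs = session.run(None, {input_name: dummy})
```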

Urgency

No response

Platform

Windows

OS Version

11

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.23.0

ONNX Runtime API

Python

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

CUDA 12.9 & cuDNN 9.13
