
[Bug] Dangling output operator in graph moved as final graph outputs in fused graph. #26176

@TedThemistokleous

Describe the issue

Related to PR #26092

We're running a customer model and have observed the following behavior:

The graph contains op nodes (BatchNormalization) whose outputs are not used anywhere in the parsed ONNX model.

We've traced this and determined that these dangling outputs are being added as graph outputs in GraphViewerToProto().

Example Netron output from the dumped model, captured after we reach compile() but before MIGraphX compiles the resulting ONNX stream:

Image

Is there a way to prune these outputs so they don't end up in the ONNX stream, or is this an interaction we need to handle ourselves around the ToProto() calls?
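For reference, the pruning we're after can be sketched in plain Python over a minimal node representation. This is a schematic sketch only (the function name `prune_dangling_outputs` and the tuple-based graph representation are hypothetical); the real fix would have to operate on the GraphProto built inside GraphViewerToProto():

```python
# Sketch of the pruning logic, independent of the onnx/ORT APIs.
# A node is (op_type, inputs, outputs); graph_outputs is a list of names.

def prune_dangling_outputs(nodes, graph_outputs, original_outputs):
    """Keep only graph outputs that some node consumes, or that were
    outputs of the original (pre-fusion) model."""
    consumed = {name for _, ins, _ in nodes for name in ins}
    return [o for o in graph_outputs
            if o in consumed or o in original_outputs]

# BatchNormalization in training mode has optional outputs
# (running_mean, running_var) that nothing downstream reads.
nodes = [
    ("BatchNormalization",
     ["X", "scale", "bias", "mean", "var"],
     ["Y", "running_mean", "running_var"]),
    ("Relu", ["Y"], ["Z"]),
]
# The dangling node outputs got promoted to graph outputs:
graph_outputs = ["Z", "running_mean", "running_var"]

print(prune_dangling_outputs(nodes, graph_outputs, {"Z"}))
# -> ['Z']
```

The key point is that "dangling" here means: not consumed by any node in the fused subgraph and not an output of the original model, so neither set membership test fires and the name is dropped.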

We added log warnings to track this and saw the following as well:

Image

To reproduce

Run a model with dangling BatchNormalization outputs, then attempt a compile in the EP.

Urgency

This blocks a customer model for us when using ONNX Runtime. It unfortunately breaks the inference run, even though the same model with the same input runs in MIGraphX-driver without any faults; the failure is likely due to uninitialized outputs that ONNX Runtime is now trying to write.

Platform

Linux

OS Version

Ubuntu 24.04

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

Seen on mainline ONNX Runtime as well.

ONNX Runtime API

Python

Architecture

X64

Execution Provider

MIGraphX

Execution Provider Library Version

ROCm 7.0

Labels

ep:MIGraphX, ep:ROCm