
[onnx->torch] The folding behavior of the reducemax operator is inconsistent for float and int. #4299

@chenxinyi0415

Description


Given the following two test functions:

func.func @test_reduce_max_empty_set_fp(%arg0: !torch.vtensor<[2,0,4],f32>, %arg1: !torch.vtensor<[1],si64>) -> !torch.vtensor<[2,1,4],f32> attributes {torch.onnx_meta.ir_version = 9 : si64, torch.onnx_meta.opset_version = 20 : si64, torch.onnx_meta.producer_name = "backend-test", torch.onnx_meta.producer_version = ""} {
  %0 = torch.operator "onnx.ReduceMax"(%arg0, %arg1) {torch.onnx.keepdims = 1 : si64} : (!torch.vtensor<[2,0,4],f32>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[2,1,4],f32>
  return %0 : !torch.vtensor<[2,1,4],f32>
}
func.func @test_reduce_max_empty_set_int(%arg0: !torch.vtensor<[2,0,4],si32>, %arg1: !torch.vtensor<[1],si64>) -> !torch.vtensor<[2,1,4],si32> attributes {torch.onnx_meta.ir_version = 9 : si64, torch.onnx_meta.opset_version = 20 : si64, torch.onnx_meta.producer_name = "backend-test", torch.onnx_meta.producer_version = ""} {
  %0 = torch.operator "onnx.ReduceMax"(%arg0, %arg1) {torch.onnx.keepdims = 1 : si64} : (!torch.vtensor<[2,0,4],si32>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[2,1,4],si32>
  return %0 : !torch.vtensor<[2,1,4],si32>
}

Run:

../torch-mlir/bin/torch-mlir-opt --torch-onnx-to-torch-backend-pipeline test.mlir

Both are converted to a fullOp by the ONNX-to-Torch lowering. For the int type, the fullOp is then folded to a torch.vtensor.literal, but for the float type it is not folded, because of this code:

if (isa<IntegerType>(elementType)) {
  int64_t value = 0;
  if (matchPattern(getFillValue(), m_TorchConstantInt(&value))) {
    Attribute attribute = IntegerAttr::get(elementType, value);
    return DenseElementsAttr::get(shapedty, attribute);
  }
}
// Unqualified FloatType resolves to torch::Torch::FloatType here,
// not mlir::FloatType, so this branch never matches a builtin float.
if (isa<FloatType>(elementType)) {
  double value = 0.0;
  if (matchPattern(getFillValue(), m_TorchConstantFloat(&value))) {
    Attribute attribute = FloatAttr::get(elementType, value);
    return DenseElementsAttr::get(shapedty, attribute);
  }
}

If FloatType is changed to mlir::FloatType, the fold succeeds. This looks like a bug: both the mlir and the torch namespaces define a FloatType, so the unqualified isa&lt;FloatType&gt; check resolves to the wrong type. Should we fix it?
