Merged
5 changes: 4 additions & 1 deletion mlir/lib/Dialect/Tensor/IR/TensorOps.cpp
@@ -980,7 +980,10 @@ struct FoldEmptyTensorWithDimOp : public OpRewritePattern<DimOp> {
     auto emptyTensorOp = dimOp.getSource().getDefiningOp<EmptyOp>();
     if (!emptyTensorOp || !maybeConstantIndex)
       return failure();
-    if (!emptyTensorOp.getType().isDynamicDim(*maybeConstantIndex))
+    auto emptyTensorType = emptyTensorOp.getType();
Contributor:
This is fine, but can you provide a repro for this? Also, this doesn't seem to be an error here, but rather a caller error AFAICS.

Contributor Author:

Hi @MaheshRavishankar, thanks for your review!

Here is the original repro.

mlir-opt --linalg-fold-unit-extent-dims a.mlir

a.mlir:

func.func @func1() -> index {
  %3 = tensor.empty() : tensor<19x15x1xf16>
  %idx10 = index.constant 10
  %dim_162 = tensor.dim %3, %idx10 : tensor<19x15x1xf16>  // index out-of-bounds
  return %dim_162 : index
}

From my understanding, OOB tensor.dim operations result in UB, but that doesn't mean the IR is invalid (or does it?). The crash here is unexpected and comes from the assertion that checks for OOB accesses in isDynamicDim.
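
For reference, the assertion being hit lives in the ShapedType interface helpers; a rough sketch of that helper (paraphrased, so the exact upstream wording may differ):

// Sketch of the ShapedType interface helper: it asserts that the requested
// dimension is within the rank before reading the shape, which is what fires
// when the constant tensor.dim index is out of bounds.
bool isDynamicDim(unsigned idx) const {
  assert(idx < getRank() && "invalid index for shaped type");
  return ShapedType::isDynamic(getShape()[idx]);
}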

Contributor:

We should fix the crash, but I would change the verifier of tensor.dim to report this as a verification error. If the idx10 is truly dynamic, then I don't think the crash happens.

Contributor Author (@brod4910, Oct 15, 2024):

Okay, so are you suggesting to add hasVerifier = 1 in ODS and then verify the condition? This doesn't seem to be in line with the spec of tensor.dim, since an OOB access is not a verification error but UB.

This comment from the tensor.dim op's fold method seems to suggest that as well:

> Out of bound indices produce undefined behavior but are still valid IR.
> Don't choke on them.

I do agree that if the index is truly dynamic, then this fold shouldn't be possible and the crash doesn't occur.

I may be totally confused here, but handling the OOB check in any of the code related to the tensor.dim op seems to run counter to the spec of the op itself.

I found a similar check in the Bufferization dialect FoldDimOfAllocTensorOp pattern (mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp:329):

if (*maybeConstantIndex < 0 ||
    *maybeConstantIndex >= allocTensorOp.getType().getRank())
  return failure();

which also suggests I missed a check for when the index is < 0.
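
To make the trade-off concrete, here is a rough sketch of what a hasVerifier = 1 verifier for tensor.dim could look like (hypothetical: this is the option being discussed, not what this PR implements, and it assumes the existing getConstantIndex() helper on the op):

// Hypothetical DimOp::verify(): reject statically known out-of-bounds
// indices instead of treating them as UB at fold time. Not part of this PR.
LogicalResult DimOp::verify() {
  // Non-constant indices are assumed to be in range; nothing to check.
  std::optional<int64_t> index = getConstantIndex();
  if (!index)
    return success();
  auto rankedType = dyn_cast<RankedTensorType>(getSource().getType());
  if (rankedType && (*index < 0 || *index >= rankedType.getRank()))
    return emitOpError("index is out of range");
  return success();
}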

Contributor:

I don't see any benefit in not flagging, in the verifier, a tensor.dim that is accessed with an index outside the tensor's rank (or a negative index). In dynamic-index cases you can't do much; it's undefined behavior. If you know this statically, though, it's better to flag it in the verifier, closer to where the op is created, rather than letting it plumb through compilation. In all probability this will lead to a runtime error that would be hard to track down. Flagging it in the verifier on creation makes errors easier to diagnose, since I can't think of a valid reason one would create a tensor.dim operation with a negative or out-of-bounds index.

Contributor:

If you are going to go through larger changes, I would first ask on Discourse. As a user of MLIR, such changes and "loosening" of verification can have very disruptive effects. I would strongly recommend that we discuss this on Discourse first, because such blanket principles without more nuance on a case-by-case basis are BAD.

Contributor:

(You could, though, fold all of these cases to a ub.unreachable() intrinsic, for example):

%a = constant .... tensor<10xf32>
if (cond) {
  ...
  ub.unreachable()
  ...
}

You are missing the point and going on a tangent. The same check is being done in many places, so your suggestion seems to be:

  1. Keep doing that: check at every place whether the tensor.dim is an out-of-bounds access.
  2. Add a different pass that verifies this, but that would have to happen every time we run the verifier. I don't understand how this is expected to work.

Collaborator (@joker-eph, Oct 17, 2024):

I don't understand your point, actually: this check only applies to places where you would try to fold the constant. This shouldn't happen in so many places.
Regardless, this is something common to every op that has UB on specific input values; that happens all the time!

For example, it seems like you'd want to make the LLVM verifier reject this IR (which is runtime UB):

%cst1 = llvm.constant -1 : i32
%overflow = llvm.add %0, %0 nsw : i32

> You are missing the point and going on a tangent.

I don't quite get why you believe this is a tangent. This seems to be exactly the situation that explains why we don't implement such verifiers, and it should apply here just fine!

> If you are going to go through larger changes, I would first ask on Discourse. As a user of MLIR, such changes and "loosening" of verification can have very disruptive effects. I would strongly recommend that we discuss this on Discourse first, because such blanket principles without more nuance on a case-by-case basis are BAD.

Sorry: I won't. If I find verifiers in MLIR that are violating this principle without a very strong exceptional argument (I know of a couple of these out-of-tree that I don't want to get into here), I will remove them without any Discourse thread. We have a policy and guideline on this. I am sorry you don't seem to understand the basic principle of the design behind IR consistency and validity here; I'm happy to explain it to you further, but I won't jump through the hoops you seem to intend to put in the way of keeping the code consistent from a system-design point of view.

Contributor:

I am saying that I challenge the principle and we need to look at it. This is a community project. No one person gets to ram through their changes without consensus. Consensus can change with time, too.

Collaborator:

Of course, but the current consensus and principle exist for now, and so this is what we live with at the moment. Feel free to challenge it, but you have to run things through from first principles and present strong arguments and examples to reverse such a policy.

+    if (*maybeConstantIndex < 0 ||
+        *maybeConstantIndex >= emptyTensorType.getRank() ||
+        !emptyTensorType.isDynamicDim(*maybeConstantIndex))
       return failure();
     rewriter.replaceOp(dimOp,
                        emptyTensorOp.getDynamicSize(*maybeConstantIndex));
39 changes: 39 additions & 0 deletions mlir/test/Dialect/Linalg/drop-unit-extent-dims.mlir
@@ -1130,3 +1130,42 @@ module {
return %1 : tensor<?x1x61x1xf32>
}
}

// -----

func.func @no_fold_empty_tensor_dim_out_of_bounds(%arg0: tensor<1x?x10xf32>) -> tensor<1x?xf32> {
%cst = arith.constant 1.000000e+00 : f32
%cst7 = arith.constant 7 : index
%dim = tensor.dim %arg0, %cst7 : tensor<1x?x10xf32>
%0 = tensor.empty(%dim) : tensor<1x?xf32>
%1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<1x?xf32>) -> tensor<1x?xf32>
return %1 : tensor<1x?xf32>
}
// CHECK-LABEL: func.func @no_fold_empty_tensor_dim_out_of_bounds
// CHECK-SAME: %[[ARG0:.*]]: tensor<1x?x10xf32>) -> tensor<1x?xf32> {
// CHECK: %[[CST:.*]] = arith.constant 1.000000e+00 : f32
// CHECK: %[[C7:.*]] = arith.constant 7
// CHECK: %[[DIM:.*]] = tensor.dim %[[ARG0]], %[[C7]] : tensor<1x?x10xf32>
// CHECK: %[[VAL_0:.*]] = tensor.empty(%[[DIM]]) : tensor<1x?xf32>
// CHECK: %[[VAL_1:.*]] = linalg.fill ins(%[[CST]] : f32) outs(%[[VAL_0]] : tensor<1x?xf32>) -> tensor<1x?xf32>
// CHECK: return %[[VAL_1]] : tensor<1x?xf32>
// CHECK: }

// -----

func.func @fold_empty_tensor_dim_op(%arg0: tensor<1x?x10xf32>) -> tensor<1x?xf32> {
%cst = arith.constant 1.000000e+00 : f32
%cst2 = index.constant 2
%dim10 = tensor.dim %arg0, %cst2 : tensor<1x?x10xf32>
%0 = tensor.empty(%dim10) : tensor<1x?xf32>
%1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<1x?xf32>) -> tensor<1x?xf32>
return %1 : tensor<1x?xf32>
}
// CHECK-LABEL: func.func @fold_empty_tensor_dim_op
// CHECK-SAME: %[[ARG0:.*]]: tensor<1x?x10xf32>) -> tensor<1x?xf32> {
// CHECK: %[[CST:.*]] = arith.constant 1.000000e+00 : f32
// CHECK: %[[VAL_0:.*]] = tensor.empty() : tensor<1x10xf32>
// CHECK: %[[VAL_1:.*]] = tensor.cast %[[VAL_0]] : tensor<1x10xf32> to tensor<1x?xf32>
// CHECK: %[[VAL_2:.*]] = linalg.fill ins(%[[CST]] : f32) outs(%[[VAL_1]] : tensor<1x?xf32>) -> tensor<1x?xf32>
// CHECK: return %[[VAL_2]] : tensor<1x?xf32>
// CHECK: }