Conversation

@brod4910
Contributor

No description provided.

@github-actions

Thank you for submitting a Pull Request (PR) to the LLVM Project!

This PR will be automatically labeled and the relevant teams will be notified.

If you wish to, you can add reviewers by using the "Reviewers" section on this page.

If this is not working for you, it is probably because you do not have write permissions for the repository. In which case you can instead tag reviewers by name in a comment by using @ followed by their GitHub username.

If you have received no comments on your PR for a week, you can request a review by "ping"ing the PR by adding a comment “Ping”. The common courtesy "ping" rate is once a week. Please remember that you are asking for valuable time from other developers.

If you have further questions, they may be answered by the LLVM GitHub User Guide.

You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.

@llvmbot
Member

llvmbot commented Oct 14, 2024

@llvm/pr-subscribers-mlir-linalg

@llvm/pr-subscribers-mlir-tensor

Author: None (brod4910)

Changes

Full diff: https://github.com/llvm/llvm-project/pull/112196.diff

1 Files Affected:

  • (modified) mlir/lib/Dialect/Tensor/IR/TensorOps.cpp (+3-1)
diff --git a/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp b/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp
index 4d6c5965c4fcc3..7af869c2183892 100644
--- a/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp
+++ b/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp
@@ -979,7 +979,9 @@ struct FoldEmptyTensorWithDimOp : public OpRewritePattern<DimOp> {
     auto emptyTensorOp = dimOp.getSource().getDefiningOp<EmptyOp>();
     if (!emptyTensorOp || !maybeConstantIndex)
       return failure();
-    if (!emptyTensorOp.getType().isDynamicDim(*maybeConstantIndex))
+    auto emptyTensorType = emptyTensorOp.getType();
+    if (*maybeConstantIndex >= emptyTensorType.getRank() ||
+        !emptyTensorType.isDynamicDim(*maybeConstantIndex))
       return failure();
     rewriter.replaceOp(dimOp,
                        emptyTensorOp.getDynamicSize(*maybeConstantIndex));

@llvmbot
Member

llvmbot commented Oct 14, 2024

@llvm/pr-subscribers-mlir

Author: None (brod4910)


@brod4910 brod4910 changed the title from "Fix out-of-bounds FoldEmptyTensorWithDimOp crash #111270" to "[MLIR][Tensor] Fix out-of-bounds FoldEmptyTensorWithDimOp crash #111270" on Oct 14, 2024
    if (!emptyTensorOp || !maybeConstantIndex)
      return failure();
-    if (!emptyTensorOp.getType().isDynamicDim(*maybeConstantIndex))
+    auto emptyTensorType = emptyTensorOp.getType();
Contributor

This is fine, but can you provide a repro for this? Also, this doesn't seem to be an error here, but rather a caller error AFAICS.

Contributor Author

Hi @MaheshRavishankar, thanks for your review!

Here is the original repro.

mlir-opt --linalg-fold-unit-extent-dims a.mlir

a.mlir:

func.func @func1() -> index {
  %3 = tensor.empty() : tensor<19x15x1xf16>
  %idx10 = index.constant 10
  %dim_162 = tensor.dim %3, %idx10 : tensor<19x15x1xf16>  // index out-of-bounds
  return %dim_162 : index
}

From my understanding, an OOB tensor.dim operation results in UB, but that doesn't mean the IR is invalid (or does it?). The crash here is unexpected: it comes from the assertion that checks for OOB accesses in isDynamicDim.
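
For reference, a minimal self-contained sketch of the failure mode (this is not the actual MLIR source; SimpleShapedType and its kDynamic marker are illustrative stand-ins): isDynamicDim-style accessors assert that the index is within the rank, so a constant index of 10 on the rank-3 tensor<19x15x1xf16> above fires an assertion inside the fold pattern.

#include <cassert>
#include <cstdint>
#include <vector>

struct SimpleShapedType {
  static constexpr int64_t kDynamic = INT64_MIN; // stand-in marker for dynamic dims
  std::vector<int64_t> shape;                    // e.g. {19, 15, 1}

  int64_t getRank() const { return static_cast<int64_t>(shape.size()); }

  bool isDynamicDim(unsigned idx) const {
    // The crash path: an out-of-bounds index trips this assertion.
    assert(idx < shape.size() && "invalid index for shaped type");
    return shape[idx] == kDynamic;
  }
};

int main() {
  SimpleShapedType t{{19, 15, 1}};
  return t.isDynamicDim(10); // asserts: index 10 vs. rank 3
}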

Contributor

We should fix the crash, but I would change the verifier of tensor.dim to flag this as a verification error. If idx10 is truly dynamic, then I don't think the crash happens.
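
To make the suggestion concrete, here is a hedged sketch of what such a verifier could look like, assuming hasVerifier = 1 were set on tensor.dim in ODS; the accessor names (getSource(), getConstantIndex()) follow the existing tensor.dim API, but the body is illustrative only and is not part of this PR.

// Illustrative sketch only: reject a tensor.dim whose constant index is
// negative or not smaller than the rank of a ranked source tensor.
LogicalResult DimOp::verify() {
  auto rankedType = dyn_cast<RankedTensorType>(getSource().getType());
  std::optional<int64_t> index = getConstantIndex();
  if (!rankedType || !index)
    return success(); // unranked source or non-constant index: nothing to check
  if (*index < 0 || *index >= rankedType.getRank())
    return emitOpError("index is out of bounds for the source tensor");
  return success();
}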

Contributor Author

@brod4910 brod4910 Oct 15, 2024

Okay, so are you suggesting adding hasVerifier = 1 in ODS and then verifying the condition? That doesn't seem to be in line with the spec of tensor.dim, since an OOB access is not a verification error but UB.

This comment from the tensor.dim op's fold method seems to suggest that as well:

Out of bound indices produce undefined behavior but are still valid IR.
Don't choke on them.

I do agree that if the index is truly dynamic, then this fold shouldn't be possible and the crash doesn't occur.

I may be totally confused here, but handling the OOB check in any of the code related to the tensor.dim op seems to run counter to the spec of the op itself.

I found a similar check in the Bufferization dialect FoldDimOfAllocTensorOp pattern (mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp:329):

if (*maybeConstantIndex < 0 ||
    *maybeConstantIndex >= allocTensorOp.getType().getRank())
  return failure();

which also suggests I missed a check for when the index is < 0.
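
For illustration, here is the full guard being discussed as a self-contained sketch; the helper name, the plain vector standing in for the tensor type, and the kDynamic marker are all illustrative, not the committed code. The fold only makes sense when the index is a known constant, in bounds, and refers to a dynamic dimension.

#include <cstdint>
#include <optional>
#include <vector>

constexpr int64_t kDynamic = INT64_MIN; // illustrative stand-in for a dynamic dim size

// Returns true only when folding the dim would be safe.
bool isFoldableDim(const std::vector<int64_t> &shape,
                   std::optional<int64_t> maybeConstantIndex) {
  if (!maybeConstantIndex)
    return false; // index is not a compile-time constant
  if (*maybeConstantIndex < 0 ||
      *maybeConstantIndex >= static_cast<int64_t>(shape.size()))
    return false; // out-of-bounds tensor.dim: runtime UB, leave the IR alone
  return shape[*maybeConstantIndex] == kDynamic; // only dynamic dims can fold
}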

Contributor

I don't see any benefit in not flagging it in the verifier when a tensor.dim is accessed with an index beyond the tensor's rank (or a negative index). In dynamic-index cases you can't do much; it's undefined behavior. If you know this statically, though, it's better to flag it in the verifier, closer to where the op is created, rather than letting it plumb through compilation. In all probability this will lead to a runtime error that would be hard to track down. Flagging it in the verifier on creation is better for diagnosing errors, since I can't think of a valid reason one would create a tensor.dim operation with a negative or out-of-bounds index.

Contributor

If you are going to go through larger changes, I would first ask on Discourse. As a user of MLIR, such changes and "loosening" of verification can have very disruptive effects. I would strongly recommend that we discuss this on Discourse first, because such blanket principles without more nuance on a case-by-case basis are BAD.

Contributor

(You could, though, fold all of these cases to a ub.unreachable() intrinsic, for example):

%a = constant .... tensor<10xf32>
if (cond) {
  ...
  ub.unreachable()
  ...
}

You are missing the point and going on a tangent. The same check is being done in many places, so your suggestion seems to be:

  1. Keep doing that: check at every place whether the tensor.dim is an out-of-bounds access.
  2. Add a different pass that verifies this, but that would have to happen every time we run the verifier. I don't understand how this is expected to work.

Collaborator

@joker-eph joker-eph Oct 17, 2024

I don't understand your point actually: this check only applies to places where you would try to fold the constant. This shouldn't happen in so many places.
Regardless, this is common to every op that has UB on specific input values; that happens all the time!

For example, it seems like you'd want to make the LLVM verifier reject this IR (which is runtime UB):

%cst = llvm.mlir.constant(2147483647 : i32) : i32
%overflow = llvm.add %cst, %cst overflow<nsw> : i32

You are missing the point and going on a tangent.

I don't quite get why you believe this is a tangent. This seems to be exactly the situation that explains why we don't implement such verifiers, and it should apply here just fine!

If you are going to go through larger changes, I would first ask on Discourse. As a user of MLIR, such changes and "loosening" of verification can have very disruptive effects. I would strongly recommend that we discuss this on Discourse first, because such blanket principles without more nuance on a case-by-case basis are BAD.

Sorry: I won't. If I find verifiers in MLIR that violate this principle without a very strong exceptional argument (I know of a couple of these out-of-tree that I don't want to get into here), I will remove them without any Discourse thread. We have a policy and guideline on this. I am sorry you don't seem to understand the basic principle of the design behind IR consistency and validity here; I'm happy to explain it to you further, but I won't jump through the hoops you seem intent on putting in the way of keeping the code consistent from a system-design point of view.

Contributor

I am saying I challenge the principle, and we need to look at it. This is a community project; no one person gets to ram through their changes without consensus. Consensus can change with time, too.

Collaborator

Of course, but the current consensus and principle exist for now, so this is what we live with at the moment. Feel free to challenge it, but you have to run things through first principles and present strong arguments and examples to reverse such a policy.

Contributor

@MaheshRavishankar MaheshRavishankar left a comment

This looks fine to me.

@brod4910 brod4910 force-pushed the fix-fold-empty-tensor-with-dim-op branch from 44e507b to 5037012 on October 17, 2024 05:45
@brod4910 brod4910 force-pushed the fix-fold-empty-tensor-with-dim-op branch from f0b84ef to af420ff on October 17, 2024 06:01
Contributor

@MaheshRavishankar MaheshRavishankar left a comment

Please add a test.

@brod4910
Copy link
Contributor Author

@MaheshRavishankar

Please add a test.

Is this good to merge?

@brod4910 brod4910 force-pushed the fix-fold-empty-tensor-with-dim-op branch from 2550a76 to 6f58d06 on October 29, 2024 02:14
@brod4910 brod4910 requested a review from joker-eph October 29, 2024 02:15
@brod4910
Contributor Author

brod4910 commented Nov 6, 2024

@lipracer fix for issue in #111270

Contributor

@MaheshRavishankar MaheshRavishankar left a comment

This is another instance where we should just fix this in the verifier of tensor.dim operations, instead of fixing it everywhere we look at which dim is being accessed.

@joker-eph
Collaborator

This is another instance

What are the other ones? This is the PR where we had the discussion on the verifier...

@joker-eph joker-eph merged commit 40556d0 into llvm:main Nov 6, 2024
8 checks passed
@github-actions

github-actions bot commented Nov 6, 2024

@brod4910 Congratulations on having your first Pull Request (PR) merged into the LLVM Project!

Your changes will be combined with recent changes from other authors, then tested by our build bots. If there is a problem with a build, you may receive a report in an email or a comment on this PR.

Please check whether problems have been caused by your change specifically, as the builds can include changes from many authors. It is not uncommon for your change to be included in a build that fails due to someone else's changes, or infrastructure issues.

How to do this, and the rest of the post-merge process, is covered in detail here.

If your change does cause a problem, it may be reverted, or you can revert it yourself. This is a normal part of LLVM development. You can fix your changes and open a new PR to merge them again.

If you don't get any reports, no action is required from you. Your changes are working as expected, well done!
