[MLIR][Tensor] Fix out-of-bounds FoldEmptyTensorWithDimOp crash #111270 #112196
Merged: joker-eph merged 1 commit into llvm:main from brod4910:fix-fold-empty-tensor-with-dim-op on Nov 6, 2024.
This is fine, but can you provide a repro for this? Also, this doesn't seem to be an error here, but rather a caller error, AFAICS.
Hi @MaheshRavishankar, thanks for your review!

Here is the original repro:

mlir-opt --linalg-fold-unit-extent-dims a.mlir

a.mlir:

From my understanding, out-of-bounds tensor.dim operations result in UB, but that doesn't mean the IR is invalid (or does it?). The crash here is unexpected: it comes from the assertion that checks OOB accesses in isDynamicDim.
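The contents of a.mlir were not preserved above; a minimal input of the kind that triggers this class of crash (a hypothetical reconstruction, not necessarily the PR's exact repro) would look something like:

```mlir
// Hypothetical repro: tensor.dim on a tensor.empty with a constant index
// that is out of bounds for the result's rank. Folding the dim into the
// empty tensor then reaches the isDynamicDim assertion.
func.func @out_of_bounds_dim() -> index {
  %c10 = arith.constant 10 : index
  %e = tensor.empty() : tensor<2x3xf32>
  %d = tensor.dim %e, %c10 : tensor<2x3xf32>
  return %d : index
}
```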
We should fix the crash, but I would change the verifier of tensor.dim to assert this as a verification error. If the idx 10 is truly dynamic then I don't think the crash happens.
Okay, so are you suggesting to add hasVerifier = 1 in ODS and then verify the condition? That doesn't seem to be in line with the spec of tensor.dim, since an OOB access is not a verification error but UB. This comment from the tensor.dim op's fold method seems to suggest that as well:

I do agree that if the index is truly dynamic, then this fold shouldn't be possible and the crash doesn't occur.

I may be totally confused here, but handling the OOB check in any of the code related to the tensor.dim op seems to counter the spec of the op itself. I found a similar check in the Bufferization dialect's FoldDimOfAllocTensorOp pattern (mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp:329), which also suggests I missed a check for when the index is < 0.
I don't see any benefit in not flagging it in the verifier when a tensor.dim is accessed with an index greater than or equal to the tensor's rank (or a negative index). In dynamic-index cases you can't do much; it's undefined behavior. But if you know this statically, it's better to flag it in the verifier, closer to where the op is created, rather than letting it plumb through compilation. In all probability this will lead to a runtime error that would be hard to track down. Flagging it in the verifier on creation is better for diagnosing errors, since I can't think of a valid reason one would create a tensor.dim operation with a negative index or an index out of bounds.
If you are going to go through larger changes, I would first ask on Discourse. As a user of MLIR, such changes and "loosening" of verification can have very disruptive effects. I would strongly recommend that we discuss this on Discourse first, because applying such blanket principles without more nuance on a case-by-case basis is BAD.
You are missing the point and going on a tangent. The same check is being done in many places, so your suggestion seems to be
I don't actually understand your point: this check only applies to places where you would try to fold the constant, and that shouldn't happen in so many places.
Regardless, this is something common to every op that has UB on specific input values, which happens all the time!
For example, it seems like you'd want to make the LLVM verifier reject this IR (which is runtime UB):
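The IR example referenced here was not preserved; a representative stand-in (hypothetical, chosen for illustration) of LLVM IR that is runtime UB yet passes the verifier:

```llvm
; Perfectly valid IR as far as the LLVM verifier is concerned, but
; executing it is undefined behavior: the divisor is a constant zero.
define i32 @div_by_zero(i32 %x) {
  %r = sdiv i32 %x, 0
  ret i32 %r
}
```

The verifier enforces structural invariants (types, dominance, well-formedness); value-dependent UB like this is deliberately left to the language semantics.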
I don't quite get why you believe this is a tangent. This seems to be exactly the situation that explains why we don't implement such verifiers, and it should apply here just fine!
Sorry: I won't. If I find verifiers in MLIR that violate this principle without a very strong exceptional argument (I know of a couple of these out-of-tree that I don't want to get into here), I will remove them without any Discourse thread. We have a policy and guideline on this. I am sorry you don't seem to understand the basic principle of the design behind IR consistency and validity here; I'm happy to explain it to you further, but I won't jump through the hoops you seem to intend to put in the way of keeping the code consistent from a system-design point of view.
I am saying that I challenge the principle and we need to look at it. This is a community project; no one person gets to ram through their changes without consensus, and consensus can change with time too.
Of course, but the current consensus and principle exist for now, and so this is what we live with at the moment. Feel free to challenge it, but you have to run things through first principles and present strong arguments and examples to reverse such a policy.