[MLIR] Fix assert expressions #112474
Conversation
@llvm/pr-subscribers-mlir @llvm/pr-subscribers-mlir-linalg @llvm/pr-subscribers-mlir-tensor

Author: Alexander Pivovarov (apivovarov)

Changes: I noticed that several assertions in the MLIR codebase have issues with operator precedence. The `&&` operator has higher precedence than the `||` operator, so an assertion written as `cond1 || cond2 && "message"` is currently evaluated as `cond1 || (cond2 && "message")` rather than the intended `(cond1 || cond2) && "message"`. We should add parentheses around the entire expression involving the conditions (e.g. around `... || dynOutDims.empty()`) to ensure that the logical conditions are grouped correctly.

Full diff: https://github.com/llvm/llvm-project/pull/112474.diff

3 Files Affected:
diff --git a/mlir/lib/Analysis/FlatLinearValueConstraints.cpp b/mlir/lib/Analysis/FlatLinearValueConstraints.cpp
index e628fb152b52f8..0d6ff2fd908db9 100644
--- a/mlir/lib/Analysis/FlatLinearValueConstraints.cpp
+++ b/mlir/lib/Analysis/FlatLinearValueConstraints.cpp
@@ -892,8 +892,8 @@ FlatLinearValueConstraints::FlatLinearValueConstraints(IntegerSet set,
set.getNumDims() + set.getNumSymbols() + 1,
set.getNumDims(), set.getNumSymbols(),
/*numLocals=*/0) {
- assert(operands.empty() ||
- set.getNumInputs() == operands.size() && "operand count mismatch");
+ assert((operands.empty() || set.getNumInputs() == operands.size()) &&
+ "operand count mismatch");
// Set the values for the non-local variables.
for (unsigned i = 0, e = operands.size(); i < e; ++i)
setValue(i, operands[i]);
diff --git a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
index 09c6b2683b4388..635273bcbc0208 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
@@ -840,11 +840,11 @@ enum VectorMemoryAccessKind { ScalarBroadcast, Contiguous, Gather };
/// TODO: Statically shaped loops + vector masking
static uint64_t getTrailingNonUnitLoopDimIdx(LinalgOp linalgOp) {
SmallVector<int64_t> loopRanges = linalgOp.getStaticLoopRanges();
- assert(linalgOp.hasDynamicShape() ||
- llvm::count_if(loopRanges, [](int64_t dim) { return dim != 1; }) ==
- 1 &&
- "For statically shaped Linalg Ops, only one "
- "non-unit loop dim is expected");
+ assert(
+ (linalgOp.hasDynamicShape() ||
+ llvm::count_if(loopRanges, [](int64_t dim) { return dim != 1; }) == 1) &&
+ "For statically shaped Linalg Ops, only one "
+ "non-unit loop dim is expected");
size_t idx = loopRanges.size() - 1;
for (; idx >= 0; idx--)
diff --git a/mlir/lib/Dialect/Tensor/Utils/Utils.cpp b/mlir/lib/Dialect/Tensor/Utils/Utils.cpp
index e0b91f323b0e64..5c16e538ac2420 100644
--- a/mlir/lib/Dialect/Tensor/Utils/Utils.cpp
+++ b/mlir/lib/Dialect/Tensor/Utils/Utils.cpp
@@ -27,9 +27,9 @@ PadOp mlir::tensor::createPadHighOp(RankedTensorType resType, Value source,
OpBuilder &b,
SmallVector<Value> dynOutDims) {
- assert((resType.getNumDynamicDims() == dynOutDims.size()) ||
- dynOutDims.empty() &&
- "Either none or all output dynamic dims must be specified!");
+ assert(((resType.getNumDynamicDims() == dynOutDims.size()) ||
+ dynOutDims.empty()) &&
+ "Either none or all output dynamic dims must be specified!");
// Init "low" and "high" padding values ("low" is kept as is, "high" is
// computed below).
MaheshRavishankar
left a comment
Thanks!
LGTM, thanks for the fix!