Conversation

@apivovarov
Member

I noticed that several assertions in the MLIR codebase have operator-precedence issues.

The issue comes from the way the logical operators are grouped. The && operator binds tighter than the || operator, so the message string ends up attached to only the right-hand operand, and the assertion currently parses like this:

assert((resType.getNumDynamicDims() == dynOutDims.size()) ||
       (dynOutDims.empty() && "Either none or all output dynamic dims must be specified!"));

We should add parentheses around the whole disjunction so that the message string is &&-ed with the entire condition. Here’s the corrected version:

assert(((resType.getNumDynamicDims() == dynOutDims.size()) || dynOutDims.empty()) &&
       "Either none or all output dynamic dims must be specified!");

@llvmbot
Member

llvmbot commented Oct 16, 2024

@llvm/pr-subscribers-mlir

@llvm/pr-subscribers-mlir-linalg

Author: Alexander Pivovarov (apivovarov)


Full diff: https://github.com/llvm/llvm-project/pull/112474.diff

3 Files Affected:

  • (modified) mlir/lib/Analysis/FlatLinearValueConstraints.cpp (+2-2)
  • (modified) mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp (+5-5)
  • (modified) mlir/lib/Dialect/Tensor/Utils/Utils.cpp (+3-3)
diff --git a/mlir/lib/Analysis/FlatLinearValueConstraints.cpp b/mlir/lib/Analysis/FlatLinearValueConstraints.cpp
index e628fb152b52f8..0d6ff2fd908db9 100644
--- a/mlir/lib/Analysis/FlatLinearValueConstraints.cpp
+++ b/mlir/lib/Analysis/FlatLinearValueConstraints.cpp
@@ -892,8 +892,8 @@ FlatLinearValueConstraints::FlatLinearValueConstraints(IntegerSet set,
                             set.getNumDims() + set.getNumSymbols() + 1,
                             set.getNumDims(), set.getNumSymbols(),
                             /*numLocals=*/0) {
-  assert(operands.empty() ||
-         set.getNumInputs() == operands.size() && "operand count mismatch");
+  assert((operands.empty() || set.getNumInputs() == operands.size()) &&
+         "operand count mismatch");
   // Set the values for the non-local variables.
   for (unsigned i = 0, e = operands.size(); i < e; ++i)
     setValue(i, operands[i]);
diff --git a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
index 09c6b2683b4388..635273bcbc0208 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
@@ -840,11 +840,11 @@ enum VectorMemoryAccessKind { ScalarBroadcast, Contiguous, Gather };
 /// TODO: Statically shaped loops + vector masking
 static uint64_t getTrailingNonUnitLoopDimIdx(LinalgOp linalgOp) {
   SmallVector<int64_t> loopRanges = linalgOp.getStaticLoopRanges();
-  assert(linalgOp.hasDynamicShape() ||
-         llvm::count_if(loopRanges, [](int64_t dim) { return dim != 1; }) ==
-                 1 &&
-             "For statically shaped Linalg Ops, only one "
-             "non-unit loop dim is expected");
+  assert(
+      (linalgOp.hasDynamicShape() ||
+       llvm::count_if(loopRanges, [](int64_t dim) { return dim != 1; }) == 1) &&
+      "For statically shaped Linalg Ops, only one "
+      "non-unit loop dim is expected");
 
   size_t idx = loopRanges.size() - 1;
   for (; idx >= 0; idx--)
diff --git a/mlir/lib/Dialect/Tensor/Utils/Utils.cpp b/mlir/lib/Dialect/Tensor/Utils/Utils.cpp
index e0b91f323b0e64..5c16e538ac2420 100644
--- a/mlir/lib/Dialect/Tensor/Utils/Utils.cpp
+++ b/mlir/lib/Dialect/Tensor/Utils/Utils.cpp
@@ -27,9 +27,9 @@ PadOp mlir::tensor::createPadHighOp(RankedTensorType resType, Value source,
                                     OpBuilder &b,
                                     SmallVector<Value> dynOutDims) {
 
-  assert((resType.getNumDynamicDims() == dynOutDims.size()) ||
-         dynOutDims.empty() &&
-             "Either none or all output dynamic dims must be specified!");
+  assert(((resType.getNumDynamicDims() == dynOutDims.size()) ||
+          dynOutDims.empty()) &&
+         "Either none or all output dynamic dims must be specified!");
 
   // Init "low" and "high" padding values ("low" is kept as is, "high" is
   // computed below).

@llvmbot
Member

llvmbot commented Oct 16, 2024

@llvm/pr-subscribers-mlir-tensor


@apivovarov requested a review from hanhanW on October 16, 2024 04:06
Contributor

@MaheshRavishankar left a comment


Thanks!

@banach-space
Contributor

LGTM, thanks for the fix!

@apivovarov merged commit a24c468 into llvm:main on Oct 16, 2024
12 checks passed
@apivovarov deleted the fix_assert branch on October 16, 2024 22:22
@apivovarov removed the request for review from ftynse on October 16, 2024 22:22