Conversation

@yangtetris (Contributor) commented Jun 5, 2025

Description

This change introduces a new canonicalization pattern for the MLIR Vector dialect that optimizes chains of insertions. The optimization identifies when a vector is completely initialized through a series of vector.insert operations and replaces the entire chain with a single vector.from_elements operation.

Please be aware that the new pattern doesn't apply to poison vectors where only some elements are set, since MLIR does not currently support partial poison vectors.

New Pattern: InsertChainFullyInitialized

  • Detects chains of vector.insert operations.
  • Validates that all insertions use static positions and that every intermediate insertion has exactly one use.
  • Ensures the entire vector is completely initialized.
  • Replaces the entire chain with a single vector.from_elements operation.

Refactored Helper Function

  • Extracted calculateInsertPosition from foldDenseElementsAttrDestInsertOp to avoid code duplication; a simplified sketch of the linearization it performs is shown below.
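For readers who don't want to scan the full diff, here is a simplified, self-contained sketch of the linearization step performed by the extracted helper (the real implementation in the diff below also converts the source attribute). The function name here is illustrative; computeStrides and linearize are the actual MLIR utilities used in the diff.

```cpp
#include "mlir/Dialect/Utils/IndexingUtils.h"
#include "mlir/IR/BuiltinTypes.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallVector.h"

// Illustrative sketch: compute the starting offset in the flattened
// destination vector for an insertion at a static (possibly partial) position.
static int64_t linearizedInsertPosition(mlir::VectorType destTy,
                                        llvm::ArrayRef<int64_t> positions) {
  // Pad the position with zeros up to the destination rank, so inserting a
  // vector<3xi64> at [1] into vector<2x3xi64> starts at offset 1 * 3 = 3.
  llvm::SmallVector<int64_t> completePositions(destTy.getRank(), 0);
  llvm::copy(positions, completePositions.begin());
  // Row-major strides of the destination shape dotted with the padded
  // position give the linearized starting offset.
  return mlir::linearize(completePositions,
                         mlir::computeStrides(destTy.getShape()));
}
```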

Example

// Before:
%v1 = vector.insert %c10, %v0[0] : i64 into vector<2xi64>
%v2 = vector.insert %c20, %v1[1] : i64 into vector<2xi64>

// After:
%v2 = vector.from_elements %c10, %c20 : vector<2xi64>

It also works for multidimensional vectors.

// Before:
%v1 = vector.insert %cv0, %v0[0] : vector<3xi64> into vector<2x3xi64>
%v2 = vector.insert %cv1, %v1[1] : vector<3xi64> into vector<2x3xi64>

// After:
%0:3 = vector.to_elements %arg1 : vector<3xi64>
%1:3 = vector.to_elements %arg2 : vector<3xi64>
%v2 = vector.from_elements %0#0, %0#1, %0#2, %1#0, %1#1, %1#2 : vector<2x3xi64>

github-actions bot commented Jun 5, 2025

Thank you for submitting a Pull Request (PR) to the LLVM Project!

This PR will be automatically labeled and the relevant teams will be notified.

If you wish to, you can add reviewers by using the "Reviewers" section on this page.

If this is not working for you, it is probably because you do not have write permissions for the repository, in which case you can instead tag reviewers by name in a comment by using @ followed by their GitHub username.

If you have received no comments on your PR for a week, you can request a review by "ping"ing the PR by adding a comment “Ping”. The common courtesy "ping" rate is once a week. Please remember that you are asking for valuable time from other developers.

If you have further questions, they may be answered by the LLVM GitHub User Guide.

You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.

llvmbot (Member) commented Jun 5, 2025

@llvm/pr-subscribers-mlir-spirv
@llvm/pr-subscribers-mlir-vector

@llvm/pr-subscribers-mlir

Author: Yang Bai (yangtetris)

Changes

Description

This change introduces a new canonicalization pattern for the MLIR Vector dialect that optimizes chains of constant insertions into vectors initialized with ub.poison. The optimization identifies when a vector is completely initialized through a series of vector.insert operations and replaces the entire chain with a single arith.constant operation.

Please be aware that the new pattern doesn't work for poison vectors where only some elements are set, as MLIR doesn't support partial poison vectors for now.

New Pattern: InsertConstantToPoison

  • Detects chains of vector.insert operations that start from an ub.poison operation.
  • Validates that all insertions use constant values at static positions.
  • Ensures the entire vector is completely initialized.
  • Replaces the entire chain with a single arith.constant operation containing a DenseElementsAttr.

Refactored Helper Function

  • Extracted calculateInsertPositionAndExtractValues from foldDenseElementsAttrDestInsertOp to avoid code duplication.

Example

// Before:
%poison = ub.poison : vector<2xi64>
%v1 = vector.insert %c10, %poison[0] : i64 into vector<2xi64>
%v2 = vector.insert %c20, %v1[1] : i64 into vector<2xi64>

// After:
%result = arith.constant dense<[10, 20]> : vector<2xi64>

It also works for multidimensional vectors.

// Before:
%poison = ub.poison : vector<2x3xi64>
%cv0 = arith.constant dense<[1, 2, 3]> : vector<3xi64>
%cv1 = arith.constant dense<[4, 5, 6]> : vector<3xi64>
%v1 = vector.insert %cv0, %poison[0] : vector<3xi64> into vector<2x3xi64>
%v2 = vector.insert %cv1, %v1[1] : vector<3xi64> into vector<2x3xi64>

// After:
%result = arith.constant dense<[[1, 2, 3], [4, 5, 6]]> : vector<2x3xi64>

---
Full diff: https://github.com/llvm/llvm-project/pull/142944.diff


2 Files Affected:

- (modified) mlir/lib/Dialect/Vector/IR/VectorOps.cpp (+145-29) 
- (modified) mlir/test/Dialect/Vector/canonicalize.mlir (+32) 


``````````diff
diff --git a/mlir/lib/Dialect/Vector/IR/VectorOps.cpp b/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
index fcfb401fd9867..253d148072dc0 100644
--- a/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
+++ b/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
@@ -3149,6 +3149,42 @@ LogicalResult InsertOp::verify() {
   return success();
 }
 
+// Calculate the linearized position for inserting elements and extract values
+// from the source attribute. Returns the starting position in the destination
+// vector where elements should be inserted.
+static int64_t calculateInsertPositionAndExtractValues(
+    VectorType destTy, const ArrayRef<int64_t> &positions, Attribute srcAttr,
+    SmallVector<Attribute> &valueToInsert) {
+  llvm::SmallVector<int64_t> completePositions(destTy.getRank(), 0);
+  copy(positions, completePositions.begin());
+  int64_t insertBeginPosition =
+      linearize(completePositions, computeStrides(destTy.getShape()));
+
+  Type destEltType = destTy.getElementType();
+
+  /// Converts the expected type to an IntegerAttr if there's
+  /// a mismatch.
+  auto convertIntegerAttr = [](Attribute attr, Type expectedType) -> Attribute {
+    if (auto intAttr = mlir::dyn_cast<IntegerAttr>(attr)) {
+      if (intAttr.getType() != expectedType)
+        return IntegerAttr::get(expectedType, intAttr.getInt());
+    }
+    return attr;
+  };
+
+  // The `convertIntegerAttr` method specifically handles the case
+  // for `llvm.mlir.constant` which can hold an attribute with a
+  // different type than the return type.
+  if (auto denseSource = llvm::dyn_cast<DenseElementsAttr>(srcAttr)) {
+    for (auto value : denseSource.getValues<Attribute>())
+      valueToInsert.push_back(convertIntegerAttr(value, destEltType));
+  } else {
+    valueToInsert.push_back(convertIntegerAttr(srcAttr, destEltType));
+  }
+
+  return insertBeginPosition;
+}
+
 namespace {
 
 // If insertOp is only inserting unit dimensions it can be transformed to a
@@ -3191,6 +3227,109 @@ class InsertSplatToSplat final : public OpRewritePattern<InsertOp> {
   }
 };
 
+// Pattern to optimize a chain of constant insertions into a poison vector.
+//
+// This pattern identifies chains of vector.insert operations that:
+// 1. Start from an ub.poison operation.
+// 2. Insert only constant values at static positions.
+// 3. Completely initialize all elements in the resulting vector.
+//
+// When these conditions are met, the entire chain can be replaced with a
+// single arith.constant operation containing a dense elements attribute.
+//
+// Example transformation:
+//   %poison = ub.poison : vector<2xi32>
+//   %0 = vector.insert %c1, %poison[0] : i32 into vector<2xi32>
+//   %1 = vector.insert %c2, %0[1] : i32 into vector<2xi32>
+// ->
+//   %result = arith.constant dense<[1, 2]> : vector<2xi32>
+
+// TODO: Support the case where only some elements of the poison vector are set.
+//       Currently, MLIR doesn't support partial poison vectors.
+
+class InsertConstantToPoison final : public OpRewritePattern<InsertOp> {
+public:
+  using OpRewritePattern::OpRewritePattern;
+  LogicalResult matchAndRewrite(InsertOp op,
+                                PatternRewriter &rewriter) const override {
+
+    VectorType destTy = op.getDestVectorType();
+    if (destTy.isScalable())
+      return failure();
+    // Check if the result is used as the dest operand of another vector.insert
+    // Only care about the last op in a chain of insertions.
+    for (Operation *user : op.getResult().getUsers())
+      if (auto insertOp = dyn_cast<InsertOp>(user))
+        if (insertOp.getDest() == op.getResult())
+          return failure();
+
+    InsertOp firstInsertOp;
+    InsertOp previousInsertOp = op;
+    SmallVector<InsertOp> chainInsertOps;
+    SmallVector<Attribute> srcAttrs;
+    while (previousInsertOp) {
+      // Dynamic position is not supported.
+      if (previousInsertOp.hasDynamicPosition())
+        return failure();
+
+      // The inserted content must be constant.
+      chainInsertOps.push_back(previousInsertOp);
+      srcAttrs.push_back(Attribute());
+      matchPattern(previousInsertOp.getValueToStore(),
+                   m_Constant(&srcAttrs.back()));
+      if (!srcAttrs.back())
+        return failure();
+
+      // An insertion at poison index makes the entire chain poisoned.
+      if (is_contained(previousInsertOp.getStaticPosition(),
+                       InsertOp::kPoisonIndex))
+        return failure();
+
+      firstInsertOp = previousInsertOp;
+      previousInsertOp = previousInsertOp.getDest().getDefiningOp<InsertOp>();
+    }
+
+    if (!firstInsertOp.getDest().getDefiningOp<ub::PoisonOp>())
+      return failure();
+
+    // Need to make sure all elements are initialized.
+    int64_t vectorSize = destTy.getNumElements();
+    int64_t initializedCount = 0;
+    SmallVector<bool> initialized(vectorSize, false);
+    SmallVector<Attribute> initValues(vectorSize);
+
+    for (auto [insertOp, srcAttr] : llvm::zip(chainInsertOps, srcAttrs)) {
+      // Calculate the linearized position for inserting elements, as well as
+      // convert the source attribute to the proper type.
+      SmallVector<Attribute> valueToInsert;
+      int64_t insertBeginPosition = calculateInsertPositionAndExtractValues(
+          destTy, insertOp.getStaticPosition(), srcAttr, valueToInsert);
+      for (auto index :
+           llvm::seq<int64_t>(insertBeginPosition,
+                              insertBeginPosition + valueToInsert.size())) {
+        if (initialized[index])
+          continue;
+
+        initialized[index] = true;
+        ++initializedCount;
+        initValues[index] = valueToInsert[index - insertBeginPosition];
+      }
+      // If all elements in the vector have been initialized, we can stop
+      // processing the remaining insert operations in the chain.
+      if (initializedCount == vectorSize)
+        break;
+    }
+
+    // some positions are not initialized.
+    if (initializedCount != vectorSize)
+      return failure();
+
+    auto newAttr = DenseElementsAttr::get(destTy, initValues);
+    rewriter.replaceOpWithNewOp<arith::ConstantOp>(op, destTy, newAttr);
+    return success();
+  }
+};
+
 } // namespace
 
 static Attribute
@@ -3217,35 +3356,11 @@ foldDenseElementsAttrDestInsertOp(InsertOp insertOp, Attribute srcAttr,
       !insertOp->hasOneUse())
     return {};
 
-  // Calculate the linearized position of the continuous chunk of elements to
-  // insert.
-  llvm::SmallVector<int64_t> completePositions(destTy.getRank(), 0);
-  copy(insertOp.getStaticPosition(), completePositions.begin());
-  int64_t insertBeginPosition =
-      linearize(completePositions, computeStrides(destTy.getShape()));
-
+  // Calculate the linearized position for inserting elements, as well as
+  // convert the source attribute to the proper type.
   SmallVector<Attribute> insertedValues;
-  Type destEltType = destTy.getElementType();
-
-  /// Converts the expected type to an IntegerAttr if there's
-  /// a mismatch.
-  auto convertIntegerAttr = [](Attribute attr, Type expectedType) -> Attribute {
-    if (auto intAttr = mlir::dyn_cast<IntegerAttr>(attr)) {
-      if (intAttr.getType() != expectedType)
-        return IntegerAttr::get(expectedType, intAttr.getInt());
-    }
-    return attr;
-  };
-
-  // The `convertIntegerAttr` method specifically handles the case
-  // for `llvm.mlir.constant` which can hold an attribute with a
-  // different type than the return type.
-  if (auto denseSource = llvm::dyn_cast<DenseElementsAttr>(srcAttr)) {
-    for (auto value : denseSource.getValues<Attribute>())
-      insertedValues.push_back(convertIntegerAttr(value, destEltType));
-  } else {
-    insertedValues.push_back(convertIntegerAttr(srcAttr, destEltType));
-  }
+  int64_t insertBeginPosition = calculateInsertPositionAndExtractValues(
+      destTy, insertOp.getStaticPosition(), srcAttr, insertedValues);
 
   auto allValues = llvm::to_vector(denseDst.getValues<Attribute>());
   copy(insertedValues, allValues.begin() + insertBeginPosition);
@@ -3256,7 +3371,8 @@ foldDenseElementsAttrDestInsertOp(InsertOp insertOp, Attribute srcAttr,
 
 void InsertOp::getCanonicalizationPatterns(RewritePatternSet &results,
                                            MLIRContext *context) {
-  results.add<InsertToBroadcast, BroadcastFolder, InsertSplatToSplat>(context);
+  results.add<InsertToBroadcast, BroadcastFolder, InsertSplatToSplat,
+              InsertConstantToPoison>(context);
 }
 
 OpFoldResult vector::InsertOp::fold(FoldAdaptor adaptor) {
diff --git a/mlir/test/Dialect/Vector/canonicalize.mlir b/mlir/test/Dialect/Vector/canonicalize.mlir
index a06a9f67d54dc..36f3d7196bb93 100644
--- a/mlir/test/Dialect/Vector/canonicalize.mlir
+++ b/mlir/test/Dialect/Vector/canonicalize.mlir
@@ -2320,6 +2320,38 @@ func.func @insert_2d_constant() -> (vector<2x3xi32>, vector<2x3xi32>, vector<2x3
 
 // -----
 
+// CHECK-LABEL: func.func @fully_insert_scalar_constant_to_poison_vector
+//       CHECK: %[[VAL0:.+]] = arith.constant dense<[10, 20]> : vector<2xi64>
+//  CHECK-NEXT: return %[[VAL0]]
+func.func @fully_insert_scalar_constant_to_poison_vector() -> vector<2xi64> {
+  %poison = ub.poison : vector<2xi64>
+  %c0 = arith.constant 0 : index
+  %c1 = arith.constant 1 : index
+  %e0 = arith.constant 10 : i64
+  %e1 = arith.constant 20 : i64
+  %v1 = vector.insert %e0, %poison[%c0] : i64 into vector<2xi64>
+  %v2 = vector.insert %e1, %v1[%c1] : i64 into vector<2xi64>
+  return %v2 : vector<2xi64>
+}
+
+// -----
+
+// CHECK-LABEL: func.func @fully_insert_vector_constant_to_poison_vector
+//       CHECK: %[[VAL0:.+]] = arith.constant dense<{{\[\[1, 2, 3\], \[4, 5, 6\]\]}}> : vector<2x3xi64>
+//  CHECK-NEXT: return %[[VAL0]]
+func.func @fully_insert_vector_constant_to_poison_vector() -> vector<2x3xi64> {
+  %poison = ub.poison : vector<2x3xi64>
+  %cv0 = arith.constant dense<[1, 2, 3]> : vector<3xi64>
+  %cv1 = arith.constant dense<[4, 5, 6]> : vector<3xi64>
+  %c0 = arith.constant 0 : index
+  %c1 = arith.constant 1 : index
+  %v1 = vector.insert %cv0, %poison[%c0] : vector<3xi64> into vector<2x3xi64>
+  %v2 = vector.insert %cv1, %v1[%c1] : vector<3xi64> into vector<2x3xi64>
+  return %v2 : vector<2x3xi64>
+}
+
+// -----
+
 // CHECK-LABEL: func.func @insert_2d_splat_constant
 //   CHECK-DAG: %[[ACST:.*]] = arith.constant dense<0> : vector<2x3xi32>
 //   CHECK-DAG: %[[BCST:.*]] = arith.constant dense<{{\[\[99, 0, 0\], \[0, 0, 0\]\]}}> : vector<2x3xi32>
``````````

@dcaballe (Contributor) left a comment

Minor comments. Otherwise, it LGTM!

@yangtetris (Contributor, Author)

Almost all of the changes have been completed. However, while updating the tests, I found that VectorFromElementsLowering currently does not support vectors with rank > 1, so replacing insert chains with from_elements would break some lower-to-llvm tests. Let's pause until that issue is fixed.

@dcaballe (Contributor)

What is the current state of this? Any blockers?

@yangtetris (Contributor, Author)

What is the current state of this? Any blockers?

It is still blocked due to the missing from_elements to llvm conversion for multi-dim vectors. I think we can

  1. Support lowering multi-dim vectors to llvm first.
  2. Or, restrict this pattern to only work for 1-D vectors (a minimal guard is sketched below).
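For illustration, option 2 would essentially amount to an early rank guard inside the pattern. A hypothetical sketch, not code from this PR:

```cpp
#include "mlir/Dialect/Vector/IR/VectorOps.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

// Hypothetical sketch of option 2: bail out on anything above rank 1 until
// multi-dimensional vector.from_elements can be lowered to LLVM. Only the
// guard is the point; the rest of the pattern is elided.
struct RestrictChainRewriteToRank1 : OpRewritePattern<vector::InsertOp> {
  using OpRewritePattern::OpRewritePattern;
  LogicalResult matchAndRewrite(vector::InsertOp op,
                                PatternRewriter &rewriter) const override {
    VectorType destTy = op.getDestVectorType();
    if (destTy.getRank() > 1)
      return failure();
    // ... chain detection and the from_elements rewrite would go here ...
    return failure();
  }
};
```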

@dcaballe (Contributor)

Support lowering multi-dim vectors to llvm first.

It sounds like this shouldn't be too complicated. Is that something you could help with?

@yangtetris (Contributor, Author)

It sounds like this shouldn't be too complicated. Is that something you could help with?

I'm not familiar with that part, but I think I can give it a try.

@dcaballe (Contributor) commented Aug 6, 2025

Hey @Groverkss, could you take another look/consider removing the blocker? Thanks!

@Groverkss (Member) left a comment

Very well implemented. LGTM!

@kuhar (Member) commented Aug 19, 2025

Are we waiting on something/ someone before merging this?

@yangtetris (Contributor, Author)

Are we waiting on something/ someone before merging this?

The dependent PR was just merged yesterday. Please help merge this PR. Thanks.

@Groverkss merged commit b4c31dc into llvm:main Aug 19, 2025
9 checks passed
@Groverkss (Member)

Are we waiting on something/ someone before merging this?

The dependent PR was just merged yesterday. Please help merge this PR. Thanks.

done!

@github-actions

@yangtetris Congratulations on having your first Pull Request (PR) merged into the LLVM Project!

Your changes will be combined with recent changes from other authors, then tested by our build bots. If there is a problem with a build, you may receive a report in an email or a comment on this PR.

Please check whether problems have been caused by your change specifically, as the builds can include changes from many authors. It is not uncommon for your change to be included in a build that fails due to someone else's changes, or infrastructure issues.

How to do this, and the rest of the post-merge process, is covered in detail here.

If your change does cause a problem, it may be reverted, or you can revert it yourself. This is a normal part of LLVM development. You can fix your changes and open a new PR to merge them again.

If you don't get any reports, no action is required from you. Your changes are working as expected, well done!

@kuhar (Member) commented Aug 19, 2025

Are we waiting on something/ someone before merging this?

The dependent PR was just merged yesterday. Please help merge this PR. Thanks.

Oh thanks for sharing this, I didn't notice the other PR. Do we know if from_elements unrolling works for SPIR-V? When we start canonicalizing to from_elements, we will have to support lowering to SPIR-V too. We do have support for converting from_elements on 1-D vectors to spirv.CompositeConstruct, but I think we'd have to at least have a test that exercises this in combination with unrolling.

@yangtetris deleted the canonicalization_patter_insert_poison branch August 19, 2025 12:52
@yangtetris (Contributor, Author)

Oh thanks for sharing this, I didn't notice the other PR. Do we know if from_elements unrolling works for SPIR-V? When we start canonicalizing to from_elements, we will have to support lowering to SPIR-V too. We do have support for converting from_elements on 1-D vectors to spirv.CompositeConstruct, but I think we'd have to at least have a test that exercises this in combination with unrolling.

Theoretically it should work, but I haven't tested this before. I will add a test in the next PR, perhaps along with the flattening-based N-D -> 1-D transformation that we discussed in the multi-dimensional from_elements PR.

@hanhanW (Contributor) commented Aug 20, 2025

Hey, I just want to flag that this breaks our downstream project, and I'm still triaging the issue. The error seems to happen in LLVM conversion.

WARNING: ConvertTypesPass (--iree-input-demote-*-to-*) changed public function signatures; callers at runtime must match the new expected I/O types:

  Old signature:
  │ @matmul_f64f64f64_dynamic(tensor<?x?xf64>, tensor<?x?xf64>, tensor<?x?xf64>) -> tensor<?x?xf64>
  New signature:
  │ @matmul_f64f64f64_dynamic(tensor<?x?xf32>, tensor<?x?xf32>, tensor<?x?xf32>) -> tensor<?x?xf32>

../matmul_dyn.mlir:3:8: error: cannot be converted to LLVM IR: missing `LLVMTranslationDialectInterface` registration for dialect for op: vector.from_elements
  %0 = linalg.matmul ins(%arg0, %arg1 : tensor<?x?xf64>, tensor<?x?xf64>)

My guess is that it introduces a 2-D vector before LLVM conversion, because I'm seeing the following in the IR, while the lowering pattern requires 1-D vectors.

%102 = vector.from_elements ... : vector<16x256xf32> 

https://github.com/llvm/llvm-project/blob/0db57ab586d5456a6205172b8bc120f94c39d001/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp#L1885C14-L1912

@hanhanW (Contributor) commented Aug 20, 2025

Based on the review comments, it looks like we'll need populateVectorFromElementsLoweringPatterns in the pipeline. I need to find the right place to populate the patterns, as we have progressive lowering for vectors in IREE. The quick fix is adding the patterns in the LLVM conversion.
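For anyone hitting the same breakage, here is a minimal sketch of the kind of quick fix described above. The populate function name comes from this thread; the header path, the standalone helper, and the greedy-driver call are assumptions for illustration only — a real pipeline would hook the patterns into its existing vector-lowering or LLVM-conversion pass.

```cpp
#include <utility>

#include "mlir/Dialect/Vector/Transforms/LoweringPatterns.h"
#include "mlir/IR/Operation.h"
#include "mlir/IR/PatternMatch.h"
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"

// Placeholder helper: unroll multi-dimensional vector.from_elements (as now
// produced by the new canonicalization) into forms the LLVM conversion can
// handle, before running the conversion itself.
static void lowerFromElementsBeforeLLVM(mlir::Operation *root) {
  mlir::RewritePatternSet patterns(root->getContext());
  mlir::vector::populateVectorFromElementsLoweringPatterns(patterns);
  // Driver call is illustrative; the greedy-driver entry point name may
  // differ between LLVM versions.
  (void)mlir::applyPatternsGreedily(root, std::move(patterns));
}
```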

@yangtetris (Contributor, Author)

Based on the review comments, it looks like we'll need populateVectorFromElementsLoweringPatterns in the pipeline. I need to find the right place to populate the patterns, as we have progressive lowering for vectors in IREE. The quick fix is adding the patterns in the LLVM conversion.

@hanhanW Sorry for the breakage. Yeah, populateVectorFromElementsLoweringPatterns should fix your issue.

FYI, here's some discussion about why we didn't add this pattern to upstream LLVM conversion.

@yangtetris (Contributor, Author)

@kuhar I'm trying to check whether the from_elements unrolling works for SPIR-V. However, it's not going smoothly since I'm completely new to SPIR-V. Do you know how we typically convert the following ops to SPIR-V?

%0 = ub.poison : vector<2x2xf32>
%2 = vector.insert %1, %0 [0] : vector<2xf32> into vector<2x2xf32>

The test-convert-to-spirv pass complained about the multi-dimensional vector types and I don't know which pattern to use to preprocess them before conversion.

Groverkss added a commit to iree-org/llvm-project that referenced this pull request Aug 21, 2025
…convert a chain of insertions to vector.from_elements (llvm#142944)"

This reverts commit b4c31dc.
@akuegel (Member) commented Aug 21, 2025

It seems this PR is triggering another canonicalization pattern that was added in f3cc854

Let's take this IR:

func.func @wrapped_bitcast_convert(%arg0: tensor<2xi4>, %arg1: tensor<2xi4>) -> tensor<2xi4> {
  %c0 = arith.constant 0 : index
  %cst = arith.constant dense<0> : vector<2xi4>
  %0 = ub.poison : i4
  %1 = vector.transfer_read %arg0[%c0], %0 {in_bounds = [true]} : tensor<2xi4>, vector<2xi4>
  %2 = vector.extract %1[0] : i4 from vector<2xi4>
  %3 = vector.insert %2, %cst [0] : i4 into vector<2xi4>
  %4 = vector.extract %1[1] : i4 from vector<2xi4>
  %5 = vector.insert %4, %3 [1] : i4 into vector<2xi4>
  %6 = vector.transfer_write %5, %arg1[%c0] {in_bounds = [true]} : vector<2xi4>, tensor<2xi4>
  return %6 : tensor<2xi4>
}

This gets now canonicalized to:

func.func @wrapped_bitcast_convert(%arg0: tensor<2xi4>, %arg1: tensor<2xi4>) -> tensor<2xi4> { 
  %c0 = arith.constant 0 : index
  %0 = ub.poison : i4 
  %1 = vector.transfer_read %arg0[%c0], %0 {in_bounds = [true]} : tensor<2xi4>, vector<2xi4> 
  %2 = vector.transfer_write %1, %arg1[%c0] {in_bounds = [true]} : vector<2xi4>, tensor<2xi4> 
  return %2 : tensor<2xi4> 
} 

And then the other pattern applies and folds this into returning %arg0. I believe that pattern is wrong and should only be applied if the base argument is the same (so that we read and write to/from the same memory). Given this PR only triggers this potential bug, I guess I will have to look into fixing this myself.

Another issue I noticed is that some GPU-related tests that do not run by default are still broken. These are the tests:

mlir/test/Integration/GPU/CUDA/TensorCore/sm80/transform-mma-sync-matmul-f16-f16-accum.mlir
mlir/test/Integration/GPU/CUDA/TensorCore/sm80/transform-mma-sync-matmul-f32.mlir

Most likely the GPU-related pipeline that they use needs some adjustment to handle vector::FromElementsOp with rank > 1.

@Groverkss (Member)

And then the other pattern applies and folds this into returning %arg0. I believe that pattern is wrong and should only be applied if the base argument is the same (so that we read and write to/from the same memory). Given this PR only triggers this potential bug, I guess I will have to look into fixing this myself.

The pattern is applying correctly. Tensors have value semantics; there is no "memory" here. The transfer_write here completely overwrites the destination, so we can fold this away without needing the destination tensor values.

@yangtetris (Contributor, Author)

Another issue I noticed is that some GPU-related tests that do not run by default are still broken. These are the tests:

mlir/test/Integration/GPU/CUDA/TensorCore/sm80/transform-mma-sync-matmul-f16-f16-accum.mlir
mlir/test/Integration/GPU/CUDA/TensorCore/sm80/transform-mma-sync-matmul-f32.mlir

Most likely the GPU-related pipeline that they use needs some adjustment to handle vector::FromElementsOp with rank > 1.

Thank you for the notification. I created this PR to fix these two tests. BTW, could you please let me know if we have an option to enable all tests in local environments?

rupprecht pushed a commit that referenced this pull request Aug 22, 2025
…4774)

### Problem

PR #142944 introduced a new canonicalization pattern which caused
failures in the following GPU-related integration tests:

-
mlir/test/Integration/GPU/CUDA/TensorCore/sm80/transform-mma-sync-matmul-f16-f16-accum.mlir
-
mlir/test/Integration/GPU/CUDA/TensorCore/sm80/transform-mma-sync-matmul-f32.mlir

The issue occurs because the new canonicalization pattern can generate
multi-dimensional `vector.from_elements` operations (rank > 1), but the
GPU lowering pipelines were not equipped to handle these during the
conversion to LLVM.

### Fix

This PR adds `vector::populateVectorFromElementsLoweringPatterns` to the
GPU lowering passes that are integrated in `gpu-lower-to-nvvm-pipeline`:

- `GpuToLLVMConversionPass`: the general GPU-to-LLVM conversion pass.
- `LowerGpuOpsToNVVMOpsPass`: the NVVM-specific lowering pass.

Co-authored-by: Yang Bai <[email protected]>
@akuegel (Member) commented Aug 22, 2025

And then the other pattern applies and folds this into returning %arg0. I believe that pattern is wrong and should only be applied if the base argument is the same (so that we read and write to/from the same memory). Given this PR only triggers this potential bug, I guess I will have to look into fixing this myself.

The pattern is applying correctly. Tensors have value semantics; there is no "memory" here. The transfer_write here completely overwrites the destination, so we can fold this away without needing the destination tensor values.

@Groverkss Does it mean the pre-canonicalization IR we use is wrong? We want to copy memory from one location to another. What would be the proper way to do that?

Edit: thinking more about this, I guess the way to do that would be to use memrefs, because for that IR snippet we already have buffers assigned (we start our MLIR lowering pipeline after assigning buffers to XLA HLO IR). It seems we had been relying on an assumption that happened to work out so far because not everything between vector.transfer_read and vector.transfer_write was simplified away.

IanWood1 pushed a commit to iree-org/llvm-project that referenced this pull request Aug 27, 2025
…convert a chain of insertions to vector.from_elements (llvm#142944)"

This reverts commit b4c31dc.
IanWood1 pushed a commit to iree-org/llvm-project that referenced this pull request Aug 28, 2025
…convert a chain of insertions to vector.from_elements (llvm#142944)"

This reverts commit b4c31dc.
Muzammiluddin-Syed-ECE pushed a commit to iree-org/llvm-project that referenced this pull request Sep 3, 2025
…convert a chain of insertions to vector.from_elements (llvm#142944)"

This reverts commit b4c31dc.
fabianmcg pushed a commit to iree-org/llvm-project that referenced this pull request Sep 3, 2025
…convert a chain of insertions to vector.from_elements (llvm#142944)"

This reverts commit b4c31dc.
sebvince pushed a commit to sebvince/llvm-project that referenced this pull request Sep 11, 2025
…convert a chain of insertions to vector.from_elements (llvm#142944)"

This reverts commit b4c31dc.
