
Conversation

@Evanyl (Contributor) commented Mar 11, 2025

One fusion pattern for collapse_shape -> expand_shape was added in a95ad2d; however, if the intermediate tensor between the collapse and the expand is a 0-D tensor, the reassociation maps for the two ops are a special case that cannot be fused generically by BubbleUpExpandThroughParallelCollapse.
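
For concreteness, a minimal sketch of the IR in question, adapted from the regression test added in this PR; the intermediate tensor<f32> is 0-D, so both reassociation lists are empty:

```mlir
// Both reassociation lists ([]) are empty because tensor<f32> has rank 0.
%collapse = tensor.collapse_shape %arg0 [] : tensor<?xf32> into tensor<f32>
%expand = tensor.expand_shape %collapse [] output_shape [%s0, %s1, %s2, %s3]
            : tensor<f32> into tensor<?x?x?x?xf32>
```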

@github-actions commented

Thank you for submitting a Pull Request (PR) to the LLVM Project!

This PR will be automatically labeled and the relevant teams will be notified.

If you wish to, you can add reviewers by using the "Reviewers" section on this page.

If this is not working for you, it is probably because you do not have write permissions for the repository, in which case you can instead tag reviewers by name in a comment, using @ followed by their GitHub username.

If you have received no comments on your PR for a week, you can request a review by "pinging" it: add a comment saying "Ping". The common courtesy ping rate is once a week. Please remember that you are asking for valuable time from other developers.

If you have further questions, they may be answered by the LLVM GitHub User Guide.

You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.

@llvmbot (Member) commented Mar 11, 2025

@llvm/pr-subscribers-mlir

Author: Evan Liu (Evanyl)

Changes

…llapse

One fusion pattern for collapse_shape -> expand_shape was added in a95ad2d; however, if the intermediate tensor between the collapse and the expand is a scalar, the reassociation maps for the two ops are a special case that cannot be fused generically by BubbleUpExpandThroughParallelCollapse.


Full diff: https://github.com/llvm/llvm-project/pull/130838.diff

2 Files Affected:

  • (modified) mlir/lib/Dialect/Tensor/Transforms/ReshapePatterns.cpp (+6)
  • (modified) mlir/test/Dialect/Tensor/bubble-reshapes.mlir (+14)
diff --git a/mlir/lib/Dialect/Tensor/Transforms/ReshapePatterns.cpp b/mlir/lib/Dialect/Tensor/Transforms/ReshapePatterns.cpp
index ae8e3528b02e0..44e9519ad2693 100644
--- a/mlir/lib/Dialect/Tensor/Transforms/ReshapePatterns.cpp
+++ b/mlir/lib/Dialect/Tensor/Transforms/ReshapePatterns.cpp
@@ -160,6 +160,12 @@ struct BubbleUpExpandThroughParallelCollapse
     auto expandReInds = expandOp.getReassociationIndices();
     auto collapseReInds = collapseOp.getReassociationIndices();
 
+    // Special case: if the collapsed tensor to expand is a scalar, the
+    // reassociation maps will be empty and will not produce valid results.
+    if (expandReInds.size() == 0) {
+      return failure();
+    }
+
     // Reshapes are parallel to each other if none of the reassociation indices
     // have greater than 1 index for both reshapes.
     for (auto [expandReassociation, collapseReassociation] :
diff --git a/mlir/test/Dialect/Tensor/bubble-reshapes.mlir b/mlir/test/Dialect/Tensor/bubble-reshapes.mlir
index cf6b12852bcd3..81de9749b86b4 100644
--- a/mlir/test/Dialect/Tensor/bubble-reshapes.mlir
+++ b/mlir/test/Dialect/Tensor/bubble-reshapes.mlir
@@ -45,3 +45,17 @@ func.func @no_bubble_partial_intersecting_reshapes(%arg0: tensor<?x?x?x?xf32>, %
 //      CHECK:   %[[COLLAPSE:.+]] = tensor.collapse_shape %[[ARG0]] {{\[}}[0, 1, 2], [3]]
 //      CHECK:   %[[EXPAND:.+]] = tensor.expand_shape %[[COLLAPSE]] {{\[}}[0, 1], [2, 3]]
 //      CHECK:   return %[[EXPAND]]
+
+// -----
+
+func.func @no_bubble_scalar_reshapes(%arg0: tensor<?xf32>, %s0: index, %s1: index, %s2: index, %s3: index) -> tensor<?x?x?x?xf32> {
+  %collapse = tensor.collapse_shape %arg0 [] : tensor<?xf32> into tensor<f32>
+  %expand = tensor.expand_shape %collapse []
+              output_shape [%s0, %s1, %s2, %s3] : tensor<f32> into tensor<?x?x?x?xf32>
+  return %expand : tensor<?x?x?x?xf32>
+}
+//      CHECK: func @no_bubble_scalar_reshapes
+// CHECK-SAME:   %[[ARG0:.+]]: tensor<?xf32>
+//      CHECK:   %[[COLLAPSE:.+]] = tensor.collapse_shape %[[ARG0]] {{\[}}]
+//      CHECK:   %[[EXPAND:.+]] = tensor.expand_shape %[[COLLAPSE]] {{\[}}]
+//      CHECK:   return %[[EXPAND]]
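
As an aside, an equivalent and arguably more idiomatic spelling of the new guard would use empty(), which LLVM's coding standards prefer over a size() == 0 comparison; this is a sketch only, not what the patch contains:

```cpp
// Sketch: same behavior as the merged check, written with empty()
// per LLVM coding standards (prefer empty() over size() == 0).
if (expandReInds.empty())
  return failure();
```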

@llvmbot (Member) commented Mar 11, 2025

@llvm/pr-subscribers-mlir-tensor

Author: Evan Liu (Evanyl)
@hanhanW hanhanW requested a review from IanWood1 March 11, 2025 21:21
@hanhanW (Contributor) left a comment

It looks good to me. I'd replace the scalar term with 0-D tensor. My understanding is that scalar is something like f32, and tensor<f32> is a 0-D tensor.
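
For illustration, a sketch of the distinction being drawn here, using standard ops that are not part of this patch:

```mlir
%c = arith.constant 1.0 : f32               // a scalar: a plain f32 value
%t = tensor.from_elements %c : tensor<f32>  // a 0-D tensor wrapping that value
```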

@Evanyl Evanyl force-pushed the user/evanyl/03_11_bubble_up_expand_through_parallel_collapse_scalar_fix branch from 26da754 to b57b180 Compare March 11, 2025 21:27
@Evanyl Evanyl changed the title from "[mlir] Add special case for scalar in BubbleUpExpandThroughParallelCo…" to "[mlir] Add special case for 0-D tensor in BubbleUpExpandThroughParallelCo…" Mar 11, 2025
@Evanyl Evanyl requested a review from hanhanW March 11, 2025 21:27
@Evanyl Evanyl changed the title from "[mlir] Add special case for 0-D tensor in BubbleUpExpandThroughParallelCo…" to "[mlir] Add special case for 0-D tensor when fusing expand from collapse" Mar 11, 2025
@hanhanW (Contributor) commented Mar 11, 2025

@Evanyl just in case, do you need me to help merge the PR into the main branch?

@illyaveksler commented
lgtm

@Evanyl (Contributor, Author) commented Mar 11, 2025

> @Evanyl just in case, do you need me to help merge the PR into the main branch?

@hanhanW yes please, and thanks!

@hanhanW hanhanW merged commit 634e253 into llvm:main Mar 11, 2025
8 of 10 checks passed
@github-actions commented

@Evanyl Congratulations on having your first Pull Request (PR) merged into the LLVM Project!

Your changes will be combined with recent changes from other authors, then tested by our build bots. If there is a problem with a build, you may receive a report in an email or a comment on this PR.

Please check whether problems have been caused by your change specifically, as the builds can include changes from many authors. It is not uncommon for your change to be included in a build that fails due to someone else's changes, or infrastructure issues.

How to do this, and the rest of the post-merge process, is covered in detail here.

If your change does cause a problem, it may be reverted, or you can revert it yourself. This is a normal part of LLVM development. You can fix your changes and open a new PR to merge them again.

If you don't get any reports, no action is required from you. Your changes are working as expected, well done!

@gtaharaedmonds commented
looks good to me!

