
Conversation

@zhaoqi5 (Contributor) commented Aug 1, 2025

No description provided.

@zhaoqi5 zhaoqi5 requested a review from tangaac August 1, 2025 03:45
@zhaoqi5 zhaoqi5 requested a review from SixWeining August 1, 2025 03:45
@llvmbot (Member) commented Aug 1, 2025

@llvm/pr-subscribers-backend-loongarch

Author: ZhaoQi (zhaoqi5)

Changes

Full diff: https://github.com/llvm/llvm-project/pull/151634.diff

2 Files Affected:

  • (modified) llvm/lib/Target/LoongArch/LoongArchISelLowering.cpp (+44)
  • (modified) llvm/test/CodeGen/LoongArch/lasx/shuffle-as-permute-and-shuffle.ll (+4-14)
diff --git a/llvm/lib/Target/LoongArch/LoongArchISelLowering.cpp b/llvm/lib/Target/LoongArch/LoongArchISelLowering.cpp
index 4f534f1666eaa..5f2512d33b96c 100644
--- a/llvm/lib/Target/LoongArch/LoongArchISelLowering.cpp
+++ b/llvm/lib/Target/LoongArch/LoongArchISelLowering.cpp
@@ -1832,6 +1832,48 @@ static SDValue lowerVECTOR_SHUFFLE_XVSHUF4I(const SDLoc &DL, ArrayRef<int> Mask,
   return lowerVECTOR_SHUFFLE_VSHUF4I(DL, Mask, VT, V1, V2, DAG);
 }
 
+/// Lower VECTOR_SHUFFLE into XVPERM (if possible).
+static SDValue lowerVECTOR_SHUFFLE_XVPERM(const SDLoc &DL, ArrayRef<int> Mask,
+                                          MVT VT, SDValue V1, SDValue V2,
+                                          SelectionDAG &DAG) {
+  // LoongArch LASX only have XVPERM_W.
+  if (Mask.size() != 8 || (VT != MVT::v8i32 && VT != MVT::v8f32))
+    return SDValue();
+
+  unsigned NumElts = VT.getVectorNumElements();
+  unsigned HalfSize = NumElts / 2;
+  bool FrontLo = true, FrontHi = true;
+  bool BackLo = true, BackHi = true;
+
+  auto inRange = [](int val, int low, int high) {
+    return (val == -1) || (val >= low && val < high);
+  };
+
+  for (unsigned i = 0; i < HalfSize; ++i) {
+    int Fronti = Mask[i];
+    int Backi = Mask[i + HalfSize];
+
+    FrontLo &= inRange(Fronti, 0, HalfSize);
+    FrontHi &= inRange(Fronti, HalfSize, NumElts);
+    BackLo &= inRange(Backi, 0, HalfSize);
+    BackHi &= inRange(Backi, HalfSize, NumElts);
+  }
+
+  // If both the lower and upper 128-bit parts access only one half of the
+  // vector (either lower or upper), avoid using xvperm.w. The latency of
+  // xvperm.w(3) is higher than using xvshuf(1) and xvori(1).
+  if ((FrontLo && (BackLo || BackHi)) || (FrontHi && (BackLo || BackHi)))
+    return SDValue();
+
+  SmallVector<SDValue, 8> Masks;
+  for (unsigned i = 0; i < NumElts; ++i)
+    Masks.push_back(Mask[i] == -1 ? DAG.getUNDEF(MVT::i64)
+                                  : DAG.getConstant(Mask[i], DL, MVT::i64));
+  SDValue MaskVec = DAG.getBuildVector(MVT::v8i32, DL, Masks);
+
+  return DAG.getNode(LoongArchISD::XVPERM, DL, VT, V1, MaskVec);
+}
+
 /// Lower VECTOR_SHUFFLE into XVPACKEV (if possible).
 static SDValue lowerVECTOR_SHUFFLE_XVPACKEV(const SDLoc &DL, ArrayRef<int> Mask,
                                             MVT VT, SDValue V1, SDValue V2,
@@ -2235,6 +2277,8 @@ static SDValue lower256BitShuffle(const SDLoc &DL, ArrayRef<int> Mask, MVT VT,
       return Result;
     if ((Result = lowerVECTOR_SHUFFLE_XVSHUF4I(DL, NewMask, VT, V1, V2, DAG)))
       return Result;
+    if ((Result = lowerVECTOR_SHUFFLE_XVPERM(DL, NewMask, VT, V1, V2, DAG)))
+      return Result;
     if ((Result = lowerVECTOR_SHUFFLEAsLanePermuteAndShuffle(DL, NewMask, VT,
                                                              V1, V2, DAG)))
       return Result;
diff --git a/llvm/test/CodeGen/LoongArch/lasx/shuffle-as-permute-and-shuffle.ll b/llvm/test/CodeGen/LoongArch/lasx/shuffle-as-permute-and-shuffle.ll
index fed085843485a..5f76d9951df9c 100644
--- a/llvm/test/CodeGen/LoongArch/lasx/shuffle-as-permute-and-shuffle.ll
+++ b/llvm/test/CodeGen/LoongArch/lasx/shuffle-as-permute-and-shuffle.ll
@@ -61,13 +61,8 @@ define <8 x i32> @shuffle_v8i32(<8 x i32> %a) {
 ; CHECK-LABEL: shuffle_v8i32:
 ; CHECK:       # %bb.0:
 ; CHECK-NEXT:    pcalau12i $a0, %pc_hi20(.LCPI4_0)
-; CHECK-NEXT:    xvld $xr2, $a0, %pc_lo12(.LCPI4_0)
-; CHECK-NEXT:    pcalau12i $a0, %pc_hi20(.LCPI4_1)
-; CHECK-NEXT:    xvld $xr1, $a0, %pc_lo12(.LCPI4_1)
-; CHECK-NEXT:    xvpermi.d $xr3, $xr0, 78
-; CHECK-NEXT:    xvshuf.d $xr2, $xr0, $xr3
-; CHECK-NEXT:    xvshuf.d $xr1, $xr2, $xr0
-; CHECK-NEXT:    xvori.b $xr0, $xr1, 0
+; CHECK-NEXT:    xvld $xr1, $a0, %pc_lo12(.LCPI4_0)
+; CHECK-NEXT:    xvperm.w $xr0, $xr0, $xr1
 ; CHECK-NEXT:    ret
   %shuffle = shufflevector <8 x i32> %a, <8 x i32> poison, <8 x i32> <i32 4, i32 5, i32 0, i32 1, i32 4, i32 5, i32 6, i32 7>
   ret <8 x i32> %shuffle
@@ -117,13 +112,8 @@ define <8 x float> @shuffle_v8f32(<8 x float> %a) {
 ; CHECK-LABEL: shuffle_v8f32:
 ; CHECK:       # %bb.0:
 ; CHECK-NEXT:    pcalau12i $a0, %pc_hi20(.LCPI8_0)
-; CHECK-NEXT:    xvld $xr2, $a0, %pc_lo12(.LCPI8_0)
-; CHECK-NEXT:    pcalau12i $a0, %pc_hi20(.LCPI8_1)
-; CHECK-NEXT:    xvld $xr1, $a0, %pc_lo12(.LCPI8_1)
-; CHECK-NEXT:    xvpermi.d $xr3, $xr0, 78
-; CHECK-NEXT:    xvshuf.d $xr2, $xr0, $xr3
-; CHECK-NEXT:    xvshuf.d $xr1, $xr2, $xr0
-; CHECK-NEXT:    xvori.b $xr0, $xr1, 0
+; CHECK-NEXT:    xvld $xr1, $a0, %pc_lo12(.LCPI8_0)
+; CHECK-NEXT:    xvperm.w $xr0, $xr0, $xr1
 ; CHECK-NEXT:    ret
   %shuffle = shufflevector <8 x float> %a, <8 x float> poison, <8 x i32> <i32 4, i32 5, i32 0, i32 1, i32 4, i32 5, i32 6, i32 7>
   ret <8 x float> %shuffle
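
To illustrate the new heuristic, here is a minimal standalone sketch of the half-access check performed by lowerVECTOR_SHUFFLE_XVPERM. The helper name isXVPermWCandidate and the driver are illustrative, not part of the patch (plain ints stand in for the SelectionDAG/MVT types); the bail-out condition mirrors the one added above.

#include <array>
#include <cstdio>

// Returns true if an 8-element v8i32/v8f32 shuffle mask is worth lowering
// to a single xvperm.w, i.e. the bail-out condition in the patch is false.
static bool isXVPermWCandidate(const std::array<int, 8> &Mask) {
  const int NumElts = 8;
  const int HalfSize = NumElts / 2;
  bool FrontLo = true, FrontHi = true;
  bool BackLo = true, BackHi = true;

  // -1 marks an undef lane and is compatible with either half.
  auto inRange = [](int Val, int Low, int High) {
    return Val == -1 || (Val >= Low && Val < High);
  };

  for (int i = 0; i < HalfSize; ++i) {
    FrontLo &= inRange(Mask[i], 0, HalfSize);
    FrontHi &= inRange(Mask[i], HalfSize, NumElts);
    BackLo &= inRange(Mask[i + HalfSize], 0, HalfSize);
    BackHi &= inRange(Mask[i + HalfSize], HalfSize, NumElts);
  }

  // If each 128-bit half of the result reads from only one half of the
  // source, xvshuf (latency 1) plus xvori (latency 1) beats xvperm.w
  // (latency 3), so the xvperm.w path is skipped.
  if ((FrontLo && (BackLo || BackHi)) || (FrontHi && (BackLo || BackHi)))
    return false;
  return true;
}

int main() {
  // Mask from the updated shuffle_v8i32 test: the lower half of the result
  // mixes elements from both source halves, so xvperm.w is used.
  std::printf("%d\n", isXVPermWCandidate({4, 5, 0, 1, 4, 5, 6, 7})); // 1
  // Each half of the result reads only the low source half: skip xvperm.w.
  std::printf("%d\n", isXVPermWCandidate({1, 0, 2, 3, 3, 2, 1, 0})); // 0
  return 0;
}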

@tangaac (Member) commented Aug 1, 2025

tangaac/loong-opt-cov-ts@51dae8b

Base automatically changed from users/zhaoqi5/test-permute-and-shuffle-samelane to users/zhaoqi5/opt-extractelement-idx August 9, 2025 10:11
@zhaoqi5 zhaoqi5 force-pushed the users/zhaoqi5/opt-extractelement-idx branch from d764815 to f8b7d4c on August 9, 2025 11:07
@zhaoqi5 zhaoqi5 force-pushed the users/zhaoqi5/opt-xvperm branch from 875f353 to f934beb on August 9, 2025 11:13
@zhaoqi5 (Contributor Author) commented Sep 2, 2025

Ping.


Review comment on the new heuristic comment in LoongArchISelLowering.cpp:

// If both the lower and upper 128-bit parts access only one half of the
// vector (either lower or upper), avoid using xvperm.w. The latency of
// xvperm.w(3) is higher than using xvshuf(1) and xvori(1).

Member commented:

For a shuffle that swaps the upper and lower 128-bit halves, xvperm.w alone should be enough and likely faster.

define <8 x i32> @shuffle_v8i32(<8 x i32> %a) {
  %shuffle = shufflevector <8 x i32> %a, <8 x i32> poison, <8 x i32> <i32 5, i32 4, i32 6, i32 7, i32 3, i32 2, i32 0, i32 1>
  ret <8 x i32> %shuffle
}

@zhaoqi5 (Contributor Author) replied:

Yes, xvperm.w alone is enough for this case.

But when legalizing the vector_shuffle, canonicalizeShuffleVectorByLane() first generates an xvpermi.d to avoid cross-lane accesses (similar to the discussion in #151633 (comment)). By the time we get here, the mask has already been changed to 1,0,2,3,3,2,1,0, and the already-generated xvpermi.d cannot be removed anyway. I haven't come up with a good solution for this yet.

However, based solely on the mask obtained at this point, it is still necessary to avoid the conversion here.
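
To make the mask change concrete, here is a small standalone sketch (the halvesRead helper and driver are illustrative, not LLVM code) that classifies which half of the source each 128-bit half of the result reads, both for the original mask in the example above and for the canonicalized mask quoted in this reply:

#include <array>
#include <cstdio>

// Returns "low", "high" or "both" for a 4-element half of a v8i32 mask.
static const char *halvesRead(const std::array<int, 4> &Half) {
  bool Lo = false, Hi = false;
  for (int M : Half) {
    if (M < 0)
      continue; // undef lane, ignore
    (M < 4 ? Lo : Hi) = true;
  }
  return (Lo && Hi) ? "both" : (Hi ? "high" : "low");
}

int main() {
  // Original mask from the example above: <5,4,6,7,3,2,0,1>.
  std::printf("original:      low half reads %s, high half reads %s\n",
              halvesRead({5, 4, 6, 7}), halvesRead({3, 2, 0, 1}));
  // Mask seen by lowerVECTOR_SHUFFLE_XVPERM after the xvpermi.d emitted by
  // canonicalizeShuffleVectorByLane(): 1,0,2,3,3,2,1,0. Both result halves
  // now read only the low half, so the new check skips xvperm.w.
  std::printf("canonicalized: low half reads %s, high half reads %s\n",
              halvesRead({1, 0, 2, 3}), halvesRead({3, 2, 1, 0}));
  return 0;
}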

Member replied:

If it’s not something that can be done quickly, feel free to handle it in a separate PR. LGTM

@zhaoqi5 (Contributor Author) replied:

OK. Thanks for your review.

@zhaoqi5 zhaoqi5 force-pushed the users/zhaoqi5/opt-extractelement-idx branch from f8b7d4c to 2ffde95 on September 2, 2025 09:05
@zhaoqi5 zhaoqi5 force-pushed the users/zhaoqi5/opt-xvperm branch from f934beb to 619a9c4 on September 2, 2025 09:09
@tangaac (Member) left a comment:

LGTM

Base automatically changed from users/zhaoqi5/opt-extractelement-idx to main September 4, 2025 01:27
@zhaoqi5 zhaoqi5 force-pushed the users/zhaoqi5/opt-xvperm branch from 619a9c4 to b9b2cc4 on September 4, 2025 01:31
@zhaoqi5 zhaoqi5 merged commit d7a3ab2 into main Sep 4, 2025
9 checks passed
@zhaoqi5 zhaoqi5 deleted the users/zhaoqi5/opt-xvperm branch September 4, 2025 02:32