
Conversation

@RKSimon
Collaborator

@RKSimon RKSimon commented Nov 5, 2024

If we're only demanding the LSB of a SRL node and that is shifting down an extended sign bit, see if we can change the SRL to shift down the MSB directly.

These patterns can occur during legalisation when we've sign extended to a wider type but the SRL is still shifting from the subreg.

There's potentially a more general fold we could do here if we're just shifting a block of sign bits, but we only seem to currently benefit from demanding just the MSB, as this is a pretty common pattern for other folds.

Fixes the remaining regression in #112588
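
As a concrete illustration of the bit identity behind the fold (a standalone model, not code from this patch; the i8-sign-extended-to-i32 setup and the 25 sign bits are just an assumed example):

// Model of the fold: if bit ShAmt of X is known to be a copy of the sign bit
// (i.e. ShAmt >= BitWidth - NumSignBits), then the LSB of (X >> ShAmt) equals
// (X >> (BitWidth - 1)), i.e. the MSB shifted all the way down.
#include <cassert>
#include <cstdint>

int main() {
  const int8_t Vals[] = {-42, 42};
  for (int8_t V : Vals) {
    // An i8 value sign-extended to i32, so bits 7..31 are all sign copies
    // (ComputeNumSignBits would report 25 here).
    uint32_t X = uint32_t(int32_t(V));
    unsigned BitWidth = 32, NumSignBits = 25;
    for (unsigned ShAmt = BitWidth - NumSignBits; ShAmt < BitWidth - 1; ++ShAmt)
      assert(((X >> ShAmt) & 1u) == (X >> (BitWidth - 1)));
  }
  return 0;
}

This is why the test diffs below end up replacing the narrow bit extracts (the ubfx/rlwinm/slli+srli pairs) with a plain shift by BitWidth - 1.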

… down sign bit, try to shift down the MSB directly

If we're only demanding the LSB of a SRL node and that is shifting down an extended sign bit, see if we can change the SRL to shift down the MSB directly.

These patterns can occur during legalisation when we've sign extended to a wider type but the SRL is still shifting from the subreg.

There's potentially a more general fold we could do here if we're just shifting a block of sign bits, but we only seem to currently benefit from demanding just the MSB, as this is a pretty common pattern for other folds.

Fixes the remaining regression in llvm#112588
@llvmbot
Member

llvmbot commented Nov 5, 2024

@llvm/pr-subscribers-llvm-selectiondag
@llvm/pr-subscribers-backend-aarch64

@llvm/pr-subscribers-backend-powerpc

Author: Simon Pilgrim (RKSimon)

Changes

If we're only demanding the LSB of a SRL node and that is shifting down an extended sign bit, see if we can change the SRL to shift down the MSB directly.

These patterns can occur during legalisation when we've sign extended to a wider type but the SRL is still shifting from the subreg.

There's potentially a more general fold we could do here if we're just shifting a block of sign bits, but we only seem to currently benefit from demanding just the MSB, as this is a pretty common pattern for other folds.

Fixes the remaining regression in #112588


Patch is 20.77 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/114967.diff

10 Files Affected:

  • (modified) llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp (+16)
  • (modified) llvm/test/CodeGen/AArch64/srem-seteq-illegal-types.ll (+2-2)
  • (modified) llvm/test/CodeGen/ARM/srem-seteq-illegal-types.ll (+12-14)
  • (modified) llvm/test/CodeGen/Mips/srem-seteq-illegal-types.ll (+10-12)
  • (modified) llvm/test/CodeGen/PowerPC/srem-seteq-illegal-types.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/div-by-constant.ll (+28-36)
  • (modified) llvm/test/CodeGen/RISCV/div.ll (+7-9)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll (+20-22)
  • (modified) llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll (+12-16)
  • (modified) llvm/test/CodeGen/Thumb2/srem-seteq-illegal-types.ll (+2-2)
diff --git a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
index a16ec19e7a6888..05b00ec1ff543d 100644
--- a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
@@ -1978,6 +1978,22 @@ bool TargetLowering::SimplifyDemandedBits(
         }
       }
 
+      // If we are shifting down an extended sign bit, see if we can simplify
+      // this to shifting the MSB directly to expose further simplifications.
+      // This pattern often appears after sext_inreg legalization.
+      // 
+      // NOTE: We might be able to generalize this and merge with the SRA fold
+      // above, but there are currently regressions.
+      if (DemandedBits == 1 && (BitWidth - 1) > ShAmt) {
+        unsigned NumSignBits =
+            TLO.DAG.ComputeNumSignBits(Op0, DemandedElts, Depth + 1);
+        if (ShAmt >= (BitWidth - NumSignBits))
+          return TLO.CombineTo(
+              Op, TLO.DAG.getNode(
+                      ISD::SRL, dl, VT, Op0,
+                      TLO.DAG.getShiftAmountConstant(BitWidth - 1, VT, dl)));
+      }
+
       APInt InDemandedMask = (DemandedBits << ShAmt);
 
       // If the shift is exact, then it does demand the low bits (and knows that
diff --git a/llvm/test/CodeGen/AArch64/srem-seteq-illegal-types.ll b/llvm/test/CodeGen/AArch64/srem-seteq-illegal-types.ll
index 9fbce05eee1775..884d668157e5f7 100644
--- a/llvm/test/CodeGen/AArch64/srem-seteq-illegal-types.ll
+++ b/llvm/test/CodeGen/AArch64/srem-seteq-illegal-types.ll
@@ -25,8 +25,8 @@ define i1 @test_srem_even(i4 %X) nounwind {
 ; CHECK:       // %bb.0:
 ; CHECK-NEXT:    sbfx w8, w0, #0, #4
 ; CHECK-NEXT:    add w8, w8, w8, lsl #1
-; CHECK-NEXT:    ubfx w9, w8, #7, #1
-; CHECK-NEXT:    add w8, w9, w8, lsr #4
+; CHECK-NEXT:    lsr w9, w8, #4
+; CHECK-NEXT:    add w8, w9, w8, lsr #31
 ; CHECK-NEXT:    mov w9, #6 // =0x6
 ; CHECK-NEXT:    msub w8, w8, w9, w0
 ; CHECK-NEXT:    and w8, w8, #0xf
diff --git a/llvm/test/CodeGen/ARM/srem-seteq-illegal-types.ll b/llvm/test/CodeGen/ARM/srem-seteq-illegal-types.ll
index 7f56215b9b4123..973362462f7355 100644
--- a/llvm/test/CodeGen/ARM/srem-seteq-illegal-types.ll
+++ b/llvm/test/CodeGen/ARM/srem-seteq-illegal-types.ll
@@ -115,11 +115,10 @@ define i1 @test_srem_even(i4 %X) nounwind {
 ; ARM5-LABEL: test_srem_even:
 ; ARM5:       @ %bb.0:
 ; ARM5-NEXT:    lsl r1, r0, #28
-; ARM5-NEXT:    mov r2, #1
 ; ARM5-NEXT:    asr r1, r1, #28
 ; ARM5-NEXT:    add r1, r1, r1, lsl #1
-; ARM5-NEXT:    and r2, r2, r1, lsr #7
-; ARM5-NEXT:    add r1, r2, r1, lsr #4
+; ARM5-NEXT:    lsr r2, r1, #4
+; ARM5-NEXT:    add r1, r2, r1, lsr #31
 ; ARM5-NEXT:    add r1, r1, r1, lsl #1
 ; ARM5-NEXT:    sub r0, r0, r1, lsl #1
 ; ARM5-NEXT:    and r0, r0, #15
@@ -131,11 +130,10 @@ define i1 @test_srem_even(i4 %X) nounwind {
 ; ARM6-LABEL: test_srem_even:
 ; ARM6:       @ %bb.0:
 ; ARM6-NEXT:    lsl r1, r0, #28
-; ARM6-NEXT:    mov r2, #1
 ; ARM6-NEXT:    asr r1, r1, #28
 ; ARM6-NEXT:    add r1, r1, r1, lsl #1
-; ARM6-NEXT:    and r2, r2, r1, lsr #7
-; ARM6-NEXT:    add r1, r2, r1, lsr #4
+; ARM6-NEXT:    lsr r2, r1, #4
+; ARM6-NEXT:    add r1, r2, r1, lsr #31
 ; ARM6-NEXT:    add r1, r1, r1, lsl #1
 ; ARM6-NEXT:    sub r0, r0, r1, lsl #1
 ; ARM6-NEXT:    and r0, r0, #15
@@ -148,8 +146,8 @@ define i1 @test_srem_even(i4 %X) nounwind {
 ; ARM7:       @ %bb.0:
 ; ARM7-NEXT:    sbfx r1, r0, #0, #4
 ; ARM7-NEXT:    add r1, r1, r1, lsl #1
-; ARM7-NEXT:    ubfx r2, r1, #7, #1
-; ARM7-NEXT:    add r1, r2, r1, lsr #4
+; ARM7-NEXT:    lsr r2, r1, #4
+; ARM7-NEXT:    add r1, r2, r1, lsr #31
 ; ARM7-NEXT:    add r1, r1, r1, lsl #1
 ; ARM7-NEXT:    sub r0, r0, r1, lsl #1
 ; ARM7-NEXT:    and r0, r0, #15
@@ -162,8 +160,8 @@ define i1 @test_srem_even(i4 %X) nounwind {
 ; ARM8:       @ %bb.0:
 ; ARM8-NEXT:    sbfx r1, r0, #0, #4
 ; ARM8-NEXT:    add r1, r1, r1, lsl #1
-; ARM8-NEXT:    ubfx r2, r1, #7, #1
-; ARM8-NEXT:    add r1, r2, r1, lsr #4
+; ARM8-NEXT:    lsr r2, r1, #4
+; ARM8-NEXT:    add r1, r2, r1, lsr #31
 ; ARM8-NEXT:    add r1, r1, r1, lsl #1
 ; ARM8-NEXT:    sub r0, r0, r1, lsl #1
 ; ARM8-NEXT:    and r0, r0, #15
@@ -176,8 +174,8 @@ define i1 @test_srem_even(i4 %X) nounwind {
 ; NEON7:       @ %bb.0:
 ; NEON7-NEXT:    sbfx r1, r0, #0, #4
 ; NEON7-NEXT:    add r1, r1, r1, lsl #1
-; NEON7-NEXT:    ubfx r2, r1, #7, #1
-; NEON7-NEXT:    add r1, r2, r1, lsr #4
+; NEON7-NEXT:    lsr r2, r1, #4
+; NEON7-NEXT:    add r1, r2, r1, lsr #31
 ; NEON7-NEXT:    add r1, r1, r1, lsl #1
 ; NEON7-NEXT:    sub r0, r0, r1, lsl #1
 ; NEON7-NEXT:    and r0, r0, #15
@@ -190,8 +188,8 @@ define i1 @test_srem_even(i4 %X) nounwind {
 ; NEON8:       @ %bb.0:
 ; NEON8-NEXT:    sbfx r1, r0, #0, #4
 ; NEON8-NEXT:    add r1, r1, r1, lsl #1
-; NEON8-NEXT:    ubfx r2, r1, #7, #1
-; NEON8-NEXT:    add r1, r2, r1, lsr #4
+; NEON8-NEXT:    lsr r2, r1, #4
+; NEON8-NEXT:    add r1, r2, r1, lsr #31
 ; NEON8-NEXT:    add r1, r1, r1, lsl #1
 ; NEON8-NEXT:    sub r0, r0, r1, lsl #1
 ; NEON8-NEXT:    and r0, r0, #15
diff --git a/llvm/test/CodeGen/Mips/srem-seteq-illegal-types.ll b/llvm/test/CodeGen/Mips/srem-seteq-illegal-types.ll
index 37cca8687890a6..f4c78fb0fe160e 100644
--- a/llvm/test/CodeGen/Mips/srem-seteq-illegal-types.ll
+++ b/llvm/test/CodeGen/Mips/srem-seteq-illegal-types.ll
@@ -47,17 +47,16 @@ define i1 @test_srem_even(i4 %X) nounwind {
 ; MIPSEL-NEXT:    sra $1, $1, 28
 ; MIPSEL-NEXT:    sll $2, $1, 1
 ; MIPSEL-NEXT:    addu $1, $2, $1
-; MIPSEL-NEXT:    srl $2, $1, 4
-; MIPSEL-NEXT:    srl $1, $1, 7
-; MIPSEL-NEXT:    andi $1, $1, 1
-; MIPSEL-NEXT:    addiu $3, $zero, 1
-; MIPSEL-NEXT:    addu $1, $2, $1
-; MIPSEL-NEXT:    sll $2, $1, 1
-; MIPSEL-NEXT:    sll $1, $1, 2
+; MIPSEL-NEXT:    srl $2, $1, 31
+; MIPSEL-NEXT:    srl $1, $1, 4
 ; MIPSEL-NEXT:    addu $1, $1, $2
+; MIPSEL-NEXT:    addiu $2, $zero, 1
+; MIPSEL-NEXT:    sll $3, $1, 1
+; MIPSEL-NEXT:    sll $1, $1, 2
+; MIPSEL-NEXT:    addu $1, $1, $3
 ; MIPSEL-NEXT:    subu $1, $4, $1
 ; MIPSEL-NEXT:    andi $1, $1, 15
-; MIPSEL-NEXT:    xor $1, $1, $3
+; MIPSEL-NEXT:    xor $1, $1, $2
 ; MIPSEL-NEXT:    jr $ra
 ; MIPSEL-NEXT:    sltiu $2, $1, 1
 ;
@@ -69,10 +68,9 @@ define i1 @test_srem_even(i4 %X) nounwind {
 ; MIPS64EL-NEXT:    sll $3, $2, 1
 ; MIPS64EL-NEXT:    addu $2, $3, $2
 ; MIPS64EL-NEXT:    addiu $3, $zero, 1
-; MIPS64EL-NEXT:    srl $4, $2, 4
-; MIPS64EL-NEXT:    srl $2, $2, 7
-; MIPS64EL-NEXT:    andi $2, $2, 1
-; MIPS64EL-NEXT:    addu $2, $4, $2
+; MIPS64EL-NEXT:    srl $4, $2, 31
+; MIPS64EL-NEXT:    srl $2, $2, 4
+; MIPS64EL-NEXT:    addu $2, $2, $4
 ; MIPS64EL-NEXT:    sll $4, $2, 1
 ; MIPS64EL-NEXT:    sll $2, $2, 2
 ; MIPS64EL-NEXT:    addu $2, $2, $4
diff --git a/llvm/test/CodeGen/PowerPC/srem-seteq-illegal-types.ll b/llvm/test/CodeGen/PowerPC/srem-seteq-illegal-types.ll
index 2b07f27be021b1..18b07b2aa5cec3 100644
--- a/llvm/test/CodeGen/PowerPC/srem-seteq-illegal-types.ll
+++ b/llvm/test/CodeGen/PowerPC/srem-seteq-illegal-types.ll
@@ -46,7 +46,7 @@ define i1 @test_srem_even(i4 %X) nounwind {
 ; PPC-NEXT:    slwi 4, 3, 28
 ; PPC-NEXT:    srawi 4, 4, 28
 ; PPC-NEXT:    mulli 4, 4, 3
-; PPC-NEXT:    rlwinm 5, 4, 25, 31, 31
+; PPC-NEXT:    srwi 5, 4, 31
 ; PPC-NEXT:    srwi 4, 4, 4
 ; PPC-NEXT:    add 4, 4, 5
 ; PPC-NEXT:    mulli 4, 4, 6
@@ -65,7 +65,7 @@ define i1 @test_srem_even(i4 %X) nounwind {
 ; PPC64LE-NEXT:    srawi 4, 4, 28
 ; PPC64LE-NEXT:    slwi 5, 4, 1
 ; PPC64LE-NEXT:    add 4, 4, 5
-; PPC64LE-NEXT:    rlwinm 5, 4, 25, 31, 31
+; PPC64LE-NEXT:    srwi 5, 4, 31
 ; PPC64LE-NEXT:    srwi 4, 4, 4
 ; PPC64LE-NEXT:    add 4, 4, 5
 ; PPC64LE-NEXT:    mulli 4, 4, 6
diff --git a/llvm/test/CodeGen/RISCV/div-by-constant.ll b/llvm/test/CodeGen/RISCV/div-by-constant.ll
index 91ac7c5ddae3ff..3d9fb91e3adf82 100644
--- a/llvm/test/CodeGen/RISCV/div-by-constant.ll
+++ b/llvm/test/CodeGen/RISCV/div-by-constant.ll
@@ -488,10 +488,9 @@ define i8 @sdiv8_constant_no_srai(i8 %a) nounwind {
 ; RV32IM-NEXT:    srai a0, a0, 24
 ; RV32IM-NEXT:    li a1, 86
 ; RV32IM-NEXT:    mul a0, a0, a1
-; RV32IM-NEXT:    srli a1, a0, 8
-; RV32IM-NEXT:    slli a0, a0, 16
-; RV32IM-NEXT:    srli a0, a0, 31
-; RV32IM-NEXT:    add a0, a1, a0
+; RV32IM-NEXT:    srli a1, a0, 31
+; RV32IM-NEXT:    srli a0, a0, 8
+; RV32IM-NEXT:    add a0, a0, a1
 ; RV32IM-NEXT:    ret
 ;
 ; RV32IMZB-LABEL: sdiv8_constant_no_srai:
@@ -499,10 +498,9 @@ define i8 @sdiv8_constant_no_srai(i8 %a) nounwind {
 ; RV32IMZB-NEXT:    sext.b a0, a0
 ; RV32IMZB-NEXT:    li a1, 86
 ; RV32IMZB-NEXT:    mul a0, a0, a1
-; RV32IMZB-NEXT:    srli a1, a0, 8
-; RV32IMZB-NEXT:    slli a0, a0, 16
-; RV32IMZB-NEXT:    srli a0, a0, 31
-; RV32IMZB-NEXT:    add a0, a1, a0
+; RV32IMZB-NEXT:    srli a1, a0, 31
+; RV32IMZB-NEXT:    srli a0, a0, 8
+; RV32IMZB-NEXT:    add a0, a0, a1
 ; RV32IMZB-NEXT:    ret
 ;
 ; RV64IM-LABEL: sdiv8_constant_no_srai:
@@ -511,10 +509,9 @@ define i8 @sdiv8_constant_no_srai(i8 %a) nounwind {
 ; RV64IM-NEXT:    srai a0, a0, 56
 ; RV64IM-NEXT:    li a1, 86
 ; RV64IM-NEXT:    mul a0, a0, a1
-; RV64IM-NEXT:    srli a1, a0, 8
-; RV64IM-NEXT:    slli a0, a0, 48
-; RV64IM-NEXT:    srli a0, a0, 63
-; RV64IM-NEXT:    add a0, a1, a0
+; RV64IM-NEXT:    srli a1, a0, 63
+; RV64IM-NEXT:    srli a0, a0, 8
+; RV64IM-NEXT:    add a0, a0, a1
 ; RV64IM-NEXT:    ret
 ;
 ; RV64IMZB-LABEL: sdiv8_constant_no_srai:
@@ -522,10 +519,9 @@ define i8 @sdiv8_constant_no_srai(i8 %a) nounwind {
 ; RV64IMZB-NEXT:    sext.b a0, a0
 ; RV64IMZB-NEXT:    li a1, 86
 ; RV64IMZB-NEXT:    mul a0, a0, a1
-; RV64IMZB-NEXT:    srli a1, a0, 8
-; RV64IMZB-NEXT:    slli a0, a0, 48
-; RV64IMZB-NEXT:    srli a0, a0, 63
-; RV64IMZB-NEXT:    add a0, a1, a0
+; RV64IMZB-NEXT:    srli a1, a0, 63
+; RV64IMZB-NEXT:    srli a0, a0, 8
+; RV64IMZB-NEXT:    add a0, a0, a1
 ; RV64IMZB-NEXT:    ret
   %1 = sdiv i8 %a, 3
   ret i8 %1
@@ -538,10 +534,9 @@ define i8 @sdiv8_constant_srai(i8 %a) nounwind {
 ; RV32IM-NEXT:    srai a0, a0, 24
 ; RV32IM-NEXT:    li a1, 103
 ; RV32IM-NEXT:    mul a0, a0, a1
-; RV32IM-NEXT:    srai a1, a0, 9
-; RV32IM-NEXT:    slli a0, a0, 16
-; RV32IM-NEXT:    srli a0, a0, 31
-; RV32IM-NEXT:    add a0, a1, a0
+; RV32IM-NEXT:    srli a1, a0, 31
+; RV32IM-NEXT:    srai a0, a0, 9
+; RV32IM-NEXT:    add a0, a0, a1
 ; RV32IM-NEXT:    ret
 ;
 ; RV32IMZB-LABEL: sdiv8_constant_srai:
@@ -549,10 +544,9 @@ define i8 @sdiv8_constant_srai(i8 %a) nounwind {
 ; RV32IMZB-NEXT:    sext.b a0, a0
 ; RV32IMZB-NEXT:    li a1, 103
 ; RV32IMZB-NEXT:    mul a0, a0, a1
-; RV32IMZB-NEXT:    srai a1, a0, 9
-; RV32IMZB-NEXT:    slli a0, a0, 16
-; RV32IMZB-NEXT:    srli a0, a0, 31
-; RV32IMZB-NEXT:    add a0, a1, a0
+; RV32IMZB-NEXT:    srli a1, a0, 31
+; RV32IMZB-NEXT:    srai a0, a0, 9
+; RV32IMZB-NEXT:    add a0, a0, a1
 ; RV32IMZB-NEXT:    ret
 ;
 ; RV64IM-LABEL: sdiv8_constant_srai:
@@ -561,10 +555,9 @@ define i8 @sdiv8_constant_srai(i8 %a) nounwind {
 ; RV64IM-NEXT:    srai a0, a0, 56
 ; RV64IM-NEXT:    li a1, 103
 ; RV64IM-NEXT:    mul a0, a0, a1
-; RV64IM-NEXT:    srai a1, a0, 9
-; RV64IM-NEXT:    slli a0, a0, 48
-; RV64IM-NEXT:    srli a0, a0, 63
-; RV64IM-NEXT:    add a0, a1, a0
+; RV64IM-NEXT:    srli a1, a0, 63
+; RV64IM-NEXT:    srai a0, a0, 9
+; RV64IM-NEXT:    add a0, a0, a1
 ; RV64IM-NEXT:    ret
 ;
 ; RV64IMZB-LABEL: sdiv8_constant_srai:
@@ -572,10 +565,9 @@ define i8 @sdiv8_constant_srai(i8 %a) nounwind {
 ; RV64IMZB-NEXT:    sext.b a0, a0
 ; RV64IMZB-NEXT:    li a1, 103
 ; RV64IMZB-NEXT:    mul a0, a0, a1
-; RV64IMZB-NEXT:    srai a1, a0, 9
-; RV64IMZB-NEXT:    slli a0, a0, 48
-; RV64IMZB-NEXT:    srli a0, a0, 63
-; RV64IMZB-NEXT:    add a0, a1, a0
+; RV64IMZB-NEXT:    srli a1, a0, 63
+; RV64IMZB-NEXT:    srai a0, a0, 9
+; RV64IMZB-NEXT:    add a0, a0, a1
 ; RV64IMZB-NEXT:    ret
   %1 = sdiv i8 %a, 5
   ret i8 %1
@@ -728,7 +720,7 @@ define i16 @sdiv16_constant_no_srai(i16 %a) nounwind {
 ; RV64IM-NEXT:    lui a1, 5
 ; RV64IM-NEXT:    addiw a1, a1, 1366
 ; RV64IM-NEXT:    mul a0, a0, a1
-; RV64IM-NEXT:    srliw a1, a0, 31
+; RV64IM-NEXT:    srli a1, a0, 63
 ; RV64IM-NEXT:    srli a0, a0, 16
 ; RV64IM-NEXT:    add a0, a0, a1
 ; RV64IM-NEXT:    ret
@@ -739,7 +731,7 @@ define i16 @sdiv16_constant_no_srai(i16 %a) nounwind {
 ; RV64IMZB-NEXT:    lui a1, 5
 ; RV64IMZB-NEXT:    addiw a1, a1, 1366
 ; RV64IMZB-NEXT:    mul a0, a0, a1
-; RV64IMZB-NEXT:    srliw a1, a0, 31
+; RV64IMZB-NEXT:    srli a1, a0, 63
 ; RV64IMZB-NEXT:    srli a0, a0, 16
 ; RV64IMZB-NEXT:    add a0, a0, a1
 ; RV64IMZB-NEXT:    ret
@@ -778,7 +770,7 @@ define i16 @sdiv16_constant_srai(i16 %a) nounwind {
 ; RV64IM-NEXT:    lui a1, 6
 ; RV64IM-NEXT:    addiw a1, a1, 1639
 ; RV64IM-NEXT:    mul a0, a0, a1
-; RV64IM-NEXT:    srliw a1, a0, 31
+; RV64IM-NEXT:    srli a1, a0, 63
 ; RV64IM-NEXT:    srai a0, a0, 17
 ; RV64IM-NEXT:    add a0, a0, a1
 ; RV64IM-NEXT:    ret
@@ -789,7 +781,7 @@ define i16 @sdiv16_constant_srai(i16 %a) nounwind {
 ; RV64IMZB-NEXT:    lui a1, 6
 ; RV64IMZB-NEXT:    addiw a1, a1, 1639
 ; RV64IMZB-NEXT:    mul a0, a0, a1
-; RV64IMZB-NEXT:    srliw a1, a0, 31
+; RV64IMZB-NEXT:    srli a1, a0, 63
 ; RV64IMZB-NEXT:    srai a0, a0, 17
 ; RV64IMZB-NEXT:    add a0, a0, a1
 ; RV64IMZB-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/div.ll b/llvm/test/CodeGen/RISCV/div.ll
index f4e67698473151..e94efbea8376d5 100644
--- a/llvm/test/CodeGen/RISCV/div.ll
+++ b/llvm/test/CodeGen/RISCV/div.ll
@@ -980,10 +980,9 @@ define i8 @sdiv8_constant(i8 %a) nounwind {
 ; RV32IM-NEXT:    srai a0, a0, 24
 ; RV32IM-NEXT:    li a1, 103
 ; RV32IM-NEXT:    mul a0, a0, a1
-; RV32IM-NEXT:    srai a1, a0, 9
-; RV32IM-NEXT:    slli a0, a0, 16
-; RV32IM-NEXT:    srli a0, a0, 31
-; RV32IM-NEXT:    add a0, a1, a0
+; RV32IM-NEXT:    srli a1, a0, 31
+; RV32IM-NEXT:    srai a0, a0, 9
+; RV32IM-NEXT:    add a0, a0, a1
 ; RV32IM-NEXT:    ret
 ;
 ; RV64I-LABEL: sdiv8_constant:
@@ -1004,10 +1003,9 @@ define i8 @sdiv8_constant(i8 %a) nounwind {
 ; RV64IM-NEXT:    srai a0, a0, 56
 ; RV64IM-NEXT:    li a1, 103
 ; RV64IM-NEXT:    mul a0, a0, a1
-; RV64IM-NEXT:    srai a1, a0, 9
-; RV64IM-NEXT:    slli a0, a0, 48
-; RV64IM-NEXT:    srli a0, a0, 63
-; RV64IM-NEXT:    add a0, a1, a0
+; RV64IM-NEXT:    srli a1, a0, 63
+; RV64IM-NEXT:    srai a0, a0, 9
+; RV64IM-NEXT:    add a0, a0, a1
 ; RV64IM-NEXT:    ret
   %1 = sdiv i8 %a, 5
   ret i8 %1
@@ -1193,7 +1191,7 @@ define i16 @sdiv16_constant(i16 %a) nounwind {
 ; RV64IM-NEXT:    lui a1, 6
 ; RV64IM-NEXT:    addiw a1, a1, 1639
 ; RV64IM-NEXT:    mul a0, a0, a1
-; RV64IM-NEXT:    srliw a1, a0, 31
+; RV64IM-NEXT:    srli a1, a0, 63
 ; RV64IM-NEXT:    srai a0, a0, 17
 ; RV64IM-NEXT:    add a0, a0, a1
 ; RV64IM-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll
index f32795ed03c7ec..df7b3eb8d45480 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll
@@ -8,34 +8,33 @@ define <8 x i1> @v8i1_v16i1(<16 x i1>) {
 ; RV32:       # %bb.0:
 ; RV32-NEXT:    vsetivli zero, 1, e16, m1, ta, ma
 ; RV32-NEXT:    vmv.x.s a0, v0
-; RV32-NEXT:    slli a1, a0, 19
+; RV32-NEXT:    slli a1, a0, 18
 ; RV32-NEXT:    srli a1, a1, 31
-; RV32-NEXT:    slli a2, a0, 26
-; RV32-NEXT:    srli a2, a2, 31
+; RV32-NEXT:    srli a2, a0, 31
 ; RV32-NEXT:    vsetivli zero, 8, e8, mf2, ta, mu
 ; RV32-NEXT:    vmv.v.x v8, a2
 ; RV32-NEXT:    vslide1down.vx v8, v8, a1
-; RV32-NEXT:    slli a1, a0, 24
+; RV32-NEXT:    slli a1, a0, 27
 ; RV32-NEXT:    srli a1, a1, 31
 ; RV32-NEXT:    vslide1down.vx v8, v8, a1
-; RV32-NEXT:    slli a1, a0, 29
+; RV32-NEXT:    slli a1, a0, 28
 ; RV32-NEXT:    srli a1, a1, 31
 ; RV32-NEXT:    vslide1down.vx v8, v8, a1
-; RV32-NEXT:    slli a1, a0, 18
+; RV32-NEXT:    slli a1, a0, 19
 ; RV32-NEXT:    srli a1, a1, 31
-; RV32-NEXT:    slli a2, a0, 16
+; RV32-NEXT:    slli a2, a0, 26
 ; RV32-NEXT:    srli a2, a2, 31
 ; RV32-NEXT:    vmv.v.x v9, a2
 ; RV32-NEXT:    vslide1down.vx v9, v9, a1
-; RV32-NEXT:    slli a1, a0, 27
+; RV32-NEXT:    slli a1, a0, 24
 ; RV32-NEXT:    srli a1, a1, 31
 ; RV32-NEXT:    vslide1down.vx v9, v9, a1
-; RV32-NEXT:    slli a0, a0, 28
+; RV32-NEXT:    slli a0, a0, 29
 ; RV32-NEXT:    srli a0, a0, 31
 ; RV32-NEXT:    vmv.v.i v0, 15
 ; RV32-NEXT:    vslide1down.vx v9, v9, a0
-; RV32-NEXT:    vslidedown.vi v9, v8, 4, v0.t
-; RV32-NEXT:    vand.vi v8, v9, 1
+; RV32-NEXT:    vslidedown.vi v8, v9, 4, v0.t
+; RV32-NEXT:    vand.vi v8, v8, 1
 ; RV32-NEXT:    vmsne.vi v0, v8, 0
 ; RV32-NEXT:    ret
 ;
@@ -43,34 +42,33 @@ define <8 x i1> @v8i1_v16i1(<16 x i1>) {
 ; RV64:       # %bb.0:
 ; RV64-NEXT:    vsetivli zero, 1, e16, m1, ta, ma
 ; RV64-NEXT:    vmv.x.s a0, v0
-; RV64-NEXT:    slli a1, a0, 51
+; RV64-NEXT:    slli a1, a0, 50
 ; RV64-NEXT:    srli a1, a1, 63
-; RV64-NEXT:    slli a2, a0, 58
-; RV64-NEXT:    srli a2, a2, 63
+; RV64-NEXT:    srli a2, a0, 63
 ; RV64-NEXT:    vsetivli zero, 8, e8, mf2, ta, mu
 ; RV64-NEXT:    vmv.v.x v8, a2
 ; RV64-NEXT:    vslide1down.vx v8, v8, a1
-; RV64-NEXT:    slli a1, a0, 56
+; RV64-NEXT:    slli a1, a0, 59
 ; RV64-NEXT:    srli a1, a1, 63
 ; RV64-NEXT:    vslide1down.vx v8, v8, a1
-; RV64-NEXT:    slli a1, a0, 61
+; RV64-NEXT:    slli a1, a0, 60
 ; RV64-NEXT:    srli a1, a1, 63
 ; RV64-NEXT:    vslide1down.vx v8, v8, a1
-; RV64-NEXT:    slli a1, a0, 50
+; RV64-NEXT:    slli a1, a0, 51
 ; RV64-NEXT:    srli a1, a1, 63
-; RV64-NEXT:    slli a2, a0, 48
+; RV64-NEXT:    slli a2, a0, 58
 ; RV64-NEXT:    srli a2, a2, 63
 ; RV64-NEXT:    vmv.v.x v9, a2
 ; RV64-NEXT:    vslide1down.vx v9, v9, a1
-; RV64-NEXT:    slli a1, a0, 59
+; RV64-NEXT:    slli a1, a0, 56
 ; RV64-NEXT:    srli a1, a1, 63
 ; RV64-NEXT:    vslide1down.vx v9, v9, a1
-; RV64-NEXT:    slli a0, a0, 60
+; RV64-NEXT:    slli a0, a0, 61
 ; RV64-NEXT:    srli a0, a0, 63
 ; RV64-NEXT:    vmv.v.i v0, 15
 ; RV64-NEXT:    vslide1down.vx v9, v9, a0
-; RV64-NEXT:    vslidedown.vi v9, v8, 4, v0.t
-; RV64-NEXT:    vand.vi v8, v9, 1
+; RV64-NEXT:    vslidedown.vi v8, v9, 4, v0.t
+; RV64-NEXT:    vand.vi v8, v8, 1
 ; RV64-NEXT:    vmsne.vi v0, v8, 0
 ; RV64-NEXT:    ret
   %2 = shufflevector <16 x i1> %0, <16 x i1> poison, <8 x i32> <i32 5, i32 12, i32 7, i32 2, i32 15, i32 13, i32 4, i32 3>
diff --git a/llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll b/llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll
index 307a0531cf0296..3ccad02fbb2bf3 100644
--- a/llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll
+++ b/llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll
@@ -144,10 +144,9 @@ define i1 @test_srem_even(i4 %X) nounwind {
 ; RV32M-NEXT:    srai a1, a1, 28
 ; RV32M-NEXT:    slli a2, a1, 1
 ; RV32M-NEXT:    add a1, a2, a1
-; RV32M-NEXT:    srli a2, a1, 4
-; RV32M-NEXT:    slli a1, a1, 24
-; RV32M-NEXT:    srli a1, a1, 31
-; RV32M-NEXT:    add a1, a2, a1
+; RV32M-NEXT:    srli a2, a1, 31
+; RV32M-NEXT:    srli a1, a1, 4
+; RV32M-NEXT:    add a1, a1, a2
 ; RV32M-NEXT:    slli a2, a1, 3
 ; RV32M-NEXT:    slli a1, a1, 1
 ; RV32M-NEXT:    sub a1, a1, a2
@@ -163,10 +162,9 @@ define i1 @test_srem_even(i4 %X) nounwind {
 ; RV64M-NEXT:    srai a1, a1, 60
 ; RV64M-NEXT:    slli a2, a1, 1
 ; RV64M-NEXT:    add a1, a2, a1
-; RV64M-NEXT:    srli a2, a1, 4
-; RV64M-NEXT:    slli a1, a1, 56
-; RV64M-NEXT:    srli a1, a1, 63
-; RV64M-NEXT:    add a1, a2, a1
+; RV64M-NEXT:    srli a2, a1, 63
+; RV64M-NEXT:    srli a1, a1, 4
+; RV64M-NEXT:    add a1, a1, a2
 ; RV64M-NEXT:    slli a2, a1, 3
 ; RV64M-NEXT:    slli a1, a1, 1
 ; RV64M-NEXT:    subw a1, a1, a2
@@ -182,10 +180,9 @@ define i1 @test_srem_even(i4 %X) nounwind {
 ; RV32MV-NEXT:    srai a1, a1, 28
 ; RV32MV-NEXT:    slli a2, a1, 1
 ; RV32MV-NEXT:    add a1, a2, a1
-; RV32MV-NEXT:    srli a2, a1, 4
-; RV32MV-NEXT:    slli a1, a1, 24
-; RV32MV-NEXT:    srli a1, a1, 31
-; RV32MV-NEXT:    add a1, a2, a1
+; RV32MV-NEXT:    srli a2, a1, 31
+; RV32MV-NEXT:    srli a1, a1, 4
+; RV32MV-NEXT:    add a1, a1, a2
 ; RV32MV-NEXT:    slli a2, a1, 3
 ; RV32MV-NEXT:    slli a1, a1, 1
 ; RV32MV-NEXT:    sub a1, a1, a2
@@ -201,10 +198,9 @@ define i1 @test_srem_even(i4 %X) nounwind {
 ; RV64MV-NEXT:    srai a1, a1, 60
 ; RV64MV-NEXT:    slli a2, a1, 1
 ; RV64MV-NEXT:    add a1, a2, a1
-; RV64MV-NEXT:    srli a2, a1, 4
-; RV64MV-NEXT:    slli a1, a1, 56
-; RV64MV-NEXT:    srli a1, a1, 63
-; RV64MV-NEXT:    add a1, a2, a1
+; RV64MV-NEXT:    srli a2, a1, 63
+; RV64MV-NEXT:    srli a1, a1, 4
+; RV64MV-NEXT:    add a1, a1, a2
 ; RV64MV-NEXT:    slli a2, a1, 3
 ; RV64MV-...
[truncated]

@llvmbot
Member

llvmbot commented Nov 5, 2024

@llvm/pr-subscribers-backend-arm


@github-actions

github-actions bot commented Nov 5, 2024

⚠️ C/C++ code formatter, clang-format found issues in your code. ⚠️

You can test this locally with the following command:
git-clang-format --diff 6d2f4dd79d0106b8f4c743b2fb08ae0ea29411e0 58cc3a4e7e2853b91a0c6fc75d8fc07b65af8c6a --extensions cpp -- llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
View the diff from clang-format here.
diff --git a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
index 05b00ec1ff..4b9cc9d09f 100644
--- a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
@@ -1981,7 +1981,7 @@ bool TargetLowering::SimplifyDemandedBits(
       // If we are shifting down an extended sign bit, see if we can simplify
       // this to shifting the MSB directly to expose further simplifications.
       // This pattern often appears after sext_inreg legalization.
-      // 
+      //
       // NOTE: We might be able to generalize this and merge with the SRA fold
       // above, but there are currently regressions.
       if (DemandedBits == 1 && (BitWidth - 1) > ShAmt) {

@jayfoad
Contributor

jayfoad commented Nov 5, 2024

Does this only help when the result of the SRL is actually ANDed with 1, so changing it to SRL by 31 allows you to remove the AND? If so, could you implement this as a simplification of the AND instead of a demanded bits thing?

@RKSimon
Collaborator Author

RKSimon commented Nov 5, 2024

Yes, at the moment it's always the (and (srl X, C), 1) pattern AFAICT - if we're not going to try to extend this in the future then we can move it to DAGCombine
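
For reference, a rough sketch of what that AND-based combine could look like (illustrative only - modelled on the SimplifyDemandedBits snippet in this patch rather than on whatever actually lands, the function name is made up, and it only handles the scalar shift-amount case; it assumes the usual SelectionDAG environment):

// Fold (and (srl X, C), 1) -> (srl X, BitWidth - 1) when bit C of X is a
// sign-extension copy of the MSB.
static SDValue foldAndOfSRLSignBit(SDNode *N, SelectionDAG &DAG) {
  SDValue N0 = N->getOperand(0), N1 = N->getOperand(1);
  EVT VT = N->getValueType(0);
  if (N->getOpcode() != ISD::AND || !isOneConstant(N1) ||
      N0.getOpcode() != ISD::SRL || !N0.hasOneUse())
    return SDValue();
  auto *ShAmtC = dyn_cast<ConstantSDNode>(N0.getOperand(1));
  if (!ShAmtC)
    return SDValue();
  unsigned BitWidth = VT.getScalarSizeInBits();
  uint64_t ShAmt = ShAmtC->getZExtValue();
  unsigned NumSignBits = DAG.ComputeNumSignBits(N0.getOperand(0));
  // Bit ShAmt must be a sign copy, and not already the MSB itself.
  if (ShAmt >= BitWidth - 1 || ShAmt < BitWidth - NumSignBits)
    return SDValue();
  // srl X, BitWidth-1 already zeros the upper bits, so the AND goes away.
  SDLoc DL(N);
  return DAG.getNode(ISD::SRL, DL, VT, N0.getOperand(0),
                     DAG.getShiftAmountConstant(BitWidth - 1, VT, DL));
}

Doing it on the AND keeps the fold local to the only pattern that currently seems to benefit, as discussed above, rather than running it for every SRL seen by SimplifyDemandedBits.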

@jayfoad
Contributor

jayfoad commented Nov 5, 2024

Yes, at the moment it's always the (and (srl X, C), 1) pattern AFAICT - if we're not going to try to extend this in the future then we can move it to DAGCombine

I'm struggling to think of any other pattern where SRL by 31 would be better than SRL by a different constant.

@RKSimon
Collaborator Author

RKSimon commented Nov 5, 2024

Yes, at the moment it's always the (and (srl X, C), 1) pattern AFAICT - if we're not going to try to extend this in the future then we can move it to DAGCombine

I'm struggling to think of any other pattern where SRL by 31 would be better than SRL by a different constant.

Well, the SRL amount would be adjusted depending on the sign bit and demanded bit counts - but I can't think of many patterns that would benefit (maybe mask generation from vector comparison results?).

RKSimon added a commit to RKSimon/llvm-project that referenced this pull request Nov 5, 2024
…t extraction

If we're masking the LSB of a SRL node result and that is shifting down an extended sign bit, see if we can change the SRL to shift down the MSB directly.

These patterns can occur during legalisation when we've sign extended to a wider type but the SRL is still shifting from the subreg.

Alternative to llvm#114967

Fixes the remaining regression in llvm#112588
RKSimon added a commit that referenced this pull request Nov 5, 2024
…t extraction (#114992)

If we're masking the LSB of a SRL node result and that is shifting down an extended sign bit, see if we can change the SRL to shift down the MSB directly.

These patterns can occur during legalisation when we've sign extended to a wider type but the SRL is still shifting from the subreg.

Alternative to #114967

Fixes the remaining regression in #112588
@RKSimon RKSimon closed this Nov 5, 2024
PhilippRados pushed a commit to PhilippRados/llvm-project that referenced this pull request Nov 6, 2024
…t extraction (llvm#114992)

If we're masking the LSB of a SRL node result and that is shifting down an extended sign bit, see if we can change the SRL to shift down the MSB directly.

These patterns can occur during legalisation when we've sign extended to a wider type but the SRL is still shifting from the subreg.

Alternative to llvm#114967

Fixes the remaining regression in llvm#112588
@RKSimon RKSimon deleted the dag-simplifybits-srl-msb-signbit branch November 7, 2024 13:59
