Conversation

@RKSimon
Collaborator

@RKSimon RKSimon commented Jul 22, 2025

Similar to InstCombinerImpl::freezeOtherUses, attempt to ensure that we merge multiple frozen/unfrozen uses of an SDValue. This fixes a number of hasOneUse() problems when trying to push FREEZE nodes through the DAG.

Remove SimplifyMultipleUseDemandedBits handling of FREEZE nodes, as we now want to keep the common node rather than bypassing it for some uses just because of DemandedElts.

Fixes #149799
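
For illustration, here is a minimal sketch of the visitFREEZE change (it mirrors the DAGCombiner.cpp hunk in the diff below, where N is the FREEZE node and N0 its operand):

  // If N0 has both frozen and unfrozen users, point every user at the
  // freeze so there is a single canonical (frozen) copy of the value.
  if (!N0.isUndef() && !N0.hasOneUse()) {
    SDValue FrozenN0 = SDValue(N, 0);
    DAG.ReplaceAllUsesOfValueWith(N0, FrozenN0);
    // The RAUW also rewrote the freeze's own operand, creating a cycle;
    // restore the original operand to break it.
    DAG.UpdateNodeOperands(N, N0);
    return FrozenN0;
  }

Once every user shares the frozen value, folds that require identical operands (e.g. the sub(x, x) -> 0 fold in visitSUB) no longer need to peek through FREEZE, which is why the PeekThroughFreeze helper is removed in the same patch.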

@RKSimon RKSimon force-pushed the dag-freeze-extrause branch from 86ce7c5 to b38c702 Compare July 22, 2025 15:01
Collaborator Author

@RKSimon RKSimon Jul 22, 2025


This will eventually get fixed by #150204

@RKSimon RKSimon force-pushed the dag-freeze-extrause branch from b38c702 to 073f741 Compare July 22, 2025 16:18
…ue with just the frozen node

Similar to InstCombinerImpl::freezeOtherUses, attempt to ensure that we merge multiple frozen/unfrozen uses of an SDValue. This fixes a number of hasOneUse() problems when trying to push FREEZE nodes through the DAG.

Remove SimplifyMultipleUseDemandedBits handling of FREEZE nodes, as we now want to keep the common node rather than bypassing it for some uses just because of DemandedElts.

Fixes llvm#149799
@RKSimon RKSimon force-pushed the dag-freeze-extrause branch from 073f741 to f9eadbc Compare July 29, 2025 12:31
@RKSimon RKSimon marked this pull request as ready for review July 29, 2025 13:45
@RKSimon RKSimon requested a review from topperc July 29, 2025 13:45
@RKSimon RKSimon requested review from arsenm and nikic July 29, 2025 13:45
@llvmbot
Member

llvmbot commented Jul 29, 2025

@llvm/pr-subscribers-backend-nvptx
@llvm/pr-subscribers-backend-aarch64

@llvm/pr-subscribers-backend-risc-v

Author: Simon Pilgrim (RKSimon)

Changes

Similar to InstCombinerImpl::freezeOtherUses, attempt to ensure that we merge multiple frozen/unfrozen uses of an SDValue. This fixes a number of hasOneUse() problems when trying to push FREEZE nodes through the DAG.

Remove SimplifyMultipleUseDemandedBits handling of FREEZE nodes, as we now want to keep the common node rather than bypassing it for some uses just because of DemandedElts.

Fixes #149799


Patch is 559.57 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/150017.diff

39 Files Affected:

  • (modified) llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp (+12-8)
  • (modified) llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp (-7)
  • (modified) llvm/test/CodeGen/AArch64/midpoint-int.ll (+6-12)
  • (modified) llvm/test/CodeGen/AMDGPU/div_i128.ll (+6-4)
  • (modified) llvm/test/CodeGen/AMDGPU/freeze.ll (+12-63)
  • (modified) llvm/test/CodeGen/AMDGPU/load-constant-i1.ll (+179-178)
  • (modified) llvm/test/CodeGen/AMDGPU/load-constant-i16.ll (+378-376)
  • (modified) llvm/test/CodeGen/AMDGPU/load-constant-i8.ll (+909-907)
  • (modified) llvm/test/CodeGen/AMDGPU/load-global-i16.ll (+275-273)
  • (modified) llvm/test/CodeGen/AMDGPU/load-global-i8.ll (+783-782)
  • (modified) llvm/test/CodeGen/AMDGPU/load-local-i16.ll (+20-20)
  • (modified) llvm/test/CodeGen/AMDGPU/rem_i128.ll (+6-4)
  • (modified) llvm/test/CodeGen/AMDGPU/sdiv.ll (+398-386)
  • (modified) llvm/test/CodeGen/AMDGPU/sext-in-reg-vector-shuffle.ll (+11-10)
  • (modified) llvm/test/CodeGen/NVPTX/i1-select.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/abds.ll (+108-108)
  • (modified) llvm/test/CodeGen/RISCV/fpclamptosat.ll (+104-104)
  • (modified) llvm/test/CodeGen/RISCV/iabs.ll (+40-40)
  • (modified) llvm/test/CodeGen/RISCV/rv32zbb.ll (+28-28)
  • (modified) llvm/test/CodeGen/RISCV/rv32zbs.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fpclamptosat_vec.ll (+170-170)
  • (modified) llvm/test/CodeGen/RISCV/rvv/vec3-setcc-crash.ll (+21-21)
  • (modified) llvm/test/CodeGen/RISCV/rvv/vp-splice.ll (+68-68)
  • (modified) llvm/test/CodeGen/VE/Scalar/min.ll (+8-8)
  • (modified) llvm/test/CodeGen/X86/combine-sdiv.ll (+6-6)
  • (modified) llvm/test/CodeGen/X86/freeze-binary.ll (+12-10)
  • (modified) llvm/test/CodeGen/X86/freeze-vector.ll (+4-4)
  • (modified) llvm/test/CodeGen/X86/midpoint-int-vec-256.ll (+211-211)
  • (modified) llvm/test/CodeGen/X86/midpoint-int-vec-512.ll (+80-80)
  • (modified) llvm/test/CodeGen/X86/midpoint-int.ll (+169-188)
  • (modified) llvm/test/CodeGen/X86/oddsubvector.ll (+6-6)
  • (modified) llvm/test/CodeGen/X86/pr38539.ll (+1-1)
  • (modified) llvm/test/CodeGen/X86/vector-compress.ll (+507-469)
  • (modified) llvm/test/CodeGen/X86/vector-fshl-rot-128.ll (+13-13)
  • (modified) llvm/test/CodeGen/X86/vector-fshl-rot-256.ll (+8-8)
  • (modified) llvm/test/CodeGen/X86/vector-fshr-rot-128.ll (+13-13)
  • (modified) llvm/test/CodeGen/X86/vector-fshr-rot-256.ll (+4-4)
  • (modified) llvm/test/CodeGen/X86/vector-rotate-128.ll (+13-13)
  • (modified) llvm/test/CodeGen/X86/vector-rotate-256.ll (+8-8)
diff --git a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
index 20b96f5e1bc00..59808db5458e4 100644
--- a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
@@ -4068,18 +4068,11 @@ SDValue DAGCombiner::visitSUB(SDNode *N) {
   unsigned BitWidth = VT.getScalarSizeInBits();
   SDLoc DL(N);
 
-  auto PeekThroughFreeze = [](SDValue N) {
-    if (N->getOpcode() == ISD::FREEZE && N.hasOneUse())
-      return N->getOperand(0);
-    return N;
-  };
-
   if (SDValue V = foldSubCtlzNot<EmptyMatchContext>(N, DAG))
     return V;
 
   // fold (sub x, x) -> 0
-  // FIXME: Refactor this and xor and other similar operations together.
-  if (PeekThroughFreeze(N0) == PeekThroughFreeze(N1))
+  if (N0 == N1)
     return tryFoldToZero(DL, TLI, VT, DAG, LegalOperations);
 
   // fold (sub c1, c2) -> c3
@@ -16735,6 +16728,17 @@ SDValue DAGCombiner::visitFREEZE(SDNode *N) {
   if (DAG.isGuaranteedNotToBeUndefOrPoison(N0, /*PoisonOnly*/ false))
     return N0;
 
+  // If we have frozen and unfrozen users of N0, update so everything uses N.
+  if (!N0.isUndef() && !N0.hasOneUse()) {
+    SDValue FrozenN0 = SDValue(N, 0);
+    DAG.ReplaceAllUsesOfValueWith(N0, FrozenN0);
+    // ReplaceAllUsesOfValueWith will have also updated the use in N, thus
+    // creating a cycle in a DAG. Let's undo that by mutating the freeze.
+    assert(N->getOperand(0) == FrozenN0 && "Expected cycle in DAG");
+    DAG.UpdateNodeOperands(N, N0);
+    return FrozenN0;
+  }
+
   // We currently avoid folding freeze over SRA/SRL, due to the problems seen
   // with (freeze (assert ext)) blocking simplifications of SRA/SRL. See for
   // example https://reviews.llvm.org/D136529#4120959.
diff --git a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
index 1764910861df4..0df0d5a479385 100644
--- a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
@@ -775,13 +775,6 @@ SDValue TargetLowering::SimplifyMultipleUseDemandedBits(
 
     break;
   }
-  case ISD::FREEZE: {
-    SDValue N0 = Op.getOperand(0);
-    if (DAG.isGuaranteedNotToBeUndefOrPoison(N0, DemandedElts,
-                                             /*PoisonOnly=*/false, Depth + 1))
-      return N0;
-    break;
-  }
   case ISD::AND: {
     LHSKnown = DAG.computeKnownBits(Op.getOperand(0), DemandedElts, Depth + 1);
     RHSKnown = DAG.computeKnownBits(Op.getOperand(1), DemandedElts, Depth + 1);
diff --git a/llvm/test/CodeGen/AArch64/midpoint-int.ll b/llvm/test/CodeGen/AArch64/midpoint-int.ll
index bbdce7c6e933b..a8993e3542cfb 100644
--- a/llvm/test/CodeGen/AArch64/midpoint-int.ll
+++ b/llvm/test/CodeGen/AArch64/midpoint-int.ll
@@ -61,11 +61,10 @@ define i32 @scalar_i32_signed_mem_reg(ptr %a1_addr, i32 %a2) nounwind {
 ; CHECK:       // %bb.0:
 ; CHECK-NEXT:    ldr w9, [x0]
 ; CHECK-NEXT:    mov w8, #-1 // =0xffffffff
-; CHECK-NEXT:    cmp w9, w1
 ; CHECK-NEXT:    sub w10, w1, w9
-; CHECK-NEXT:    cneg w8, w8, le
 ; CHECK-NEXT:    subs w11, w9, w1
 ; CHECK-NEXT:    csel w10, w11, w10, gt
+; CHECK-NEXT:    cneg w8, w8, le
 ; CHECK-NEXT:    lsr w10, w10, #1
 ; CHECK-NEXT:    madd w0, w10, w8, w9
 ; CHECK-NEXT:    ret
@@ -86,11 +85,10 @@ define i32 @scalar_i32_signed_reg_mem(i32 %a1, ptr %a2_addr) nounwind {
 ; CHECK:       // %bb.0:
 ; CHECK-NEXT:    ldr w9, [x1]
 ; CHECK-NEXT:    mov w8, #-1 // =0xffffffff
-; CHECK-NEXT:    cmp w0, w9
 ; CHECK-NEXT:    sub w10, w9, w0
-; CHECK-NEXT:    cneg w8, w8, le
 ; CHECK-NEXT:    subs w9, w0, w9
 ; CHECK-NEXT:    csel w9, w9, w10, gt
+; CHECK-NEXT:    cneg w8, w8, le
 ; CHECK-NEXT:    lsr w9, w9, #1
 ; CHECK-NEXT:    madd w0, w9, w8, w0
 ; CHECK-NEXT:    ret
@@ -112,11 +110,10 @@ define i32 @scalar_i32_signed_mem_mem(ptr %a1_addr, ptr %a2_addr) nounwind {
 ; CHECK-NEXT:    ldr w9, [x0]
 ; CHECK-NEXT:    ldr w10, [x1]
 ; CHECK-NEXT:    mov w8, #-1 // =0xffffffff
-; CHECK-NEXT:    cmp w9, w10
 ; CHECK-NEXT:    sub w11, w10, w9
-; CHECK-NEXT:    cneg w8, w8, le
 ; CHECK-NEXT:    subs w10, w9, w10
 ; CHECK-NEXT:    csel w10, w10, w11, gt
+; CHECK-NEXT:    cneg w8, w8, le
 ; CHECK-NEXT:    lsr w10, w10, #1
 ; CHECK-NEXT:    madd w0, w10, w8, w9
 ; CHECK-NEXT:    ret
@@ -190,11 +187,10 @@ define i64 @scalar_i64_signed_mem_reg(ptr %a1_addr, i64 %a2) nounwind {
 ; CHECK:       // %bb.0:
 ; CHECK-NEXT:    ldr x9, [x0]
 ; CHECK-NEXT:    mov x8, #-1 // =0xffffffffffffffff
-; CHECK-NEXT:    cmp x9, x1
 ; CHECK-NEXT:    sub x10, x1, x9
-; CHECK-NEXT:    cneg x8, x8, le
 ; CHECK-NEXT:    subs x11, x9, x1
 ; CHECK-NEXT:    csel x10, x11, x10, gt
+; CHECK-NEXT:    cneg x8, x8, le
 ; CHECK-NEXT:    lsr x10, x10, #1
 ; CHECK-NEXT:    madd x0, x10, x8, x9
 ; CHECK-NEXT:    ret
@@ -215,11 +211,10 @@ define i64 @scalar_i64_signed_reg_mem(i64 %a1, ptr %a2_addr) nounwind {
 ; CHECK:       // %bb.0:
 ; CHECK-NEXT:    ldr x9, [x1]
 ; CHECK-NEXT:    mov x8, #-1 // =0xffffffffffffffff
-; CHECK-NEXT:    cmp x0, x9
 ; CHECK-NEXT:    sub x10, x9, x0
-; CHECK-NEXT:    cneg x8, x8, le
 ; CHECK-NEXT:    subs x9, x0, x9
 ; CHECK-NEXT:    csel x9, x9, x10, gt
+; CHECK-NEXT:    cneg x8, x8, le
 ; CHECK-NEXT:    lsr x9, x9, #1
 ; CHECK-NEXT:    madd x0, x9, x8, x0
 ; CHECK-NEXT:    ret
@@ -241,11 +236,10 @@ define i64 @scalar_i64_signed_mem_mem(ptr %a1_addr, ptr %a2_addr) nounwind {
 ; CHECK-NEXT:    ldr x9, [x0]
 ; CHECK-NEXT:    ldr x10, [x1]
 ; CHECK-NEXT:    mov x8, #-1 // =0xffffffffffffffff
-; CHECK-NEXT:    cmp x9, x10
 ; CHECK-NEXT:    sub x11, x10, x9
-; CHECK-NEXT:    cneg x8, x8, le
 ; CHECK-NEXT:    subs x10, x9, x10
 ; CHECK-NEXT:    csel x10, x10, x11, gt
+; CHECK-NEXT:    cneg x8, x8, le
 ; CHECK-NEXT:    lsr x10, x10, #1
 ; CHECK-NEXT:    madd x0, x10, x8, x9
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/AMDGPU/div_i128.ll b/llvm/test/CodeGen/AMDGPU/div_i128.ll
index e6c38d29be949..55067023116f0 100644
--- a/llvm/test/CodeGen/AMDGPU/div_i128.ll
+++ b/llvm/test/CodeGen/AMDGPU/div_i128.ll
@@ -495,8 +495,9 @@ define i128 @v_sdiv_i128_vv(i128 %lhs, i128 %rhs) {
 ; GFX9-O0-NEXT:    v_and_b32_e64 v6, 1, v6
 ; GFX9-O0-NEXT:    v_cmp_eq_u32_e64 s[8:9], v6, 1
 ; GFX9-O0-NEXT:    s_or_b64 s[8:9], s[4:5], s[8:9]
-; GFX9-O0-NEXT:    s_mov_b64 s[4:5], -1
-; GFX9-O0-NEXT:    s_xor_b64 s[4:5], s[8:9], s[4:5]
+; GFX9-O0-NEXT:    s_mov_b64 s[14:15], -1
+; GFX9-O0-NEXT:    s_mov_b64 s[4:5], s[8:9]
+; GFX9-O0-NEXT:    s_xor_b64 s[4:5], s[4:5], s[14:15]
 ; GFX9-O0-NEXT:    v_mov_b32_e32 v6, v5
 ; GFX9-O0-NEXT:    s_mov_b32 s14, s13
 ; GFX9-O0-NEXT:    v_xor_b32_e64 v6, v6, s14
@@ -2679,8 +2680,9 @@ define i128 @v_udiv_i128_vv(i128 %lhs, i128 %rhs) {
 ; GFX9-O0-NEXT:    v_and_b32_e64 v6, 1, v6
 ; GFX9-O0-NEXT:    v_cmp_eq_u32_e64 s[8:9], v6, 1
 ; GFX9-O0-NEXT:    s_or_b64 s[8:9], s[4:5], s[8:9]
-; GFX9-O0-NEXT:    s_mov_b64 s[4:5], -1
-; GFX9-O0-NEXT:    s_xor_b64 s[4:5], s[8:9], s[4:5]
+; GFX9-O0-NEXT:    s_mov_b64 s[14:15], -1
+; GFX9-O0-NEXT:    s_mov_b64 s[4:5], s[8:9]
+; GFX9-O0-NEXT:    s_xor_b64 s[4:5], s[4:5], s[14:15]
 ; GFX9-O0-NEXT:    v_mov_b32_e32 v6, v5
 ; GFX9-O0-NEXT:    s_mov_b32 s14, s13
 ; GFX9-O0-NEXT:    v_xor_b32_e64 v6, v6, s14
diff --git a/llvm/test/CodeGen/AMDGPU/freeze.ll b/llvm/test/CodeGen/AMDGPU/freeze.ll
index ac4f0df7506ae..308e86bbaf8fd 100644
--- a/llvm/test/CodeGen/AMDGPU/freeze.ll
+++ b/llvm/test/CodeGen/AMDGPU/freeze.ll
@@ -5692,10 +5692,6 @@ define void @freeze_v3i16(ptr addrspace(1) %ptra, ptr addrspace(1) %ptrb) {
 ; GFX6-SDAG-NEXT:    s_mov_b32 s5, s6
 ; GFX6-SDAG-NEXT:    buffer_load_dwordx2 v[0:1], v[0:1], s[4:7], 0 addr64
 ; GFX6-SDAG-NEXT:    s_waitcnt vmcnt(0)
-; GFX6-SDAG-NEXT:    v_lshrrev_b32_e32 v4, 16, v0
-; GFX6-SDAG-NEXT:    v_and_b32_e32 v0, 0xffff, v0
-; GFX6-SDAG-NEXT:    v_lshlrev_b32_e32 v4, 16, v4
-; GFX6-SDAG-NEXT:    v_or_b32_e32 v0, v0, v4
 ; GFX6-SDAG-NEXT:    buffer_store_short v1, v[2:3], s[4:7], 0 addr64 offset:4
 ; GFX6-SDAG-NEXT:    buffer_store_dword v0, v[2:3], s[4:7], 0 addr64
 ; GFX6-SDAG-NEXT:    s_waitcnt vmcnt(0) expcnt(0)
@@ -5725,10 +5721,6 @@ define void @freeze_v3i16(ptr addrspace(1) %ptra, ptr addrspace(1) %ptrb) {
 ; GFX7-SDAG-NEXT:    s_mov_b32 s5, s6
 ; GFX7-SDAG-NEXT:    buffer_load_dwordx2 v[0:1], v[0:1], s[4:7], 0 addr64
 ; GFX7-SDAG-NEXT:    s_waitcnt vmcnt(0)
-; GFX7-SDAG-NEXT:    v_lshrrev_b32_e32 v4, 16, v0
-; GFX7-SDAG-NEXT:    v_and_b32_e32 v0, 0xffff, v0
-; GFX7-SDAG-NEXT:    v_lshlrev_b32_e32 v4, 16, v4
-; GFX7-SDAG-NEXT:    v_or_b32_e32 v0, v0, v4
 ; GFX7-SDAG-NEXT:    buffer_store_short v1, v[2:3], s[4:7], 0 addr64 offset:4
 ; GFX7-SDAG-NEXT:    buffer_store_dword v0, v[2:3], s[4:7], 0 addr64
 ; GFX7-SDAG-NEXT:    s_waitcnt vmcnt(0)
@@ -6351,10 +6343,6 @@ define void @freeze_v3f16(ptr addrspace(1) %ptra, ptr addrspace(1) %ptrb) {
 ; GFX6-SDAG-NEXT:    s_mov_b32 s5, s6
 ; GFX6-SDAG-NEXT:    buffer_load_dwordx2 v[0:1], v[0:1], s[4:7], 0 addr64
 ; GFX6-SDAG-NEXT:    s_waitcnt vmcnt(0)
-; GFX6-SDAG-NEXT:    v_and_b32_e32 v4, 0xffff, v0
-; GFX6-SDAG-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
-; GFX6-SDAG-NEXT:    v_lshlrev_b32_e32 v0, 16, v0
-; GFX6-SDAG-NEXT:    v_or_b32_e32 v0, v4, v0
 ; GFX6-SDAG-NEXT:    buffer_store_short v1, v[2:3], s[4:7], 0 addr64 offset:4
 ; GFX6-SDAG-NEXT:    buffer_store_dword v0, v[2:3], s[4:7], 0 addr64
 ; GFX6-SDAG-NEXT:    s_waitcnt vmcnt(0) expcnt(0)
@@ -6384,10 +6372,6 @@ define void @freeze_v3f16(ptr addrspace(1) %ptra, ptr addrspace(1) %ptrb) {
 ; GFX7-SDAG-NEXT:    s_mov_b32 s5, s6
 ; GFX7-SDAG-NEXT:    buffer_load_dwordx2 v[0:1], v[0:1], s[4:7], 0 addr64
 ; GFX7-SDAG-NEXT:    s_waitcnt vmcnt(0)
-; GFX7-SDAG-NEXT:    v_and_b32_e32 v4, 0xffff, v0
-; GFX7-SDAG-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
-; GFX7-SDAG-NEXT:    v_lshlrev_b32_e32 v0, 16, v0
-; GFX7-SDAG-NEXT:    v_or_b32_e32 v0, v4, v0
 ; GFX7-SDAG-NEXT:    buffer_store_short v1, v[2:3], s[4:7], 0 addr64 offset:4
 ; GFX7-SDAG-NEXT:    buffer_store_dword v0, v[2:3], s[4:7], 0 addr64
 ; GFX7-SDAG-NEXT:    s_waitcnt vmcnt(0)
@@ -12347,14 +12331,9 @@ define void @freeze_v3i8(ptr addrspace(1) %ptra, ptr addrspace(1) %ptrb) {
 ; GFX6-SDAG-NEXT:    s_mov_b32 s5, s6
 ; GFX6-SDAG-NEXT:    buffer_load_dword v0, v[0:1], s[4:7], 0 addr64
 ; GFX6-SDAG-NEXT:    s_waitcnt vmcnt(0)
-; GFX6-SDAG-NEXT:    v_lshrrev_b32_e32 v4, 8, v0
-; GFX6-SDAG-NEXT:    v_and_b32_e32 v4, 0xff, v4
 ; GFX6-SDAG-NEXT:    v_lshrrev_b32_e32 v1, 16, v0
-; GFX6-SDAG-NEXT:    v_and_b32_e32 v0, 0xff, v0
-; GFX6-SDAG-NEXT:    v_lshlrev_b32_e32 v4, 8, v4
-; GFX6-SDAG-NEXT:    v_or_b32_e32 v0, v0, v4
-; GFX6-SDAG-NEXT:    buffer_store_byte v1, v[2:3], s[4:7], 0 addr64 offset:2
 ; GFX6-SDAG-NEXT:    buffer_store_short v0, v[2:3], s[4:7], 0 addr64
+; GFX6-SDAG-NEXT:    buffer_store_byte v1, v[2:3], s[4:7], 0 addr64 offset:2
 ; GFX6-SDAG-NEXT:    s_waitcnt vmcnt(0) expcnt(0)
 ; GFX6-SDAG-NEXT:    s_setpc_b64 s[30:31]
 ;
@@ -12392,14 +12371,9 @@ define void @freeze_v3i8(ptr addrspace(1) %ptra, ptr addrspace(1) %ptrb) {
 ; GFX7-SDAG-NEXT:    s_mov_b32 s5, s6
 ; GFX7-SDAG-NEXT:    buffer_load_dword v0, v[0:1], s[4:7], 0 addr64
 ; GFX7-SDAG-NEXT:    s_waitcnt vmcnt(0)
-; GFX7-SDAG-NEXT:    v_lshrrev_b32_e32 v4, 8, v0
-; GFX7-SDAG-NEXT:    v_and_b32_e32 v4, 0xff, v4
 ; GFX7-SDAG-NEXT:    v_lshrrev_b32_e32 v1, 16, v0
-; GFX7-SDAG-NEXT:    v_and_b32_e32 v0, 0xff, v0
-; GFX7-SDAG-NEXT:    v_lshlrev_b32_e32 v4, 8, v4
-; GFX7-SDAG-NEXT:    v_or_b32_e32 v0, v0, v4
-; GFX7-SDAG-NEXT:    buffer_store_byte v1, v[2:3], s[4:7], 0 addr64 offset:2
 ; GFX7-SDAG-NEXT:    buffer_store_short v0, v[2:3], s[4:7], 0 addr64
+; GFX7-SDAG-NEXT:    buffer_store_byte v1, v[2:3], s[4:7], 0 addr64 offset:2
 ; GFX7-SDAG-NEXT:    s_waitcnt vmcnt(0)
 ; GFX7-SDAG-NEXT:    s_setpc_b64 s[30:31]
 ;
@@ -12474,11 +12448,7 @@ define void @freeze_v3i8(ptr addrspace(1) %ptra, ptr addrspace(1) %ptrb) {
 ; GFX10-SDAG-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-SDAG-NEXT:    global_load_dword v0, v[0:1], off
 ; GFX10-SDAG-NEXT:    s_waitcnt vmcnt(0)
-; GFX10-SDAG-NEXT:    v_lshrrev_b16 v1, 8, v0
-; GFX10-SDAG-NEXT:    v_lshrrev_b32_e32 v4, 16, v0
-; GFX10-SDAG-NEXT:    v_lshlrev_b16 v1, 8, v1
-; GFX10-SDAG-NEXT:    v_or_b32_sdwa v0, v0, v1 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
-; GFX10-SDAG-NEXT:    global_store_byte v[2:3], v4, off offset:2
+; GFX10-SDAG-NEXT:    global_store_byte_d16_hi v[2:3], v0, off offset:2
 ; GFX10-SDAG-NEXT:    global_store_short v[2:3], v0, off
 ; GFX10-SDAG-NEXT:    s_setpc_b64 s[30:31]
 ;
@@ -12499,36 +12469,15 @@ define void @freeze_v3i8(ptr addrspace(1) %ptra, ptr addrspace(1) %ptrb) {
 ; GFX10-GISEL-NEXT:    global_store_byte_d16_hi v[2:3], v0, off offset:2
 ; GFX10-GISEL-NEXT:    s_setpc_b64 s[30:31]
 ;
-; GFX11-SDAG-TRUE16-LABEL: freeze_v3i8:
-; GFX11-SDAG-TRUE16:       ; %bb.0:
-; GFX11-SDAG-TRUE16-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX11-SDAG-TRUE16-NEXT:    global_load_b32 v1, v[0:1], off
-; GFX11-SDAG-TRUE16-NEXT:    v_mov_b16_e32 v4.h, 0
-; GFX11-SDAG-TRUE16-NEXT:    s_waitcnt vmcnt(0)
-; GFX11-SDAG-TRUE16-NEXT:    v_lshrrev_b16 v0.l, 8, v1.l
-; GFX11-SDAG-TRUE16-NEXT:    v_and_b16 v0.h, 0xff, v1.l
-; GFX11-SDAG-TRUE16-NEXT:    v_mov_b16_e32 v4.l, v1.h
-; GFX11-SDAG-TRUE16-NEXT:    v_lshlrev_b16 v0.l, 8, v0.l
-; GFX11-SDAG-TRUE16-NEXT:    v_or_b16 v0.l, v0.h, v0.l
-; GFX11-SDAG-TRUE16-NEXT:    s_clause 0x1
-; GFX11-SDAG-TRUE16-NEXT:    global_store_b8 v[2:3], v4, off offset:2
-; GFX11-SDAG-TRUE16-NEXT:    global_store_b16 v[2:3], v0, off
-; GFX11-SDAG-TRUE16-NEXT:    s_setpc_b64 s[30:31]
-;
-; GFX11-SDAG-FAKE16-LABEL: freeze_v3i8:
-; GFX11-SDAG-FAKE16:       ; %bb.0:
-; GFX11-SDAG-FAKE16-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX11-SDAG-FAKE16-NEXT:    global_load_b32 v0, v[0:1], off
-; GFX11-SDAG-FAKE16-NEXT:    s_waitcnt vmcnt(0)
-; GFX11-SDAG-FAKE16-NEXT:    v_lshrrev_b16 v1, 8, v0
-; GFX11-SDAG-FAKE16-NEXT:    v_and_b32_e32 v4, 0xff, v0
-; GFX11-SDAG-FAKE16-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
-; GFX11-SDAG-FAKE16-NEXT:    v_lshlrev_b16 v1, 8, v1
-; GFX11-SDAG-FAKE16-NEXT:    v_or_b32_e32 v1, v4, v1
-; GFX11-SDAG-FAKE16-NEXT:    s_clause 0x1
-; GFX11-SDAG-FAKE16-NEXT:    global_store_b8 v[2:3], v0, off offset:2
-; GFX11-SDAG-FAKE16-NEXT:    global_store_b16 v[2:3], v1, off
-; GFX11-SDAG-FAKE16-NEXT:    s_setpc_b64 s[30:31]
+; GFX11-SDAG-LABEL: freeze_v3i8:
+; GFX11-SDAG:       ; %bb.0:
+; GFX11-SDAG-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX11-SDAG-NEXT:    global_load_b32 v0, v[0:1], off
+; GFX11-SDAG-NEXT:    s_waitcnt vmcnt(0)
+; GFX11-SDAG-NEXT:    s_clause 0x1
+; GFX11-SDAG-NEXT:    global_store_d16_hi_b8 v[2:3], v0, off offset:2
+; GFX11-SDAG-NEXT:    global_store_b16 v[2:3], v0, off
+; GFX11-SDAG-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-GISEL-LABEL: freeze_v3i8:
 ; GFX11-GISEL:       ; %bb.0:
diff --git a/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll b/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll
index bfc01ef138721..d59f72ad7a1ac 100644
--- a/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll
+++ b/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll
@@ -8343,53 +8343,53 @@ define amdgpu_kernel void @constant_sextload_v64i1_to_v64i64(ptr addrspace(1) %o
 ; GFX6-NEXT:    s_mov_b32 s2, -1
 ; GFX6-NEXT:    s_waitcnt lgkmcnt(0)
 ; GFX6-NEXT:    s_lshr_b32 s42, s5, 30
-; GFX6-NEXT:    s_lshr_b32 s36, s5, 28
-; GFX6-NEXT:    s_lshr_b32 s38, s5, 29
-; GFX6-NEXT:    s_lshr_b32 s30, s5, 26
-; GFX6-NEXT:    s_lshr_b32 s34, s5, 27
-; GFX6-NEXT:    s_lshr_b32 s26, s5, 24
-; GFX6-NEXT:    s_lshr_b32 s28, s5, 25
-; GFX6-NEXT:    s_lshr_b32 s22, s5, 22
-; GFX6-NEXT:    s_lshr_b32 s24, s5, 23
-; GFX6-NEXT:    s_lshr_b32 s18, s5, 20
-; GFX6-NEXT:    s_lshr_b32 s20, s5, 21
-; GFX6-NEXT:    s_lshr_b32 s14, s5, 18
-; GFX6-NEXT:    s_lshr_b32 s16, s5, 19
-; GFX6-NEXT:    s_lshr_b32 s10, s5, 16
-; GFX6-NEXT:    s_lshr_b32 s12, s5, 17
-; GFX6-NEXT:    s_lshr_b32 s6, s5, 14
-; GFX6-NEXT:    s_lshr_b32 s8, s5, 15
-; GFX6-NEXT:    s_mov_b32 s40, s5
+; GFX6-NEXT:    s_lshr_b32 s36, s4, 30
+; GFX6-NEXT:    s_lshr_b32 s38, s4, 31
+; GFX6-NEXT:    s_lshr_b32 s30, s4, 28
+; GFX6-NEXT:    s_lshr_b32 s34, s4, 29
+; GFX6-NEXT:    s_lshr_b32 s26, s4, 26
+; GFX6-NEXT:    s_lshr_b32 s28, s4, 27
+; GFX6-NEXT:    s_lshr_b32 s22, s4, 24
+; GFX6-NEXT:    s_lshr_b32 s24, s4, 25
+; GFX6-NEXT:    s_lshr_b32 s18, s4, 22
+; GFX6-NEXT:    s_lshr_b32 s20, s4, 23
+; GFX6-NEXT:    s_lshr_b32 s14, s4, 20
+; GFX6-NEXT:    s_lshr_b32 s16, s4, 21
+; GFX6-NEXT:    s_lshr_b32 s10, s4, 18
+; GFX6-NEXT:    s_lshr_b32 s12, s4, 19
+; GFX6-NEXT:    s_lshr_b32 s6, s4, 16
+; GFX6-NEXT:    s_lshr_b32 s8, s4, 17
 ; GFX6-NEXT:    s_ashr_i32 s7, s5, 31
-; GFX6-NEXT:    s_bfe_i64 s[44:45], s[40:41], 0x10000
+; GFX6-NEXT:    s_bfe_i64 s[44:45], s[4:5], 0x10000
 ; GFX6-NEXT:    v_mov_b32_e32 v4, s7
-; GFX6-NEXT:    s_lshr_b32 s40, s5, 12
+; GFX6-NEXT:    s_lshr_b32 s40, s4, 14
 ; GFX6-NEXT:    v_mov_b32_e32 v0, s44
 ; GFX6-NEXT:    v_mov_b32_e32 v1, s45
-; GFX6-NEXT:    s_bfe_i64 s[44:45], s[4:5], 0x10000
+; GFX6-NEXT:    s_mov_b32 s44, s5
+; GFX6-NEXT:    s_bfe_i64 s[44:45], s[44:45], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[42:43], s[42:43], 0x10000
 ; GFX6-NEXT:    v_mov_b32_e32 v6, s44
 ; GFX6-NEXT:    v_mov_b32_e32 v7, s45
-; GFX6-NEXT:    s_lshr_b32 s44, s5, 13
+; GFX6-NEXT:    s_lshr_b32 s44, s4, 15
 ; GFX6-NEXT:    v_mov_b32_e32 v2, s42
 ; GFX6-NEXT:    v_mov_b32_e32 v3, s43
-; GFX6-NEXT:    s_lshr_b32 s42, s5, 10
+; GFX6-NEXT:    s_lshr_b32 s42, s4, 12
 ; GFX6-NEXT:    s_bfe_i64 s[36:37], s[36:37], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[38:39], s[38:39], 0x10000
 ; GFX6-NEXT:    v_mov_b32_e32 v8, s36
 ; GFX6-NEXT:    v_mov_b32_e32 v9, s37
-; GFX6-NEXT:    s_lshr_b32 s36, s5, 11
+; GFX6-NEXT:    s_lshr_b32 s36, s4, 13
 ; GFX6-NEXT:    v_mov_b32_e32 v10, s38
 ; GFX6-NEXT:    v_mov_b32_e32 v11, s39
-; GFX6-NEXT:    s_lshr_b32 s38, s5, 8
+; GFX6-NEXT:    s_lshr_b32 s38, s4, 10
 ; GFX6-NEXT:    s_bfe_i64 s[30:31], s[30:31], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[34:35], s[34:35], 0x10000
 ; GFX6-NEXT:    v_mov_b32_e32 v12, s30
 ; GFX6-NEXT:    v_mov_b32_e32 v13, s31
-; GFX6-NEXT:    s_lshr_b32 s30, s5, 9
+; GFX6-NEXT:    s_lshr_b32 s30, s4, 11
 ; GFX6-NEXT:    v_mov_b32_e32 v14, s34
 ; GFX6-NEXT:    v_mov_b32_e32 v15, s35
-; GFX6-NEXT:    s_lshr_b32 s34, s5, 6
+; GFX6-NEXT:    s_lshr_b32 s34, s4, 8
 ; GFX6-NEXT:    s_bfe_i64 s[28:29], s[28:29], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[26:27], s[26:27], 0x10000
 ; GFX6-NEXT:    v_mov_b32_e32 v5, s7
@@ -8397,190 +8397,191 @@ define amdgpu_kernel void @constant_sextload_v64i1_to_v64i64(ptr addrspace(1) %o
 ; GFX6-NEXT:    s_waitcnt expcnt(0)
 ; GFX6-NEXT:    v_mov_b32_e32 v2, s26
 ; GFX6-NEXT:    v_mov_b32_e32 v3, s27
-; GFX6-NEXT:    s_lshr_b32 s26, s5, 7
+; GFX6-NEXT:    s_lshr_b32 s26, s4, 9
 ; GFX6-NEXT:    v_mov_b32_e32 v4, s28
 ; GFX6-NEXT:    v_mov_b32_e32 v5, s29
-; GFX6-NEXT:    s_lshr_b32 s28, s5, 4
+; GFX6-NEXT:    s_lshr_b32 s28, s4, 6
 ; GFX6-NEXT:    s_bfe_i64 s[24:25], s[24:25], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[22:23], s[22:23], 0x10000
-; GFX6-NEXT:    buffer_store_dwordx4 v[8:11], off, s[0:3], 0 offset:480
+; GFX6-NEXT:    buffer_store_dwordx4 v[8:11], off, s[0:3], 0 offset:240
 ; GFX6-NEXT:    s_waitcnt expcnt(0)
 ; GFX6-NEXT:    v_mov_b32_e32 v8, s22
 ; GFX6-NEXT:    v_mov_b32_e32 v9, s23
-; GFX6-NEXT:    s_lshr_b32 s22, s5, 5
+; GFX6-NEXT:    s_lshr_b32 s22, s4, 7
 ; GFX6-NEXT:    v_mov_b32_e32 v10, s24
 ; GFX6-NEXT:    v_mov_b32_e32 v11, s25
-; GFX6-NEXT:    s_lshr_b32 s24, s5, 2
+; GFX6-NEXT:    s_lshr_b32 s24, s4, 4
 ; GFX6-NEXT:    s_bfe_i64 s[20:21], s[20:21], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[18:19], s[18:19], 0x10000
-; GFX6-NEXT:    buffer_store_dwordx4 v[12:15], off, s[0:3], 0 offset:464
+; GFX6-NEXT:    buffer_store_dwordx4 v[12:15], off, s[0:3], 0 offset:224
 ; GFX6-NEXT:    s_waitcnt expcnt(0)
 ; GFX6-NEXT:    v_mov_b32_e32 v12, s18
 ; GFX6-NEXT:    v_mov_b32_e32 v13, s19
-; GFX6-NEXT:    s_lshr_b32 s18, s5, 3
+; GFX6-NEXT:    s_lshr_b32 s18, s4, 5
 ; GFX6-NEXT:    v_mov_b32_e32 v14, s20
 ; GFX6-NEXT:    v_mov_b32_e32 v15, s21
-; GFX6-NEXT:    s_lshr_b32 s20, s5, 1
+; GFX6-NEXT:    s_lshr_b32 s20, s4, 2
 ; GFX6-NEXT:    s_bfe_i64 s[16:17], s[16:17], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[14:15], s[14:15], 0x10000
-; GFX6-NEXT:    buffer_store_dwordx4 v[2:5], off, s[0:3], 0 offset:448
+; GFX6-NEXT:    buffer_store_dwordx4 v[2:5], off, s[0:3], 0 offset:208
 ; GFX6-NEXT:    s_waitcnt expcnt(0)
 ; GFX6-NEXT:    v_mov_b32_e32 v2, s14
 ; GFX6-NEXT:    v_mov_b32_e32 v3, s15
-; G...
[truncated]

@llvmbot
Member

llvmbot commented Jul 29, 2025

@llvm/pr-subscribers-llvm-selectiondag


@llvmbot
Member

llvmbot commented Jul 29, 2025

@llvm/pr-subscribers-backend-x86

@@ -5725,10 +5721,6 @@ define void @freeze_v3i16(ptr addrspace(1) %ptra, ptr addrspace(1) %ptrb) {
 ; GFX7-SDAG-NEXT:    s_mov_b32 s5, s6
 ; GFX7-SDAG-NEXT:    buffer_load_dwordx2 v[0:1], v[0:1], s[4:7], 0 addr64
 ; GFX7-SDAG-NEXT:    s_waitcnt vmcnt(0)
-; GFX7-SDAG-NEXT:    v_lshrrev_b32_e32 v4, 16, v0
-; GFX7-SDAG-NEXT:    v_and_b32_e32 v0, 0xffff, v0
-; GFX7-SDAG-NEXT:    v_lshlrev_b32_e32 v4, 16, v4
-; GFX7-SDAG-NEXT:    v_or_b32_e32 v0, v0, v4
 ; GFX7-SDAG-NEXT:    buffer_store_short v1, v[2:3], s[4:7], 0 addr64 offset:4
 ; GFX7-SDAG-NEXT:    buffer_store_dword v0, v[2:3], s[4:7], 0 addr64
 ; GFX7-SDAG-NEXT:    s_waitcnt vmcnt(0)
@@ -6351,10 +6343,6 @@ define void @freeze_v3f16(ptr addrspace(1) %ptra, ptr addrspace(1) %ptrb) {
 ; GFX6-SDAG-NEXT:    s_mov_b32 s5, s6
 ; GFX6-SDAG-NEXT:    buffer_load_dwordx2 v[0:1], v[0:1], s[4:7], 0 addr64
 ; GFX6-SDAG-NEXT:    s_waitcnt vmcnt(0)
-; GFX6-SDAG-NEXT:    v_and_b32_e32 v4, 0xffff, v0
-; GFX6-SDAG-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
-; GFX6-SDAG-NEXT:    v_lshlrev_b32_e32 v0, 16, v0
-; GFX6-SDAG-NEXT:    v_or_b32_e32 v0, v4, v0
 ; GFX6-SDAG-NEXT:    buffer_store_short v1, v[2:3], s[4:7], 0 addr64 offset:4
 ; GFX6-SDAG-NEXT:    buffer_store_dword v0, v[2:3], s[4:7], 0 addr64
 ; GFX6-SDAG-NEXT:    s_waitcnt vmcnt(0) expcnt(0)
@@ -6384,10 +6372,6 @@ define void @freeze_v3f16(ptr addrspace(1) %ptra, ptr addrspace(1) %ptrb) {
 ; GFX7-SDAG-NEXT:    s_mov_b32 s5, s6
 ; GFX7-SDAG-NEXT:    buffer_load_dwordx2 v[0:1], v[0:1], s[4:7], 0 addr64
 ; GFX7-SDAG-NEXT:    s_waitcnt vmcnt(0)
-; GFX7-SDAG-NEXT:    v_and_b32_e32 v4, 0xffff, v0
-; GFX7-SDAG-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
-; GFX7-SDAG-NEXT:    v_lshlrev_b32_e32 v0, 16, v0
-; GFX7-SDAG-NEXT:    v_or_b32_e32 v0, v4, v0
 ; GFX7-SDAG-NEXT:    buffer_store_short v1, v[2:3], s[4:7], 0 addr64 offset:4
 ; GFX7-SDAG-NEXT:    buffer_store_dword v0, v[2:3], s[4:7], 0 addr64
 ; GFX7-SDAG-NEXT:    s_waitcnt vmcnt(0)
@@ -12347,14 +12331,9 @@ define void @freeze_v3i8(ptr addrspace(1) %ptra, ptr addrspace(1) %ptrb) {
 ; GFX6-SDAG-NEXT:    s_mov_b32 s5, s6
 ; GFX6-SDAG-NEXT:    buffer_load_dword v0, v[0:1], s[4:7], 0 addr64
 ; GFX6-SDAG-NEXT:    s_waitcnt vmcnt(0)
-; GFX6-SDAG-NEXT:    v_lshrrev_b32_e32 v4, 8, v0
-; GFX6-SDAG-NEXT:    v_and_b32_e32 v4, 0xff, v4
 ; GFX6-SDAG-NEXT:    v_lshrrev_b32_e32 v1, 16, v0
-; GFX6-SDAG-NEXT:    v_and_b32_e32 v0, 0xff, v0
-; GFX6-SDAG-NEXT:    v_lshlrev_b32_e32 v4, 8, v4
-; GFX6-SDAG-NEXT:    v_or_b32_e32 v0, v0, v4
-; GFX6-SDAG-NEXT:    buffer_store_byte v1, v[2:3], s[4:7], 0 addr64 offset:2
 ; GFX6-SDAG-NEXT:    buffer_store_short v0, v[2:3], s[4:7], 0 addr64
+; GFX6-SDAG-NEXT:    buffer_store_byte v1, v[2:3], s[4:7], 0 addr64 offset:2
 ; GFX6-SDAG-NEXT:    s_waitcnt vmcnt(0) expcnt(0)
 ; GFX6-SDAG-NEXT:    s_setpc_b64 s[30:31]
 ;
@@ -12392,14 +12371,9 @@ define void @freeze_v3i8(ptr addrspace(1) %ptra, ptr addrspace(1) %ptrb) {
 ; GFX7-SDAG-NEXT:    s_mov_b32 s5, s6
 ; GFX7-SDAG-NEXT:    buffer_load_dword v0, v[0:1], s[4:7], 0 addr64
 ; GFX7-SDAG-NEXT:    s_waitcnt vmcnt(0)
-; GFX7-SDAG-NEXT:    v_lshrrev_b32_e32 v4, 8, v0
-; GFX7-SDAG-NEXT:    v_and_b32_e32 v4, 0xff, v4
 ; GFX7-SDAG-NEXT:    v_lshrrev_b32_e32 v1, 16, v0
-; GFX7-SDAG-NEXT:    v_and_b32_e32 v0, 0xff, v0
-; GFX7-SDAG-NEXT:    v_lshlrev_b32_e32 v4, 8, v4
-; GFX7-SDAG-NEXT:    v_or_b32_e32 v0, v0, v4
-; GFX7-SDAG-NEXT:    buffer_store_byte v1, v[2:3], s[4:7], 0 addr64 offset:2
 ; GFX7-SDAG-NEXT:    buffer_store_short v0, v[2:3], s[4:7], 0 addr64
+; GFX7-SDAG-NEXT:    buffer_store_byte v1, v[2:3], s[4:7], 0 addr64 offset:2
 ; GFX7-SDAG-NEXT:    s_waitcnt vmcnt(0)
 ; GFX7-SDAG-NEXT:    s_setpc_b64 s[30:31]
 ;
@@ -12474,11 +12448,7 @@ define void @freeze_v3i8(ptr addrspace(1) %ptra, ptr addrspace(1) %ptrb) {
 ; GFX10-SDAG-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-SDAG-NEXT:    global_load_dword v0, v[0:1], off
 ; GFX10-SDAG-NEXT:    s_waitcnt vmcnt(0)
-; GFX10-SDAG-NEXT:    v_lshrrev_b16 v1, 8, v0
-; GFX10-SDAG-NEXT:    v_lshrrev_b32_e32 v4, 16, v0
-; GFX10-SDAG-NEXT:    v_lshlrev_b16 v1, 8, v1
-; GFX10-SDAG-NEXT:    v_or_b32_sdwa v0, v0, v1 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
-; GFX10-SDAG-NEXT:    global_store_byte v[2:3], v4, off offset:2
+; GFX10-SDAG-NEXT:    global_store_byte_d16_hi v[2:3], v0, off offset:2
 ; GFX10-SDAG-NEXT:    global_store_short v[2:3], v0, off
 ; GFX10-SDAG-NEXT:    s_setpc_b64 s[30:31]
 ;
@@ -12499,36 +12469,15 @@ define void @freeze_v3i8(ptr addrspace(1) %ptra, ptr addrspace(1) %ptrb) {
 ; GFX10-GISEL-NEXT:    global_store_byte_d16_hi v[2:3], v0, off offset:2
 ; GFX10-GISEL-NEXT:    s_setpc_b64 s[30:31]
 ;
-; GFX11-SDAG-TRUE16-LABEL: freeze_v3i8:
-; GFX11-SDAG-TRUE16:       ; %bb.0:
-; GFX11-SDAG-TRUE16-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX11-SDAG-TRUE16-NEXT:    global_load_b32 v1, v[0:1], off
-; GFX11-SDAG-TRUE16-NEXT:    v_mov_b16_e32 v4.h, 0
-; GFX11-SDAG-TRUE16-NEXT:    s_waitcnt vmcnt(0)
-; GFX11-SDAG-TRUE16-NEXT:    v_lshrrev_b16 v0.l, 8, v1.l
-; GFX11-SDAG-TRUE16-NEXT:    v_and_b16 v0.h, 0xff, v1.l
-; GFX11-SDAG-TRUE16-NEXT:    v_mov_b16_e32 v4.l, v1.h
-; GFX11-SDAG-TRUE16-NEXT:    v_lshlrev_b16 v0.l, 8, v0.l
-; GFX11-SDAG-TRUE16-NEXT:    v_or_b16 v0.l, v0.h, v0.l
-; GFX11-SDAG-TRUE16-NEXT:    s_clause 0x1
-; GFX11-SDAG-TRUE16-NEXT:    global_store_b8 v[2:3], v4, off offset:2
-; GFX11-SDAG-TRUE16-NEXT:    global_store_b16 v[2:3], v0, off
-; GFX11-SDAG-TRUE16-NEXT:    s_setpc_b64 s[30:31]
-;
-; GFX11-SDAG-FAKE16-LABEL: freeze_v3i8:
-; GFX11-SDAG-FAKE16:       ; %bb.0:
-; GFX11-SDAG-FAKE16-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX11-SDAG-FAKE16-NEXT:    global_load_b32 v0, v[0:1], off
-; GFX11-SDAG-FAKE16-NEXT:    s_waitcnt vmcnt(0)
-; GFX11-SDAG-FAKE16-NEXT:    v_lshrrev_b16 v1, 8, v0
-; GFX11-SDAG-FAKE16-NEXT:    v_and_b32_e32 v4, 0xff, v0
-; GFX11-SDAG-FAKE16-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
-; GFX11-SDAG-FAKE16-NEXT:    v_lshlrev_b16 v1, 8, v1
-; GFX11-SDAG-FAKE16-NEXT:    v_or_b32_e32 v1, v4, v1
-; GFX11-SDAG-FAKE16-NEXT:    s_clause 0x1
-; GFX11-SDAG-FAKE16-NEXT:    global_store_b8 v[2:3], v0, off offset:2
-; GFX11-SDAG-FAKE16-NEXT:    global_store_b16 v[2:3], v1, off
-; GFX11-SDAG-FAKE16-NEXT:    s_setpc_b64 s[30:31]
+; GFX11-SDAG-LABEL: freeze_v3i8:
+; GFX11-SDAG:       ; %bb.0:
+; GFX11-SDAG-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX11-SDAG-NEXT:    global_load_b32 v0, v[0:1], off
+; GFX11-SDAG-NEXT:    s_waitcnt vmcnt(0)
+; GFX11-SDAG-NEXT:    s_clause 0x1
+; GFX11-SDAG-NEXT:    global_store_d16_hi_b8 v[2:3], v0, off offset:2
+; GFX11-SDAG-NEXT:    global_store_b16 v[2:3], v0, off
+; GFX11-SDAG-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-GISEL-LABEL: freeze_v3i8:
 ; GFX11-GISEL:       ; %bb.0:
diff --git a/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll b/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll
index bfc01ef138721..d59f72ad7a1ac 100644
--- a/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll
+++ b/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll
@@ -8343,53 +8343,53 @@ define amdgpu_kernel void @constant_sextload_v64i1_to_v64i64(ptr addrspace(1) %o
 ; GFX6-NEXT:    s_mov_b32 s2, -1
 ; GFX6-NEXT:    s_waitcnt lgkmcnt(0)
 ; GFX6-NEXT:    s_lshr_b32 s42, s5, 30
-; GFX6-NEXT:    s_lshr_b32 s36, s5, 28
-; GFX6-NEXT:    s_lshr_b32 s38, s5, 29
-; GFX6-NEXT:    s_lshr_b32 s30, s5, 26
-; GFX6-NEXT:    s_lshr_b32 s34, s5, 27
-; GFX6-NEXT:    s_lshr_b32 s26, s5, 24
-; GFX6-NEXT:    s_lshr_b32 s28, s5, 25
-; GFX6-NEXT:    s_lshr_b32 s22, s5, 22
-; GFX6-NEXT:    s_lshr_b32 s24, s5, 23
-; GFX6-NEXT:    s_lshr_b32 s18, s5, 20
-; GFX6-NEXT:    s_lshr_b32 s20, s5, 21
-; GFX6-NEXT:    s_lshr_b32 s14, s5, 18
-; GFX6-NEXT:    s_lshr_b32 s16, s5, 19
-; GFX6-NEXT:    s_lshr_b32 s10, s5, 16
-; GFX6-NEXT:    s_lshr_b32 s12, s5, 17
-; GFX6-NEXT:    s_lshr_b32 s6, s5, 14
-; GFX6-NEXT:    s_lshr_b32 s8, s5, 15
-; GFX6-NEXT:    s_mov_b32 s40, s5
+; GFX6-NEXT:    s_lshr_b32 s36, s4, 30
+; GFX6-NEXT:    s_lshr_b32 s38, s4, 31
+; GFX6-NEXT:    s_lshr_b32 s30, s4, 28
+; GFX6-NEXT:    s_lshr_b32 s34, s4, 29
+; GFX6-NEXT:    s_lshr_b32 s26, s4, 26
+; GFX6-NEXT:    s_lshr_b32 s28, s4, 27
+; GFX6-NEXT:    s_lshr_b32 s22, s4, 24
+; GFX6-NEXT:    s_lshr_b32 s24, s4, 25
+; GFX6-NEXT:    s_lshr_b32 s18, s4, 22
+; GFX6-NEXT:    s_lshr_b32 s20, s4, 23
+; GFX6-NEXT:    s_lshr_b32 s14, s4, 20
+; GFX6-NEXT:    s_lshr_b32 s16, s4, 21
+; GFX6-NEXT:    s_lshr_b32 s10, s4, 18
+; GFX6-NEXT:    s_lshr_b32 s12, s4, 19
+; GFX6-NEXT:    s_lshr_b32 s6, s4, 16
+; GFX6-NEXT:    s_lshr_b32 s8, s4, 17
 ; GFX6-NEXT:    s_ashr_i32 s7, s5, 31
-; GFX6-NEXT:    s_bfe_i64 s[44:45], s[40:41], 0x10000
+; GFX6-NEXT:    s_bfe_i64 s[44:45], s[4:5], 0x10000
 ; GFX6-NEXT:    v_mov_b32_e32 v4, s7
-; GFX6-NEXT:    s_lshr_b32 s40, s5, 12
+; GFX6-NEXT:    s_lshr_b32 s40, s4, 14
 ; GFX6-NEXT:    v_mov_b32_e32 v0, s44
 ; GFX6-NEXT:    v_mov_b32_e32 v1, s45
-; GFX6-NEXT:    s_bfe_i64 s[44:45], s[4:5], 0x10000
+; GFX6-NEXT:    s_mov_b32 s44, s5
+; GFX6-NEXT:    s_bfe_i64 s[44:45], s[44:45], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[42:43], s[42:43], 0x10000
 ; GFX6-NEXT:    v_mov_b32_e32 v6, s44
 ; GFX6-NEXT:    v_mov_b32_e32 v7, s45
-; GFX6-NEXT:    s_lshr_b32 s44, s5, 13
+; GFX6-NEXT:    s_lshr_b32 s44, s4, 15
 ; GFX6-NEXT:    v_mov_b32_e32 v2, s42
 ; GFX6-NEXT:    v_mov_b32_e32 v3, s43
-; GFX6-NEXT:    s_lshr_b32 s42, s5, 10
+; GFX6-NEXT:    s_lshr_b32 s42, s4, 12
 ; GFX6-NEXT:    s_bfe_i64 s[36:37], s[36:37], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[38:39], s[38:39], 0x10000
 ; GFX6-NEXT:    v_mov_b32_e32 v8, s36
 ; GFX6-NEXT:    v_mov_b32_e32 v9, s37
-; GFX6-NEXT:    s_lshr_b32 s36, s5, 11
+; GFX6-NEXT:    s_lshr_b32 s36, s4, 13
 ; GFX6-NEXT:    v_mov_b32_e32 v10, s38
 ; GFX6-NEXT:    v_mov_b32_e32 v11, s39
-; GFX6-NEXT:    s_lshr_b32 s38, s5, 8
+; GFX6-NEXT:    s_lshr_b32 s38, s4, 10
 ; GFX6-NEXT:    s_bfe_i64 s[30:31], s[30:31], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[34:35], s[34:35], 0x10000
 ; GFX6-NEXT:    v_mov_b32_e32 v12, s30
 ; GFX6-NEXT:    v_mov_b32_e32 v13, s31
-; GFX6-NEXT:    s_lshr_b32 s30, s5, 9
+; GFX6-NEXT:    s_lshr_b32 s30, s4, 11
 ; GFX6-NEXT:    v_mov_b32_e32 v14, s34
 ; GFX6-NEXT:    v_mov_b32_e32 v15, s35
-; GFX6-NEXT:    s_lshr_b32 s34, s5, 6
+; GFX6-NEXT:    s_lshr_b32 s34, s4, 8
 ; GFX6-NEXT:    s_bfe_i64 s[28:29], s[28:29], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[26:27], s[26:27], 0x10000
 ; GFX6-NEXT:    v_mov_b32_e32 v5, s7
@@ -8397,190 +8397,191 @@ define amdgpu_kernel void @constant_sextload_v64i1_to_v64i64(ptr addrspace(1) %o
 ; GFX6-NEXT:    s_waitcnt expcnt(0)
 ; GFX6-NEXT:    v_mov_b32_e32 v2, s26
 ; GFX6-NEXT:    v_mov_b32_e32 v3, s27
-; GFX6-NEXT:    s_lshr_b32 s26, s5, 7
+; GFX6-NEXT:    s_lshr_b32 s26, s4, 9
 ; GFX6-NEXT:    v_mov_b32_e32 v4, s28
 ; GFX6-NEXT:    v_mov_b32_e32 v5, s29
-; GFX6-NEXT:    s_lshr_b32 s28, s5, 4
+; GFX6-NEXT:    s_lshr_b32 s28, s4, 6
 ; GFX6-NEXT:    s_bfe_i64 s[24:25], s[24:25], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[22:23], s[22:23], 0x10000
-; GFX6-NEXT:    buffer_store_dwordx4 v[8:11], off, s[0:3], 0 offset:480
+; GFX6-NEXT:    buffer_store_dwordx4 v[8:11], off, s[0:3], 0 offset:240
 ; GFX6-NEXT:    s_waitcnt expcnt(0)
 ; GFX6-NEXT:    v_mov_b32_e32 v8, s22
 ; GFX6-NEXT:    v_mov_b32_e32 v9, s23
-; GFX6-NEXT:    s_lshr_b32 s22, s5, 5
+; GFX6-NEXT:    s_lshr_b32 s22, s4, 7
 ; GFX6-NEXT:    v_mov_b32_e32 v10, s24
 ; GFX6-NEXT:    v_mov_b32_e32 v11, s25
-; GFX6-NEXT:    s_lshr_b32 s24, s5, 2
+; GFX6-NEXT:    s_lshr_b32 s24, s4, 4
 ; GFX6-NEXT:    s_bfe_i64 s[20:21], s[20:21], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[18:19], s[18:19], 0x10000
-; GFX6-NEXT:    buffer_store_dwordx4 v[12:15], off, s[0:3], 0 offset:464
+; GFX6-NEXT:    buffer_store_dwordx4 v[12:15], off, s[0:3], 0 offset:224
 ; GFX6-NEXT:    s_waitcnt expcnt(0)
 ; GFX6-NEXT:    v_mov_b32_e32 v12, s18
 ; GFX6-NEXT:    v_mov_b32_e32 v13, s19
-; GFX6-NEXT:    s_lshr_b32 s18, s5, 3
+; GFX6-NEXT:    s_lshr_b32 s18, s4, 5
 ; GFX6-NEXT:    v_mov_b32_e32 v14, s20
 ; GFX6-NEXT:    v_mov_b32_e32 v15, s21
-; GFX6-NEXT:    s_lshr_b32 s20, s5, 1
+; GFX6-NEXT:    s_lshr_b32 s20, s4, 2
 ; GFX6-NEXT:    s_bfe_i64 s[16:17], s[16:17], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[14:15], s[14:15], 0x10000
-; GFX6-NEXT:    buffer_store_dwordx4 v[2:5], off, s[0:3], 0 offset:448
+; GFX6-NEXT:    buffer_store_dwordx4 v[2:5], off, s[0:3], 0 offset:208
 ; GFX6-NEXT:    s_waitcnt expcnt(0)
 ; GFX6-NEXT:    v_mov_b32_e32 v2, s14
 ; GFX6-NEXT:    v_mov_b32_e32 v3, s15
-; G...
[truncated]

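For reference, a minimal hand-written LLVM IR sketch (not taken from the patch or its tests) of the pattern the new visitFREEZE canonicalization above targets: a value with both frozen and unfrozen users, which previously defeated hasOneUse() checks when pushing FREEZE through the DAG.

; Before the change %x has two distinct users in the DAG: the freeze and the mul.
; With the change, every user of %x is redirected through the single frozen
; node, so one-use-guarded folds on the frozen value can still fire.
define i32 @freeze_mixed_uses(i32 %a, i32 %b) {
  %x = add i32 %a, %b
  %f = freeze i32 %x      ; frozen use of %x
  %y = mul i32 %x, 3      ; unfrozen use of %x
  %r = xor i32 %f, %y
  ret i32 %r
}
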
@llvmbot
Member

llvmbot commented Jul 29, 2025

@llvm/pr-subscribers-backend-amdgpu


Contributor

@nikic nikic left a comment

This looks reasonable to me, but I only had a cursory look through the test diffs.

@RKSimon
Collaborator Author

RKSimon commented Jul 31, 2025

Any more comments?

Comment on lines +498 to +500
; GFX9-O0-NEXT: s_mov_b64 s[14:15], -1
; GFX9-O0-NEXT: s_mov_b64 s[4:5], s[8:9]
; GFX9-O0-NEXT: s_xor_b64 s[4:5], s[4:5], s[14:15]
Contributor

This is worse in a weird way, but I suppose it is -O0

@RKSimon
Collaborator Author

RKSimon commented Aug 4, 2025

ping - any further comments?

@RKSimon RKSimon merged commit d561259 into llvm:main Aug 5, 2025
9 checks passed
@RKSimon RKSimon deleted the dag-freeze-extrause branch August 5, 2025 08:24
RKSimon added a commit to RKSimon/llvm-project that referenced this pull request Aug 5, 2025
Now that llvm#146490 removed the assertion in visitFreeze that the node was still isGuaranteedNotToBeUndefOrPoison, we no longer need this reduced depth hack (which had to account for the difference in depth of freeze(op()) vs op(freeze())).

Helps with some of the minor regressions in llvm#150017
RKSimon added a commit that referenced this pull request Aug 5, 2025 (#152127)

Now that #146490 removed the assertion in visitFreeze that the node was still
isGuaranteedNotToBeUndefOrPoison, we no longer need this reduced depth hack
(which had to account for the difference in depth of freeze(op()) vs op(freeze())).

Helps with some of the minor regressions in #150017
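
As a rough, hand-written illustration (not taken from either commit) of the freeze(op()) vs op(freeze()) depth difference the removed hack compensated for:

; The combiner may rewrite freeze(add(%x, %y)) as add(freeze(%x), freeze(%y)),
; so an isGuaranteedNotToBeUndefOrPoison query that previously looked at the
; add's operands now starts one node deeper in the DAG.
define i32 @push_freeze(i32 %x, i32 %y) {
  %op = add i32 %x, %y
  %f = freeze i32 %op
  ret i32 %f
}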
Development

Successfully merging this pull request may close these issues.

[DAG] Add similar behaviour to InstCombinerImpl::freezeOtherUses
