Conversation

@mariusz-sikora-at-amd
Contributor

Folding may remove COPY from inside of the divergent loop.
@llvmbot
Member

llvmbot commented Apr 16, 2025

@llvm/pr-subscribers-backend-amdgpu

Author: Mariusz Sikora (mariusz-sikora-at-amd)

Changes

Folding may remove COPY from inside of the divergent loop.


Full diff: https://github.com/llvm/llvm-project/pull/136003.diff

5 Files Affected:

  • (modified) llvm/lib/Target/AMDGPU/SIFoldOperands.cpp (+2-1)
  • (added) llvm/test/CodeGen/AMDGPU/do-not-fold-copy.mir (+36)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.amdgcn.init.whole.wave-w32.ll (+3-2)
  • (modified) llvm/test/CodeGen/AMDGPU/mfma-loop.ll (+96-120)
  • (modified) llvm/test/CodeGen/AMDGPU/mul.ll (+3-3)
diff --git a/llvm/lib/Target/AMDGPU/SIFoldOperands.cpp b/llvm/lib/Target/AMDGPU/SIFoldOperands.cpp
index d6acf9e081b9f..542a992ccb96e 100644
--- a/llvm/lib/Target/AMDGPU/SIFoldOperands.cpp
+++ b/llvm/lib/Target/AMDGPU/SIFoldOperands.cpp
@@ -1092,7 +1092,8 @@ void SIFoldOperandsImpl::foldOperand(
   } else {
     if (UseMI->isCopy() && OpToFold.isReg() &&
         UseMI->getOperand(0).getReg().isVirtual() &&
-        !UseMI->getOperand(1).getSubReg()) {
+        !UseMI->getOperand(1).getSubReg() &&
+        !OpToFold.getParent()->hasRegisterImplicitUseOperand(AMDGPU::EXEC)) {
       LLVM_DEBUG(dbgs() << "Folding " << OpToFold << "\n into " << *UseMI);
       unsigned Size = TII->getOpSize(*UseMI, 1);
       Register UseReg = OpToFold.getReg();
diff --git a/llvm/test/CodeGen/AMDGPU/do-not-fold-copy.mir b/llvm/test/CodeGen/AMDGPU/do-not-fold-copy.mir
new file mode 100644
index 0000000000000..ef4b27d169224
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/do-not-fold-copy.mir
@@ -0,0 +1,36 @@
+# RUN: llc -mtriple=amdgcn -mcpu=gfx900 -run-pass=si-fold-operands -verify-machineinstrs -o - %s | FileCheck %s
+
+---
+liveins:
+name:            do_not_fold_copy_with_implicit_exec
+tracksRegLiveness: true
+body:             |
+  ; CHECK: bb.0:
+  ; CHECK: bb.1:
+  ; CHECK: %[[A:[0-9]*]]:sreg_32 = S_ADD_I32
+  ; CHECK: COPY %[[A]]
+  ; CHECKL SI_LOOP
+  ; CHECK: bb.2:
+
+  bb.0:
+    %0:sreg_64 = S_MOV_B64 0
+    %1:sreg_64 = S_MOV_B64 0
+    %2:sreg_32 = S_MOV_B32 0, implicit $exec
+    S_BRANCH %bb.1
+
+  bb.1:
+    %3:sreg_64 = PHI %1, %bb.0, %4, %bb.1
+    %5:sreg_32 = PHI %2, %bb.0, %6, %bb.1
+    %6:sreg_32 = S_ADD_I32 %5, 1, implicit-def dead $scc
+    %4:sreg_64 = SI_IF_BREAK %0, %3, implicit-def dead $scc
+    %7:vgpr_32 = COPY %6, implicit $exec
+    SI_LOOP %4, %bb.1, implicit-def dead $exec, implicit-def dead $scc, implicit $exec
+    S_BRANCH %bb.2
+
+  bb.2:
+    SI_END_CF %4, implicit-def dead $exec, implicit-def dead $scc, implicit $exec
+    %9:vgpr_32 = COPY %7
+    %10:sreg_64_xexec = IMPLICIT_DEF
+    %11:vgpr_32 = V_SET_INACTIVE_B32 0, %9, 0, 0, killed %10, implicit $exec
+    S_ENDPGM 0
+...
diff --git a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.init.whole.wave-w32.ll b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.init.whole.wave-w32.ll
index 1e2bf8256321d..c2f8c2c44316a 100644
--- a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.init.whole.wave-w32.ll
+++ b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.init.whole.wave-w32.ll
@@ -429,13 +429,14 @@ define amdgpu_cs_chain void @control_flow(<3 x i32> inreg %sgpr, ptr inreg %call
 ; DAGISEL12-NEXT:    v_cmp_ne_u32_e64 s9, 0, v0
 ; DAGISEL12-NEXT:    s_mov_b32 exec_lo, s8
 ; DAGISEL12-NEXT:    v_cmp_eq_u32_e32 vcc_lo, v13, v1
+; DAGISEL12-NEXT:    v_mov_b32_e32 v11, s9
 ; DAGISEL12-NEXT:    s_or_b32 s4, vcc_lo, s4
 ; DAGISEL12-NEXT:    s_wait_alu 0xfffe
 ; DAGISEL12-NEXT:    s_and_not1_b32 exec_lo, exec_lo, s4
 ; DAGISEL12-NEXT:    s_cbranch_execnz .LBB3_2
 ; DAGISEL12-NEXT:  ; %bb.3: ; %tail.loopexit
 ; DAGISEL12-NEXT:    s_or_b32 exec_lo, exec_lo, s4
-; DAGISEL12-NEXT:    v_dual_mov_b32 v11, s9 :: v_dual_add_nc_u32 v10, 42, v1
+; DAGISEL12-NEXT:    v_add_nc_u32_e32 v10, 42, v1
 ; DAGISEL12-NEXT:  .LBB3_4: ; %Flow1
 ; DAGISEL12-NEXT:    s_wait_alu 0xfffe
 ; DAGISEL12-NEXT:    s_or_b32 exec_lo, exec_lo, s3
@@ -526,13 +527,13 @@ define amdgpu_cs_chain void @control_flow(<3 x i32> inreg %sgpr, ptr inreg %call
 ; DAGISEL10-NEXT:    v_cmp_ne_u32_e64 s9, 0, v0
 ; DAGISEL10-NEXT:    s_mov_b32 exec_lo, s8
 ; DAGISEL10-NEXT:    v_cmp_eq_u32_e32 vcc_lo, v13, v1
+; DAGISEL10-NEXT:    v_mov_b32_e32 v11, s9
 ; DAGISEL10-NEXT:    s_or_b32 s4, vcc_lo, s4
 ; DAGISEL10-NEXT:    s_andn2_b32 exec_lo, exec_lo, s4
 ; DAGISEL10-NEXT:    s_cbranch_execnz .LBB3_2
 ; DAGISEL10-NEXT:  ; %bb.3: ; %tail.loopexit
 ; DAGISEL10-NEXT:    s_or_b32 exec_lo, exec_lo, s4
 ; DAGISEL10-NEXT:    v_add_nc_u32_e32 v10, 42, v1
-; DAGISEL10-NEXT:    v_mov_b32_e32 v11, s9
 ; DAGISEL10-NEXT:  .LBB3_4: ; %Flow1
 ; DAGISEL10-NEXT:    s_or_b32 exec_lo, exec_lo, s3
 ; DAGISEL10-NEXT:    s_mov_b32 s3, exec_lo
diff --git a/llvm/test/CodeGen/AMDGPU/mfma-loop.ll b/llvm/test/CodeGen/AMDGPU/mfma-loop.ll
index d0042bb692402..a0d587ac68ff1 100644
--- a/llvm/test/CodeGen/AMDGPU/mfma-loop.ll
+++ b/llvm/test/CodeGen/AMDGPU/mfma-loop.ll
@@ -1425,41 +1425,42 @@ define amdgpu_kernel void @test_mfma_loop_sgpr_init(ptr addrspace(1) %arg, float
 ; GFX90A:       ; %bb.0: ; %entry
 ; GFX90A-NEXT:    s_load_dword s1, s[4:5], 0x2c
 ; GFX90A-NEXT:    s_mov_b32 s0, 16
-; GFX90A-NEXT:    v_mov_b32_e32 v0, 2.0
 ; GFX90A-NEXT:    v_mov_b32_e32 v1, 1.0
 ; GFX90A-NEXT:    s_waitcnt lgkmcnt(0)
-; GFX90A-NEXT:    v_accvgpr_write_b32 a31, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a30, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a29, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a28, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a27, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a26, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a25, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a24, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a23, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a22, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a21, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a20, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a19, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a18, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a17, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a16, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a15, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a14, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a13, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a12, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a11, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a10, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a9, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a8, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a7, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a6, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a5, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a4, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a3, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a2, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a1, s1
-; GFX90A-NEXT:    v_accvgpr_write_b32 a0, s1
+; GFX90A-NEXT:    v_mov_b32_e32 v0, s1
+; GFX90A-NEXT:    v_accvgpr_write_b32 a31, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a30, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a29, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a28, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a27, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a26, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a25, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a24, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a23, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a22, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a21, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a20, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a19, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a18, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a17, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a16, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a15, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a14, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a13, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a12, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a11, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a10, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a9, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a8, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a7, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a6, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a5, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a4, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a3, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a2, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a1, v0
+; GFX90A-NEXT:    v_accvgpr_write_b32 a0, v0
+; GFX90A-NEXT:    v_mov_b32_e32 v0, 2.0
 ; GFX90A-NEXT:  .LBB5_1: ; %for.cond.preheader
 ; GFX90A-NEXT:    ; =>This Inner Loop Header: Depth=1
 ; GFX90A-NEXT:    s_nop 1
@@ -1487,41 +1488,42 @@ define amdgpu_kernel void @test_mfma_loop_sgpr_init(ptr addrspace(1) %arg, float
 ; GFX942:       ; %bb.0: ; %entry
 ; GFX942-NEXT:    s_load_dword s1, s[4:5], 0x2c
 ; GFX942-NEXT:    s_mov_b32 s0, 16
-; GFX942-NEXT:    v_mov_b32_e32 v0, 2.0
 ; GFX942-NEXT:    v_mov_b32_e32 v1, 1.0
 ; GFX942-NEXT:    s_waitcnt lgkmcnt(0)
-; GFX942-NEXT:    v_accvgpr_write_b32 a31, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a30, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a29, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a28, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a27, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a26, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a25, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a24, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a23, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a22, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a21, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a20, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a19, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a18, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a17, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a16, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a15, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a14, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a13, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a12, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a11, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a10, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a9, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a8, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a7, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a6, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a5, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a4, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a3, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a2, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a1, s1
-; GFX942-NEXT:    v_accvgpr_write_b32 a0, s1
+; GFX942-NEXT:    v_mov_b32_e32 v0, s1
+; GFX942-NEXT:    v_accvgpr_write_b32 a31, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a30, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a29, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a28, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a27, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a26, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a25, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a24, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a23, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a22, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a21, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a20, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a19, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a18, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a17, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a16, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a15, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a14, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a13, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a12, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a11, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a10, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a9, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a8, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a7, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a6, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a5, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a4, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a3, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a2, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a1, v0
+; GFX942-NEXT:    v_accvgpr_write_b32 a0, v0
+; GFX942-NEXT:    v_mov_b32_e32 v0, 2.0
 ; GFX942-NEXT:  .LBB5_1: ; %for.cond.preheader
 ; GFX942-NEXT:    ; =>This Inner Loop Header: Depth=1
 ; GFX942-NEXT:    s_nop 1
@@ -1696,6 +1698,8 @@ define amdgpu_kernel void @test_mfma_loop_mixed_init(ptr addrspace(1) %arg, floa
 ; GFX90A-NEXT:    v_accvgpr_write_b32 a0, v0
 ; GFX90A-NEXT:    v_accvgpr_write_b32 a31, 0
 ; GFX90A-NEXT:    v_accvgpr_write_b32 a30, 0
+; GFX90A-NEXT:    s_waitcnt lgkmcnt(0)
+; GFX90A-NEXT:    v_mov_b32_e32 v0, s1
 ; GFX90A-NEXT:    v_accvgpr_write_b32 a29, 0
 ; GFX90A-NEXT:    v_accvgpr_write_b32 a28, 0
 ; GFX90A-NEXT:    v_accvgpr_write_b32 a27, 0
@@ -1725,8 +1729,7 @@ define amdgpu_kernel void @test_mfma_loop_mixed_init(ptr addrspace(1) %arg, floa
 ; GFX90A-NEXT:    v_accvgpr_write_b32 a3, 0
 ; GFX90A-NEXT:    v_accvgpr_write_b32 a2, 0
 ; GFX90A-NEXT:    s_mov_b32 s0, 16
-; GFX90A-NEXT:    s_waitcnt lgkmcnt(0)
-; GFX90A-NEXT:    v_accvgpr_write_b32 a1, s1
+; GFX90A-NEXT:    v_accvgpr_write_b32 a1, v0
 ; GFX90A-NEXT:    v_mov_b32_e32 v0, 2.0
 ; GFX90A-NEXT:    v_mov_b32_e32 v1, 1.0
 ; GFX90A-NEXT:  .LBB6_1: ; %for.cond.preheader
@@ -1759,6 +1762,8 @@ define amdgpu_kernel void @test_mfma_loop_mixed_init(ptr addrspace(1) %arg, floa
 ; GFX942-NEXT:    v_accvgpr_write_b32 a0, v0
 ; GFX942-NEXT:    v_accvgpr_write_b32 a31, 0
 ; GFX942-NEXT:    v_accvgpr_write_b32 a30, 0
+; GFX942-NEXT:    s_waitcnt lgkmcnt(0)
+; GFX942-NEXT:    v_mov_b32_e32 v0, s1
 ; GFX942-NEXT:    v_accvgpr_write_b32 a29, 0
 ; GFX942-NEXT:    v_accvgpr_write_b32 a28, 0
 ; GFX942-NEXT:    v_accvgpr_write_b32 a27, 0
@@ -1788,8 +1793,7 @@ define amdgpu_kernel void @test_mfma_loop_mixed_init(ptr addrspace(1) %arg, floa
 ; GFX942-NEXT:    v_accvgpr_write_b32 a3, 0
 ; GFX942-NEXT:    v_accvgpr_write_b32 a2, 0
 ; GFX942-NEXT:    s_mov_b32 s0, 16
-; GFX942-NEXT:    s_waitcnt lgkmcnt(0)
-; GFX942-NEXT:    v_accvgpr_write_b32 a1, s1
+; GFX942-NEXT:    v_accvgpr_write_b32 a1, v0
 ; GFX942-NEXT:    v_mov_b32_e32 v0, 2.0
 ; GFX942-NEXT:    v_mov_b32_e32 v1, 1.0
 ; GFX942-NEXT:  .LBB6_1: ; %for.cond.preheader
@@ -2050,66 +2054,38 @@ define amdgpu_kernel void @test_mfma_loop_agpr_init(ptr addrspace(1) %arg) #0 {
 ; GFX908-NEXT:    s_nop 7
 ; GFX908-NEXT:    s_nop 1
 ; GFX908-NEXT:    v_accvgpr_read_b32 v2, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v3, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v33, a0
+; GFX908-NEXT:    s_nop 1
+; GFX908-NEXT:    v_accvgpr_write_b32 a0, v2
 ; GFX908-NEXT:    v_accvgpr_write_b32 a1, v2
-; GFX908-NEXT:    v_accvgpr_read_b32 v2, a0
-; GFX908-NEXT:    v_accvgpr_write_b32 a2, v3
-; GFX908-NEXT:    v_accvgpr_write_b32 a3, v33
+; GFX908-NEXT:    v_accvgpr_write_b32 a2, v2
+; GFX908-NEXT:    v_accvgpr_write_b32 a3, v2
 ; GFX908-NEXT:    v_accvgpr_write_b32 a4, v2
-; GFX908-NEXT:    v_accvgpr_read_b32 v3, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v33, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v2, a0
-; GFX908-NEXT:    v_accvgpr_write_b32 a5, v3
-; GFX908-NEXT:    v_accvgpr_write_b32 a6, v33
+; GFX908-NEXT:    v_accvgpr_write_b32 a5, v2
+; GFX908-NEXT:    v_accvgpr_write_b32 a6, v2
 ; GFX908-NEXT:    v_accvgpr_write_b32 a7, v2
-; GFX908-NEXT:    v_accvgpr_read_b32 v3, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v33, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v2, a0
-; GFX908-NEXT:    v_accvgpr_write_b32 a8, v3
-; GFX908-NEXT:    v_accvgpr_write_b32 a9, v33
+; GFX908-NEXT:    v_accvgpr_write_b32 a8, v2
+; GFX908-NEXT:    v_accvgpr_write_b32 a9, v2
 ; GFX908-NEXT:    v_accvgpr_write_b32 a10, v2
-; GFX908-NEXT:    v_accvgpr_read_b32 v3, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v33, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v2, a0
-; GFX908-NEXT:    v_accvgpr_write_b32 a11, v3
-; GFX908-NEXT:    v_accvgpr_write_b32 a12, v33
+; GFX908-NEXT:    v_accvgpr_write_b32 a11, v2
+; GFX908-NEXT:    v_accvgpr_write_b32 a12, v2
 ; GFX908-NEXT:    v_accvgpr_write_b32 a13, v2
-; GFX908-NEXT:    v_accvgpr_read_b32 v3, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v33, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v2, a0
-; GFX908-NEXT:    v_accvgpr_write_b32 a14, v3
-; GFX908-NEXT:    v_accvgpr_write_b32 a15, v33
+; GFX908-NEXT:    v_accvgpr_write_b32 a14, v2
+; GFX908-NEXT:    v_accvgpr_write_b32 a15, v2
 ; GFX908-NEXT:    v_accvgpr_write_b32 a16, v2
-; GFX908-NEXT:    v_accvgpr_read_b32 v3, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v33, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v2, a0
-; GFX908-NEXT:    v_accvgpr_write_b32 a17, v3
-; GFX908-NEXT:    v_accvgpr_write_b32 a18, v33
+; GFX908-NEXT:    v_accvgpr_write_b32 a17, v2
+; GFX908-NEXT:    v_accvgpr_write_b32 a18, v2
 ; GFX908-NEXT:    v_accvgpr_write_b32 a19, v2
-; GFX908-NEXT:    v_accvgpr_read_b32 v3, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v33, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v2, a0
-; GFX908-NEXT:    v_accvgpr_write_b32 a20, v3
-; GFX908-NEXT:    v_accvgpr_write_b32 a21, v33
+; GFX908-NEXT:    v_accvgpr_write_b32 a20, v2
+; GFX908-NEXT:    v_accvgpr_write_b32 a21, v2
 ; GFX908-NEXT:    v_accvgpr_write_b32 a22, v2
-; GFX908-NEXT:    v_accvgpr_read_b32 v3, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v33, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v2, a0
-; GFX908-NEXT:    v_accvgpr_write_b32 a23, v3
-; GFX908-NEXT:    v_accvgpr_write_b32 a24, v33
+; GFX908-NEXT:    v_accvgpr_write_b32 a23, v2
+; GFX908-NEXT:    v_accvgpr_write_b32 a24, v2
 ; GFX908-NEXT:    v_accvgpr_write_b32 a25, v2
-; GFX908-NEXT:    v_accvgpr_read_b32 v3, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v33, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v2, a0
-; GFX908-NEXT:    v_accvgpr_write_b32 a26, v3
-; GFX908-NEXT:    v_accvgpr_write_b32 a27, v33
+; GFX908-NEXT:    v_accvgpr_write_b32 a26, v2
+; GFX908-NEXT:    v_accvgpr_write_b32 a27, v2
 ; GFX908-NEXT:    v_accvgpr_write_b32 a28, v2
-; GFX908-NEXT:    v_accvgpr_read_b32 v3, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v33, a0
-; GFX908-NEXT:    v_accvgpr_read_b32 v2, a0
-; GFX908-NEXT:    v_accvgpr_write_b32 a29, v3
-; GFX908-NEXT:    v_accvgpr_write_b32 a30, v33
+; GFX908-NEXT:    v_accvgpr_write_b32 a29, v2
+; GFX908-NEXT:    v_accvgpr_write_b32 a30, v2
 ; GFX908-NEXT:    v_accvgpr_write_b32 a31, v2
 ; GFX908-NEXT:  .LBB8_1: ; %for.cond.preheader
 ; GFX908-NEXT:    ; =>This Inner Loop Header: Depth=1
diff --git a/llvm/test/CodeGen/AMDGPU/mul.ll b/llvm/test/CodeGen/AMDGPU/mul.ll
index 896f48a9215b9..0f47a31f52dcb 100644
--- a/llvm/test/CodeGen/AMDGPU/mul.ll
+++ b/llvm/test/CodeGen/AMDGPU/mul.ll
@@ -2619,13 +2619,13 @@ define amdgpu_kernel void @s_mul_i128(ptr addrspace(1) %out, [8 x i32], i128 %a,
 ; SI-NEXT:    v_add_i32_e32 v0, vcc, s4, v0
 ; SI-NEXT:    v_add_i32_e32 v0, vcc, s5, v0
 ; SI-NEXT:    s_mul_i32 s5, s14, s9
-; SI-NEXT:    s_mul_i32 s4, s12, s10
 ; SI-NEXT:    v_add_i32_e32 v1, vcc, s5, v1
 ; SI-NEXT:    s_mul_i32 s5, s15, s8
 ; SI-NEXT:    v_add_i32_e32 v1, vcc, s5, v1
 ; SI-NEXT:    s_mul_i32 s5, s14, s8
-; SI-NEXT:    v_mov_b32_e32 v2, s4
-; SI-NEXT:    v_add_i32_e32 v2, vcc, s5, v2
+; SI-NEXT:    s_mul_i32 s4, s12, s10
+; SI-NEXT:    v_mov_b32_e32 v2, s5
+; SI-NEXT:    v_add_i32_e32 v2, vcc, s4, v2
 ; SI-NEXT:    v_addc_u32_e32 v0, vcc, v1, v0, vcc
 ; SI-NEXT:    v_mov_b32_e32 v1, s12
 ; SI-NEXT:    v_mul_hi_u32 v5, s8, v1

@mariusz-sikora-at-amd changed the title from "[AMDGPU] Do not fold COPY with implicit use of exec" to "[AMDGPU] Do not fold COPY with implicit operands" on Apr 17, 2025
Contributor

@perlfu perlfu left a comment


Narrowing the operand fold is obviously safe.
The test changes suggest minimal fallout, but I wonder how that scales to real codebases?
That said, I don't immediately see a better solution to this problem right now.

; CHECK: bb.1:
; CHECK: %[[A:[0-9]*]]:sreg_32 = S_ADD_I32
; CHECK: COPY %[[A]]
; CHECKL SI_LOOP
Contributor


Presumably this should be CHECK:?
Any reason not to auto-generate test checks?

Contributor Author


Thanks, this is a fragment of real code from more than one game.

Any reason not to auto-generate test checks?

I can auto-generate. I just wanted to have a minimal test.

Contributor


I think it is preferable to autogenerate this, so we can see if/when the ISA changes in the future.
(We may want to re-enable this fold in the future with some kind of EXEC analysis.)

Contributor Author


I auto-generated this test in this commit: c9a9f10

Contributor

@perlfu perlfu left a comment


LGTM - but leave it a little while in case other reviewers want to respond.

@mariusz-sikora-at-amd mariusz-sikora-at-amd merged commit 1a48e1d into llvm:main Apr 22, 2025
11 checks passed
IanWood1 pushed a commit to IanWood1/llvm-project that referenced this pull request May 6, 2025
Folding may remove COPY from inside of the divergent loop.
