Conversation

broxigarchen (Contributor) commented Oct 7, 2025

S_PACK_XX_B32_B16 requires special lowering in true16 mode when it is moved to the VALU in the fix-sgpr-copies pass.

Added test cases in fix-sgpr-copies-f16-true16.mir
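For context, the new true16 path lowers the pack to a REG_SEQUENCE of 16-bit halves instead of the shift/or sequence used otherwise. A rough MIR sketch for the S_PACK_LL case (virtual register numbers illustrative, not taken from the tests):

```
; before: both sources are 32-bit SGPRs
%3:sreg_32 = S_PACK_LL_B32_B16 %0:sreg_32, %1:sreg_32, implicit-def dead $scc

; after moveToVALU in true16 mode: sources are moved to VGPRs if needed,
; then the two low halves are packed with a REG_SEQUENCE
%4:vgpr_32 = V_MOV_B32_e32 %0, implicit $exec
%5:vgpr_32 = V_MOV_B32_e32 %1, implicit $exec
%6:vgpr_32 = REG_SEQUENCE %4.lo16, %subreg.lo16, %5.lo16, %subreg.hi16
```

The LH/HL/HH variants differ only in which 16-bit subregister (lo16 or hi16) of each source feeds the REG_SEQUENCE.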

@broxigarchen broxigarchen force-pushed the main-fix-true16-pack-s32 branch from 0b22265 to cedc90b Compare October 7, 2025 23:12
@broxigarchen broxigarchen changed the title tmp [AMDGPU][True16][CodeGen] S_PACK_XX_B32_B16 lowering for true16 mode Oct 7, 2025
@broxigarchen broxigarchen force-pushed the main-fix-true16-pack-s32 branch from cedc90b to d8102a3 Compare October 7, 2025 23:16
@broxigarchen broxigarchen marked this pull request as ready for review October 8, 2025 14:26
@broxigarchen broxigarchen requested review from Sisyph, mbrkusanin and rampitec and removed request for Sisyph and mbrkusanin October 8, 2025 14:26
llvmbot (Member) commented Oct 8, 2025

@llvm/pr-subscribers-backend-amdgpu

Author: Brox Chen (broxigarchen)

Changes

S_PACK_XX_B32_B16 requires special lowering in true16 mode when it is moved to the VALU in the fix-sgpr-copies pass.


Patch is 4.77 MiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/162389.diff

45 Files Affected:

  • (modified) llvm/lib/Target/AMDGPU/SIInstrInfo.cpp (+57)
  • (modified) llvm/test/CodeGen/AMDGPU/add.v2i16.ll (+4-6)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.1024bit.ll (+14415-7927)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.128bit.ll (+1536-820)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.256bit.ll (+3865-2029)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.320bit.ll (+830-412)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.32bit.ll (+509-259)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.48bit.ll (+231-131)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.512bit.ll (+4843-2972)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.576bit.ll (+1644-840)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.640bit.ll (+3252-1342)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.64bit.ll (+878-462)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.704bit.ll (+3438-1524)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.768bit.ll (+3628-1708)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.832bit.ll (+3812-1876)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.896bit.ll (+3960-2054)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.960bit.ll (+4116-2222)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.96bit.ll (+919-492)
  • (modified) llvm/test/CodeGen/AMDGPU/build-vector-packed-partial-undef.ll (+8-20)
  • (modified) llvm/test/CodeGen/AMDGPU/divergence-driven-buildvector.ll (+2-3)
  • (modified) llvm/test/CodeGen/AMDGPU/fix-sgpr-copies-f16-true16.mir (+60)
  • (modified) llvm/test/CodeGen/AMDGPU/fneg-fabs.bf16.ll (+224-111)
  • (modified) llvm/test/CodeGen/AMDGPU/fneg.bf16.ll (+82-41)
  • (modified) llvm/test/CodeGen/AMDGPU/fptosi.f16.ll (+2-5)
  • (modified) llvm/test/CodeGen/AMDGPU/fptoui.f16.ll (+2-5)
  • (modified) llvm/test/CodeGen/AMDGPU/frem.ll (+19-46)
  • (modified) llvm/test/CodeGen/AMDGPU/insert_vector_elt.v2i16.ll (+3-5)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.amdgcn.raw.atomic.buffer.load.ll (+5-9)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.amdgcn.raw.ptr.atomic.buffer.load.ll (+5-9)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.amdgcn.struct.atomic.buffer.load.ll (+4-8)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.amdgcn.struct.ptr.atomic.buffer.load.ll (+4-8)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.maximum.f16.ll (+6-10)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.minimum.f16.ll (+6-10)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.sqrt.bf16.ll (+3-7)
  • (modified) llvm/test/CodeGen/AMDGPU/load-constant-i8.ll (+6-6)
  • (modified) llvm/test/CodeGen/AMDGPU/select.f16.ll (+27-45)
  • (modified) llvm/test/CodeGen/AMDGPU/strict_fsub.f16.ll (+2-4)
  • (modified) llvm/test/CodeGen/AMDGPU/sub.v2i16.ll (+4-6)
  • (modified) llvm/test/CodeGen/AMDGPU/v_sat_pk_u8_i16.ll (+8-14)
  • (modified) llvm/test/CodeGen/AMDGPU/vector-reduce-and.ll (+8-6)
  • (modified) llvm/test/CodeGen/AMDGPU/vector-reduce-mul.ll (+44-36)
  • (modified) llvm/test/CodeGen/AMDGPU/vector-reduce-or.ll (+10-8)
  • (modified) llvm/test/CodeGen/AMDGPU/vector-reduce-xor.ll (+10-8)
  • (modified) llvm/test/CodeGen/AMDGPU/vector_rebroadcast.ll (+9-20)
  • (modified) llvm/test/CodeGen/AMDGPU/vector_shuffle.packed.ll (+7-13)
diff --git a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
index 46757cf5fe90c..2d95dd135aacf 100644
--- a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
@@ -9115,6 +9115,63 @@ void SIInstrInfo::movePackToVALU(SIInstrWorklist &Worklist,
   MachineOperand &Src1 = Inst.getOperand(2);
   const DebugLoc &DL = Inst.getDebugLoc();
 
+  if (ST.useRealTrue16Insts()) {
+    Register SrcReg0 = Src0.getReg();
+    Register SrcReg1 = Src1.getReg();
+
+    if (!RI.isVGPR(MRI, SrcReg0)) {
+      SrcReg0 = MRI.createVirtualRegister(&AMDGPU::VGPR_32RegClass);
+      BuildMI(*MBB, Inst, DL, get(AMDGPU::V_MOV_B32_e32), SrcReg0).add(Src0);
+    }
+    if (!RI.isVGPR(MRI, SrcReg1)) {
+      SrcReg1 = MRI.createVirtualRegister(&AMDGPU::VGPR_32RegClass);
+      BuildMI(*MBB, Inst, DL, get(AMDGPU::V_MOV_B32_e32), SrcReg1).add(Src1);
+    }
+    bool isSrc0Reg16 = MRI.constrainRegClass(SrcReg0, &AMDGPU::VGPR_16RegClass);
+    bool isSrc1Reg16 = MRI.constrainRegClass(SrcReg1, &AMDGPU::VGPR_16RegClass);
+
+    auto NewMI = BuildMI(*MBB, Inst, DL, get(AMDGPU::REG_SEQUENCE), ResultReg);
+    switch (Inst.getOpcode()) {
+    case AMDGPU::S_PACK_LL_B32_B16: {
+      NewMI
+          .addReg(SrcReg0, 0,
+                  isSrc0Reg16 ? AMDGPU::NoSubRegister : AMDGPU::lo16)
+          .addImm(AMDGPU::lo16)
+          .addReg(SrcReg1, 0,
+                  isSrc1Reg16 ? AMDGPU::NoSubRegister : AMDGPU::lo16)
+          .addImm(AMDGPU::hi16);
+    } break;
+    case AMDGPU::S_PACK_LH_B32_B16: {
+      NewMI
+          .addReg(SrcReg0, 0,
+                  isSrc0Reg16 ? AMDGPU::NoSubRegister : AMDGPU::lo16)
+          .addImm(AMDGPU::lo16)
+          .addReg(SrcReg1, 0, AMDGPU::hi16)
+          .addImm(AMDGPU::hi16);
+    } break;
+    case AMDGPU::S_PACK_HL_B32_B16: {
+      NewMI.addReg(SrcReg0, 0, AMDGPU::hi16)
+          .addImm(AMDGPU::lo16)
+          .addReg(SrcReg1, 0,
+                  isSrc1Reg16 ? AMDGPU::NoSubRegister : AMDGPU::lo16)
+          .addImm(AMDGPU::hi16);
+    } break;
+    case AMDGPU::S_PACK_HH_B32_B16: {
+      NewMI.addReg(SrcReg0, 0, AMDGPU::hi16)
+          .addImm(AMDGPU::lo16)
+          .addReg(SrcReg1, 0, AMDGPU::hi16)
+          .addImm(AMDGPU::hi16);
+    } break;
+    default:
+      llvm_unreachable("unhandled s_pack_* instruction");
+    }
+
+    MachineOperand &Dest = Inst.getOperand(0);
+    MRI.replaceRegWith(Dest.getReg(), ResultReg);
+    addUsersToMoveToVALUWorklist(ResultReg, MRI, Worklist);
+    return;
+  }
+
   switch (Inst.getOpcode()) {
   case AMDGPU::S_PACK_LL_B32_B16: {
     Register ImmReg = MRI.createVirtualRegister(&AMDGPU::VGPR_32RegClass);
diff --git a/llvm/test/CodeGen/AMDGPU/add.v2i16.ll b/llvm/test/CodeGen/AMDGPU/add.v2i16.ll
index d25bfbba0b372..12309f356c6b3 100644
--- a/llvm/test/CodeGen/AMDGPU/add.v2i16.ll
+++ b/llvm/test/CodeGen/AMDGPU/add.v2i16.ll
@@ -780,7 +780,7 @@ define amdgpu_kernel void @v_test_add_v2i16_zext_to_v2i64(ptr addrspace(1) %out,
 ; GFX11-TRUE16-NEXT:    s_load_b128 s[0:3], s[4:5], 0x24
 ; GFX11-TRUE16-NEXT:    s_load_b64 s[4:5], s[4:5], 0x34
 ; GFX11-TRUE16-NEXT:    v_and_b32_e32 v0, 0x3ff, v0
-; GFX11-TRUE16-NEXT:    v_mov_b16_e32 v2.l, 0
+; GFX11-TRUE16-NEXT:    v_mov_b16_e32 v2.h, 0
 ; GFX11-TRUE16-NEXT:    s_delay_alu instid0(VALU_DEP_2)
 ; GFX11-TRUE16-NEXT:    v_lshlrev_b32_e32 v0, 2, v0
 ; GFX11-TRUE16-NEXT:    s_waitcnt lgkmcnt(0)
@@ -790,11 +790,9 @@ define amdgpu_kernel void @v_test_add_v2i16_zext_to_v2i64(ptr addrspace(1) %out,
 ; GFX11-TRUE16-NEXT:    s_waitcnt vmcnt(0)
 ; GFX11-TRUE16-NEXT:    v_pk_add_u16 v0, v1, v0
 ; GFX11-TRUE16-NEXT:    v_mov_b32_e32 v1, 0
-; GFX11-TRUE16-NEXT:    s_delay_alu instid0(VALU_DEP_2) | instskip(SKIP_1) | instid1(VALU_DEP_2)
-; GFX11-TRUE16-NEXT:    v_lshrrev_b32_e32 v3, 16, v0
-; GFX11-TRUE16-NEXT:    v_mov_b16_e32 v0.h, v2.l
-; GFX11-TRUE16-NEXT:    v_lshl_or_b32 v2, v2, 16, v3
-; GFX11-TRUE16-NEXT:    s_delay_alu instid0(VALU_DEP_4)
+; GFX11-TRUE16-NEXT:    s_delay_alu instid0(VALU_DEP_2) | instskip(SKIP_1) | instid1(VALU_DEP_3)
+; GFX11-TRUE16-NEXT:    v_mov_b16_e32 v2.l, v0.h
+; GFX11-TRUE16-NEXT:    v_mov_b16_e32 v0.h, v2.h
 ; GFX11-TRUE16-NEXT:    v_mov_b32_e32 v3, v1
 ; GFX11-TRUE16-NEXT:    global_store_b128 v1, v[0:3], s[0:1]
 ; GFX11-TRUE16-NEXT:    s_endpgm
diff --git a/llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.1024bit.ll b/llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.1024bit.ll
index df9c97fa23722..96cd8bd11c116 100644
--- a/llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.1024bit.ll
+++ b/llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.1024bit.ll
@@ -29381,870 +29381,1844 @@ define inreg <32 x i32> @bitcast_v64bf16_to_v32i32_scalar(<64 x bfloat> inreg %a
 ; GFX9-NEXT:  .LBB19_4:
 ; GFX9-NEXT:    s_branch .LBB19_2
 ;
-; GFX11-LABEL: bitcast_v64bf16_to_v32i32_scalar:
-; GFX11:       ; %bb.0:
-; GFX11-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX11-NEXT:    v_cmp_ne_u32_e32 vcc_lo, 0, v14
-; GFX11-NEXT:    s_clause 0x1f
-; GFX11-NEXT:    scratch_store_b32 off, v40, s32 offset:288
-; GFX11-NEXT:    scratch_store_b32 off, v41, s32 offset:284
-; GFX11-NEXT:    scratch_store_b32 off, v42, s32 offset:280
-; GFX11-NEXT:    scratch_store_b32 off, v43, s32 offset:276
-; GFX11-NEXT:    scratch_store_b32 off, v44, s32 offset:272
-; GFX11-NEXT:    scratch_store_b32 off, v45, s32 offset:268
-; GFX11-NEXT:    scratch_store_b32 off, v46, s32 offset:264
-; GFX11-NEXT:    scratch_store_b32 off, v47, s32 offset:260
-; GFX11-NEXT:    scratch_store_b32 off, v56, s32 offset:256
-; GFX11-NEXT:    scratch_store_b32 off, v57, s32 offset:252
-; GFX11-NEXT:    scratch_store_b32 off, v58, s32 offset:248
-; GFX11-NEXT:    scratch_store_b32 off, v59, s32 offset:244
-; GFX11-NEXT:    scratch_store_b32 off, v60, s32 offset:240
-; GFX11-NEXT:    scratch_store_b32 off, v61, s32 offset:236
-; GFX11-NEXT:    scratch_store_b32 off, v62, s32 offset:232
-; GFX11-NEXT:    scratch_store_b32 off, v63, s32 offset:228
-; GFX11-NEXT:    scratch_store_b32 off, v72, s32 offset:224
-; GFX11-NEXT:    scratch_store_b32 off, v73, s32 offset:220
-; GFX11-NEXT:    scratch_store_b32 off, v74, s32 offset:216
-; GFX11-NEXT:    scratch_store_b32 off, v75, s32 offset:212
-; GFX11-NEXT:    scratch_store_b32 off, v76, s32 offset:208
-; GFX11-NEXT:    scratch_store_b32 off, v77, s32 offset:204
-; GFX11-NEXT:    scratch_store_b32 off, v78, s32 offset:200
-; GFX11-NEXT:    scratch_store_b32 off, v79, s32 offset:196
-; GFX11-NEXT:    scratch_store_b32 off, v88, s32 offset:192
-; GFX11-NEXT:    scratch_store_b32 off, v89, s32 offset:188
-; GFX11-NEXT:    scratch_store_b32 off, v90, s32 offset:184
-; GFX11-NEXT:    scratch_store_b32 off, v91, s32 offset:180
-; GFX11-NEXT:    scratch_store_b32 off, v92, s32 offset:176
-; GFX11-NEXT:    scratch_store_b32 off, v93, s32 offset:172
-; GFX11-NEXT:    scratch_store_b32 off, v94, s32 offset:168
-; GFX11-NEXT:    scratch_store_b32 off, v95, s32 offset:164
-; GFX11-NEXT:    s_clause 0x1f
-; GFX11-NEXT:    scratch_store_b32 off, v104, s32 offset:160
-; GFX11-NEXT:    scratch_store_b32 off, v105, s32 offset:156
-; GFX11-NEXT:    scratch_store_b32 off, v106, s32 offset:152
-; GFX11-NEXT:    scratch_store_b32 off, v107, s32 offset:148
-; GFX11-NEXT:    scratch_store_b32 off, v108, s32 offset:144
-; GFX11-NEXT:    scratch_store_b32 off, v109, s32 offset:140
-; GFX11-NEXT:    scratch_store_b32 off, v110, s32 offset:136
-; GFX11-NEXT:    scratch_store_b32 off, v111, s32 offset:132
-; GFX11-NEXT:    scratch_store_b32 off, v120, s32 offset:128
-; GFX11-NEXT:    scratch_store_b32 off, v121, s32 offset:124
-; GFX11-NEXT:    scratch_store_b32 off, v122, s32 offset:120
-; GFX11-NEXT:    scratch_store_b32 off, v123, s32 offset:116
-; GFX11-NEXT:    scratch_store_b32 off, v124, s32 offset:112
-; GFX11-NEXT:    scratch_store_b32 off, v125, s32 offset:108
-; GFX11-NEXT:    scratch_store_b32 off, v126, s32 offset:104
-; GFX11-NEXT:    scratch_store_b32 off, v127, s32 offset:100
-; GFX11-NEXT:    scratch_store_b32 off, v136, s32 offset:96
-; GFX11-NEXT:    scratch_store_b32 off, v137, s32 offset:92
-; GFX11-NEXT:    scratch_store_b32 off, v138, s32 offset:88
-; GFX11-NEXT:    scratch_store_b32 off, v139, s32 offset:84
-; GFX11-NEXT:    scratch_store_b32 off, v140, s32 offset:80
-; GFX11-NEXT:    scratch_store_b32 off, v141, s32 offset:76
-; GFX11-NEXT:    scratch_store_b32 off, v142, s32 offset:72
-; GFX11-NEXT:    scratch_store_b32 off, v143, s32 offset:68
-; GFX11-NEXT:    scratch_store_b32 off, v152, s32 offset:64
-; GFX11-NEXT:    scratch_store_b32 off, v153, s32 offset:60
-; GFX11-NEXT:    scratch_store_b32 off, v154, s32 offset:56
-; GFX11-NEXT:    scratch_store_b32 off, v155, s32 offset:52
-; GFX11-NEXT:    scratch_store_b32 off, v156, s32 offset:48
-; GFX11-NEXT:    scratch_store_b32 off, v157, s32 offset:44
-; GFX11-NEXT:    scratch_store_b32 off, v158, s32 offset:40
-; GFX11-NEXT:    scratch_store_b32 off, v159, s32 offset:36
-; GFX11-NEXT:    s_clause 0x8
-; GFX11-NEXT:    scratch_store_b32 off, v168, s32 offset:32
-; GFX11-NEXT:    scratch_store_b32 off, v169, s32 offset:28
-; GFX11-NEXT:    scratch_store_b32 off, v170, s32 offset:24
-; GFX11-NEXT:    scratch_store_b32 off, v171, s32 offset:20
-; GFX11-NEXT:    scratch_store_b32 off, v172, s32 offset:16
-; GFX11-NEXT:    scratch_store_b32 off, v173, s32 offset:12
-; GFX11-NEXT:    scratch_store_b32 off, v174, s32 offset:8
-; GFX11-NEXT:    scratch_store_b32 off, v175, s32 offset:4
-; GFX11-NEXT:    scratch_store_b32 off, v184, s32
-; GFX11-NEXT:    v_dual_mov_b32 v178, v13 :: v_dual_mov_b32 v179, v12
-; GFX11-NEXT:    v_dual_mov_b32 v180, v11 :: v_dual_mov_b32 v181, v9
-; GFX11-NEXT:    v_dual_mov_b32 v182, v10 :: v_dual_mov_b32 v169, v7
-; GFX11-NEXT:    v_dual_mov_b32 v170, v8 :: v_dual_mov_b32 v177, v3
-; GFX11-NEXT:    v_dual_mov_b32 v176, v6 :: v_dual_mov_b32 v171, v4
-; GFX11-NEXT:    v_dual_mov_b32 v174, v5 :: v_dual_mov_b32 v173, v0
-; GFX11-NEXT:    v_dual_mov_b32 v184, v2 :: v_dual_mov_b32 v175, v1
-; GFX11-NEXT:    v_dual_mov_b32 v183, s28 :: v_dual_mov_b32 v172, s29
-; GFX11-NEXT:    s_mov_b32 s4, 0
-; GFX11-NEXT:    s_and_b32 s5, vcc_lo, exec_lo
-; GFX11-NEXT:    s_cbranch_scc0 .LBB19_4
-; GFX11-NEXT:  ; %bb.1: ; %cmp.false
-; GFX11-NEXT:    v_dual_mov_b32 v32, s0 :: v_dual_mov_b32 v37, s2
-; GFX11-NEXT:    v_dual_mov_b32 v34, s1 :: v_dual_mov_b32 v41, s3
-; GFX11-NEXT:    v_dual_mov_b32 v46, s16 :: v_dual_mov_b32 v59, s18
-; GFX11-NEXT:    v_dual_mov_b32 v52, s17 :: v_dual_mov_b32 v67, s19
-; GFX11-NEXT:    v_dual_mov_b32 v76, s20 :: v_dual_mov_b32 v97, s22
-; GFX11-NEXT:    v_dual_mov_b32 v86, s21 :: v_dual_mov_b32 v109, s23
-; GFX11-NEXT:    v_dual_mov_b32 v122, s24 :: v_dual_mov_b32 v151, s26
-; GFX11-NEXT:    v_dual_mov_b32 v136, s25 :: v_dual_mov_b32 v15, s27
-; GFX11-NEXT:    s_and_not1_b32 vcc_lo, exec_lo, s4
-; GFX11-NEXT:    s_cbranch_vccnz .LBB19_3
-; GFX11-NEXT:  .LBB19_2: ; %cmp.true
-; GFX11-NEXT:    s_and_b32 s5, s27, 0xffff0000
-; GFX11-NEXT:    s_lshl_b32 s4, s27, 16
-; GFX11-NEXT:    v_add_f32_e64 v1, 0x40c00000, s5
-; GFX11-NEXT:    v_add_f32_e64 v0, 0x40c00000, s4
-; GFX11-NEXT:    s_lshl_b32 s6, s26, 16
-; GFX11-NEXT:    s_and_b32 s4, s26, 0xffff0000
-; GFX11-NEXT:    v_add_f32_e64 v5, 0x40c00000, s6
-; GFX11-NEXT:    v_bfe_u32 v4, v1, 16, 1
-; GFX11-NEXT:    v_bfe_u32 v2, v0, 16, 1
-; GFX11-NEXT:    v_or_b32_e32 v7, 0x400000, v1
-; GFX11-NEXT:    v_add_f32_e64 v3, 0x40c00000, s4
-; GFX11-NEXT:    v_or_b32_e32 v8, 0x400000, v0
-; GFX11-NEXT:    v_add_nc_u32_e32 v4, v4, v1
-; GFX11-NEXT:    v_bfe_u32 v10, v5, 16, 1
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v0, v0
-; GFX11-NEXT:    v_bfe_u32 v9, v3, 16, 1
-; GFX11-NEXT:    s_lshl_b32 s7, s25, 16
-; GFX11-NEXT:    v_add_nc_u32_e32 v4, 0x7fff, v4
-; GFX11-NEXT:    v_add_nc_u32_e32 v2, v2, v0
-; GFX11-NEXT:    s_and_b32 s5, s25, 0xffff0000
-; GFX11-NEXT:    s_and_b32 s4, s24, 0xffff0000
-; GFX11-NEXT:    v_add_f32_e64 v6, 0x40c00000, s5
-; GFX11-NEXT:    v_and_b32_e32 v51, 0xffff0000, v183
-; GFX11-NEXT:    v_add_nc_u32_e32 v2, 0x7fff, v2
-; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_1)
-; GFX11-NEXT:    v_cndmask_b32_e32 v0, v2, v8, vcc_lo
-; GFX11-NEXT:    v_add_nc_u32_e32 v8, v10, v5
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v1, v1
-; GFX11-NEXT:    v_add_nc_u32_e32 v2, v9, v3
-; GFX11-NEXT:    v_or_b32_e32 v9, 0x400000, v5
-; GFX11-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
-; GFX11-NEXT:    v_bfe_u32 v10, v6, 16, 1
-; GFX11-NEXT:    v_cndmask_b32_e32 v1, v4, v7, vcc_lo
-; GFX11-NEXT:    v_add_nc_u32_e32 v7, 0x7fff, v8
-; GFX11-NEXT:    v_add_f32_e64 v8, 0x40c00000, s7
-; GFX11-NEXT:    v_or_b32_e32 v4, 0x400000, v3
-; GFX11-NEXT:    v_add_nc_u32_e32 v2, 0x7fff, v2
-; GFX11-NEXT:    v_lshrrev_b32_e32 v1, 16, v1
-; GFX11-NEXT:    v_and_b32_e32 v0, 0xffff, v0
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v3, v3
-; GFX11-NEXT:    v_bfe_u32 v3, v8, 16, 1
-; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_3) | instskip(NEXT) | instid1(VALU_DEP_2)
-; GFX11-NEXT:    v_lshl_or_b32 v15, v1, 16, v0
-; GFX11-NEXT:    v_add_nc_u32_e32 v1, v3, v8
-; GFX11-NEXT:    v_cndmask_b32_e32 v2, v2, v4, vcc_lo
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v5, v5
-; GFX11-NEXT:    v_add_nc_u32_e32 v5, v10, v6
-; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_4) | instskip(NEXT) | instid1(VALU_DEP_4)
-; GFX11-NEXT:    v_add_nc_u32_e32 v1, 0x7fff, v1
-; GFX11-NEXT:    v_lshrrev_b32_e32 v0, 16, v2
-; GFX11-NEXT:    v_cndmask_b32_e32 v4, v7, v9, vcc_lo
-; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_4)
-; GFX11-NEXT:    v_add_nc_u32_e32 v3, 0x7fff, v5
-; GFX11-NEXT:    v_add_f32_e64 v5, 0x40c00000, s4
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v6, v6
-; GFX11-NEXT:    s_lshl_b32 s4, s24, 16
-; GFX11-NEXT:    v_lshrrev_b32_e32 v2, 16, v4
-; GFX11-NEXT:    v_or_b32_e32 v4, 0x400000, v6
-; GFX11-NEXT:    v_or_b32_e32 v7, 0x400000, v8
-; GFX11-NEXT:    v_add_f32_e64 v9, 0x40c00000, s4
-; GFX11-NEXT:    s_and_b32 s4, s23, 0xffff0000
-; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_3) | instskip(SKIP_4) | instid1(VALU_DEP_4)
-; GFX11-NEXT:    v_cndmask_b32_e32 v3, v3, v4, vcc_lo
-; GFX11-NEXT:    v_bfe_u32 v4, v5, 16, 1
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v8, v8
-; GFX11-NEXT:    v_or_b32_e32 v8, 0x400000, v5
-; GFX11-NEXT:    v_or_b32_e32 v10, 0x400000, v9
-; GFX11-NEXT:    v_add_nc_u32_e32 v4, v4, v5
-; GFX11-NEXT:    v_dual_cndmask_b32 v6, v1, v7 :: v_dual_and_b32 v1, 0xffff, v2
-; GFX11-NEXT:    v_bfe_u32 v7, v9, 16, 1
-; GFX11-NEXT:    v_lshrrev_b32_e32 v2, 16, v3
-; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_4) | instskip(NEXT) | instid1(VALU_DEP_4)
-; GFX11-NEXT:    v_add_nc_u32_e32 v4, 0x7fff, v4
-; GFX11-NEXT:    v_lshrrev_b32_e32 v3, 16, v6
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v5, v5
-; GFX11-NEXT:    v_add_nc_u32_e32 v6, v7, v9
-; GFX11-NEXT:    v_add_f32_e64 v7, 0x40c00000, s4
-; GFX11-NEXT:    s_lshl_b32 s4, s23, 16
-; GFX11-NEXT:    v_lshl_or_b32 v151, v0, 16, v1
-; GFX11-NEXT:    v_add_f32_e64 v12, 0x40c00000, s4
-; GFX11-NEXT:    v_add_nc_u32_e32 v6, 0x7fff, v6
-; GFX11-NEXT:    v_bfe_u32 v11, v7, 16, 1
-; GFX11-NEXT:    v_cndmask_b32_e32 v5, v4, v8, vcc_lo
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v9, v9
-; GFX11-NEXT:    s_and_b32 s4, s22, 0xffff0000
-; GFX11-NEXT:    v_bfe_u32 v9, v12, 16, 1
-; GFX11-NEXT:    v_add_nc_u32_e32 v8, v11, v7
-; GFX11-NEXT:    v_and_b32_e32 v4, 0xffff, v3
-; GFX11-NEXT:    v_cndmask_b32_e32 v6, v6, v10, vcc_lo
-; GFX11-NEXT:    v_add_f32_e64 v10, 0x40c00000, s4
-; GFX11-NEXT:    s_lshl_b32 s4, s22, 16
-; GFX11-NEXT:    v_lshrrev_b32_e32 v3, 16, v5
-; GFX11-NEXT:    v_add_f32_e64 v11, 0x40c00000, s4
-; GFX11-NEXT:    v_lshrrev_b32_e32 v5, 16, v6
-; GFX11-NEXT:    v_add_nc_u32_e32 v6, 0x7fff, v8
-; GFX11-NEXT:    v_add_nc_u32_e32 v8, v9, v12
-; GFX11-NEXT:    v_or_b32_e32 v9, 0x400000, v7
-; GFX11-NEXT:    v_bfe_u32 v14, v10, 16, 1
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v7, v7
-; GFX11-NEXT:    v_or_b32_e32 v13, 0x400000, v12
-; GFX11-NEXT:    v_add_nc_u32_e32 v8, 0x7fff, v8
-; GFX11-NEXT:    s_and_b32 s4, s21, 0xffff0000
-; GFX11-NEXT:    v_cndmask_b32_e32 v7, v6, v9, vcc_lo
-; GFX11-NEXT:    v_bfe_u32 v9, v11, 16, 1
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v12, v12
-; GFX11-NEXT:    v_add_nc_u32_e32 v12, v14, v10
-; GFX11-NEXT:    v_and_b32_e32 v6, 0xffff, v5
-; GFX11-NEXT:    v_lshrrev_b32_e32 v5, 16, v7
-; GFX11-NEXT:    v_dual_cndmask_b32 v8, v8, v13 :: v_dual_add_nc_u32 v7, v9, v11
-; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_4)
-; GFX11-NEXT:    v_add_nc_u32_e32 v9, 0x7fff, v12
-; GFX11-NEXT:    v_or_b32_e32 v12, 0x400000, v10
-; GFX11-NEXT:    v_add_f32_e64 v13, 0x40c00000, s4
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v10, v10
-; GFX11-NEXT:    s_lshl_b32 s4, s21, 16
-; GFX11-NEXT:    v_add_nc_u32_e32 v7, 0x7fff, v7
-; GFX11-NEXT:    v_or_b32_e32 v14, 0x400000, v11
-; GFX11-NEXT:    v_add_f32_e64 v16, 0x40c00000, s4
-; GFX11-NEXT:    v_cndmask_b32_e32 v9, v9, v12, vcc_lo
-; GFX11-NEXT:    v_bfe_u32 v10, v13, 16, 1
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v11, v11
-; GFX11-NEXT:    v_lshrrev_b32_e32 v8, 16, v8
-; GFX11-NEXT:    v_bfe_u32 v12, v16, 16, 1
-; GFX11-NEXT:    s_and_b32 s4, s20, 0xffff0000
-; GFX11-NEXT:    v_dual_cndmask_b32 v11, v7, v14 :: v_dual_add_nc_u32 v10, v10, v13
-; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_3) | instskip(SKIP_2) | instid1(VALU_DEP_4)
-; GFX11-NEXT:    v_and_b32_e32 v7, 0xffff, v8
-; GFX11-NEXT:    v_lshrrev_b32_e32 v8, 16, v9
-; GFX11-NEXT:    v_or_b32_e32 v14, 0x400000, v13
-; GFX11-NEXT:    v_add_nc_u32_e32 v10, 0x7fff, v10
-; GFX11-NEXT:    v_lshrrev_b32_e32 v9, 16, v11
-; GFX11-NEXT:    v_add_nc_u32_e32 v11, v12, v16
-; GFX11-NEXT:    v_add_f32_e64 v12, 0x40c00000, s4
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v13, v13
-; GFX11-NEXT:    s_lshl_b32 s4, s20, 16
-; GFX11-NEXT:    v_or_b32_e32 v17, 0x400000, v16
-; GFX11-NEXT:    v_add_nc_u32_e32 v11, 0x7fff, v11
-; GFX11-NEXT:    v_bfe_u32 v18, v12, 16, 1
-; GFX11-NEXT:    v_add_f32_e64 v19, 0x40c00000, s4
-; GFX11-NEXT:    v_cndmask_b32_e32 v13, v10, v14, vcc_lo
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v16, v16
-; GFX11-NEXT:    s_and_b32 s4, s19, 0xffff0000
-; GFX11-NEXT:    v_add_nc_u32_e32 v14, v18, v12
-; GFX11-NEXT:    v_bfe_u32 v16, v19, 16, 1
-; GFX11-NEXT:    v_and_b32_e32 v10, 0xffff, v9
-; GFX11-NEXT:    v_cndmask_b32_e32 v11, v11, v17, vcc_lo
-; GFX11-NEXT:    v_add_f32_e64 v17, 0x40c00000, s4
-; GFX11-NEXT:    s_lshl_b32 s4, s19, 16
-; GFX11-NEXT:    v_lshrrev_b32_e32 v9, 16, v13
-; GFX11-NEXT:    v_add_nc_u32_e32 v13, 0x7fff, v14
-; GFX11-NEXT:    v_add_nc_u32_e32 v14, v16, v19
-; GFX11-NEXT:    v_or_b32_e32 v16, 0x400000, v12
-; GFX11-NEXT:    v_add_f32_e64 v18, 0x40c00000, s4
-; GFX11-NEXT:    v_bfe_u32 v21, v17, 16, 1
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v12, v12
-; GFX11-NEXT:    v_lshrrev_b32_e32 v11, 16, v11
-; GFX11-NEXT:    v_add_nc_u32_e32 v14, 0x7fff, v14
-; GFX11-NEXT:    v_or_b32_e32 v20, 0x400000, v19
-; GFX11-NEXT:    s_and_b32 s4, s18, 0xffff0000
-; GFX11-NEXT:    v_cndmask_b32_e32 v13, v13, v16, vcc_lo
-; GFX11-NEXT:    v_bfe_u32 v16, v18, 16, 1
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v19, v19
-; GFX11-NEXT:    v_add_nc_u32_e32 v19, v21, v17
-; GFX11-NEXT:    v_and_b32_e32 v12, 0xffff, v11
-; GFX11-NEXT:    v_lshrrev_b32_e32 v11, 16, v13
-; GFX11-NEXT:    v_dual_cndmask_b32 v14, v14, v20 :: v_dual_add_nc_u32 v13, v16, v18
-; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_4)
-; GFX11-NEXT:    v_add_nc_u32_e32 v16, 0x7fff, v19
-; GFX11-NEXT:    v_or_b32_e32 v19, 0x400000, v17
-; GFX11-NEXT:    v_add_f32_e64 v20, 0x40c00000, s4
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v17, v17
-; GFX11-NEXT:    s_lshl_b32 s4, s18, 16
-; GFX11-NEXT:    v_add_nc_u32_e32 v13, 0x7fff, v13
-; GFX11-NEXT:    v_or_b32_e32 v21, 0x400000, v18
-; GFX11-NEXT:    v_add_f32_e64 v22, 0x40c00000, s4
-; GFX11-NEXT:    v_cndmask_b32_e32 v16, v16, v19, vcc_lo
-; GFX11-NEXT:    v_bfe_u32 v17, v20, 16, 1
-; GFX11-NEXT:    v_cmp_u_f32_e32 vcc_lo, v18, v18
-; GFX11-NEXT:    v_lshrrev_b32_e32 v14, 16, v14
-; GFX11-NEXT:    v_bfe_u32 v19, v22, 16, 1
-; GFX11-NEXT:    s_and_b32 s4, s17, 0xffff0000
-; GFX11-NEXT:    v_add_nc_u32_e32 v17, v17, v20
-; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_3) | instskip(SKIP_2) | instid1(VALU_DEP_4)
-; GFX11-NEXT:    v_dual_cndmask_b32 v18, v13, v21 :: v_dual_and_b32 v13, 0xffff, v14
-; GFX11-NEXT:    v_lshrrev_b32_e32 v14, 16, v16
-; GFX11-NEXT:    v_or_b32_e32 v21, 0x400000, v20
-; GFX11-NEXT:    v_add_nc_u32_e32 v17, 0x7fff, v17
-; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_4)
-; GFX...
[truncated]

broxigarchen (Author)
ping! This is blocking the downstream branch, so it might be a bit urgent. Thanks!

@broxigarchen broxigarchen force-pushed the main-fix-true16-pack-s32 branch 3 times, most recently from d8102a3 to 5d02581 Compare October 9, 2025 17:32
@broxigarchen broxigarchen force-pushed the main-fix-true16-pack-s32 branch from 5d02581 to 5209296 Compare October 9, 2025 17:46

if (!RI.isVGPR(MRI, SrcReg0)) {
SrcReg0 = MRI.createVirtualRegister(&AMDGPU::VGPR_32RegClass);
BuildMI(*MBB, Inst, DL, get(AMDGPU::V_MOV_B32_e32), SrcReg0).add(Src0);
Contributor:

This should use a COPY; it may need a separate path if there are non-register inputs.

broxigarchen (Author) Oct 14, 2025:

Thanks for pointing this out. For the non-register case we should still use v_mov; I created a new condition here, and also a new test.

For the register case, replacing it with a COPY reveals a separate problem. This pack lowering creates a vgpr32 = COPY sreg32 followed by a vgpr_hi16 = COPY hi16:vgpr32. Machine copy propagation will then try to fold these and create a vgpr_hi16 = COPY sreg_hi16. That pass only checks that the register classes match and is otherwise pretty standard.

I can certainly lower this COPY in the post-RA expansion, but since we don't want to generate sgpr_16 in the pipeline, this might cause unexpected issues in other passes on larger test cases. I haven't figured out how to fix this properly yet.

I would prefer to merge this patch as-is to unblock the downstream branch, and address this issue in a follow-up patch so that I have more time to look at it. I also noticed that some bad code patterns are generated, e.g. "v_mov_b32_e32 v2, v2"; these should also go away once the v_mov_b32 is replaced with a COPY.
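The problematic fold described above looks roughly like this (illustrative MIR, not from the test suite):

```
; emitted by the pack lowering if v_mov_b32 were replaced with COPY
%1:vgpr_32 = COPY %0:sreg_32
%2:vgpr_16 = COPY %1.hi16:vgpr_32

; machine copy propagation folds the two copies into one, introducing
; a 16-bit subregister read of an SGPR, which we do not want in the pipeline
%2:vgpr_16 = COPY %0.hi16:sreg_32
```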

@broxigarchen broxigarchen force-pushed the main-fix-true16-pack-s32 branch from 97026e3 to 378ec8b Compare October 14, 2025 21:28
SrcReg1 = Src1.getReg();

bool isSrc0Reg16 = MRI.constrainRegClass(SrcReg0, &AMDGPU::VGPR_16RegClass);
bool isSrc1Reg16 = MRI.constrainRegClass(SrcReg1, &AMDGPU::VGPR_16RegClass);
Contributor:
Do we actually get 16-bit sources? What would that mean? The instruction is defined to take two 32-bit sources.

broxigarchen (Author):
Yes, we do, when an SALU16 result is used by s_pack_*_b32_b16.

There is a test for this in fix-sgpr-copies-f16-true16.mir: "s_pack_ll_b32_b16_use_SALU16".

Contributor:

Copying the example here:

    %1:sreg_32 = COPY %0:vgpr_32
    %2:sreg_32 = S_FMAC_F16 %1:sreg_32, %1:sreg_32, %1:sreg_32, implicit $mode
    %3:sreg_32 = S_PACK_LL_B32_B16 %2:sreg_32, %1:sreg_32, implicit-def dead $scc

That seems like a bug in how moveToVALU handles S_FMAC_F16. If we naively convert it to something like %2:vgpr_16 = V_FMAC_F16 ... then we have lost the fact that the original instruction generated a 32-bit result with zeroes in the high 16 bits.

broxigarchen (Author):

For the dst and operands of S_FMAC_F16, I think they are all f16 in ISel, but they are placed in an sreg_32. Wouldn't it be safe to drop the zeroed top bits when it is moved to a VALU16 instruction?

We can definitely create a vgpr16 and then REG_SEQUENCE a vgpr32 on top, but these will eventually be removed in the end.

Contributor:

We can definitely create a vgpr16 and then REG_SEQUENCE a vgpr32 on top, but these will eventually be removed in the end.

I think that would be more correct, and it would avoid weird issues like this where an instruction that requires a 32-bit input sees a 16-bit input instead.
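A sketch of that alternative (opcode spellings and operand lists approximate; the explicit zero for the high half is illustrative):

```
; keep the 16-bit result in a vgpr_16, then rebuild the full 32-bit
; value the original SALU instruction was defined to produce
%2:vgpr_16 = V_FMAC_F16_t16_e64 0, %a, 0, %b, 0, %c, implicit $mode, implicit $exec
%z:vgpr_16 = V_MOV_B16_t16_e64 0, 0, 0, implicit $exec
%3:vgpr_32 = REG_SEQUENCE %2:vgpr_16, %subreg.lo16, %z:vgpr_16, %subreg.hi16
```

This keeps every instruction's 32-bit input contract intact: consumers such as S_PACK_*_B32_B16 always see a vgpr_32, and the REG_SEQUENCE is expected to fold away when only the low half is used.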

broxigarchen (Author) Oct 15, 2025:

I see, it does seem a bit confusing. I'll create another patch to clean this up; we have quite a few instructions using this approach.

broxigarchen (Author)

ping!

jayfoad (Contributor) commented Oct 17, 2025

Are you suggesting to commit this patch as-is, and then remove the handling of vgpr16 inputs in a later patch?

broxigarchen (Author) commented Oct 17, 2025

Are you suggesting to commit this patch as-is, and then remove the handling of vgpr16 inputs in a later patch?

Yes. We have quite a number of SALU16-to-VALU16 cases; I think it would be better to change them all together.

@broxigarchen broxigarchen merged commit ac193bc into llvm:main Oct 17, 2025
10 checks passed