Conversation

@choikwa
Contributor

@choikwa choikwa commented Dec 12, 2024

Rework involves 3 things:

  • return unsigned value, the number of div/rem bits actually needed.
  • change from AtLeast(SignBits) to MaxDivBits hint.
  • use MaxDivBits hint for unsigned case.

Mostly NFC changes.
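
For readers who don't want to parse the full diff below, here is a minimal standalone model of the reworked signed path. Plain integers stand in for LLVM's ComputeNumSignBits, and the function is an illustration of the new contract, not the patch itself:

#include <algorithm>
#include <cstdio>

// Model of the reworked signed getDivNumBits: always return the number of
// div/rem bits needed. MaxDivBits is only a hint that lets the real code skip
// the second ComputeNumSignBits query once the first operand already needs
// too many bits; here both sign-bit counts are plain parameters.
unsigned getDivNumBitsModel(unsigned SSBits, unsigned RHSSignBits,
                            unsigned LHSSignBits, unsigned MaxDivBits) {
  // A sign bit needs to be reserved for shrinking.
  unsigned DivBits = SSBits - RHSSignBits + 1;
  if (DivBits > MaxDivBits)
    return DivBits; // Caller compares against the hint and gives up.

  DivBits = SSBits - LHSSignBits + 1;
  if (DivBits > MaxDivBits)
    return DivBits;

  unsigned SignBits = std::min(LHSSignBits, RHSSignBits);
  return SSBits - SignBits + 1;
}

int main() {
  // E.g. i64 operands with 40 provable sign bits each: 64 - 40 + 1 = 25 bits
  // needed, so a caller passing a MaxDivBits hint of 32 narrows the division.
  std::printf("%u\n", getDivNumBitsModel(64, 40, 40, 32)); // prints 25
}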

@llvmbot
Member

llvmbot commented Dec 12, 2024

@llvm/pr-subscribers-backend-amdgpu

Author: choikwa (choikwa)

Changes

Rework involves 3 things:

  • return unsigned value, the number of div/rem bits actually needed.
  • change from AtLeast(SignBits) to MaxDivBits hint.
  • use MaxDivBits hint for unsigned case.

Mostly NFC changes.


Full diff: https://github.com/llvm/llvm-project/pull/119768.diff

3 Files Affected:

  • (modified) llvm/lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp (+30-23)
  • (modified) llvm/test/CodeGen/AMDGPU/sdiv64.ll (+27-90)
  • (modified) llvm/test/CodeGen/AMDGPU/srem64.ll (+31-101)
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp b/llvm/lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp
index e02ef56f234498..9b9ec8c5674f61 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp
@@ -254,9 +254,8 @@ class AMDGPUCodeGenPrepareImpl
 
   bool divHasSpecialOptimization(BinaryOperator &I,
                                  Value *Num, Value *Den) const;
-  int getDivNumBits(BinaryOperator &I,
-                    Value *Num, Value *Den,
-                    unsigned AtLeast, bool Signed) const;
+  unsigned getDivNumBits(BinaryOperator &I, Value *Num, Value *Den,
+                         unsigned AtLeast, bool Signed) const;
 
   /// Expands 24 bit div or rem.
   Value* expandDivRem24(IRBuilder<> &Builder, BinaryOperator &I,
@@ -1189,27 +1188,32 @@ static Value* getMulHu(IRBuilder<> &Builder, Value *LHS, Value *RHS) {
   return getMul64(Builder, LHS, RHS).second;
 }
 
-/// Figure out how many bits are really needed for this division. \p AtLeast is
-/// an optimization hint to bypass the second ComputeNumSignBits call if we the
-/// first one is insufficient. Returns -1 on failure.
-int AMDGPUCodeGenPrepareImpl::getDivNumBits(BinaryOperator &I, Value *Num,
-                                            Value *Den, unsigned AtLeast,
-                                            bool IsSigned) const {
+/// Figure out how many bits are really needed for this division.
+/// \p MaxDivBits is an optimization hint to bypass the second
+/// ComputeNumSignBits/computeKnownBits call if we the first one is
+/// insufficient.
+unsigned AMDGPUCodeGenPrepareImpl::getDivNumBits(BinaryOperator &I, Value *Num,
+                                                 Value *Den,
+                                                 unsigned MaxDivBits,
+                                                 bool IsSigned) const {
   assert(Num->getType()->getScalarSizeInBits() ==
          Den->getType()->getScalarSizeInBits());
   unsigned SSBits = Num->getType()->getScalarSizeInBits();
   if (IsSigned) {
     unsigned RHSSignBits = ComputeNumSignBits(Den, DL, 0, AC, &I);
-    if (RHSSignBits < AtLeast)
-      return -1;
+    // a SignBit needs to be reserved for shrinking
+    unsigned DivBits = SSBits - RHSSignBits + 1;
+    if (DivBits > MaxDivBits)
+      return DivBits;
 
     unsigned LHSSignBits = ComputeNumSignBits(Num, DL, 0, AC, &I);
-    if (LHSSignBits < AtLeast)
-      return -1;
+    DivBits = SSBits - LHSSignBits + 1;
+    if (DivBits > MaxDivBits)
+      return DivBits;
 
     unsigned SignBits = std::min(LHSSignBits, RHSSignBits);
-    unsigned DivBits = SSBits - SignBits + 1;
-    return DivBits; // a SignBit needs to be reserved for shrinking
+    DivBits = SSBits - SignBits + 1;
+    return DivBits;
   }
 
   // All bits are used for unsigned division for Num or Den in range
@@ -1218,14 +1222,20 @@ int AMDGPUCodeGenPrepareImpl::getDivNumBits(BinaryOperator &I, Value *Num,
   if (Known.isNegative() || !Known.isNonNegative())
     return SSBits;
   unsigned RHSSignBits = Known.countMinLeadingZeros();
+  unsigned DivBits = SSBits - RHSSignBits;
+  if (DivBits > MaxDivBits)
+    return DivBits;
 
   Known = computeKnownBits(Num, DL, 0, AC, &I);
   if (Known.isNegative() || !Known.isNonNegative())
     return SSBits;
   unsigned LHSSignBits = Known.countMinLeadingZeros();
+  DivBits = SSBits - LHSSignBits;
+  if (DivBits > MaxDivBits)
+    return DivBits;
 
   unsigned SignBits = std::min(LHSSignBits, RHSSignBits);
-  unsigned DivBits = SSBits - SignBits;
+  DivBits = SSBits - SignBits;
   return DivBits;
 }
 
@@ -1235,11 +1245,8 @@ Value *AMDGPUCodeGenPrepareImpl::expandDivRem24(IRBuilder<> &Builder,
                                                 BinaryOperator &I, Value *Num,
                                                 Value *Den, bool IsDiv,
                                                 bool IsSigned) const {
-  unsigned SSBits = Num->getType()->getScalarSizeInBits();
-  // If Num bits <= 24, assume 0 signbits.
-  unsigned AtLeast = (SSBits <= 24) ? 0 : (SSBits - 24 + IsSigned);
-  int DivBits = getDivNumBits(I, Num, Den, AtLeast, IsSigned);
-  if (DivBits == -1 || DivBits > 24)
+  unsigned DivBits = getDivNumBits(I, Num, Den, 24, IsSigned);
+  if (DivBits > 24)
     return nullptr;
   return expandDivRem24Impl(Builder, I, Num, Den, DivBits, IsDiv, IsSigned);
 }
@@ -1523,8 +1530,8 @@ Value *AMDGPUCodeGenPrepareImpl::shrinkDivRem64(IRBuilder<> &Builder,
   bool IsDiv = Opc == Instruction::SDiv || Opc == Instruction::UDiv;
   bool IsSigned = Opc == Instruction::SDiv || Opc == Instruction::SRem;
 
-  int NumDivBits = getDivNumBits(I, Num, Den, 32, IsSigned);
-  if (NumDivBits == -1)
+  unsigned NumDivBits = getDivNumBits(I, Num, Den, 32, IsSigned);
+  if (NumDivBits > 32)
     return nullptr;
 
   Value *Narrowed = nullptr;
diff --git a/llvm/test/CodeGen/AMDGPU/sdiv64.ll b/llvm/test/CodeGen/AMDGPU/sdiv64.ll
index 3e8768c98b5c9a..96dd6276f7e382 100644
--- a/llvm/test/CodeGen/AMDGPU/sdiv64.ll
+++ b/llvm/test/CodeGen/AMDGPU/sdiv64.ll
@@ -1065,100 +1065,37 @@ define amdgpu_kernel void @s_test_sdiv24_48(ptr addrspace(1) %out, i48 %x, i48 %
 ; GCN-NEXT:    s_endpgm
 ;
 ; GCN-IR-LABEL: s_test_sdiv24_48:
-; GCN-IR:       ; %bb.0: ; %_udiv-special-cases
-; GCN-IR-NEXT:    s_load_dwordx4 s[0:3], s[4:5], 0xb
-; GCN-IR-NEXT:    s_mov_b32 s15, 0
-; GCN-IR-NEXT:    s_waitcnt lgkmcnt(0)
-; GCN-IR-NEXT:    s_sext_i32_i16 s1, s1
-; GCN-IR-NEXT:    s_ashr_i64 s[0:1], s[0:1], 24
-; GCN-IR-NEXT:    s_sext_i32_i16 s3, s3
-; GCN-IR-NEXT:    s_lshl_b64 s[0:1], s[0:1], 16
-; GCN-IR-NEXT:    s_ashr_i64 s[2:3], s[2:3], 24
-; GCN-IR-NEXT:    s_ashr_i64 s[6:7], s[0:1], 16
-; GCN-IR-NEXT:    s_ashr_i32 s0, s1, 31
-; GCN-IR-NEXT:    s_lshl_b64 s[2:3], s[2:3], 16
-; GCN-IR-NEXT:    s_mov_b32 s1, s0
-; GCN-IR-NEXT:    s_ashr_i64 s[8:9], s[2:3], 16
-; GCN-IR-NEXT:    s_ashr_i32 s2, s3, 31
-; GCN-IR-NEXT:    s_xor_b64 s[6:7], s[6:7], s[0:1]
-; GCN-IR-NEXT:    s_mov_b32 s3, s2
-; GCN-IR-NEXT:    s_sub_u32 s12, s6, s0
-; GCN-IR-NEXT:    s_subb_u32 s13, s7, s0
-; GCN-IR-NEXT:    s_xor_b64 s[6:7], s[8:9], s[2:3]
-; GCN-IR-NEXT:    s_sub_u32 s6, s6, s2
-; GCN-IR-NEXT:    s_subb_u32 s7, s7, s2
-; GCN-IR-NEXT:    v_cmp_eq_u64_e64 s[8:9], s[6:7], 0
-; GCN-IR-NEXT:    v_cmp_eq_u64_e64 s[10:11], s[12:13], 0
-; GCN-IR-NEXT:    s_flbit_i32_b64 s14, s[6:7]
-; GCN-IR-NEXT:    s_or_b64 s[10:11], s[8:9], s[10:11]
-; GCN-IR-NEXT:    s_flbit_i32_b64 s20, s[12:13]
-; GCN-IR-NEXT:    s_sub_u32 s16, s14, s20
-; GCN-IR-NEXT:    s_subb_u32 s17, 0, 0
-; GCN-IR-NEXT:    v_cmp_gt_u64_e64 s[18:19], s[16:17], 63
-; GCN-IR-NEXT:    v_cmp_eq_u64_e64 s[22:23], s[16:17], 63
-; GCN-IR-NEXT:    s_or_b64 s[18:19], s[10:11], s[18:19]
-; GCN-IR-NEXT:    s_and_b64 s[10:11], s[18:19], exec
-; GCN-IR-NEXT:    s_cselect_b32 s11, 0, s13
-; GCN-IR-NEXT:    s_cselect_b32 s10, 0, s12
-; GCN-IR-NEXT:    s_or_b64 s[18:19], s[18:19], s[22:23]
-; GCN-IR-NEXT:    s_mov_b64 s[8:9], 0
-; GCN-IR-NEXT:    s_andn2_b64 vcc, exec, s[18:19]
-; GCN-IR-NEXT:    s_cbranch_vccz .LBB9_5
-; GCN-IR-NEXT:  ; %bb.1: ; %udiv-bb1
-; GCN-IR-NEXT:    s_add_u32 s18, s16, 1
-; GCN-IR-NEXT:    s_addc_u32 s19, s17, 0
-; GCN-IR-NEXT:    v_cmp_eq_u64_e64 s[10:11], s[18:19], 0
-; GCN-IR-NEXT:    s_sub_i32 s16, 63, s16
-; GCN-IR-NEXT:    s_andn2_b64 vcc, exec, s[10:11]
-; GCN-IR-NEXT:    s_lshl_b64 s[10:11], s[12:13], s16
-; GCN-IR-NEXT:    s_cbranch_vccz .LBB9_4
-; GCN-IR-NEXT:  ; %bb.2: ; %udiv-preheader
-; GCN-IR-NEXT:    s_lshr_b64 s[16:17], s[12:13], s18
-; GCN-IR-NEXT:    s_add_u32 s18, s6, -1
-; GCN-IR-NEXT:    s_addc_u32 s19, s7, -1
-; GCN-IR-NEXT:    s_not_b64 s[8:9], s[14:15]
-; GCN-IR-NEXT:    s_add_u32 s12, s8, s20
-; GCN-IR-NEXT:    s_addc_u32 s13, s9, 0
-; GCN-IR-NEXT:    s_mov_b64 s[14:15], 0
-; GCN-IR-NEXT:    s_mov_b32 s9, 0
-; GCN-IR-NEXT:  .LBB9_3: ; %udiv-do-while
-; GCN-IR-NEXT:    ; =>This Inner Loop Header: Depth=1
-; GCN-IR-NEXT:    s_lshl_b64 s[16:17], s[16:17], 1
-; GCN-IR-NEXT:    s_lshr_b32 s8, s11, 31
-; GCN-IR-NEXT:    s_lshl_b64 s[10:11], s[10:11], 1
-; GCN-IR-NEXT:    s_or_b64 s[16:17], s[16:17], s[8:9]
-; GCN-IR-NEXT:    s_or_b64 s[10:11], s[14:15], s[10:11]
-; GCN-IR-NEXT:    s_sub_u32 s8, s18, s16
-; GCN-IR-NEXT:    s_subb_u32 s8, s19, s17
-; GCN-IR-NEXT:    s_ashr_i32 s14, s8, 31
-; GCN-IR-NEXT:    s_mov_b32 s15, s14
-; GCN-IR-NEXT:    s_and_b32 s8, s14, 1
-; GCN-IR-NEXT:    s_and_b64 s[14:15], s[14:15], s[6:7]
-; GCN-IR-NEXT:    s_sub_u32 s16, s16, s14
-; GCN-IR-NEXT:    s_subb_u32 s17, s17, s15
-; GCN-IR-NEXT:    s_add_u32 s12, s12, 1
-; GCN-IR-NEXT:    s_addc_u32 s13, s13, 0
-; GCN-IR-NEXT:    v_cmp_eq_u64_e64 s[20:21], s[12:13], 0
-; GCN-IR-NEXT:    s_mov_b64 s[14:15], s[8:9]
-; GCN-IR-NEXT:    s_and_b64 vcc, exec, s[20:21]
-; GCN-IR-NEXT:    s_cbranch_vccz .LBB9_3
-; GCN-IR-NEXT:  .LBB9_4: ; %Flow4
-; GCN-IR-NEXT:    s_lshl_b64 s[6:7], s[10:11], 1
-; GCN-IR-NEXT:    s_or_b64 s[10:11], s[8:9], s[6:7]
-; GCN-IR-NEXT:  .LBB9_5: ; %udiv-end
-; GCN-IR-NEXT:    s_load_dwordx2 s[4:5], s[4:5], 0x9
-; GCN-IR-NEXT:    s_xor_b64 s[0:1], s[2:3], s[0:1]
-; GCN-IR-NEXT:    s_xor_b64 s[2:3], s[10:11], s[0:1]
-; GCN-IR-NEXT:    s_sub_u32 s0, s2, s0
-; GCN-IR-NEXT:    s_subb_u32 s1, s3, s1
+; GCN-IR:       ; %bb.0:
+; GCN-IR-NEXT:    s_load_dwordx4 s[0:3], s[4:5], 0x9
+; GCN-IR-NEXT:    s_load_dwordx2 s[8:9], s[4:5], 0xd
 ; GCN-IR-NEXT:    s_mov_b32 s7, 0xf000
 ; GCN-IR-NEXT:    s_mov_b32 s6, -1
-; GCN-IR-NEXT:    v_mov_b32_e32 v0, s1
 ; GCN-IR-NEXT:    s_waitcnt lgkmcnt(0)
-; GCN-IR-NEXT:    buffer_store_short v0, off, s[4:7], 0 offset:4
-; GCN-IR-NEXT:    s_waitcnt expcnt(0)
-; GCN-IR-NEXT:    v_mov_b32_e32 v0, s0
+; GCN-IR-NEXT:    s_mov_b32 s5, s1
+; GCN-IR-NEXT:    s_sext_i32_i16 s1, s9
+; GCN-IR-NEXT:    v_mov_b32_e32 v0, s8
+; GCN-IR-NEXT:    v_alignbit_b32 v0, s1, v0, 24
+; GCN-IR-NEXT:    v_cvt_f32_i32_e32 v1, v0
+; GCN-IR-NEXT:    s_mov_b32 s4, s0
+; GCN-IR-NEXT:    s_sext_i32_i16 s0, s3
+; GCN-IR-NEXT:    v_mov_b32_e32 v2, s2
+; GCN-IR-NEXT:    v_alignbit_b32 v2, s0, v2, 24
+; GCN-IR-NEXT:    v_cvt_f32_i32_e32 v3, v2
+; GCN-IR-NEXT:    v_rcp_iflag_f32_e32 v4, v1
+; GCN-IR-NEXT:    v_xor_b32_e32 v0, v2, v0
+; GCN-IR-NEXT:    v_ashrrev_i32_e32 v0, 30, v0
+; GCN-IR-NEXT:    v_or_b32_e32 v0, 1, v0
+; GCN-IR-NEXT:    v_mul_f32_e32 v2, v3, v4
+; GCN-IR-NEXT:    v_trunc_f32_e32 v2, v2
+; GCN-IR-NEXT:    v_mad_f32 v3, -v2, v1, v3
+; GCN-IR-NEXT:    v_cvt_i32_f32_e32 v2, v2
+; GCN-IR-NEXT:    v_cmp_ge_f32_e64 vcc, |v3|, |v1|
+; GCN-IR-NEXT:    v_cndmask_b32_e32 v0, 0, v0, vcc
+; GCN-IR-NEXT:    v_add_i32_e32 v0, vcc, v0, v2
+; GCN-IR-NEXT:    v_bfe_i32 v0, v0, 0, 24
+; GCN-IR-NEXT:    v_ashrrev_i32_e32 v1, 31, v0
 ; GCN-IR-NEXT:    buffer_store_dword v0, off, s[4:7], 0
+; GCN-IR-NEXT:    buffer_store_short v1, off, s[4:7], 0 offset:4
 ; GCN-IR-NEXT:    s_endpgm
   %1 = ashr i48 %x, 24
   %2 = ashr i48 %y, 24
diff --git a/llvm/test/CodeGen/AMDGPU/srem64.ll b/llvm/test/CodeGen/AMDGPU/srem64.ll
index cb8f82db92bbf8..23364e860d1542 100644
--- a/llvm/test/CodeGen/AMDGPU/srem64.ll
+++ b/llvm/test/CodeGen/AMDGPU/srem64.ll
@@ -1188,109 +1188,39 @@ define amdgpu_kernel void @s_test_srem24_48(ptr addrspace(1) %out, i48 %x, i48 %
 ; GCN-NEXT:    s_endpgm
 ;
 ; GCN-IR-LABEL: s_test_srem24_48:
-; GCN-IR:       ; %bb.0: ; %_udiv-special-cases
-; GCN-IR-NEXT:    s_load_dwordx4 s[0:3], s[4:5], 0xb
-; GCN-IR-NEXT:    s_mov_b32 s13, 0
+; GCN-IR:       ; %bb.0:
+; GCN-IR-NEXT:    s_load_dwordx4 s[0:3], s[4:5], 0x9
+; GCN-IR-NEXT:    s_load_dwordx2 s[4:5], s[4:5], 0xd
+; GCN-IR-NEXT:    s_mov_b32 s7, 0xf000
+; GCN-IR-NEXT:    s_mov_b32 s6, -1
 ; GCN-IR-NEXT:    s_waitcnt lgkmcnt(0)
-; GCN-IR-NEXT:    s_sext_i32_i16 s1, s1
 ; GCN-IR-NEXT:    s_sext_i32_i16 s3, s3
-; GCN-IR-NEXT:    s_ashr_i64 s[0:1], s[0:1], 24
-; GCN-IR-NEXT:    s_ashr_i64 s[2:3], s[2:3], 24
-; GCN-IR-NEXT:    s_lshl_b64 s[0:1], s[0:1], 16
-; GCN-IR-NEXT:    s_lshl_b64 s[6:7], s[2:3], 16
-; GCN-IR-NEXT:    s_ashr_i64 s[2:3], s[0:1], 16
-; GCN-IR-NEXT:    s_ashr_i32 s0, s1, 31
-; GCN-IR-NEXT:    s_mov_b32 s1, s0
-; GCN-IR-NEXT:    s_ashr_i64 s[8:9], s[6:7], 16
-; GCN-IR-NEXT:    s_xor_b64 s[2:3], s[2:3], s[0:1]
-; GCN-IR-NEXT:    s_sub_u32 s2, s2, s0
-; GCN-IR-NEXT:    s_subb_u32 s3, s3, s0
-; GCN-IR-NEXT:    s_ashr_i32 s10, s7, 31
-; GCN-IR-NEXT:    s_mov_b32 s11, s10
-; GCN-IR-NEXT:    s_xor_b64 s[6:7], s[8:9], s[10:11]
-; GCN-IR-NEXT:    s_sub_u32 s6, s6, s10
-; GCN-IR-NEXT:    s_subb_u32 s7, s7, s10
-; GCN-IR-NEXT:    v_cmp_eq_u64_e64 s[8:9], s[6:7], 0
-; GCN-IR-NEXT:    v_cmp_eq_u64_e64 s[10:11], s[2:3], 0
-; GCN-IR-NEXT:    s_flbit_i32_b64 s12, s[6:7]
-; GCN-IR-NEXT:    s_or_b64 s[10:11], s[8:9], s[10:11]
-; GCN-IR-NEXT:    s_flbit_i32_b64 s20, s[2:3]
-; GCN-IR-NEXT:    s_sub_u32 s14, s12, s20
-; GCN-IR-NEXT:    s_subb_u32 s15, 0, 0
-; GCN-IR-NEXT:    v_cmp_gt_u64_e64 s[16:17], s[14:15], 63
-; GCN-IR-NEXT:    v_cmp_eq_u64_e64 s[18:19], s[14:15], 63
-; GCN-IR-NEXT:    s_or_b64 s[16:17], s[10:11], s[16:17]
-; GCN-IR-NEXT:    s_and_b64 s[10:11], s[16:17], exec
-; GCN-IR-NEXT:    s_cselect_b32 s11, 0, s3
-; GCN-IR-NEXT:    s_cselect_b32 s10, 0, s2
-; GCN-IR-NEXT:    s_or_b64 s[16:17], s[16:17], s[18:19]
-; GCN-IR-NEXT:    s_mov_b64 s[8:9], 0
-; GCN-IR-NEXT:    s_andn2_b64 vcc, exec, s[16:17]
-; GCN-IR-NEXT:    s_cbranch_vccz .LBB9_5
-; GCN-IR-NEXT:  ; %bb.1: ; %udiv-bb1
-; GCN-IR-NEXT:    s_add_u32 s16, s14, 1
-; GCN-IR-NEXT:    s_addc_u32 s17, s15, 0
-; GCN-IR-NEXT:    v_cmp_eq_u64_e64 s[10:11], s[16:17], 0
-; GCN-IR-NEXT:    s_sub_i32 s14, 63, s14
-; GCN-IR-NEXT:    s_andn2_b64 vcc, exec, s[10:11]
-; GCN-IR-NEXT:    s_lshl_b64 s[10:11], s[2:3], s14
-; GCN-IR-NEXT:    s_cbranch_vccz .LBB9_4
-; GCN-IR-NEXT:  ; %bb.2: ; %udiv-preheader
-; GCN-IR-NEXT:    s_lshr_b64 s[14:15], s[2:3], s16
-; GCN-IR-NEXT:    s_add_u32 s18, s6, -1
-; GCN-IR-NEXT:    s_addc_u32 s19, s7, -1
-; GCN-IR-NEXT:    s_not_b64 s[8:9], s[12:13]
-; GCN-IR-NEXT:    s_add_u32 s12, s8, s20
-; GCN-IR-NEXT:    s_addc_u32 s13, s9, 0
-; GCN-IR-NEXT:    s_mov_b64 s[16:17], 0
-; GCN-IR-NEXT:    s_mov_b32 s9, 0
-; GCN-IR-NEXT:  .LBB9_3: ; %udiv-do-while
-; GCN-IR-NEXT:    ; =>This Inner Loop Header: Depth=1
-; GCN-IR-NEXT:    s_lshl_b64 s[14:15], s[14:15], 1
-; GCN-IR-NEXT:    s_lshr_b32 s8, s11, 31
-; GCN-IR-NEXT:    s_lshl_b64 s[10:11], s[10:11], 1
-; GCN-IR-NEXT:    s_or_b64 s[14:15], s[14:15], s[8:9]
-; GCN-IR-NEXT:    s_or_b64 s[10:11], s[16:17], s[10:11]
-; GCN-IR-NEXT:    s_sub_u32 s8, s18, s14
-; GCN-IR-NEXT:    s_subb_u32 s8, s19, s15
-; GCN-IR-NEXT:    s_ashr_i32 s16, s8, 31
-; GCN-IR-NEXT:    s_mov_b32 s17, s16
-; GCN-IR-NEXT:    s_and_b32 s8, s16, 1
-; GCN-IR-NEXT:    s_and_b64 s[16:17], s[16:17], s[6:7]
-; GCN-IR-NEXT:    s_sub_u32 s14, s14, s16
-; GCN-IR-NEXT:    s_subb_u32 s15, s15, s17
-; GCN-IR-NEXT:    s_add_u32 s12, s12, 1
-; GCN-IR-NEXT:    s_addc_u32 s13, s13, 0
-; GCN-IR-NEXT:    v_cmp_eq_u64_e64 s[20:21], s[12:13], 0
-; GCN-IR-NEXT:    s_mov_b64 s[16:17], s[8:9]
-; GCN-IR-NEXT:    s_and_b64 vcc, exec, s[20:21]
-; GCN-IR-NEXT:    s_cbranch_vccz .LBB9_3
-; GCN-IR-NEXT:  .LBB9_4: ; %Flow4
-; GCN-IR-NEXT:    s_lshl_b64 s[10:11], s[10:11], 1
-; GCN-IR-NEXT:    s_or_b64 s[10:11], s[8:9], s[10:11]
-; GCN-IR-NEXT:  .LBB9_5: ; %udiv-end
-; GCN-IR-NEXT:    v_mov_b32_e32 v0, s10
-; GCN-IR-NEXT:    v_mul_hi_u32 v0, s6, v0
-; GCN-IR-NEXT:    s_load_dwordx2 s[12:13], s[4:5], 0x9
-; GCN-IR-NEXT:    s_mul_i32 s4, s6, s11
-; GCN-IR-NEXT:    v_mov_b32_e32 v2, s3
-; GCN-IR-NEXT:    v_add_i32_e32 v0, vcc, s4, v0
-; GCN-IR-NEXT:    s_mul_i32 s4, s7, s10
-; GCN-IR-NEXT:    v_add_i32_e32 v0, vcc, s4, v0
-; GCN-IR-NEXT:    s_mul_i32 s4, s6, s10
-; GCN-IR-NEXT:    v_mov_b32_e32 v1, s4
-; GCN-IR-NEXT:    v_sub_i32_e32 v1, vcc, s2, v1
-; GCN-IR-NEXT:    v_subb_u32_e32 v0, vcc, v2, v0, vcc
-; GCN-IR-NEXT:    v_xor_b32_e32 v1, s0, v1
-; GCN-IR-NEXT:    v_xor_b32_e32 v0, s1, v0
-; GCN-IR-NEXT:    v_mov_b32_e32 v2, s1
-; GCN-IR-NEXT:    v_subrev_i32_e32 v1, vcc, s0, v1
-; GCN-IR-NEXT:    s_mov_b32 s15, 0xf000
-; GCN-IR-NEXT:    s_mov_b32 s14, -1
-; GCN-IR-NEXT:    v_subb_u32_e32 v0, vcc, v0, v2, vcc
-; GCN-IR-NEXT:    s_waitcnt lgkmcnt(0)
-; GCN-IR-NEXT:    buffer_store_short v0, off, s[12:15], 0 offset:4
-; GCN-IR-NEXT:    buffer_store_dword v1, off, s[12:15], 0
+; GCN-IR-NEXT:    s_sext_i32_i16 s5, s5
+; GCN-IR-NEXT:    v_mov_b32_e32 v0, s4
+; GCN-IR-NEXT:    v_alignbit_b32 v0, s5, v0, 24
+; GCN-IR-NEXT:    v_cvt_f32_i32_e32 v1, v0
+; GCN-IR-NEXT:    v_mov_b32_e32 v2, s2
+; GCN-IR-NEXT:    v_alignbit_b32 v2, s3, v2, 24
+; GCN-IR-NEXT:    v_cvt_f32_i32_e32 v3, v2
+; GCN-IR-NEXT:    v_rcp_iflag_f32_e32 v4, v1
+; GCN-IR-NEXT:    v_xor_b32_e32 v5, v2, v0
+; GCN-IR-NEXT:    v_ashrrev_i32_e32 v5, 30, v5
+; GCN-IR-NEXT:    v_or_b32_e32 v5, 1, v5
+; GCN-IR-NEXT:    v_mul_f32_e32 v4, v3, v4
+; GCN-IR-NEXT:    v_trunc_f32_e32 v4, v4
+; GCN-IR-NEXT:    v_mad_f32 v3, -v4, v1, v3
+; GCN-IR-NEXT:    v_cvt_i32_f32_e32 v4, v4
+; GCN-IR-NEXT:    v_cmp_ge_f32_e64 vcc, |v3|, |v1|
+; GCN-IR-NEXT:    v_cndmask_b32_e32 v1, 0, v5, vcc
+; GCN-IR-NEXT:    s_mov_b32 s4, s0
+; GCN-IR-NEXT:    v_add_i32_e32 v1, vcc, v1, v4
+; GCN-IR-NEXT:    v_mul_lo_u32 v0, v1, v0
+; GCN-IR-NEXT:    s_mov_b32 s5, s1
+; GCN-IR-NEXT:    v_subrev_i32_e32 v0, vcc, v0, v2
+; GCN-IR-NEXT:    v_bfe_i32 v0, v0, 0, 24
+; GCN-IR-NEXT:    v_ashrrev_i32_e32 v1, 31, v0
+; GCN-IR-NEXT:    buffer_store_dword v0, off, s[4:7], 0
+; GCN-IR-NEXT:    buffer_store_short v1, off, s[4:7], 0 offset:4
 ; GCN-IR-NEXT:    s_endpgm
   %1 = ashr i48 %x, 24
   %2 = ashr i48 %y, 24

@jayfoad
Contributor

jayfoad commented Dec 13, 2024

I think it's important to understand why this changes behaviour. Is the new behaviour wrong or right or just adding/removing a missed optimization?

@choikwa
Contributor Author

choikwa commented Dec 13, 2024

From debugging, I've observed that in shrinkDivRem64,

int NumDivBits = getDivNumBits(I, Num, Den, 32 /*AtLeast*/, IsSigned);
if (NumDivBits == -1)
  return nullptr;

It's rejecting the case where Num or Den needs <= 32 div/rem bits but has fewer than 32 provable sign bits. Such a case arises, for example, when Num or Den is %1 = ashr i48 %x, 24, or more generally when the source's scalar bit width is strictly between 32 and 64, since the AtLeast logic only looks at the number of sign bits.
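
In concrete numbers (a standalone sketch of the arithmetic above; 25 is the minimum sign-bit count ComputeNumSignBits can prove for ashr i48 %x, 24 with nothing else known about %x):

#include <cstdio>

int main() {
  unsigned SSBits = 48;   // scalar width of the i48 operands
  unsigned SignBits = 25; // provable sign bits of `ashr i48 %x, 24`

  // Old check in shrinkDivRem64: require at least 32 sign bits.
  bool OldRejected = SignBits < 32; // 25 < 32 -> getDivNumBits returned -1

  // New check: compute the bits actually needed, compare to MaxDivBits = 32.
  unsigned DivBits = SSBits - SignBits + 1; // 48 - 25 + 1 = 24
  bool NewRejected = DivBits > 32;          // false -> the shrink now happens

  std::printf("old rejected: %d, DivBits: %u, new rejected: %d\n",
              OldRejected, DivBits, NewRejected);
}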

@choikwa choikwa requested review from arsenm and jayfoad December 13, 2024 21:34
@choikwa
Contributor Author

choikwa commented Jan 3, 2025

bump

@jayfoad
Contributor

jayfoad commented Jan 6, 2025

From debugging, I've observed that in shrinkDivRem64,

int NumDivBits = getDivNumBits(I, Num, Den, 32 /*AtLeast*/, IsSigned);
if (NumDivBits == -1)
  return nullptr;

It's rejecting the case where Num or Den needs <= 32 div/rem bits but has fewer than 32 provable sign bits. Such a case arises, for example, when Num or Den is %1 = ashr i48 %x, 24, or more generally when the source's scalar bit width is strictly between 32 and 64, since the AtLeast logic only looks at the number of sign bits.

Do I understand you correctly: the original code should have passed in BitWidth - 32 instead of 32 for the AtLeast argument. And the name shrinkDivRem64 is misleading since it is also used for (e.g.) 48-bit division.
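
A standalone sketch of the two thresholds being compared here, reusing the i48 numbers from the comment above (plain integers stand in for the LLVM sign-bit queries):

#include <cstdio>

int main() {
  unsigned BitWidth = 48; // e.g. i48 operands from `ashr i48 %x, 24`
  unsigned SignBits = 25; // provable sign bits on each operand

  unsigned AtLeastAsWritten = 32;            // what shrinkDivRem64 passed
  unsigned AtLeastSuggested = BitWidth - 32; // 16, as described above

  // The old API bailed out (returned -1) whenever SignBits < AtLeast.
  std::printf("as written: %s\n",
              SignBits < AtLeastAsWritten ? "rejected" : "kept"); // rejected
  std::printf("suggested:  %s\n",
              SignBits < AtLeastSuggested ? "rejected" : "kept"); // kept
  // Either way the division itself needs only BitWidth - SignBits + 1 = 24
  // bits, which fits in 32.
}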

@choikwa
Contributor Author

choikwa commented Jan 6, 2025

Yes to both

@arsenm
Contributor

arsenm commented Jan 6, 2025

should have passed in BitWidth - 32 instead of 32

I'd prefer to just do this as a separate functionality fix without mixing in the API change

@choikwa
Contributor Author

choikwa commented Jan 6, 2025

I'll create another PR for the functionality fix.

Value *Num, Value *Den,
unsigned AtLeast, bool Signed) const;
unsigned getDivNumBits(BinaryOperator &I, Value *Num, Value *Den,
unsigned AtLeast, bool Signed) const;
Contributor

Change argument name to MaxDivBits

bool IsSigned) const {
/// Figure out how many bits are really needed for this division.
/// \p MaxDivBits is an optimization hint to bypass the second
/// ComputeNumSignBits/computeKnownBits call if we the first one is
Contributor

Remove "we". Or did you mean "if we know"?

unsigned RHSSignBits = ComputeNumSignBits(Den, DL, 0, AC, &I);
if (RHSSignBits < AtLeast)
return -1;
// a SignBit needs to be reserved for shrinking
Contributor

Comments should be full sentences. Capital letter, period.

// a SignBit needs to be reserved for shrinking
unsigned DivBits = SSBits - RHSSignBits + 1;
if (DivBits > MaxDivBits)
return DivBits;
Contributor

I think it is a bit safer to return SSBits whenever we bail out, instead of DivBits.

unsigned LHSSignBits = ComputeNumSignBits(Num, DL, 0, AC, &I);
if (LHSSignBits < AtLeast)
return -1;
DivBits = SSBits - LHSSignBits + 1;
Contributor

There is no need for this early-out, since it does not save any expensive calls to ComputeNumSignBits. Same comments apply to the unsigned cases below.

Contributor

@jayfoad jayfoad left a comment

LGTM, thanks.

Rework involves 3 things:
- return unsigned value.
- change from AtLeast(SignBits) to MaxDivBits hint.
- use MaxDivBits hint for unsigned case.
@choikwa choikwa merged commit de12836 into llvm:main Jan 9, 2025
4 of 5 checks passed