Commit 3c2e5d5
[AMDGPU] Update log lowering to remove contract for AMDGCN backend (#168916)
## Problem Summary

PyTorch's `test_warp_softmax_64bit_indexing` is failing with a numerical precision error: `log(1.1422761679)` is computed with 54% higher error than expected (9.042e-09 vs 5.859e-09), causing gradient computations to exceed tolerance thresholds. This precision degradation was reproducible across all AMD GPU architectures tested (gfx1100, gfx1200, gfx90a, gfx950).

I tracked the problem down to commit **4703f8b6610a** (March 6, 2025), which changed the HIP math headers to call `__builtin_logf()` directly instead of `__ocml_log_f32()`:

```diff
- float logf(float __x) { return __FAST_OR_SLOW(__logf, __ocml_log_f32)(__x); }
+ float logf(float __x) { return __FAST_OR_SLOW(__logf, __builtin_logf)(__x); }
```

This change exposed a problem in the AMDGCN back end, described below.

## Key Findings

**1. Contract flag propagation:** When `-ffp-contract=fast` is enabled (the default for HIP), Clang's CodeGen adds the `contract` flag to all `CallInst` instructions within the scope of `CGFPOptionsRAII`, including calls to LLVM intrinsics such as `llvm.log.f32`.

**2. Behavior change from the OCML to the builtin path:**

- **Old path** (via `__ocml_log_f32`): The preprocessed IR shows that the call to the OCML library function carried the contract flag, but the OCML implementation internally dropped the flag when calling the `llvm.log.f32` intrinsic.
```llvm
; Function Attrs: alwaysinline convergent mustprogress nounwind
define internal noundef float @_ZL4logff(float noundef %__x) #6 {
entry:
  %retval = alloca float, align 4, addrspace(5)
  %__x.addr = alloca float, align 4, addrspace(5)
  %retval.ascast = addrspacecast ptr addrspace(5) %retval to ptr
  %__x.addr.ascast = addrspacecast ptr addrspace(5) %__x.addr to ptr
  store float %__x, ptr %__x.addr.ascast, align 4, !tbaa !23
  %0 = load float, ptr %__x.addr.ascast, align 4, !tbaa !23
  %call = call contract float @__ocml_log_f32(float noundef %0) #23
  ret float %call
}

; Function Attrs: convergent mustprogress nofree norecurse nosync nounwind willreturn memory(none)
define internal noundef float @__ocml_log_f32(float noundef %0) #7 {
  %2 = tail call float @llvm.log.f32(float %0)
  ret float %2
}
```

- **New path** (via `__builtin_logf`): The call goes directly to the `llvm.log.f32` intrinsic with the contract flag preserved, causing the backend to apply FMA contraction during polynomial expansion.

```llvm
; Function Attrs: alwaysinline convergent mustprogress nounwind
define internal noundef float @_ZL4logff(float noundef %__x) #6 {
entry:
  %retval = alloca float, align 4, addrspace(5)
  %__x.addr = alloca float, align 4, addrspace(5)
  %retval.ascast = addrspacecast ptr addrspace(5) %retval to ptr
  %__x.addr.ascast = addrspacecast ptr addrspace(5) %__x.addr to ptr
  store float %__x, ptr %__x.addr.ascast, align 4, !tbaa !24
  %0 = load float, ptr %__x.addr.ascast, align 4, !tbaa !24
  %1 = call contract float @llvm.log.f32(float %0)
  ret float %1
}
```

**3. Why contract breaks log:** Our AMDGCN back end implements the natural logarithm by taking the result of the hardware log2 instruction, multiplying it by `ln(2)`, and applying a rounding-error correction to that multiplication.
This results in something like:

```c
r = y * c1;                         // y is the result of the v_log_f32 instruction, c1 = ln(2)
r = r + fma(y, c2, fma(y, c1, -r)); // c2 is another error-correcting constant
```

```asm
  v_log_f32_e32 v1, v1
  s_mov_b32 s2, 0x3f317217
  v_mul_f32_e32 v3, 0x3f317217, v1
  v_fma_f32 v4, v1, s2, -v3
  v_fmac_f32_e32 v4, 0x3377d1cf, v1
  v_add_f32_e32 v3, v3, v4
```

In the presence of the `contract` flag, the back end fuses the add (`r + Z`) with the multiply, believing the fusion to be legal, and thereby eliminates the intermediate rounding. The error-compensation term, which was calculated from the *rounded* product, is then added to the full-precision product inside the FMA, leading to incorrect error correction and degraded accuracy. The contracted operations become:

```c
r = y * c1;
r = fma(y, c1, fma(y, c2, fma(y, c1, -r)));
```

```asm
  v_log_f32_e32 v1, v1
  s_mov_b32 s2, 0x3f317217
  v_mul_f32_e32 v3, 0x3f317217, v1
  v_fma_f32 v3, v1, s2, -v3
  v_fmac_f32_e32 v3, 0x3377d1cf, v1
  v_fmac_f32_e32 v3, 0x3f317217, v1
```

## Solution and Proposed Fix

Given our implementation of `llvm.log` and `llvm.log10`, it is not legal for the back end to honor the `contract` flag present on the intrinsic call, because the expansion relies on error-compensated summation. My proposed fix is to modify the instruction selection passes (both GlobalISel and SelectionDAG) to drop the `contract` flag when lowering `llvm.log`. That way, when instruction selection performs the contraction optimization, it will not fuse the multiply and add.

Note: I had originally implemented this fix in the frontend by removing the `contract` flag when lowering the log builtin (PR #168770). I have since closed that PR.
1 parent afc0fb8 commit 3c2e5d5

File tree

4 files changed: +627 −13 lines changed

llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp

Lines changed: 6 additions & 3 deletions

@@ -2772,7 +2772,6 @@ SDValue AMDGPUTargetLowering::LowerFLOGCommon(SDValue Op,
   EVT VT = Op.getValueType();
   SDNodeFlags Flags = Op->getFlags();
   SDLoc DL(Op);
-
   const bool IsLog10 = Op.getOpcode() == ISD::FLOG10;
   assert(IsLog10 || Op.getOpcode() == ISD::FLOG);
 
@@ -2811,7 +2810,9 @@ SDValue AMDGPUTargetLowering::LowerFLOGCommon(SDValue Op,
 
   SDValue C = DAG.getConstantFP(IsLog10 ? c_log10 : c_log, DL, VT);
   SDValue CC = DAG.getConstantFP(IsLog10 ? cc_log10 : cc_log, DL, VT);
-
+  // This adds correction terms for which contraction may lead to an increase
+  // in the error of the approximation, so disable it.
+  Flags.setAllowContract(false);
   R = DAG.getNode(ISD::FMUL, DL, VT, Y, C, Flags);
   SDValue NegR = DAG.getNode(ISD::FNEG, DL, VT, R, Flags);
   SDValue FMA0 = DAG.getNode(ISD::FMA, DL, VT, Y, C, NegR, Flags);
@@ -2834,7 +2835,9 @@ SDValue AMDGPUTargetLowering::LowerFLOGCommon(SDValue Op,
   SDValue YHInt = DAG.getNode(ISD::AND, DL, MVT::i32, YAsInt, MaskConst);
   SDValue YH = DAG.getNode(ISD::BITCAST, DL, MVT::f32, YHInt);
   SDValue YT = DAG.getNode(ISD::FSUB, DL, VT, Y, YH, Flags);
-
+  // This adds correction terms for which contraction may lead to an increase
+  // in the error of the approximation, so disable it.
+  Flags.setAllowContract(false);
   SDValue YTCT = DAG.getNode(ISD::FMUL, DL, VT, YT, CT, Flags);
   SDValue Mad0 = getMad(DAG, DL, VT, YH, CT, YTCT, Flags);
   SDValue Mad1 = getMad(DAG, DL, VT, YT, CH, Mad0, Flags);

llvm/lib/Target/AMDGPU/AMDGPULegalizerInfo.cpp

Lines changed: 15 additions & 10 deletions

@@ -3561,12 +3561,14 @@ bool AMDGPULegalizerInfo::legalizeFlogCommon(MachineInstr &MI,
 
     auto C = B.buildFConstant(Ty, IsLog10 ? c_log10 : c_log);
     auto CC = B.buildFConstant(Ty, IsLog10 ? cc_log10 : cc_log);
-
-    R = B.buildFMul(Ty, Y, C, Flags).getReg(0);
-    auto NegR = B.buildFNeg(Ty, R, Flags);
-    auto FMA0 = B.buildFMA(Ty, Y, C, NegR, Flags);
-    auto FMA1 = B.buildFMA(Ty, Y, CC, FMA0, Flags);
-    R = B.buildFAdd(Ty, R, FMA1, Flags).getReg(0);
+    // This adds correction terms for which contraction may lead to an increase
+    // in the error of the approximation, so disable it.
+    auto NewFlags = Flags & ~(MachineInstr::FmContract);
+    R = B.buildFMul(Ty, Y, C, NewFlags).getReg(0);
+    auto NegR = B.buildFNeg(Ty, R, NewFlags);
+    auto FMA0 = B.buildFMA(Ty, Y, C, NegR, NewFlags);
+    auto FMA1 = B.buildFMA(Ty, Y, CC, FMA0, NewFlags);
+    R = B.buildFAdd(Ty, R, FMA1, NewFlags).getReg(0);
   } else {
     // ch+ct is ln(2)/ln(10) to more than 36 bits
     const float ch_log10 = 0x1.344000p-2f;
@@ -3582,12 +3584,15 @@ bool AMDGPULegalizerInfo::legalizeFlogCommon(MachineInstr &MI,
     auto MaskConst = B.buildConstant(Ty, 0xfffff000);
     auto YH = B.buildAnd(Ty, Y, MaskConst);
     auto YT = B.buildFSub(Ty, Y, YH, Flags);
-    auto YTCT = B.buildFMul(Ty, YT, CT, Flags);
+    // This adds correction terms for which contraction may lead to an increase
+    // in the error of the approximation, so disable it.
+    auto NewFlags = Flags & ~(MachineInstr::FmContract);
+    auto YTCT = B.buildFMul(Ty, YT, CT, NewFlags);
 
     Register Mad0 =
-        getMad(B, Ty, YH.getReg(0), CT.getReg(0), YTCT.getReg(0), Flags);
-    Register Mad1 = getMad(B, Ty, YT.getReg(0), CH.getReg(0), Mad0, Flags);
-    R = getMad(B, Ty, YH.getReg(0), CH.getReg(0), Mad1, Flags);
+        getMad(B, Ty, YH.getReg(0), CT.getReg(0), YTCT.getReg(0), NewFlags);
+    Register Mad1 = getMad(B, Ty, YT.getReg(0), CH.getReg(0), Mad0, NewFlags);
+    R = getMad(B, Ty, YH.getReg(0), CH.getReg(0), Mad1, NewFlags);
   }
 
   const bool IsFiniteOnly =

llvm/test/CodeGen/AMDGPU/llvm.log.ll

Lines changed: 303 additions & 0 deletions

@@ -316,6 +316,309 @@ define amdgpu_kernel void @s_log_f32(ptr addrspace(1) %out, float %in) {
   ret void
 }
 
+define amdgpu_kernel void @s_log_contract_f32(ptr addrspace(1) %out, float %in) {
+; SI-SDAG-LABEL: s_log_contract_f32:
+; SI-SDAG: ; %bb.0:
+; SI-SDAG-NEXT: s_load_dword s6, s[4:5], 0xb
+; SI-SDAG-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x9
+; SI-SDAG-NEXT: v_mov_b32_e32 v0, 0x800000
+; SI-SDAG-NEXT: v_mov_b32_e32 v1, 0x41b17218
+; SI-SDAG-NEXT: s_mov_b32 s4, 0x3f317217
+; SI-SDAG-NEXT: s_waitcnt lgkmcnt(0)
+; SI-SDAG-NEXT: v_cmp_lt_f32_e32 vcc, s6, v0
+; SI-SDAG-NEXT: s_and_b64 s[2:3], vcc, exec
+; SI-SDAG-NEXT: s_cselect_b32 s2, 32, 0
+; SI-SDAG-NEXT: v_cndmask_b32_e32 v0, 0, v1, vcc
+; SI-SDAG-NEXT: v_mov_b32_e32 v1, s2
+; SI-SDAG-NEXT: v_ldexp_f32_e32 v1, s6, v1
+; SI-SDAG-NEXT: v_log_f32_e32 v1, v1
+; SI-SDAG-NEXT: s_mov_b32 s3, 0xf000
+; SI-SDAG-NEXT: s_mov_b32 s2, -1
+; SI-SDAG-NEXT: v_mul_f32_e32 v2, 0x3f317217, v1
+; SI-SDAG-NEXT: v_fma_f32 v3, v1, s4, -v2
+; SI-SDAG-NEXT: s_mov_b32 s4, 0x3377d1cf
+; SI-SDAG-NEXT: v_fma_f32 v3, v1, s4, v3
+; SI-SDAG-NEXT: s_mov_b32 s4, 0x7f800000
+; SI-SDAG-NEXT: v_add_f32_e32 v2, v2, v3
+; SI-SDAG-NEXT: v_cmp_lt_f32_e64 vcc, |v1|, s4
+; SI-SDAG-NEXT: v_cndmask_b32_e32 v1, v1, v2, vcc
+; SI-SDAG-NEXT: v_sub_f32_e32 v0, v1, v0
+; SI-SDAG-NEXT: buffer_store_dword v0, off, s[0:3], 0
+; SI-SDAG-NEXT: s_endpgm
+;
+; SI-GISEL-LABEL: s_log_contract_f32:
+; SI-GISEL: ; %bb.0:
+; SI-GISEL-NEXT: s_load_dword s0, s[4:5], 0xb
+; SI-GISEL-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x9
+; SI-GISEL-NEXT: v_mov_b32_e32 v0, 0x800000
+; SI-GISEL-NEXT: v_mov_b32_e32 v1, 0x3f317217
+; SI-GISEL-NEXT: v_mov_b32_e32 v2, 0x3377d1cf
+; SI-GISEL-NEXT: s_waitcnt lgkmcnt(0)
+; SI-GISEL-NEXT: v_cmp_lt_f32_e32 vcc, s0, v0
+; SI-GISEL-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc
+; SI-GISEL-NEXT: v_lshlrev_b32_e32 v0, 5, v0
+; SI-GISEL-NEXT: v_ldexp_f32_e32 v0, s0, v0
+; SI-GISEL-NEXT: v_log_f32_e32 v0, v0
+; SI-GISEL-NEXT: v_mov_b32_e32 v3, 0x7f800000
+; SI-GISEL-NEXT: s_mov_b32 s6, -1
+; SI-GISEL-NEXT: s_mov_b32 s7, 0xf000
+; SI-GISEL-NEXT: v_mul_f32_e32 v4, 0x3f317217, v0
+; SI-GISEL-NEXT: v_fma_f32 v1, v0, v1, -v4
+; SI-GISEL-NEXT: v_fma_f32 v1, v0, v2, v1
+; SI-GISEL-NEXT: v_add_f32_e32 v1, v4, v1
+; SI-GISEL-NEXT: v_cmp_lt_f32_e64 s[0:1], |v0|, v3
+; SI-GISEL-NEXT: v_cndmask_b32_e64 v0, v0, v1, s[0:1]
+; SI-GISEL-NEXT: v_mov_b32_e32 v1, 0x41b17218
+; SI-GISEL-NEXT: v_cndmask_b32_e32 v1, 0, v1, vcc
+; SI-GISEL-NEXT: v_sub_f32_e32 v0, v0, v1
+; SI-GISEL-NEXT: buffer_store_dword v0, off, s[4:7], 0
+; SI-GISEL-NEXT: s_endpgm
+;
+; VI-SDAG-LABEL: s_log_contract_f32:
+; VI-SDAG: ; %bb.0:
+; VI-SDAG-NEXT: s_load_dword s2, s[4:5], 0x2c
+; VI-SDAG-NEXT: v_mov_b32_e32 v0, 0x800000
+; VI-SDAG-NEXT: v_mov_b32_e32 v1, 0x41b17218
+; VI-SDAG-NEXT: s_waitcnt lgkmcnt(0)
+; VI-SDAG-NEXT: v_cmp_lt_f32_e32 vcc, s2, v0
+; VI-SDAG-NEXT: s_and_b64 s[0:1], vcc, exec
+; VI-SDAG-NEXT: s_cselect_b32 s0, 32, 0
+; VI-SDAG-NEXT: v_cndmask_b32_e32 v0, 0, v1, vcc
+; VI-SDAG-NEXT: v_mov_b32_e32 v1, s0
+; VI-SDAG-NEXT: v_ldexp_f32 v1, s2, v1
+; VI-SDAG-NEXT: v_log_f32_e32 v1, v1
+; VI-SDAG-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x24
+; VI-SDAG-NEXT: s_mov_b32 s2, 0x7f800000
+; VI-SDAG-NEXT: v_and_b32_e32 v2, 0xfffff000, v1
+; VI-SDAG-NEXT: v_sub_f32_e32 v3, v1, v2
+; VI-SDAG-NEXT: v_mul_f32_e32 v4, 0x3805fdf4, v2
+; VI-SDAG-NEXT: v_mul_f32_e32 v5, 0x3f317000, v3
+; VI-SDAG-NEXT: v_mul_f32_e32 v3, 0x3805fdf4, v3
+; VI-SDAG-NEXT: v_add_f32_e32 v3, v4, v3
+; VI-SDAG-NEXT: v_mul_f32_e32 v2, 0x3f317000, v2
+; VI-SDAG-NEXT: v_add_f32_e32 v3, v5, v3
+; VI-SDAG-NEXT: v_add_f32_e32 v2, v2, v3
+; VI-SDAG-NEXT: v_cmp_lt_f32_e64 vcc, |v1|, s2
+; VI-SDAG-NEXT: v_cndmask_b32_e32 v1, v1, v2, vcc
+; VI-SDAG-NEXT: v_sub_f32_e32 v2, v1, v0
+; VI-SDAG-NEXT: s_waitcnt lgkmcnt(0)
+; VI-SDAG-NEXT: v_mov_b32_e32 v0, s0
+; VI-SDAG-NEXT: v_mov_b32_e32 v1, s1
+; VI-SDAG-NEXT: flat_store_dword v[0:1], v2
+; VI-SDAG-NEXT: s_endpgm
+;
+; VI-GISEL-LABEL: s_log_contract_f32:
+; VI-GISEL: ; %bb.0:
+; VI-GISEL-NEXT: s_load_dword s0, s[4:5], 0x2c
+; VI-GISEL-NEXT: s_load_dwordx2 s[2:3], s[4:5], 0x24
+; VI-GISEL-NEXT: v_mov_b32_e32 v0, 0x800000
+; VI-GISEL-NEXT: v_mov_b32_e32 v1, 0x7f800000
+; VI-GISEL-NEXT: s_waitcnt lgkmcnt(0)
+; VI-GISEL-NEXT: v_cmp_lt_f32_e32 vcc, s0, v0
+; VI-GISEL-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc
+; VI-GISEL-NEXT: v_lshlrev_b32_e32 v0, 5, v0
+; VI-GISEL-NEXT: v_ldexp_f32 v0, s0, v0
+; VI-GISEL-NEXT: v_log_f32_e32 v0, v0
+; VI-GISEL-NEXT: v_and_b32_e32 v2, 0xfffff000, v0
+; VI-GISEL-NEXT: v_sub_f32_e32 v3, v0, v2
+; VI-GISEL-NEXT: v_mul_f32_e32 v4, 0x3805fdf4, v2
+; VI-GISEL-NEXT: v_mul_f32_e32 v5, 0x3805fdf4, v3
+; VI-GISEL-NEXT: v_mul_f32_e32 v3, 0x3f317000, v3
+; VI-GISEL-NEXT: v_add_f32_e32 v4, v4, v5
+; VI-GISEL-NEXT: v_mul_f32_e32 v2, 0x3f317000, v2
+; VI-GISEL-NEXT: v_add_f32_e32 v3, v3, v4
+; VI-GISEL-NEXT: v_add_f32_e32 v2, v2, v3
+; VI-GISEL-NEXT: v_cmp_lt_f32_e64 s[0:1], |v0|, v1
+; VI-GISEL-NEXT: v_mov_b32_e32 v1, 0x41b17218
+; VI-GISEL-NEXT: v_cndmask_b32_e64 v0, v0, v2, s[0:1]
+; VI-GISEL-NEXT: v_cndmask_b32_e32 v1, 0, v1, vcc
+; VI-GISEL-NEXT: v_sub_f32_e32 v2, v0, v1
+; VI-GISEL-NEXT: v_mov_b32_e32 v0, s2
+; VI-GISEL-NEXT: v_mov_b32_e32 v1, s3
+; VI-GISEL-NEXT: flat_store_dword v[0:1], v2
+; VI-GISEL-NEXT: s_endpgm
+;
+; GFX900-SDAG-LABEL: s_log_contract_f32:
+; GFX900-SDAG: ; %bb.0:
+; GFX900-SDAG-NEXT: s_load_dword s6, s[4:5], 0x2c
+; GFX900-SDAG-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x24
+; GFX900-SDAG-NEXT: v_mov_b32_e32 v0, 0x800000
+; GFX900-SDAG-NEXT: v_mov_b32_e32 v1, 0x41b17218
+; GFX900-SDAG-NEXT: v_mov_b32_e32 v2, 0
+; GFX900-SDAG-NEXT: s_waitcnt lgkmcnt(0)
+; GFX900-SDAG-NEXT: v_cmp_lt_f32_e32 vcc, s6, v0
+; GFX900-SDAG-NEXT: s_and_b64 s[2:3], vcc, exec
+; GFX900-SDAG-NEXT: s_cselect_b32 s2, 32, 0
+; GFX900-SDAG-NEXT: v_cndmask_b32_e32 v0, 0, v1, vcc
+; GFX900-SDAG-NEXT: v_mov_b32_e32 v1, s2
+; GFX900-SDAG-NEXT: v_ldexp_f32 v1, s6, v1
+; GFX900-SDAG-NEXT: v_log_f32_e32 v1, v1
+; GFX900-SDAG-NEXT: s_mov_b32 s2, 0x3f317217
+; GFX900-SDAG-NEXT: s_mov_b32 s3, 0x3377d1cf
+; GFX900-SDAG-NEXT: v_mul_f32_e32 v3, 0x3f317217, v1
+; GFX900-SDAG-NEXT: v_fma_f32 v4, v1, s2, -v3
+; GFX900-SDAG-NEXT: v_fma_f32 v4, v1, s3, v4
+; GFX900-SDAG-NEXT: s_mov_b32 s2, 0x7f800000
+; GFX900-SDAG-NEXT: v_add_f32_e32 v3, v3, v4
+; GFX900-SDAG-NEXT: v_cmp_lt_f32_e64 vcc, |v1|, s2
+; GFX900-SDAG-NEXT: v_cndmask_b32_e32 v1, v1, v3, vcc
+; GFX900-SDAG-NEXT: v_sub_f32_e32 v0, v1, v0
+; GFX900-SDAG-NEXT: global_store_dword v2, v0, s[0:1]
+; GFX900-SDAG-NEXT: s_endpgm
+;
+; GFX900-GISEL-LABEL: s_log_contract_f32:
+; GFX900-GISEL: ; %bb.0:
+; GFX900-GISEL-NEXT: s_load_dword s0, s[4:5], 0x2c
+; GFX900-GISEL-NEXT: s_load_dwordx2 s[2:3], s[4:5], 0x24
+; GFX900-GISEL-NEXT: v_mov_b32_e32 v0, 0x800000
+; GFX900-GISEL-NEXT: v_mov_b32_e32 v2, 0x3f317217
+; GFX900-GISEL-NEXT: v_mov_b32_e32 v3, 0x3377d1cf
+; GFX900-GISEL-NEXT: s_waitcnt lgkmcnt(0)
+; GFX900-GISEL-NEXT: v_cmp_lt_f32_e32 vcc, s0, v0
+; GFX900-GISEL-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc
+; GFX900-GISEL-NEXT: v_lshlrev_b32_e32 v0, 5, v0
+; GFX900-GISEL-NEXT: v_ldexp_f32 v0, s0, v0
+; GFX900-GISEL-NEXT: v_log_f32_e32 v0, v0
+; GFX900-GISEL-NEXT: v_mov_b32_e32 v4, 0x7f800000
+; GFX900-GISEL-NEXT: v_mov_b32_e32 v1, 0
+; GFX900-GISEL-NEXT: v_mul_f32_e32 v5, 0x3f317217, v0
+; GFX900-GISEL-NEXT: v_fma_f32 v2, v0, v2, -v5
+; GFX900-GISEL-NEXT: v_fma_f32 v2, v0, v3, v2
+; GFX900-GISEL-NEXT: v_add_f32_e32 v2, v5, v2
+; GFX900-GISEL-NEXT: v_cmp_lt_f32_e64 s[0:1], |v0|, v4
+; GFX900-GISEL-NEXT: v_cndmask_b32_e64 v0, v0, v2, s[0:1]
+; GFX900-GISEL-NEXT: v_mov_b32_e32 v2, 0x41b17218
+; GFX900-GISEL-NEXT: v_cndmask_b32_e32 v2, 0, v2, vcc
+; GFX900-GISEL-NEXT: v_sub_f32_e32 v0, v0, v2
+; GFX900-GISEL-NEXT: global_store_dword v1, v0, s[2:3]
+; GFX900-GISEL-NEXT: s_endpgm
+;
+; GFX1100-SDAG-LABEL: s_log_contract_f32:
+; GFX1100-SDAG: ; %bb.0:
+; GFX1100-SDAG-NEXT: s_load_b32 s0, s[4:5], 0x2c
+; GFX1100-SDAG-NEXT: s_waitcnt lgkmcnt(0)
+; GFX1100-SDAG-NEXT: v_cmp_gt_f32_e64 s1, 0x800000, s0
+; GFX1100-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_2) | instid1(SALU_CYCLE_1)
+; GFX1100-SDAG-NEXT: v_cndmask_b32_e64 v0, 0, 0x41b17218, s1
+; GFX1100-SDAG-NEXT: s_and_b32 s1, s1, exec_lo
+; GFX1100-SDAG-NEXT: s_cselect_b32 s1, 32, 0
+; GFX1100-SDAG-NEXT: v_ldexp_f32 v1, s0, s1
+; GFX1100-SDAG-NEXT: s_load_b64 s[0:1], s[4:5], 0x24
+; GFX1100-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_3) | instid1(VALU_DEP_2)
+; GFX1100-SDAG-NEXT: v_log_f32_e32 v1, v1
+; GFX1100-SDAG-NEXT: s_waitcnt_depctr depctr_va_vdst(0)
+; GFX1100-SDAG-NEXT: v_mul_f32_e32 v2, 0x3f317217, v1
+; GFX1100-SDAG-NEXT: v_cmp_gt_f32_e64 vcc_lo, 0x7f800000, |v1|
+; GFX1100-SDAG-NEXT: v_fma_f32 v3, 0x3f317217, v1, -v2
+; GFX1100-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1100-SDAG-NEXT: v_fmamk_f32 v3, v1, 0x3377d1cf, v3
+; GFX1100-SDAG-NEXT: v_add_f32_e32 v2, v2, v3
+; GFX1100-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1100-SDAG-NEXT: v_dual_cndmask_b32 v1, v1, v2 :: v_dual_mov_b32 v2, 0
+; GFX1100-SDAG-NEXT: v_sub_f32_e32 v0, v1, v0
+; GFX1100-SDAG-NEXT: s_waitcnt lgkmcnt(0)
+; GFX1100-SDAG-NEXT: global_store_b32 v2, v0, s[0:1]
+; GFX1100-SDAG-NEXT: s_endpgm
+;
+; GFX1100-GISEL-LABEL: s_log_contract_f32:
+; GFX1100-GISEL: ; %bb.0:
+; GFX1100-GISEL-NEXT: s_load_b32 s0, s[4:5], 0x2c
+; GFX1100-GISEL-NEXT: s_waitcnt lgkmcnt(0)
+; GFX1100-GISEL-NEXT: v_cmp_gt_f32_e64 s2, 0x800000, s0
+; GFX1100-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1100-GISEL-NEXT: v_cndmask_b32_e64 v0, 0, 1, s2
+; GFX1100-GISEL-NEXT: v_lshlrev_b32_e32 v0, 5, v0
+; GFX1100-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1100-GISEL-NEXT: v_ldexp_f32 v0, s0, v0
+; GFX1100-GISEL-NEXT: s_load_b64 s[0:1], s[4:5], 0x24
+; GFX1100-GISEL-NEXT: v_log_f32_e32 v0, v0
+; GFX1100-GISEL-NEXT: s_waitcnt_depctr depctr_va_vdst(0)
+; GFX1100-GISEL-NEXT: v_mul_f32_e32 v1, 0x3f317217, v0
+; GFX1100-GISEL-NEXT: v_cmp_gt_f32_e64 vcc_lo, 0x7f800000, |v0|
+; GFX1100-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1100-GISEL-NEXT: v_fma_f32 v2, 0x3f317217, v0, -v1
+; GFX1100-GISEL-NEXT: v_fmac_f32_e32 v2, 0x3377d1cf, v0
+; GFX1100-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1100-GISEL-NEXT: v_dual_add_f32 v1, v1, v2 :: v_dual_mov_b32 v2, 0
+; GFX1100-GISEL-NEXT: v_cndmask_b32_e32 v0, v0, v1, vcc_lo
+; GFX1100-GISEL-NEXT: v_cndmask_b32_e64 v1, 0, 0x41b17218, s2
+; GFX1100-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1100-GISEL-NEXT: v_sub_f32_e32 v0, v0, v1
+; GFX1100-GISEL-NEXT: s_waitcnt lgkmcnt(0)
+; GFX1100-GISEL-NEXT: global_store_b32 v2, v0, s[0:1]
+; GFX1100-GISEL-NEXT: s_endpgm
+;
+; R600-LABEL: s_log_contract_f32:
+; R600: ; %bb.0:
+; R600-NEXT: ALU 23, @4, KC0[CB0:0-32], KC1[]
+; R600-NEXT: MEM_RAT_CACHELESS STORE_RAW T0.X, T1.X, 1
+; R600-NEXT: CF_END
+; R600-NEXT: PAD
+; R600-NEXT: ALU clause starting at 4:
+; R600-NEXT: SETGT * T0.W, literal.x, KC0[2].Z,
+; R600-NEXT: 8388608(1.175494e-38), 0(0.000000e+00)
+; R600-NEXT: CNDE * T1.W, PV.W, 1.0, literal.x,
+; R600-NEXT: 1333788672(4.294967e+09), 0(0.000000e+00)
+; R600-NEXT: MUL_IEEE * T1.W, KC0[2].Z, PV.W,
+; R600-NEXT: LOG_IEEE * T0.X, PV.W,
+; R600-NEXT: AND_INT * T1.W, PS, literal.x,
+; R600-NEXT: -4096(nan), 0(0.000000e+00)
+; R600-NEXT: ADD * T2.W, T0.X, -PV.W,
+; R600-NEXT: MUL_IEEE * T3.W, PV.W, literal.x,
+; R600-NEXT: 939916788(3.194618e-05), 0(0.000000e+00)
+; R600-NEXT: MULADD_IEEE * T3.W, T1.W, literal.x, PV.W,
+; R600-NEXT: 939916788(3.194618e-05), 0(0.000000e+00)
+; R600-NEXT: MULADD_IEEE * T2.W, T2.W, literal.x, PV.W,
+; R600-NEXT: 1060204544(6.931152e-01), 0(0.000000e+00)
+; R600-NEXT: MULADD_IEEE T1.W, T1.W, literal.x, PV.W,
+; R600-NEXT: SETGT * T2.W, literal.y, |T0.X|,
+; R600-NEXT: 1060204544(6.931152e-01), 2139095040(INF)
+; R600-NEXT: CNDE T1.W, PS, T0.X, PV.W,
+; R600-NEXT: CNDE * T0.W, T0.W, 0.0, literal.x,
+; R600-NEXT: 1102148120(2.218071e+01), 0(0.000000e+00)
+; R600-NEXT: ADD T0.X, PV.W, -PS,
+; R600-NEXT: LSHR * T1.X, KC0[2].Y, literal.x,
+; R600-NEXT: 2(2.802597e-45), 0(0.000000e+00)
+;
+; CM-LABEL: s_log_contract_f32:
+; CM: ; %bb.0:
+; CM-NEXT: ALU 26, @4, KC0[CB0:0-32], KC1[]
+; CM-NEXT: MEM_RAT_CACHELESS STORE_DWORD T0.X, T1.X
+; CM-NEXT: CF_END
+; CM-NEXT: PAD
+; CM-NEXT: ALU clause starting at 4:
+; CM-NEXT: SETGT * T0.W, literal.x, KC0[2].Z,
+; CM-NEXT: 8388608(1.175494e-38), 0(0.000000e+00)
+; CM-NEXT: CNDE * T1.W, PV.W, 1.0, literal.x,
+; CM-NEXT: 1333788672(4.294967e+09), 0(0.000000e+00)
+; CM-NEXT: MUL_IEEE * T1.W, KC0[2].Z, PV.W,
+; CM-NEXT: LOG_IEEE T0.X, T1.W,
+; CM-NEXT: LOG_IEEE T0.Y (MASKED), T1.W,
+; CM-NEXT: LOG_IEEE T0.Z (MASKED), T1.W,
+; CM-NEXT: LOG_IEEE * T0.W (MASKED), T1.W,
+; CM-NEXT: AND_INT * T1.W, PV.X, literal.x,
+; CM-NEXT: -4096(nan), 0(0.000000e+00)
+; CM-NEXT: ADD * T2.W, T0.X, -PV.W,
+; CM-NEXT: MUL_IEEE * T3.W, PV.W, literal.x,
+; CM-NEXT: 939916788(3.194618e-05), 0(0.000000e+00)
+; CM-NEXT: MULADD_IEEE * T3.W, T1.W, literal.x, PV.W,
+; CM-NEXT: 939916788(3.194618e-05), 0(0.000000e+00)
+; CM-NEXT: MULADD_IEEE * T2.W, T2.W, literal.x, PV.W,
+; CM-NEXT: 1060204544(6.931152e-01), 0(0.000000e+00)
+; CM-NEXT: MULADD_IEEE T0.Z, T1.W, literal.x, PV.W,
+; CM-NEXT: SETGT * T1.W, literal.y, |T0.X|,
+; CM-NEXT: 1060204544(6.931152e-01), 2139095040(INF)
+; CM-NEXT: CNDE T0.Z, PV.W, T0.X, PV.Z,
+; CM-NEXT: CNDE * T0.W, T0.W, 0.0, literal.x,
+; CM-NEXT: 1102148120(2.218071e+01), 0(0.000000e+00)
+; CM-NEXT: ADD * T0.X, PV.Z, -PV.W,
+; CM-NEXT: LSHR * T1.X, KC0[2].Y, literal.x,
+; CM-NEXT: 2(2.802597e-45), 0(0.000000e+00)
+  %result = call contract float @llvm.log.f32(float %in)
+  store float %result, ptr addrspace(1) %out
+  ret void
+}
+
 ; FIXME: We should be able to merge these packets together on Cayman so we
 ; have a maximum of 4 instructions.
 define amdgpu_kernel void @s_log_v2f32(ptr addrspace(1) %out, <2 x float> %in) {
