12 changes: 0 additions & 12 deletions llvm/lib/Target/AMDGPU/AMDGPULowerBufferFatPointers.cpp
@@ -1074,18 +1074,6 @@ Value *SplitPtrStructs::handleMemoryInst(Instruction *I, Value *Arg, Value *Ptr,
Args.push_back(IRB.getInt32(0));

uint32_t Aux = 0;
bool IsInvariant =
(isa<LoadInst>(I) && I->getMetadata(LLVMContext::MD_invariant_load));
bool IsNonTemporal = I->getMetadata(LLVMContext::MD_nontemporal);
Comment on lines -1077 to -1079
Contributor:
Removing IsInvariant and IsNonTemporal makes sense to me, since we preserve this info when we copy the metadata to the intrinsic.

But I'm not sure if it is safe to remove the glc/dlc logic below.

Contributor:
That GLC/DLC is what's used to implement nontemporal, no? It was in the lowering logic up in LLPC when I copied from it.

If we're removing the code that adds that data here, I'd like pointers to the code - and tests, if we don't have them - that set glc/dlc in both SelectionDAG's and GlobalISel's handling of the intrinsics.

Contributor:

IsNonTemporal should be handled post-isel by SIMemoryLegalizer, which sets bits in the aux (aka cpol) operand based on flags in the MachineMemOperand. It also knows how to do it properly for different architectures, which this code does not - e.g. the meaning of the aux bits changes completely in GFX12.

Contributor Author:

Before GFX12, nontemporal was marked as GLC+SLC. This pass was copying the metadata to an intrinsic, and that metadata is handled by SIMemoryLegalizer; the only problem was that the metadata was not copied during instruction selection (it was lost), so SIMemoryLegalizer was not able to handle it afterwards.

Contributor Author:

> That GLC/DLC is what's used to implement nontemporal, no? It was in the lowering logic up in LLPC when I copied from it.
>
> If we're removing the code that adds that data here, I'd like pointers to the code - and tests, if we don't have them - that set glc/dlc in both SelectionDAG's and GlobalISel's handling of the intrinsics.

I have added metadata tests in lower-buffer-fat-pointers for both SelectionDAG and GlobalISel, covering all targets where the cache policy differs.

// Atomic loads and stores need glc, atomic read-modify-write doesn't.
bool IsOneWayAtomic =
!isa<AtomicRMWInst>(I) && Order != AtomicOrdering::NotAtomic;
if (IsOneWayAtomic)
Aux |= AMDGPU::CPol::GLC;
if (IsNonTemporal && !IsInvariant)
Aux |= AMDGPU::CPol::SLC;
if (isa<LoadInst>(I) && ST->getGeneration() == AMDGPUSubtarget::GFX10)
Aux |= (Aux & AMDGPU::CPol::GLC ? AMDGPU::CPol::DLC : 0);
if (IsVolatile)
Aux |= AMDGPU::CPol::VOLATILE;
Args.push_back(IRB.getInt32(Aux));
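For illustration, here is a minimal IR sketch of the flow discussed above (function and value names are hypothetical): the !nontemporal load keeps its metadata through this pass instead of having GLC/SLC baked into the aux operand, leaving the cache-policy choice to SIMemoryLegalizer.

; Hypothetical input: a nontemporal load through a buffer fat pointer.
target triple = "amdgcn-amd-amdhsa"

define i32 @nontemporal_load(ptr addrspace(7) %p) {
  %v = load i32, ptr addrspace(7) %p, !nontemporal !0
  ret i32 %v
}

!0 = !{i32 1}

; After this pass, the load is expected to become (roughly):
;   %v = call i32 @llvm.amdgcn.raw.ptr.buffer.load.i32(
;            ptr addrspace(8) %rsrc, i32 %off, i32 0, i32 0), !nontemporal !0
; i.e. the aux operand stays 0 and the metadata carries the hint.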
3 changes: 3 additions & 0 deletions llvm/lib/Target/AMDGPU/SIISelLowering.cpp
@@ -1201,6 +1201,9 @@ bool SITargetLowering::getTgtMemIntrinsic(IntrinsicInfo &Info,
Info.flags = MachineMemOperand::MONone;
if (CI.hasMetadata(LLVMContext::MD_invariant_load))
Info.flags |= MachineMemOperand::MOInvariant;
if (CI.hasMetadata(LLVMContext::MD_nontemporal))
Info.flags |= MachineMemOperand::MONonTemporal;
Info.flags |= getTargetMMOFlags(CI);

if (const AMDGPU::RsrcIntrinsic *RsrcIntr =
AMDGPU::lookupRsrcIntrinsic(IntrID)) {
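A small sketch of what the new lines should enable (assumed example, hypothetical names): a buffer intrinsic call carrying !nontemporal now gets MONonTemporal on the MachineMemOperand built from this IntrinsicInfo, so SIMemoryLegalizer can later encode the generation-appropriate cache-policy bits (GLC+SLC before GFX12, TH_LOAD_NT-style policies on GFX12).

target triple = "amdgcn-amd-amdhsa"

define i32 @intrinsic_nontemporal(ptr addrspace(8) %rsrc, i32 %voff) {
  ; With this change, the !nontemporal below should be reflected as
  ; MachineMemOperand::MONonTemporal during instruction selection.
  %v = call i32 @llvm.amdgcn.raw.ptr.buffer.load.i32(ptr addrspace(8) %rsrc, i32 %voff, i32 0, i32 0), !nontemporal !0
  ret i32 %v
}

declare i32 @llvm.amdgcn.raw.ptr.buffer.load.i32(ptr addrspace(8), i32, i32, i32)

!0 = !{i32 1}

Feeding this through llc with a GFX12 target should show the policy on the resulting buffer load, analogous to the last-use tests below.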
334 changes: 334 additions & 0 deletions llvm/test/CodeGen/AMDGPU/lower-buffer-fat-pointers-lastuse-metadata.ll
@@ -0,0 +1,334 @@
; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
; RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=gfx1200 < %s | FileCheck --check-prefix=GFX12 %s
; RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=gfx1200 -mattr=+cumode < %s | FileCheck --check-prefix=GFX12 %s


define amdgpu_kernel void @buffer_last_use_load_0(ptr addrspace(7) %in, ptr addrspace(7) %out) {
Contributor:

Yeah, let's add this to an existing test, potentially, and also run it for gfx11 and gfx9, at least.

Contributor Author:

LastUse was not used before GFX12.

; GFX12-LABEL: buffer_last_use_load_0:
; GFX12: ; %bb.0: ; %entry
; GFX12-NEXT: s_clause 0x2
; GFX12-NEXT: s_load_b128 s[0:3], s[4:5], 0x0
; GFX12-NEXT: s_load_b128 s[8:11], s[4:5], 0x20
; GFX12-NEXT: s_load_b32 s6, s[4:5], 0x10
; GFX12-NEXT: s_wait_kmcnt 0x0
; GFX12-NEXT: v_dual_mov_b32 v0, s0 :: v_dual_mov_b32 v1, s1
; GFX12-NEXT: v_dual_mov_b32 v2, s2 :: v_dual_mov_b32 v3, s3
; GFX12-NEXT: v_dual_mov_b32 v7, s8 :: v_dual_mov_b32 v8, s9
; GFX12-NEXT: v_dual_mov_b32 v9, s10 :: v_dual_mov_b32 v10, s11
; GFX12-NEXT: scratch_store_b128 off, v[0:3], off offset:32
; GFX12-NEXT: s_clause 0x1
; GFX12-NEXT: scratch_load_b64 v[5:6], off, off offset:40
; GFX12-NEXT: scratch_load_b32 v4, off, off offset:36
; GFX12-NEXT: s_load_b32 s1, s[4:5], 0x30
; GFX12-NEXT: scratch_store_b128 off, v[7:10], off
; GFX12-NEXT: s_clause 0x1
; GFX12-NEXT: scratch_load_b64 v[1:2], off, off offset:8
; GFX12-NEXT: scratch_load_b32 v0, off, off offset:4
; GFX12-NEXT: v_mov_b32_e32 v7, s6
; GFX12-NEXT: v_mov_b32_e32 v9, s0
; GFX12-NEXT: s_wait_kmcnt 0x0
; GFX12-NEXT: v_mov_b32_e32 v3, s1
; GFX12-NEXT: s_mov_b32 s1, exec_lo
; GFX12-NEXT: .LBB0_1: ; =>This Inner Loop Header: Depth=1
; GFX12-NEXT: s_wait_loadcnt 0x2
; GFX12-NEXT: v_readfirstlane_b32 s4, v4
; GFX12-NEXT: v_readfirstlane_b32 s5, v5
; GFX12-NEXT: v_readfirstlane_b32 s6, v6
; GFX12-NEXT: v_readfirstlane_b32 s7, v7
; GFX12-NEXT: s_delay_alu instid0(VALU_DEP_3) | instskip(NEXT) | instid1(VALU_DEP_2)
; GFX12-NEXT: v_cmp_eq_u64_e32 vcc_lo, s[4:5], v[4:5]
; GFX12-NEXT: v_cmp_eq_u64_e64 s0, s[6:7], v[6:7]
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_delay_alu instid0(VALU_DEP_1)
; GFX12-NEXT: s_and_b32 s0, vcc_lo, s0
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_and_saveexec_b32 s0, s0
; GFX12-NEXT: s_wait_loadcnt 0x0
; GFX12-NEXT: buffer_load_b32 v8, v9, s[4:7], null offen th:TH_LOAD_LU
; GFX12-NEXT: ; implicit-def: $vgpr4_vgpr5_vgpr6_vgpr7
; GFX12-NEXT: ; implicit-def: $vgpr9
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_xor_b32 exec_lo, exec_lo, s0
; GFX12-NEXT: s_cbranch_execnz .LBB0_1
; GFX12-NEXT: ; %bb.2:
; GFX12-NEXT: s_mov_b32 exec_lo, s1
; GFX12-NEXT: v_mov_b32_e32 v4, s8
; GFX12-NEXT: s_mov_b32 s0, exec_lo
; GFX12-NEXT: .LBB0_3: ; =>This Inner Loop Header: Depth=1
; GFX12-NEXT: s_wait_loadcnt 0x1
; GFX12-NEXT: v_readfirstlane_b32 s4, v0
; GFX12-NEXT: v_readfirstlane_b32 s5, v1
; GFX12-NEXT: v_readfirstlane_b32 s6, v2
; GFX12-NEXT: v_readfirstlane_b32 s7, v3
; GFX12-NEXT: s_delay_alu instid0(VALU_DEP_3) | instskip(NEXT) | instid1(VALU_DEP_2)
; GFX12-NEXT: v_cmp_eq_u64_e32 vcc_lo, s[4:5], v[0:1]
; GFX12-NEXT: v_cmp_eq_u64_e64 s0, s[6:7], v[2:3]
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_delay_alu instid0(VALU_DEP_1)
; GFX12-NEXT: s_and_b32 s0, vcc_lo, s0
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_and_saveexec_b32 s0, s0
; GFX12-NEXT: s_wait_loadcnt 0x0
; GFX12-NEXT: buffer_store_b32 v8, v4, s[4:7], null offen
; GFX12-NEXT: ; implicit-def: $vgpr0_vgpr1_vgpr2_vgpr3
; GFX12-NEXT: ; implicit-def: $vgpr8
; GFX12-NEXT: ; implicit-def: $vgpr4
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_xor_b32 exec_lo, exec_lo, s0
; GFX12-NEXT: s_cbranch_execnz .LBB0_3
; GFX12-NEXT: ; %bb.4:
; GFX12-NEXT: s_endpgm
entry:
%val = load i32, ptr addrspace(7) %in, !amdgpu.last.use !{}
store i32 %val, ptr addrspace(7) %out
ret void
}

define amdgpu_kernel void @buffer_last_use_load_1(ptr addrspace(7) %in, ptr addrspace(7) %out) {
; GFX12-LABEL: buffer_last_use_load_1:
; GFX12: ; %bb.0: ; %entry
; GFX12-NEXT: s_clause 0x2
; GFX12-NEXT: s_load_b128 s[0:3], s[4:5], 0x0
; GFX12-NEXT: s_load_b128 s[8:11], s[4:5], 0x20
; GFX12-NEXT: s_load_b32 s6, s[4:5], 0x10
; GFX12-NEXT: v_and_b32_e32 v0, 0x3ff, v0
; GFX12-NEXT: s_wait_kmcnt 0x0
; GFX12-NEXT: v_dual_mov_b32 v4, s3 :: v_dual_mov_b32 v3, s2
; GFX12-NEXT: v_dual_mov_b32 v2, s1 :: v_dual_mov_b32 v1, s0
; GFX12-NEXT: v_dual_mov_b32 v8, s8 :: v_dual_mov_b32 v9, s9
; GFX12-NEXT: v_dual_mov_b32 v10, s10 :: v_dual_mov_b32 v11, s11
; GFX12-NEXT: scratch_store_b128 off, v[1:4], off offset:32
; GFX12-NEXT: s_clause 0x1
; GFX12-NEXT: scratch_load_b64 v[6:7], off, off offset:40
; GFX12-NEXT: scratch_load_b32 v5, off, off offset:36
; GFX12-NEXT: s_load_b32 s1, s[4:5], 0x30
; GFX12-NEXT: scratch_store_b128 off, v[8:11], off
; GFX12-NEXT: s_clause 0x1
; GFX12-NEXT: scratch_load_b64 v[2:3], off, off offset:8
; GFX12-NEXT: scratch_load_b32 v1, off, off offset:4
; GFX12-NEXT: v_mov_b32_e32 v8, s6
; GFX12-NEXT: v_lshl_add_u32 v9, v0, 2, s0
; GFX12-NEXT: s_wait_kmcnt 0x0
; GFX12-NEXT: v_mov_b32_e32 v4, s1
; GFX12-NEXT: s_mov_b32 s1, exec_lo
; GFX12-NEXT: .LBB1_1: ; =>This Inner Loop Header: Depth=1
; GFX12-NEXT: s_wait_loadcnt 0x2
; GFX12-NEXT: v_readfirstlane_b32 s4, v5
; GFX12-NEXT: v_readfirstlane_b32 s5, v6
; GFX12-NEXT: v_readfirstlane_b32 s6, v7
; GFX12-NEXT: v_readfirstlane_b32 s7, v8
; GFX12-NEXT: s_delay_alu instid0(VALU_DEP_3) | instskip(NEXT) | instid1(VALU_DEP_2)
; GFX12-NEXT: v_cmp_eq_u64_e32 vcc_lo, s[4:5], v[5:6]
; GFX12-NEXT: v_cmp_eq_u64_e64 s0, s[6:7], v[7:8]
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_delay_alu instid0(VALU_DEP_1)
; GFX12-NEXT: s_and_b32 s0, vcc_lo, s0
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_and_saveexec_b32 s0, s0
; GFX12-NEXT: s_wait_loadcnt 0x0
; GFX12-NEXT: buffer_load_b32 v0, v9, s[4:7], null offen th:TH_LOAD_LU
; GFX12-NEXT: ; implicit-def: $vgpr5_vgpr6_vgpr7_vgpr8
; GFX12-NEXT: ; implicit-def: $vgpr9
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_xor_b32 exec_lo, exec_lo, s0
; GFX12-NEXT: s_cbranch_execnz .LBB1_1
; GFX12-NEXT: ; %bb.2:
; GFX12-NEXT: s_mov_b32 exec_lo, s1
; GFX12-NEXT: v_mov_b32_e32 v5, s8
; GFX12-NEXT: s_mov_b32 s0, exec_lo
; GFX12-NEXT: .LBB1_3: ; =>This Inner Loop Header: Depth=1
; GFX12-NEXT: s_wait_loadcnt 0x1
; GFX12-NEXT: v_readfirstlane_b32 s4, v1
; GFX12-NEXT: v_readfirstlane_b32 s5, v2
; GFX12-NEXT: v_readfirstlane_b32 s6, v3
; GFX12-NEXT: v_readfirstlane_b32 s7, v4
; GFX12-NEXT: s_delay_alu instid0(VALU_DEP_3) | instskip(NEXT) | instid1(VALU_DEP_2)
; GFX12-NEXT: v_cmp_eq_u64_e32 vcc_lo, s[4:5], v[1:2]
; GFX12-NEXT: v_cmp_eq_u64_e64 s0, s[6:7], v[3:4]
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_delay_alu instid0(VALU_DEP_1)
; GFX12-NEXT: s_and_b32 s0, vcc_lo, s0
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_and_saveexec_b32 s0, s0
; GFX12-NEXT: s_wait_loadcnt 0x0
; GFX12-NEXT: buffer_store_b32 v0, v5, s[4:7], null offen
; GFX12-NEXT: ; implicit-def: $vgpr1_vgpr2_vgpr3_vgpr4
; GFX12-NEXT: ; implicit-def: $vgpr0
; GFX12-NEXT: ; implicit-def: $vgpr5
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_xor_b32 exec_lo, exec_lo, s0
; GFX12-NEXT: s_cbranch_execnz .LBB1_3
; GFX12-NEXT: ; %bb.4:
; GFX12-NEXT: s_endpgm
entry:
%tid = call i32 @llvm.amdgcn.workitem.id.x()
%val.gep = getelementptr inbounds i32, ptr addrspace(7) %in, i32 %tid
%val = load i32, ptr addrspace(7) %val.gep, align 4, !amdgpu.last.use !{}
store i32 %val, ptr addrspace(7) %out
ret void
}

define amdgpu_kernel void @buffer_last_use_and_volatile_load(ptr addrspace(7) %in, ptr addrspace(7) %out) {
; GFX12-LABEL: buffer_last_use_and_volatile_load:
; GFX12: ; %bb.0: ; %entry
; GFX12-NEXT: s_clause 0x2
; GFX12-NEXT: s_load_b128 s[0:3], s[4:5], 0x0
; GFX12-NEXT: s_load_b128 s[8:11], s[4:5], 0x20
; GFX12-NEXT: s_load_b32 s6, s[4:5], 0x10
; GFX12-NEXT: s_wait_kmcnt 0x0
; GFX12-NEXT: v_dual_mov_b32 v0, s0 :: v_dual_mov_b32 v1, s1
; GFX12-NEXT: v_dual_mov_b32 v2, s2 :: v_dual_mov_b32 v3, s3
; GFX12-NEXT: v_dual_mov_b32 v7, s8 :: v_dual_mov_b32 v8, s9
; GFX12-NEXT: v_dual_mov_b32 v9, s10 :: v_dual_mov_b32 v10, s11
; GFX12-NEXT: scratch_store_b128 off, v[0:3], off offset:32
; GFX12-NEXT: s_clause 0x1
; GFX12-NEXT: scratch_load_b64 v[5:6], off, off offset:40
; GFX12-NEXT: scratch_load_b32 v4, off, off offset:36
; GFX12-NEXT: s_load_b32 s1, s[4:5], 0x30
; GFX12-NEXT: scratch_store_b128 off, v[7:10], off
; GFX12-NEXT: s_clause 0x1
; GFX12-NEXT: scratch_load_b64 v[1:2], off, off offset:8
; GFX12-NEXT: scratch_load_b32 v0, off, off offset:4
; GFX12-NEXT: v_mov_b32_e32 v7, s6
; GFX12-NEXT: v_mov_b32_e32 v9, s0
; GFX12-NEXT: s_wait_kmcnt 0x0
; GFX12-NEXT: v_mov_b32_e32 v3, s1
; GFX12-NEXT: s_mov_b32 s1, exec_lo
; GFX12-NEXT: .LBB2_1: ; =>This Inner Loop Header: Depth=1
; GFX12-NEXT: s_wait_loadcnt 0x2
; GFX12-NEXT: v_readfirstlane_b32 s4, v4
; GFX12-NEXT: v_readfirstlane_b32 s5, v5
; GFX12-NEXT: v_readfirstlane_b32 s6, v6
; GFX12-NEXT: v_readfirstlane_b32 s7, v7
; GFX12-NEXT: s_delay_alu instid0(VALU_DEP_3) | instskip(NEXT) | instid1(VALU_DEP_2)
; GFX12-NEXT: v_cmp_eq_u64_e32 vcc_lo, s[4:5], v[4:5]
; GFX12-NEXT: v_cmp_eq_u64_e64 s0, s[6:7], v[6:7]
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_delay_alu instid0(VALU_DEP_1)
; GFX12-NEXT: s_and_b32 s0, vcc_lo, s0
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_and_saveexec_b32 s0, s0
; GFX12-NEXT: s_wait_loadcnt 0x0
; GFX12-NEXT: buffer_load_b32 v8, v9, s[4:7], null offen th:TH_LOAD_BYPASS scope:SCOPE_SYS
; GFX12-NEXT: ; implicit-def: $vgpr4_vgpr5_vgpr6_vgpr7
; GFX12-NEXT: ; implicit-def: $vgpr9
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_xor_b32 exec_lo, exec_lo, s0
; GFX12-NEXT: s_cbranch_execnz .LBB2_1
; GFX12-NEXT: ; %bb.2:
; GFX12-NEXT: s_mov_b32 exec_lo, s1
; GFX12-NEXT: v_mov_b32_e32 v4, s8
; GFX12-NEXT: s_mov_b32 s0, exec_lo
; GFX12-NEXT: .LBB2_3: ; =>This Inner Loop Header: Depth=1
; GFX12-NEXT: s_wait_loadcnt 0x1
; GFX12-NEXT: v_readfirstlane_b32 s4, v0
; GFX12-NEXT: v_readfirstlane_b32 s5, v1
; GFX12-NEXT: v_readfirstlane_b32 s6, v2
; GFX12-NEXT: v_readfirstlane_b32 s7, v3
; GFX12-NEXT: s_delay_alu instid0(VALU_DEP_3) | instskip(NEXT) | instid1(VALU_DEP_2)
; GFX12-NEXT: v_cmp_eq_u64_e32 vcc_lo, s[4:5], v[0:1]
; GFX12-NEXT: v_cmp_eq_u64_e64 s0, s[6:7], v[2:3]
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_delay_alu instid0(VALU_DEP_1)
; GFX12-NEXT: s_and_b32 s0, vcc_lo, s0
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_and_saveexec_b32 s0, s0
; GFX12-NEXT: s_wait_loadcnt 0x0
; GFX12-NEXT: buffer_store_b32 v8, v4, s[4:7], null offen
; GFX12-NEXT: ; implicit-def: $vgpr0_vgpr1_vgpr2_vgpr3
; GFX12-NEXT: ; implicit-def: $vgpr8
; GFX12-NEXT: ; implicit-def: $vgpr4
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_xor_b32 exec_lo, exec_lo, s0
; GFX12-NEXT: s_cbranch_execnz .LBB2_3
; GFX12-NEXT: ; %bb.4:
; GFX12-NEXT: s_endpgm
entry:
%val = load volatile i32, ptr addrspace(7) %in, !amdgpu.last.use !{}
store i32 %val, ptr addrspace(7) %out
ret void
}

define amdgpu_kernel void @buffer_last_use_and_nontemporal_load(ptr addrspace(7) %in, ptr addrspace(7) %out) {
; GFX12-LABEL: buffer_last_use_and_nontemporal_load:
; GFX12: ; %bb.0: ; %entry
; GFX12-NEXT: s_clause 0x2
; GFX12-NEXT: s_load_b128 s[0:3], s[4:5], 0x0
; GFX12-NEXT: s_load_b128 s[8:11], s[4:5], 0x20
; GFX12-NEXT: s_load_b32 s6, s[4:5], 0x10
; GFX12-NEXT: s_wait_kmcnt 0x0
; GFX12-NEXT: v_dual_mov_b32 v0, s0 :: v_dual_mov_b32 v1, s1
; GFX12-NEXT: v_dual_mov_b32 v2, s2 :: v_dual_mov_b32 v3, s3
; GFX12-NEXT: v_dual_mov_b32 v7, s8 :: v_dual_mov_b32 v8, s9
; GFX12-NEXT: v_dual_mov_b32 v9, s10 :: v_dual_mov_b32 v10, s11
; GFX12-NEXT: scratch_store_b128 off, v[0:3], off offset:32
; GFX12-NEXT: s_clause 0x1
; GFX12-NEXT: scratch_load_b64 v[5:6], off, off offset:40
; GFX12-NEXT: scratch_load_b32 v4, off, off offset:36
; GFX12-NEXT: s_load_b32 s1, s[4:5], 0x30
; GFX12-NEXT: scratch_store_b128 off, v[7:10], off
; GFX12-NEXT: s_clause 0x1
; GFX12-NEXT: scratch_load_b64 v[1:2], off, off offset:8
; GFX12-NEXT: scratch_load_b32 v0, off, off offset:4
; GFX12-NEXT: v_mov_b32_e32 v7, s6
; GFX12-NEXT: v_mov_b32_e32 v9, s0
; GFX12-NEXT: s_wait_kmcnt 0x0
; GFX12-NEXT: v_mov_b32_e32 v3, s1
; GFX12-NEXT: s_mov_b32 s1, exec_lo
; GFX12-NEXT: .LBB3_1: ; =>This Inner Loop Header: Depth=1
; GFX12-NEXT: s_wait_loadcnt 0x2
; GFX12-NEXT: v_readfirstlane_b32 s4, v4
; GFX12-NEXT: v_readfirstlane_b32 s5, v5
; GFX12-NEXT: v_readfirstlane_b32 s6, v6
; GFX12-NEXT: v_readfirstlane_b32 s7, v7
; GFX12-NEXT: s_delay_alu instid0(VALU_DEP_3) | instskip(NEXT) | instid1(VALU_DEP_2)
; GFX12-NEXT: v_cmp_eq_u64_e32 vcc_lo, s[4:5], v[4:5]
; GFX12-NEXT: v_cmp_eq_u64_e64 s0, s[6:7], v[6:7]
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_delay_alu instid0(VALU_DEP_1)
; GFX12-NEXT: s_and_b32 s0, vcc_lo, s0
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_and_saveexec_b32 s0, s0
; GFX12-NEXT: s_wait_loadcnt 0x0
; GFX12-NEXT: buffer_load_b32 v8, v9, s[4:7], null offen th:TH_LOAD_LU
; GFX12-NEXT: ; implicit-def: $vgpr4_vgpr5_vgpr6_vgpr7
; GFX12-NEXT: ; implicit-def: $vgpr9
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_xor_b32 exec_lo, exec_lo, s0
; GFX12-NEXT: s_cbranch_execnz .LBB3_1
; GFX12-NEXT: ; %bb.2:
; GFX12-NEXT: s_mov_b32 exec_lo, s1
; GFX12-NEXT: v_mov_b32_e32 v4, s8
; GFX12-NEXT: s_mov_b32 s0, exec_lo
; GFX12-NEXT: .LBB3_3: ; =>This Inner Loop Header: Depth=1
; GFX12-NEXT: s_wait_loadcnt 0x1
; GFX12-NEXT: v_readfirstlane_b32 s4, v0
; GFX12-NEXT: v_readfirstlane_b32 s5, v1
; GFX12-NEXT: v_readfirstlane_b32 s6, v2
; GFX12-NEXT: v_readfirstlane_b32 s7, v3
; GFX12-NEXT: s_delay_alu instid0(VALU_DEP_3) | instskip(NEXT) | instid1(VALU_DEP_2)
; GFX12-NEXT: v_cmp_eq_u64_e32 vcc_lo, s[4:5], v[0:1]
; GFX12-NEXT: v_cmp_eq_u64_e64 s0, s[6:7], v[2:3]
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_delay_alu instid0(VALU_DEP_1)
; GFX12-NEXT: s_and_b32 s0, vcc_lo, s0
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_and_saveexec_b32 s0, s0
; GFX12-NEXT: s_wait_loadcnt 0x0
; GFX12-NEXT: buffer_store_b32 v8, v4, s[4:7], null offen
; GFX12-NEXT: ; implicit-def: $vgpr0_vgpr1_vgpr2_vgpr3
; GFX12-NEXT: ; implicit-def: $vgpr8
; GFX12-NEXT: ; implicit-def: $vgpr4
; GFX12-NEXT: s_wait_alu 0xfffe
; GFX12-NEXT: s_xor_b32 exec_lo, exec_lo, s0
; GFX12-NEXT: s_cbranch_execnz .LBB3_3
; GFX12-NEXT: ; %bb.4:
; GFX12-NEXT: s_endpgm
entry:
%val = load i32, ptr addrspace(7) %in, !amdgpu.last.use !{}, !nontemporal !0
store i32 %val, ptr addrspace(7) %out
ret void
}

!0 = !{i32 1}
declare i32 @llvm.amdgcn.workitem.id.x()