Merged

Changes shown are from 2 of the 48 commits.
Commits (all by SamTebbs33):

f9e5a7c  [Intrinsics][AArch64] Add intrinsic to mask off aliasing vector lanes (Nov 15, 2024)
071728f  Rework lowering location (Jan 10, 2025)
80a72ca  Fix ISD node name string and remove shouldExpand function (Jan 15, 2025)
daa2ac4  Format (Jan 16, 2025)
3fcb9e8  Move promote case (Jan 27, 2025)
6628a98  Fix tablegen comment (Jan 27, 2025)
0644542  Remove DAGTypeLegalizer:: (Jan 27, 2025)
75af361  Use getConstantOperandVal (Jan 27, 2025)
5f563d9  Remove isPredicateCCSettingOp case (Jan 29, 2025)
24df6bf  Remove overloads for pointer and element size parameters (Jan 30, 2025)
ec37dfa  Clarify elementSize and writeAfterRead = 0 (Jan 30, 2025)
8d81955  Add i=0 to VF-1 (Jan 30, 2025)
8a09412  Rename to get.nonalias.lane.mask (Jan 30, 2025)
45cbaff  Fix pointer types in example (Jan 30, 2025)
1b7b0da  Remove shouldExpandGetAliasLaneMask (Jan 30, 2025)
0a0de88  Lower to ISD node rather than intrinsic (Jan 30, 2025)
54d32ad  Rename to noalias (Jan 31, 2025)
2066929  Rename to loop.dependence.raw/war.mask (Feb 26, 2025)
9b3a71a  Rename in langref (Mar 10, 2025)
215d2e7  Reword argument description (Mar 21, 2025)
ec2bfed  Fixup langref (May 20, 2025)
9f5f91a  IsWriteAfterRead -> IsReadAfterWrite and avoid using ops vector (May 20, 2025)
eb8d5af  Extend vXi1 setcc to account for intrinsic VT promotion (May 20, 2025)
c3d6fc8  Remove experimental from intrinsic name (May 21, 2025)
9c5631d  Clean up vector type creation (May 21, 2025)
52fca12  Address review (Aug 5, 2025)
9a985ab  Remove experimental from comment (Aug 7, 2025)
b09d354  Add splitting (Aug 7, 2025)
56f9a6b  Add widening (Aug 7, 2025)
26bf362  Remove assertions and expand invalid immediates (Aug 11, 2025)
a84e5e2  Remove comment about mismatched type and immediate (Aug 11, 2025)
054f859  Improve lowering and splitting code a bit (Aug 12, 2025)
970e7f9  Remove splitting from lowering (Aug 12, 2025)
fddda14  Improve wording in lang ref (Aug 12, 2025)
36be558  Rebase (Aug 12, 2025)
c3d2acf  Remove backend promotion (Aug 13, 2025)
8af5019  Don't create StoreVT (Aug 13, 2025)
558bc3e  Use ternary for Addend (Aug 13, 2025)
32e0192  Stop adding to PtrB (Aug 13, 2025)
3d7c2da  Move nosve/nosve2 tests to separate files (Aug 13, 2025)
5402e27  Rebase (Aug 15, 2025)
5075b5f  Remove unneeded lowering cases (Aug 18, 2025)
d85d375  Simplify lang ref again (Aug 19, 2025)
4dedf42  More langref re-wording (Aug 27, 2025)
33be150  Define a store-to-load forwarding hazard (Aug 28, 2025)
587a25c  Scalarize <1 x Y> intrinsic calls (Aug 31, 2025)
3abc7ba  Address review (Sep 1, 2025)
8eb12a0  Address review (Sep 2, 2025)
20 changes: 12 additions & 8 deletions llvm/docs/LangRef.rst
@@ -24128,8 +24128,7 @@ Overview:
Given a vector load from %ptrA followed by a vector store to %ptrB, this
instruction generates a mask where an active lane indicates that the
write-after-read sequence can be performed safely for that lane, without the
-danger of it turning into a read-after-write sequence and introducing a
-store-to-load forwarding hazard.
+danger of a write-after-read hazard occurring.

A write-after-read hazard occurs when a write-after-read sequence for a given
lane in a vector ends up being executed as a read-after-write sequence due to
@@ -24149,8 +24148,7 @@ The intrinsic returns ``poison`` if the distance between ``%ptrA`` and ``%ptrB``
is smaller than ``VF * %elementsize`` and either ``%ptrA + VF * %elementSize``
or ``%ptrB + VF * %elementSize`` wrap.
The element of the result mask is active when loading from %ptrA then storing to
-%ptrB is safe and doesn't result in a write-after-read sequence turning into a
-read-after-write sequence, meaning that:
+%ptrB is safe and doesn't result in a write-after-read hazard:
Collaborator: nit:

Suggested change:
-%ptrB is safe and doesn't result in a write-after-read hazard:
+%ptrB is safe and doesn't result in a write-after-read hazard, meaning that:

Collaborator Author: Done, thank you.


* (ptrB - ptrA) <= 0 (guarantees that all lanes are loaded before any stores), or
* (ptrB - ptrA) >= elementSize * lane (guarantees that this lane is loaded
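
A minimal C++ sketch of these two lane conditions, taken literally (the function and variable names are invented, pointers are modelled as byte addresses, and a real lowering may be more conservative):

#include <cstdint>
#include <vector>

// Sketch of the WAR mask lane conditions quoted above.
std::vector<bool> warMaskModel(int64_t PtrA, int64_t PtrB, int64_t EltSize,
                               unsigned VF) {
  std::vector<bool> Mask(VF);
  int64_t Diff = PtrB - PtrA; // signed distance in bytes
  for (unsigned Lane = 0; Lane < VF; ++Lane)
    // Safe if all lanes are loaded before any store, or if this lane's load
    // happens before the store that could overwrite it.
    Mask[Lane] = Diff <= 0 || Diff >= EltSize * int64_t(Lane);
  return Mask;
}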
@@ -24188,13 +24186,19 @@ Overview:

Given a vector store to %ptrA followed by a vector load from %ptrB, this
instruction generates a mask where an active lane indicates that the
-read-after-write sequence can be performed safely for that lane, without the
-danger of it turning into a write-after-read sequence.
+read-after-write sequence can be performed safely for that lane, without a
+read-after-write hazard occurring or a a new store-to-load forwarding hazard
Contributor: store-to-load forwarding hazard is not defined. Do we need this wording here?

Collaborator Author: Removed.

Collaborator: The wording for the store-to-load forwarding (hazard) behaviour cannot be removed, because it is the only distinction between this intrinsic and the .war intrinsic. i.e. The "safe" requirement is not the only behaviour that this intrinsic implements.

Collaborator Author: I've re-added the hazard wording, thanks.

+being introduced.
Collaborator:

Suggested change:
-read-after-write sequence can be performed safely for that lane, without a
-read-after-write hazard occurring or a a new store-to-load forwarding hazard
-being introduced.
+read-after-write sequence can be performed safely for that lane, without a
+read-after-write hazard or a store-to-load forwarding hazard being introduced.

Collaborator Author: Done.


A read-after-write hazard occurs when a read-after-write sequence for a given
lane in a vector ends up being executed as a write-after-read sequence due to
the aliasing of pointers.

+A store-to-load forwarding hazard occurs when a vector store writes to an
+address that partially overlaps with the address of a subsequent vector load.
+Only the overlapping addresses can be forwarded to the load if the data hasn't
+been written to memory yet.
Collaborator: The issue is that the load can't be performed until the write has completed, resulting in a stall that did not exist when executing as scalars. So perhaps you can write instead:

Suggested change:
-A store-to-load forwarding hazard occurs when a vector store writes to an
-address that partially overlaps with the address of a subsequent vector load.
-Only the overlapping addresses can be forwarded to the load if the data hasn't
-been written to memory yet.
+A store-to-load forwarding hazard occurs when a vector store writes to an
+address that partially overlaps with the address of a subsequent vector load,
+meaning that the vector load can't be performed until the vector store has completed.

Collaborator Author: Cheers.
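
A concrete (invented) overlap pattern in C++: a 16-byte store covers Buf[0..15] and a following 16-byte load reads Buf[8..23], so only half of the load's bytes can be forwarded from the store buffer and the load waits for the store to complete:

#include <cstring>

// Hypothetical partial overlap between a vector store and a vector load.
void storeToLoadForwardingHazard(char *Buf, const char *Src, char *Dst) {
  std::memcpy(Buf, Src, 16);     // vector store to Buf[0..15]
  std::memcpy(Dst, Buf + 8, 16); // vector load from Buf[8..23]
}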


Arguments:
""""""""""

@@ -24212,8 +24216,8 @@ The element of the result mask is active when storing to %ptrA then loading from
%ptrB is safe and doesn't result in aliasing, meaning that:

* abs(ptrB - ptrA) >= elementSize * lane (guarantees that the store of this lane
-occurs before loading from this address)
-* ptrA == ptrB doesn't introduce any new hazards and is safe
+occurs before loading from this address), or
+* ptrA == ptrB, doesn't introduce any new hazards
Collaborator:

Suggested change:
-* ptrA == ptrB, doesn't introduce any new hazards
+* ptrA == ptrB (doesn't introduce any new hazards that weren't present in scalar code)

Collaborator Author: Done.
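
As with the WAR mask, a minimal C++ sketch of these two conditions taken literally (names invented; note that the scalarised lowering later in this diff is more conservative than this literal reading):

#include <cstdint>
#include <cstdlib>
#include <vector>

// Sketch of the RAW mask lane conditions quoted above.
std::vector<bool> rawMaskModel(int64_t PtrA, int64_t PtrB, int64_t EltSize,
                               unsigned VF) {
  std::vector<bool> Mask(VF);
  int64_t Diff = PtrB - PtrA; // signed distance in bytes
  for (unsigned Lane = 0; Lane < VF; ++Lane)
    // Safe if the store of this lane lands far enough from the later load,
    // or if the pointers are identical (no new hazard versus scalar code).
    Mask[Lane] = std::abs(Diff) >= EltSize * int64_t(Lane) || Diff == 0;
  return Mask;
}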


Examples:
"""""""""
1 change: 1 addition & 0 deletions llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
@@ -870,6 +870,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
  // Vector Result Scalarization: <1 x ty> -> ty.
  void ScalarizeVectorResult(SDNode *N, unsigned ResNo);
  SDValue ScalarizeVecRes_MERGE_VALUES(SDNode *N, unsigned ResNo);
  SDValue ScalarizeVecRes_LOOP_DEPENDENCE_MASK(SDNode *N);
  SDValue ScalarizeVecRes_BinOp(SDNode *N);
  SDValue ScalarizeVecRes_CMP(SDNode *N);
  SDValue ScalarizeVecRes_TernaryOp(SDNode *N);
20 changes: 20 additions & 0 deletions llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
@@ -53,6 +53,10 @@ void DAGTypeLegalizer::ScalarizeVectorResult(SDNode *N, unsigned ResNo) {
    report_fatal_error("Do not know how to scalarize the result of this "
                       "operator!\n");

  case ISD::LOOP_DEPENDENCE_WAR_MASK:
  case ISD::LOOP_DEPENDENCE_RAW_MASK:
    R = ScalarizeVecRes_LOOP_DEPENDENCE_MASK(N);
    break;
  case ISD::MERGE_VALUES:      R = ScalarizeVecRes_MERGE_VALUES(N, ResNo);break;
  case ISD::BITCAST:           R = ScalarizeVecRes_BITCAST(N); break;
  case ISD::BUILD_VECTOR:      R = ScalarizeVecRes_BUILD_VECTOR(N); break;
@@ -396,6 +400,22 @@ SDValue DAGTypeLegalizer::ScalarizeVecRes_MERGE_VALUES(SDNode *N,
  return GetScalarizedVector(Op);
}

SDValue DAGTypeLegalizer::ScalarizeVecRes_LOOP_DEPENDENCE_MASK(SDNode *N) {
  SDValue SourceValue = N->getOperand(0);
  SDValue SinkValue = N->getOperand(1);
  SDValue EltSize = N->getOperand(2);
  EVT PtrVT = SourceValue->getValueType(0);
  SDLoc DL(N);

  SDValue Diff = DAG.getNode(ISD::SUB, DL, PtrVT, SinkValue, SourceValue);
  EVT CmpVT = TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(),
                                     Diff.getValueType());
  SDValue Zero = DAG.getTargetConstant(0, DL, PtrVT);
  return DAG.getNode(ISD::OR, DL, CmpVT,
                     DAG.getSetCC(DL, CmpVT, Diff, EltSize, ISD::SETGE),
                     DAG.getSetCC(DL, CmpVT, Diff, Zero, ISD::SETEQ));
}
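
For a <1 x i1> result only lane 0 exists, so the node collapses to a single boolean. A plain C++ restatement of what the DAG nodes above compute, with pointers modelled as integers (a sketch, not code from the patch):

#include <cstdint>

// Lane-0 predicate built by ScalarizeVecRes_LOOP_DEPENDENCE_MASK: active iff
// the sink pointer is at least one whole element past the source, or equal.
bool loopDependenceMaskLane0(int64_t Source, int64_t Sink, int64_t EltSize) {
  int64_t Diff = Sink - Source;
  return Diff >= EltSize || Diff == 0;
}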

SDValue DAGTypeLegalizer::ScalarizeVecRes_BITCAST(SDNode *N) {
  SDValue Op = N->getOperand(0);
  if (getTypeAction(Op.getValueType()) == TargetLowering::TypeScalarizeVector)
112 changes: 112 additions & 0 deletions llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -784,3 +784,115 @@ entry:
%0 = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 3)
ret <16 x i1> %0
}

define <1 x i1> @whilewr_8_scalarize(ptr %a, ptr %b) {
Collaborator: nit: can you add a section header saying these tests are about scalarising <1 x i1> types?

Collaborator Author: Done.

; CHECK-LABEL: whilewr_8_scalarize:
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: subs x8, x1, x0
; CHECK-NEXT: cmp x8, #0
; CHECK-NEXT: cset w8, gt
; CHECK-NEXT: cmp x1, x0
; CHECK-NEXT: csinc w0, w8, wzr, ne
; CHECK-NEXT: ret
entry:
%0 = call <1 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 1)
Collaborator:

Suggested change:
-%0 = call <1 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 1)
+%0 = call <1 x i1> @llvm.loop.dependence.war.mask.v1i1(ptr %a, ptr %b, i64 1)

(same for the tests below)

Collaborator Author: Done.

ret <1 x i1> %0
}
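
Decoding the CHECK sequence above as a C++ predicate for elementSize == 1 (a sketch interpreting the generated code, not part of the patch):

#include <cstdint>

// subs x8, x1, x0 / cmp x8, #0 / cset w8, gt / cmp x1, x0 / csinc w0, w8, wzr, ne
bool whilewr8Lane0(int64_t A, int64_t B) {
  int64_t Diff = B - A;        // subs x8, x1, x0
  bool Gt = Diff > 0;          // cmp x8, #0 ; cset w8, gt
  return (B != A) ? Gt : true; // cmp x1, x0 ; csinc w0, w8, wzr, ne (wzr+1 == 1)
}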

define <1 x i1> @whilewr_16_scalarize(ptr %a, ptr %b) {
; CHECK-LABEL: whilewr_16_scalarize:
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: subs x8, x1, x0
; CHECK-NEXT: cmp x8, #1
; CHECK-NEXT: cset w8, gt
; CHECK-NEXT: cmp x1, x0
; CHECK-NEXT: csinc w0, w8, wzr, ne
; CHECK-NEXT: ret
entry:
%0 = call <1 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 2)
ret <1 x i1> %0
}

define <1 x i1> @whilewr_32_scalarize(ptr %a, ptr %b) {
; CHECK-LABEL: whilewr_32_scalarize:
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: subs x8, x1, x0
; CHECK-NEXT: cmp x8, #3
; CHECK-NEXT: cset w8, gt
; CHECK-NEXT: cmp x1, x0
; CHECK-NEXT: csinc w0, w8, wzr, ne
; CHECK-NEXT: ret
entry:
%0 = call <1 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 4)
ret <1 x i1> %0
}

define <1 x i1> @whilewr_64_scalarize(ptr %a, ptr %b) {
; CHECK-LABEL: whilewr_64_scalarize:
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: subs x8, x1, x0
; CHECK-NEXT: cmp x8, #7
; CHECK-NEXT: cset w8, gt
; CHECK-NEXT: cmp x1, x0
; CHECK-NEXT: csinc w0, w8, wzr, ne
; CHECK-NEXT: ret
entry:
%0 = call <1 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 8)
ret <1 x i1> %0
}

define <1 x i1> @whilerw_8_scalarize(ptr %a, ptr %b) {
; CHECK-LABEL: whilerw_8_scalarize:
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: subs x8, x1, x0
; CHECK-NEXT: cmp x8, #0
; CHECK-NEXT: cset w8, gt
; CHECK-NEXT: cmp x1, x0
; CHECK-NEXT: csinc w0, w8, wzr, ne
; CHECK-NEXT: ret
entry:
%0 = call <1 x i1> @llvm.loop.dependence.raw.mask.v16i1(ptr %a, ptr %b, i64 1)
ret <1 x i1> %0
}

define <1 x i1> @whilerw_16_scalarize(ptr %a, ptr %b) {
; CHECK-LABEL: whilerw_16_scalarize:
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: subs x8, x1, x0
; CHECK-NEXT: cmp x8, #1
; CHECK-NEXT: cset w8, gt
; CHECK-NEXT: cmp x1, x0
; CHECK-NEXT: csinc w0, w8, wzr, ne
; CHECK-NEXT: ret
entry:
%0 = call <1 x i1> @llvm.loop.dependence.raw.mask.v16i1(ptr %a, ptr %b, i64 2)
ret <1 x i1> %0
}

define <1 x i1> @whilerw_32_scalarize(ptr %a, ptr %b) {
; CHECK-LABEL: whilerw_32_scalarize:
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: subs x8, x1, x0
; CHECK-NEXT: cmp x8, #3
; CHECK-NEXT: cset w8, gt
; CHECK-NEXT: cmp x1, x0
; CHECK-NEXT: csinc w0, w8, wzr, ne
; CHECK-NEXT: ret
entry:
%0 = call <1 x i1> @llvm.loop.dependence.raw.mask.v16i1(ptr %a, ptr %b, i64 4)
ret <1 x i1> %0
}

define <1 x i1> @whilerw_64_scalarize(ptr %a, ptr %b) {
; CHECK-LABEL: whilerw_64_scalarize:
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: subs x8, x1, x0
; CHECK-NEXT: cmp x8, #7
; CHECK-NEXT: cset w8, gt
; CHECK-NEXT: cmp x1, x0
; CHECK-NEXT: csinc w0, w8, wzr, ne
; CHECK-NEXT: ret
entry:
%0 = call <1 x i1> @llvm.loop.dependence.raw.mask.v16i1(ptr %a, ptr %b, i64 8)
ret <1 x i1> %0
}