Commit c61a505
[AArch64][SME] Implement the SME ABI (ZA state management) in Machine IR
# Short Summary

This patch adds a new pass `aarch64-machine-sme-abi` to handle the ABI for ZA state (e.g., lazy saves and agnostic ZA functions). This is currently not enabled by default (but aims to be by LLVM 22). The goal is for this new pass to place ZA saves/restores more optimally and to work with exception handling.

# Long Description

This patch reimplements management of ZA state for functions with private and shared ZA state. Agnostic ZA functions will be handled in a later patch. For now, this is under the flag `-aarch64-new-sme-abi`; however, we intend for this to replace the current SelectionDAG implementation once complete.

The approach taken here is to mark instructions as needing ZA to be in a specific state ("ACTIVE" or "LOCAL_SAVED"). Machine instructions that implicitly define or use ZA registers (such as $zt0 or $zab0) require the "ACTIVE" state. Function calls may need the "LOCAL_SAVED" or "ACTIVE" state, depending on whether the callee has shared or private ZA. We already add ZA register uses/definitions to machine instructions, so no extra work is needed to mark these. Calls are marked by gluing AArch64ISD::INOUT_ZA_USE or AArch64ISD::REQUIRES_ZA_SAVE to the CALLSEQ_START.

These markers are then used by the MachineSMEABIPass to find instructions where there is a transition between required ZA states. These are the points where we need to insert code to set up or restore a ZA save (or initialize ZA).

To handle control flow between blocks (which may have different ZA state requirements), we bundle the incoming and outgoing edges of blocks. Bundles are formed by assigning each block an incoming and an outgoing bundle (initially, every block has its own two bundles). Bundles are then combined by joining the outgoing bundle of a block with the incoming bundle of each of its successors. Each bundle is then assigned a ZA state based on the blocks that participate in it: a block whose incoming edges are in a bundle "votes" for the ZA state required at its first instruction, and likewise, a block whose outgoing edges are in a bundle votes for the ZA state required at its last instruction. The ZA state with the most votes is used, which aims to minimize the number of state transitions.

Change-Id: Iced4a3f329deab3ff8f3fd449a2337f7bbfa71ec
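
The edge-bundle voting described in the long description is small enough to sketch. The snippet below is a minimal standalone model of the bundle-state election, not the pass itself; the names (pickBundleState, FirstState, LastState) and the two-state ZAState enum are illustrative assumptions — the real pass tracks more states and uses LLVM's own data structures.

#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// ZA states modeled by this sketch; NumStates is a count sentinel.
enum class ZAState : std::uint8_t { Active, LocalSaved, NumStates };

// Elect a ZA state for one edge bundle. Every block whose incoming edges are
// in the bundle votes for the state its first instruction requires; every
// block whose outgoing edges are in the bundle votes for the state its last
// instruction requires. The majority wins, minimizing inserted transitions.
ZAState pickBundleState(const std::vector<int> &IncomingBundle,
                        const std::vector<int> &OutgoingBundle,
                        const std::vector<ZAState> &FirstState,
                        const std::vector<ZAState> &LastState, int Bundle) {
  std::array<int, static_cast<std::size_t>(ZAState::NumStates)> Votes{};
  for (std::size_t BB = 0; BB < IncomingBundle.size(); ++BB) {
    if (IncomingBundle[BB] == Bundle)
      ++Votes[static_cast<std::size_t>(FirstState[BB])];
    if (OutgoingBundle[BB] == Bundle)
      ++Votes[static_cast<std::size_t>(LastState[BB])];
  }
  ZAState Best = ZAState::Active;
  for (std::size_t S = 0; S < Votes.size(); ++S)
    if (Votes[S] > Votes[static_cast<std::size_t>(Best)])
      Best = static_cast<ZAState>(S);
  return Best;
}
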
1 parent d35686b · commit c61a505

23 files changed: +3885 −502 lines

llvm/lib/Target/AArch64/AArch64.h

Lines changed: 2 additions & 0 deletions

@@ -60,6 +60,7 @@ FunctionPass *createAArch64CleanupLocalDynamicTLSPass();
 FunctionPass *createAArch64CollectLOHPass();
 FunctionPass *createSMEABIPass();
 FunctionPass *createSMEPeepholeOptPass();
+FunctionPass *createMachineSMEABIPass();
 ModulePass *createSVEIntrinsicOptsPass();
 InstructionSelector *
 createAArch64InstructionSelector(const AArch64TargetMachine &,
@@ -111,6 +112,7 @@ void initializeFalkorMarkStridedAccessesLegacyPass(PassRegistry&);
 void initializeLDTLSCleanupPass(PassRegistry&);
 void initializeSMEABIPass(PassRegistry &);
 void initializeSMEPeepholeOptPass(PassRegistry &);
+void initializeMachineSMEABIPass(PassRegistry &);
 void initializeSVEIntrinsicOptsPass(PassRegistry &);
 void initializeAArch64Arm64ECCallLoweringPass(PassRegistry &);
 } // end namespace llvm

llvm/lib/Target/AArch64/AArch64ExpandPseudoInsts.cpp

Lines changed: 26 additions & 13 deletions

@@ -92,8 +92,8 @@ class AArch64ExpandPseudo : public MachineFunctionPass {
   bool expandCALL_BTI(MachineBasicBlock &MBB, MachineBasicBlock::iterator MBBI);
   bool expandStoreSwiftAsyncContext(MachineBasicBlock &MBB,
                                     MachineBasicBlock::iterator MBBI);
-  MachineBasicBlock *expandRestoreZA(MachineBasicBlock &MBB,
-                                     MachineBasicBlock::iterator MBBI);
+  MachineBasicBlock *expandCommitOrRestoreZA(MachineBasicBlock &MBB,
+                                             MachineBasicBlock::iterator MBBI);
   MachineBasicBlock *expandCondSMToggle(MachineBasicBlock &MBB,
                                         MachineBasicBlock::iterator MBBI);
 };
@@ -980,40 +980,50 @@ bool AArch64ExpandPseudo::expandStoreSwiftAsyncContext(
 }
 
 MachineBasicBlock *
-AArch64ExpandPseudo::expandRestoreZA(MachineBasicBlock &MBB,
-                                     MachineBasicBlock::iterator MBBI) {
+AArch64ExpandPseudo::expandCommitOrRestoreZA(MachineBasicBlock &MBB,
+                                             MachineBasicBlock::iterator MBBI) {
   MachineInstr &MI = *MBBI;
+  bool IsRestoreZA = MI.getOpcode() == AArch64::RestoreZAPseudo;
+  assert((MI.getOpcode() == AArch64::RestoreZAPseudo ||
+          MI.getOpcode() == AArch64::CommitZAPseudo) &&
+         "Expected ZA commit or restore");
   assert((std::next(MBBI) != MBB.end() ||
           MI.getParent()->successors().begin() !=
              MI.getParent()->successors().end()) &&
          "Unexpected unreachable in block that restores ZA");
 
   // Compare TPIDR2_EL0 value against 0.
   DebugLoc DL = MI.getDebugLoc();
-  MachineInstrBuilder Cbz = BuildMI(MBB, MBBI, DL, TII->get(AArch64::CBZX))
-                                .add(MI.getOperand(0));
+  MachineInstrBuilder Branch =
+      BuildMI(MBB, MBBI, DL,
+              TII->get(IsRestoreZA ? AArch64::CBZX : AArch64::CBNZX))
+          .add(MI.getOperand(0));
 
   // Split MBB and create two new blocks:
   //  - MBB now contains all instructions before RestoreZAPseudo.
-  //  - SMBB contains the RestoreZAPseudo instruction only.
-  //  - EndBB contains all instructions after RestoreZAPseudo.
+  //  - SMBB contains the [Commit|RestoreZA]Pseudo instruction only.
+  //  - EndBB contains all instructions after [Commit|RestoreZA]Pseudo.
   MachineInstr &PrevMI = *std::prev(MBBI);
   MachineBasicBlock *SMBB = MBB.splitAt(PrevMI, /*UpdateLiveIns*/ true);
   MachineBasicBlock *EndBB = std::next(MI.getIterator()) == SMBB->end()
                                  ? *SMBB->successors().begin()
                                  : SMBB->splitAt(MI, /*UpdateLiveIns*/ true);
 
-  // Add the SMBB label to the TB[N]Z instruction & create a branch to EndBB.
-  Cbz.addMBB(SMBB);
+  // Add the SMBB label to the CB[N]Z instruction & create a branch to EndBB.
+  Branch.addMBB(SMBB);
   BuildMI(&MBB, DL, TII->get(AArch64::B))
       .addMBB(EndBB);
   MBB.addSuccessor(EndBB);
 
   // Replace the pseudo with a call (BL).
   MachineInstrBuilder MIB =
       BuildMI(*SMBB, SMBB->end(), DL, TII->get(AArch64::BL));
-  MIB.addReg(MI.getOperand(1).getReg(), RegState::Implicit);
-  for (unsigned I = 2; I < MI.getNumOperands(); ++I)
+  unsigned FirstBLOperand = 1;
+  if (IsRestoreZA) {
+    MIB.addReg(MI.getOperand(1).getReg(), RegState::Implicit);
+    FirstBLOperand = 2;
+  }
+  for (unsigned I = FirstBLOperand; I < MI.getNumOperands(); ++I)
     MIB.add(MI.getOperand(I));
   BuildMI(SMBB, DL, TII->get(AArch64::B)).addMBB(EndBB);
 
@@ -1635,8 +1645,9 @@ bool AArch64ExpandPseudo::expandMI(MachineBasicBlock &MBB,
     return expandCALL_BTI(MBB, MBBI);
   case AArch64::StoreSwiftAsyncContext:
     return expandStoreSwiftAsyncContext(MBB, MBBI);
+  case AArch64::CommitZAPseudo:
   case AArch64::RestoreZAPseudo: {
-    auto *NewMBB = expandRestoreZA(MBB, MBBI);
+    auto *NewMBB = expandCommitOrRestoreZA(MBB, MBBI);
     if (NewMBB != &MBB)
       NextMBBI = MBB.end(); // The NextMBBI iterator is invalidated.
     return true;
@@ -1647,6 +1658,8 @@ bool AArch64ExpandPseudo::expandMI(MachineBasicBlock &MBB,
     NextMBBI = MBB.end(); // The NextMBBI iterator is invalidated.
     return true;
   }
+  case AArch64::InOutZAUsePseudo:
+  case AArch64::RequiresZASavePseudo:
   case AArch64::COALESCER_BARRIER_FPR16:
   case AArch64::COALESCER_BARRIER_FPR32:
   case AArch64::COALESCER_BARRIER_FPR64:

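For orientation, the expansion above replaces a single CommitZA/RestoreZA pseudo with a guarded call to the relevant SME support routine. A rough pseudo-assembly sketch of the resulting control flow (block names follow the comments in the pass; the register and the callee come from the pseudo's operands, so xN and __arm_tpidr2_restore below are only plausible examples, not fixed by the pass):

MBB:    ...                        // instructions before the pseudo
        cbz   xN, SMBB             // RestoreZA: branch if TPIDR2_EL0 == 0
        b     EndBB                // (CommitZA tests with cbnz instead)
SMBB:   bl    __arm_tpidr2_restore // the BL built from the pseudo's operands
        b     EndBB
EndBB:  ...                        // instructions after the pseudo
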
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp

Lines changed: 88 additions & 58 deletions

@@ -8277,53 +8277,54 @@ SDValue AArch64TargetLowering::LowerFormalArguments(
   if (Subtarget->hasCustomCallingConv())
     Subtarget->getRegisterInfo()->UpdateCustomCalleeSavedRegs(MF);
 
-  // Create a 16 Byte TPIDR2 object. The dynamic buffer
-  // will be expanded and stored in the static object later using a pseudonode.
-  if (Attrs.hasZAState()) {
-    TPIDR2Object &TPIDR2 = FuncInfo->getTPIDR2Obj();
-    TPIDR2.FrameIndex = MFI.CreateStackObject(16, Align(16), false);
-    SDValue SVL = DAG.getNode(AArch64ISD::RDSVL, DL, MVT::i64,
-                              DAG.getConstant(1, DL, MVT::i32));
-
-    SDValue Buffer;
-    if (!Subtarget->isTargetWindows() && !hasInlineStackProbe(MF)) {
-      Buffer = DAG.getNode(AArch64ISD::ALLOCATE_ZA_BUFFER, DL,
-                           DAG.getVTList(MVT::i64, MVT::Other), {Chain, SVL});
-    } else {
-      SDValue Size = DAG.getNode(ISD::MUL, DL, MVT::i64, SVL, SVL);
-      Buffer = DAG.getNode(ISD::DYNAMIC_STACKALLOC, DL,
-                           DAG.getVTList(MVT::i64, MVT::Other),
-                           {Chain, Size, DAG.getConstant(1, DL, MVT::i64)});
-      MFI.CreateVariableSizedObject(Align(16), nullptr);
-    }
-    Chain = DAG.getNode(
-        AArch64ISD::INIT_TPIDR2OBJ, DL, DAG.getVTList(MVT::Other),
-        {/*Chain*/ Buffer.getValue(1), /*Buffer ptr*/ Buffer.getValue(0)});
-  } else if (Attrs.hasAgnosticZAInterface()) {
-    // Call __arm_sme_state_size().
-    SDValue BufferSize =
-        DAG.getNode(AArch64ISD::GET_SME_SAVE_SIZE, DL,
-                    DAG.getVTList(MVT::i64, MVT::Other), Chain);
-    Chain = BufferSize.getValue(1);
-
-    SDValue Buffer;
-    if (!Subtarget->isTargetWindows() && !hasInlineStackProbe(MF)) {
-      Buffer =
-          DAG.getNode(AArch64ISD::ALLOC_SME_SAVE_BUFFER, DL,
-                      DAG.getVTList(MVT::i64, MVT::Other), {Chain, BufferSize});
-    } else {
-      // Allocate space dynamically.
-      Buffer = DAG.getNode(
-          ISD::DYNAMIC_STACKALLOC, DL, DAG.getVTList(MVT::i64, MVT::Other),
-          {Chain, BufferSize, DAG.getConstant(1, DL, MVT::i64)});
-      MFI.CreateVariableSizedObject(Align(16), nullptr);
+  if (!Subtarget->useNewSMEABILowering() || Attrs.hasAgnosticZAInterface()) {
+    // Old SME ABI lowering (deprecated):
+    // Create a 16 Byte TPIDR2 object. The dynamic buffer
+    // will be expanded and stored in the static object later using a
+    // pseudonode.
+    if (Attrs.hasZAState()) {
+      TPIDR2Object &TPIDR2 = FuncInfo->getTPIDR2Obj();
+      TPIDR2.FrameIndex = MFI.CreateStackObject(16, Align(16), false);
+      SDValue SVL = DAG.getNode(AArch64ISD::RDSVL, DL, MVT::i64,
+                                DAG.getConstant(1, DL, MVT::i32));
+      SDValue Buffer;
+      if (!Subtarget->isTargetWindows() && !hasInlineStackProbe(MF)) {
+        Buffer = DAG.getNode(AArch64ISD::ALLOCATE_ZA_BUFFER, DL,
+                             DAG.getVTList(MVT::i64, MVT::Other), {Chain, SVL});
+      } else {
+        SDValue Size = DAG.getNode(ISD::MUL, DL, MVT::i64, SVL, SVL);
+        Buffer = DAG.getNode(ISD::DYNAMIC_STACKALLOC, DL,
+                             DAG.getVTList(MVT::i64, MVT::Other),
+                             {Chain, Size, DAG.getConstant(1, DL, MVT::i64)});
+        MFI.CreateVariableSizedObject(Align(16), nullptr);
+      }
+      Chain = DAG.getNode(
+          AArch64ISD::INIT_TPIDR2OBJ, DL, DAG.getVTList(MVT::Other),
+          {/*Chain*/ Buffer.getValue(1), /*Buffer ptr*/ Buffer.getValue(0)});
+    } else if (Attrs.hasAgnosticZAInterface()) {
+      // Call __arm_sme_state_size().
+      SDValue BufferSize =
+          DAG.getNode(AArch64ISD::GET_SME_SAVE_SIZE, DL,
+                      DAG.getVTList(MVT::i64, MVT::Other), Chain);
+      Chain = BufferSize.getValue(1);
+      SDValue Buffer;
+      if (!Subtarget->isTargetWindows() && !hasInlineStackProbe(MF)) {
+        Buffer = DAG.getNode(AArch64ISD::ALLOC_SME_SAVE_BUFFER, DL,
+                             DAG.getVTList(MVT::i64, MVT::Other),
+                             {Chain, BufferSize});
+      } else {
+        // Allocate space dynamically.
+        Buffer = DAG.getNode(
+            ISD::DYNAMIC_STACKALLOC, DL, DAG.getVTList(MVT::i64, MVT::Other),
+            {Chain, BufferSize, DAG.getConstant(1, DL, MVT::i64)});
+        MFI.CreateVariableSizedObject(Align(16), nullptr);
+      }
+      // Copy the value to a virtual register, and save that in FuncInfo.
+      Register BufferPtr =
+          MF.getRegInfo().createVirtualRegister(&AArch64::GPR64RegClass);
+      FuncInfo->setSMESaveBufferAddr(BufferPtr);
+      Chain = DAG.getCopyToReg(Chain, DL, BufferPtr, Buffer);
     }
-
-    // Copy the value to a virtual register, and save that in FuncInfo.
-    Register BufferPtr =
-        MF.getRegInfo().createVirtualRegister(&AArch64::GPR64RegClass);
-    FuncInfo->setSMESaveBufferAddr(BufferPtr);
-    Chain = DAG.getCopyToReg(Chain, DL, BufferPtr, Buffer);
   }
 
   if (CallConv == CallingConv::PreserveNone) {
@@ -8340,6 +8341,15 @@ SDValue AArch64TargetLowering::LowerFormalArguments(
     }
   }
 
+  if (Subtarget->useNewSMEABILowering()) {
+    // Clear new ZT0 state. TODO: Move this to the SME ABI pass.
+    if (Attrs.isNewZT0())
+      Chain = DAG.getNode(
+          ISD::INTRINSIC_VOID, DL, MVT::Other, Chain,
+          DAG.getConstant(Intrinsic::aarch64_sme_zero_zt, DL, MVT::i32),
+          DAG.getTargetConstant(0, DL, MVT::i32));
+  }
+
   return Chain;
 }
 
@@ -8911,7 +8921,6 @@ static SDValue emitSMEStateSaveRestore(const AArch64TargetLowering &TLI,
   MachineFunction &MF = DAG.getMachineFunction();
   AArch64FunctionInfo *FuncInfo = MF.getInfo<AArch64FunctionInfo>();
   FuncInfo->setSMESaveBufferUsed();
-
   TargetLowering::ArgListTy Args;
   TargetLowering::ArgListEntry Entry;
   Entry.Ty = PointerType::getUnqual(*DAG.getContext());
@@ -9041,6 +9050,9 @@ AArch64TargetLowering::LowerCall(CallLoweringInfo &CLI,
   if (MF.getTarget().Options.EmitCallGraphSection && CB && CB->isIndirectCall())
     CSInfo = MachineFunction::CallSiteInfo(*CB);
 
+  // Determine whether we need any streaming mode changes.
+  SMECallAttrs CallAttrs = getSMECallAttrs(MF.getFunction(), *this, CLI);
+
   // Check callee args/returns for SVE registers and set calling convention
   // accordingly.
   if (CallConv == CallingConv::C || CallConv == CallingConv::Fast) {
@@ -9054,14 +9066,26 @@ AArch64TargetLowering::LowerCall(CallLoweringInfo &CLI,
     CallConv = CallingConv::AArch64_SVE_VectorCall;
   }
 
+  bool UseNewSMEABILowering = Subtarget->useNewSMEABILowering();
+  bool IsAgnosticZAFunction = CallAttrs.caller().hasAgnosticZAInterface();
+  auto ZAMarkerNode = [&]() -> std::optional<unsigned> {
+    // TODO: Handle agnostic ZA functions.
+    if (!UseNewSMEABILowering || IsAgnosticZAFunction)
+      return std::nullopt;
+    if (!CallAttrs.caller().hasZAState() && !CallAttrs.caller().hasZT0State())
+      return std::nullopt;
+    return CallAttrs.requiresLazySave() ? AArch64ISD::REQUIRES_ZA_SAVE
+                                        : AArch64ISD::INOUT_ZA_USE;
+  }();
+
   if (IsTailCall) {
     // Check if it's really possible to do a tail call.
     IsTailCall = isEligibleForTailCallOptimization(CLI);
 
     // A sibling call is one where we're under the usual C ABI and not planning
     // to change that but can still do a tail call:
-    if (!TailCallOpt && IsTailCall && CallConv != CallingConv::Tail &&
-        CallConv != CallingConv::SwiftTail)
+    if (!ZAMarkerNode.has_value() && !TailCallOpt && IsTailCall &&
+        CallConv != CallingConv::Tail && CallConv != CallingConv::SwiftTail)
       IsSibCall = true;
 
     if (IsTailCall)
@@ -9113,9 +9137,6 @@ AArch64TargetLowering::LowerCall(CallLoweringInfo &CLI,
     assert(FPDiff % 16 == 0 && "unaligned stack on tail call");
   }
 
-  // Determine whether we need any streaming mode changes.
-  SMECallAttrs CallAttrs = getSMECallAttrs(MF.getFunction(), *this, CLI);
-
   auto DescribeCallsite =
       [&](OptimizationRemarkAnalysis &R) -> OptimizationRemarkAnalysis & {
     R << "call from '" << ore::NV("Caller", MF.getName()) << "' to '";
@@ -9129,7 +9150,7 @@ AArch64TargetLowering::LowerCall(CallLoweringInfo &CLI,
     return R;
   };
 
-  bool RequiresLazySave = CallAttrs.requiresLazySave();
+  bool RequiresLazySave = !UseNewSMEABILowering && CallAttrs.requiresLazySave();
   bool RequiresSaveAllZA = CallAttrs.requiresPreservingAllZAState();
   if (RequiresLazySave) {
     const TPIDR2Object &TPIDR2 = FuncInfo->getTPIDR2Obj();
@@ -9204,10 +9225,21 @@ AArch64TargetLowering::LowerCall(CallLoweringInfo &CLI,
         AArch64ISD::SMSTOP, DL, DAG.getVTList(MVT::Other, MVT::Glue), Chain,
         DAG.getTargetConstant((int32_t)(AArch64SVCR::SVCRZA), DL, MVT::i32));
 
-  // Adjust the stack pointer for the new arguments...
+  // Adjust the stack pointer for the new arguments... and mark ZA uses.
   // These operations are automatically eliminated by the prolog/epilog pass
-  if (!IsSibCall)
+  assert((!IsSibCall || !ZAMarkerNode.has_value()) &&
+         "ZA markers require CALLSEQ_START");
+  if (!IsSibCall) {
     Chain = DAG.getCALLSEQ_START(Chain, IsTailCall ? 0 : NumBytes, 0, DL);
+    if (ZAMarkerNode) {
+      // Note: We need the CALLSEQ_START to glue the ZAMarkerNode to; simply
+      // using a chain can result in incorrect scheduling. The markers refer
+      // to the position just before the CALLSEQ_START (though occur after, as
+      // CALLSEQ_START lacks in-glue).
+      Chain = DAG.getNode(*ZAMarkerNode, DL, DAG.getVTList(MVT::Other),
+                          {Chain, Chain.getValue(1)});
+    }
+  }
 
   SDValue StackPtr = DAG.getCopyFromReg(Chain, DL, AArch64::SP,
                                         getPointerTy(DAG.getDataLayout()));
@@ -9678,7 +9710,7 @@ AArch64TargetLowering::LowerCall(CallLoweringInfo &CLI,
     }
   }
 
-  if (CallAttrs.requiresEnablingZAAfterCall())
+  if (RequiresLazySave || CallAttrs.requiresEnablingZAAfterCall())
     // Unconditionally resume ZA.
     Result = DAG.getNode(
         AArch64ISD::SMSTART, DL, DAG.getVTList(MVT::Other, MVT::Glue), Result,
@@ -9700,7 +9732,6 @@ AArch64TargetLowering::LowerCall(CallLoweringInfo &CLI,
     SDValue TPIDR2_EL0 = DAG.getNode(
         ISD::INTRINSIC_W_CHAIN, DL, MVT::i64, Result,
         DAG.getConstant(Intrinsic::aarch64_sme_get_tpidr2, DL, MVT::i32));
-
     // Copy the address of the TPIDR2 block into X0 before 'calling' the
     // RESTORE_ZA pseudo.
     SDValue Glue;
@@ -9712,7 +9743,6 @@ AArch64TargetLowering::LowerCall(CallLoweringInfo &CLI,
         DAG.getNode(AArch64ISD::RESTORE_ZA, DL, MVT::Other,
                     {Result, TPIDR2_EL0, DAG.getRegister(AArch64::X0, MVT::i64),
                      RestoreRoutine, RegMask, Result.getValue(1)});
-
     // Finally reset the TPIDR2_EL0 register to 0.
     Result = DAG.getNode(
         ISD::INTRINSIC_VOID, DL, MVT::Other, Result,

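The ZAMarkerNode lambda above is the hinge of the new lowering: it decides whether a call site merely keeps ZA live or forces a lazy save to be set up. A condensed standalone model of that decision (the enum and function here are illustrative sketches, not LLVM API):

#include <optional>

enum class ZAMarker { InOutZAUse, RequiresZASave };

// No marker when the new lowering is disabled, the caller is ZA-agnostic, or
// the caller has no ZA/ZT0 state to protect. Otherwise a callee with private
// ZA may clobber ZA, so the caller must set up a lazy save (LOCAL_SAVED),
// while a shared-ZA callee keeps ZA live (ACTIVE).
std::optional<ZAMarker> zaMarkerForCall(bool UseNewLowering,
                                        bool CallerIsAgnostic,
                                        bool CallerHasZAOrZT0State,
                                        bool RequiresLazySave) {
  if (!UseNewLowering || CallerIsAgnostic || !CallerHasZAOrZT0State)
    return std::nullopt;
  return RequiresLazySave ? ZAMarker::RequiresZASave : ZAMarker::InOutZAUse;
}
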
llvm/lib/Target/AArch64/AArch64ISelLowering.h

Lines changed: 4 additions & 0 deletions

@@ -173,6 +173,10 @@ class AArch64TargetLowering : public TargetLowering {
   MachineBasicBlock *EmitZTInstr(MachineInstr &MI, MachineBasicBlock *BB,
                                  unsigned Opcode, bool Op0IsDef) const;
   MachineBasicBlock *EmitZero(MachineInstr &MI, MachineBasicBlock *BB) const;
+
+  // Note: The following group of functions are only used as part of the old SME
+  // ABI lowering. They will be removed once -aarch64-new-sme-abi=true is the
+  // default.
   MachineBasicBlock *EmitInitTPIDR2Object(MachineInstr &MI,
                                           MachineBasicBlock *BB) const;
   MachineBasicBlock *EmitAllocateZABuffer(MachineInstr &MI,