[AArch64] Break up AArch64FrameLowering::emitPrologue (NFCI)
#157485
`emitPrologue` was almost 1k SLOC, with a large portion of that not actually related to emitting the vast majority of prologues. This patch creates a new class `AArch64PrologueEmitter` for emitting the prologue, which keeps common state/target classes as members. This makes it easy to add methods that handle niche cases, and allows methods to be marked `const` when they don't redefine flags/state. With this change, the core `emitPrologue` is around 275 LOC, with cases like Windows stack probes or Swift frame pointers split off into their own routines.
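A rough sketch of the shape this introduces (stand-in types throughout; only the name `AArch64PrologueEmitter` comes from the patch, and its actual interface may differ). The point is that the emitter captures the common state once in its constructor, so niche cases can be split into small, often `const`, methods:

```cpp
// Minimal sketch of the emitter pattern, with stand-in types: the real
// class is constructed from LLVM's MachineFunction, subtarget, and
// frame-lowering objects, which are only assumed here.
struct MachineFunctionStub {}; // stand-in for llvm::MachineFunction
struct SubtargetStub {};       // stand-in for AArch64Subtarget

class PrologueEmitterSketch {
public:
  PrologueEmitterSketch(MachineFunctionStub &MF, const SubtargetStub &ST)
      : MF(MF), ST(ST) {}

  // The core path stays short; niche cases get their own methods.
  void emitPrologue() {
    if (needsWindowsStackProbes())
      emitWindowsStackProbes();
    // ... common-path prologue emission ...
  }

private:
  // Helpers that only inspect state can be marked const.
  bool needsWindowsStackProbes() const { return false; }
  void emitWindowsStackProbes() { /* niche case, out of the core path */ }

  MachineFunctionStub &MF; // common state held once as members
  const SubtargetStub &ST;
};
```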
@llvm/pr-subscribers-backend-aarch64

Author: Benjamin Maxwell (MacDue)

Changes

`emitPrologue` was almost 1k SLOC, with a large portion of that not actually related to emitting the vast majority of prologues.

This patch creates a new class `AArch64PrologueEmitter` for emitting the prologue, which keeps common state/target classes as members. This makes adding methods that handle niche cases easy, and allows methods to be marked `const` when they don't redefine flags/state.

With this change, the core `emitPrologue` is around 275 LOC, with cases like Windows stack probes or Swift frame pointers split into their own routines. This makes following the logic much easier.

Patch is 91.53 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/157485.diff

5 Files Affected:
diff --git a/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp b/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp
index 87a09b72933db..175b5e04d82ff 100644
--- a/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp
@@ -211,6 +211,7 @@
#include "AArch64FrameLowering.h"
#include "AArch64InstrInfo.h"
#include "AArch64MachineFunctionInfo.h"
+#include "AArch64PrologueEpilogue.h"
#include "AArch64RegisterInfo.h"
#include "AArch64Subtarget.h"
#include "MCTargetDesc/AArch64AddressingModes.h"
@@ -218,7 +219,6 @@
#include "Utils/AArch64SMEAttributes.h"
#include "llvm/ADT/ScopeExit.h"
#include "llvm/ADT/SmallVector.h"
-#include "llvm/ADT/Statistic.h"
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/CodeGen/CFIInstBuilder.h"
#include "llvm/CodeGen/LivePhysRegs.h"
@@ -293,8 +293,6 @@ static cl::opt<bool> DisableMultiVectorSpillFill(
cl::desc("Disable use of LD/ST pairs for SME2 or SVE2p1"), cl::init(false),
cl::Hidden);
-STATISTIC(NumRedZoneFunctions, "Number of functions using red zone");
-
/// Returns how much of the incoming argument stack area (in bytes) we should
/// clean up in an epilogue. For the C calling convention this will be 0, for
/// guaranteed tail call conventions it can be positive (a normal return or a
@@ -328,23 +326,20 @@ static int64_t getArgumentStackToRestore(MachineFunction &MF,
return ArgumentPopSize;
}
-static bool produceCompactUnwindFrame(MachineFunction &MF);
-static bool needsWinCFI(const MachineFunction &MF);
-static StackOffset getSVEStackSize(const MachineFunction &MF);
-static Register findScratchNonCalleeSaveRegister(MachineBasicBlock *MBB,
- bool HasCall = false);
-static bool requiresSaveVG(const MachineFunction &MF);
+static bool produceCompactUnwindFrame(const AArch64FrameLowering &,
+ MachineFunction &MF);
// Conservatively, returns true if the function is likely to have an SVE vectors
// on the stack. This function is safe to be called before callee-saves or
// object offsets have been determined.
-static bool isLikelyToHaveSVEStack(const MachineFunction &MF) {
+static bool isLikelyToHaveSVEStack(const AArch64FrameLowering &AFL,
+ const MachineFunction &MF) {
auto *AFI = MF.getInfo<AArch64FunctionInfo>();
if (AFI->isSVECC())
return true;
if (AFI->hasCalculatedStackSizeSVE())
- return bool(getSVEStackSize(MF));
+ return bool(AFL.getSVEStackSize(MF));
const MachineFrameInfo &MFI = MF.getFrameInfo();
for (int FI = MFI.getObjectIndexBegin(); FI < MFI.getObjectIndexEnd(); FI++) {
@@ -372,7 +367,7 @@ bool AArch64FrameLowering::homogeneousPrologEpilog(
return false;
// TODO: SVE is not supported yet.
- if (isLikelyToHaveSVEStack(MF))
+ if (isLikelyToHaveSVEStack(*this, MF))
return false;
// Bail on stack adjustment needed on return for simplicity.
@@ -409,7 +404,7 @@ bool AArch64FrameLowering::homogeneousPrologEpilog(
/// Returns true if CSRs should be paired.
bool AArch64FrameLowering::producePairRegisters(MachineFunction &MF) const {
- return produceCompactUnwindFrame(MF) || homogeneousPrologEpilog(MF);
+ return produceCompactUnwindFrame(*this, MF) || homogeneousPrologEpilog(MF);
}
/// This is the biggest offset to the stack pointer we can encode in aarch64
@@ -451,11 +446,10 @@ AArch64FrameLowering::getStackIDForScalableVectors() const {
return TargetStackID::ScalableVector;
}
-/// Returns the size of the fixed object area (allocated next to sp on entry)
-/// On Win64 this may include a var args area and an UnwindHelp object for EH.
-static unsigned getFixedObjectSize(const MachineFunction &MF,
- const AArch64FunctionInfo *AFI, bool IsWin64,
- bool IsFunclet) {
+unsigned
+AArch64FrameLowering::getFixedObjectSize(const MachineFunction &MF,
+ const AArch64FunctionInfo *AFI,
+ bool IsWin64, bool IsFunclet) const {
assert(AFI->getTailCallReservedStack() % 16 == 0 &&
"Tail call reserved stack must be aligned to 16 bytes");
if (!IsWin64 || IsFunclet) {
@@ -494,7 +488,8 @@ static unsigned getFixedObjectSize(const MachineFunction &MF,
}
/// Returns the size of the entire SVE stackframe (calleesaves + spills).
-static StackOffset getSVEStackSize(const MachineFunction &MF) {
+StackOffset
+AArch64FrameLowering::getSVEStackSize(const MachineFunction &MF) const {
const AArch64FunctionInfo *AFI = MF.getInfo<AArch64FunctionInfo>();
return StackOffset::getScalable((int64_t)AFI->getStackSizeSVE());
}
@@ -683,70 +678,6 @@ MachineBasicBlock::iterator AArch64FrameLowering::eliminateCallFramePseudoInstr(
return MBB.erase(I);
}
-void AArch64FrameLowering::emitCalleeSavedGPRLocations(
- MachineBasicBlock &MBB, MachineBasicBlock::iterator MBBI) const {
- MachineFunction &MF = *MBB.getParent();
- MachineFrameInfo &MFI = MF.getFrameInfo();
-
- const std::vector<CalleeSavedInfo> &CSI = MFI.getCalleeSavedInfo();
- if (CSI.empty())
- return;
-
- CFIInstBuilder CFIBuilder(MBB, MBBI, MachineInstr::FrameSetup);
- for (const auto &Info : CSI) {
- unsigned FrameIdx = Info.getFrameIdx();
- if (MFI.getStackID(FrameIdx) == TargetStackID::ScalableVector)
- continue;
-
- assert(!Info.isSpilledToReg() && "Spilling to registers not implemented");
- int64_t Offset = MFI.getObjectOffset(FrameIdx) - getOffsetOfLocalArea();
- CFIBuilder.buildOffset(Info.getReg(), Offset);
- }
-}
-
-void AArch64FrameLowering::emitCalleeSavedSVELocations(
- MachineBasicBlock &MBB, MachineBasicBlock::iterator MBBI) const {
- MachineFunction &MF = *MBB.getParent();
- MachineFrameInfo &MFI = MF.getFrameInfo();
-
- // Add callee saved registers to move list.
- const std::vector<CalleeSavedInfo> &CSI = MFI.getCalleeSavedInfo();
- if (CSI.empty())
- return;
-
- const TargetSubtargetInfo &STI = MF.getSubtarget();
- const TargetRegisterInfo &TRI = *STI.getRegisterInfo();
- AArch64FunctionInfo &AFI = *MF.getInfo<AArch64FunctionInfo>();
- CFIInstBuilder CFIBuilder(MBB, MBBI, MachineInstr::FrameSetup);
-
- std::optional<int64_t> IncomingVGOffsetFromDefCFA;
- if (requiresSaveVG(MF)) {
- auto IncomingVG = *find_if(
- reverse(CSI), [](auto &Info) { return Info.getReg() == AArch64::VG; });
- IncomingVGOffsetFromDefCFA =
- MFI.getObjectOffset(IncomingVG.getFrameIdx()) - getOffsetOfLocalArea();
- }
-
- for (const auto &Info : CSI) {
- if (MFI.getStackID(Info.getFrameIdx()) != TargetStackID::ScalableVector)
- continue;
-
- // Not all unwinders may know about SVE registers, so assume the lowest
- // common denominator.
- assert(!Info.isSpilledToReg() && "Spilling to registers not implemented");
- MCRegister Reg = Info.getReg();
- if (!static_cast<const AArch64RegisterInfo &>(TRI).regNeedsCFI(Reg, Reg))
- continue;
-
- StackOffset Offset =
- StackOffset::getScalable(MFI.getObjectOffset(Info.getFrameIdx())) -
- StackOffset::getFixed(AFI.getCalleeSavedStackSize(MFI));
-
- CFIBuilder.insertCFIInst(
- createCFAOffset(TRI, Reg, Offset, IncomingVGOffsetFromDefCFA));
- }
-}
-
void AArch64FrameLowering::resetCFIToInitialState(
MachineBasicBlock &MBB) const {
@@ -1088,8 +1019,8 @@ void AArch64FrameLowering::emitZeroCallUsedRegs(BitVector RegsToZero,
}
}
-static bool windowsRequiresStackProbe(const MachineFunction &MF,
- uint64_t StackSizeInBytes) {
+bool AArch64FrameLowering::windowsRequiresStackProbe(
+ const MachineFunction &MF, uint64_t StackSizeInBytes) const {
const AArch64Subtarget &Subtarget = MF.getSubtarget<AArch64Subtarget>();
const AArch64FunctionInfo &MFI = *MF.getInfo<AArch64FunctionInfo>();
// TODO: When implementing stack protectors, take that into account
@@ -1108,19 +1039,9 @@ static void getLiveRegsForEntryMBB(LivePhysRegs &LiveRegs,
LiveRegs.addReg(CSRegs[i]);
}
-// Find a scratch register that we can use at the start of the prologue to
-// re-align the stack pointer. We avoid using callee-save registers since they
-// may appear to be free when this is called from canUseAsPrologue (during
-// shrink wrapping), but then no longer be free when this is called from
-// emitPrologue.
-//
-// FIXME: This is a bit conservative, since in the above case we could use one
-// of the callee-save registers as a scratch temp to re-align the stack pointer,
-// but we would then have to make sure that we were in fact saving at least one
-// callee-save register in the prologue, which is additional complexity that
-// doesn't seem worth the benefit.
-static Register findScratchNonCalleeSaveRegister(MachineBasicBlock *MBB,
- bool HasCall) {
+Register
+AArch64FrameLowering::findScratchNonCalleeSaveRegister(MachineBasicBlock *MBB,
+ bool HasCall) const {
MachineFunction *MF = MBB->getParent();
// If MBB is an entry block, use X9 as the scratch register
@@ -1193,13 +1114,14 @@ bool AArch64FrameLowering::canUseAsPrologue(
return true;
}
-static bool needsWinCFI(const MachineFunction &MF) {
+bool AArch64FrameLowering::needsWinCFI(const MachineFunction &MF) const {
const Function &F = MF.getFunction();
return MF.getTarget().getMCAsmInfo()->usesWindowsCFI() &&
F.needsUnwindTableEntry();
}
-static bool shouldSignReturnAddressEverywhere(const MachineFunction &MF) {
+bool AArch64FrameLowering::shouldSignReturnAddressEverywhere(
+ const MachineFunction &MF) const {
// FIXME: With WinCFI, extra care should be taken to place SEH_PACSignLR
// and SEH_EpilogEnd instructions in the correct order.
if (MF.getTarget().getMCAsmInfo()->usesWindowsCFI())
@@ -1475,13 +1397,13 @@ static void fixupSEHOpcode(MachineBasicBlock::iterator MBBI,
ImmOpnd->setImm(ImmOpnd->getImm() + LocalStackSize);
}
-bool requiresGetVGCall(MachineFunction &MF) {
- AArch64FunctionInfo *AFI = MF.getInfo<AArch64FunctionInfo>();
+bool AArch64FrameLowering::requiresGetVGCall(const MachineFunction &MF) const {
+ auto *AFI = MF.getInfo<AArch64FunctionInfo>();
return AFI->hasStreamingModeChanges() &&
!MF.getSubtarget<AArch64Subtarget>().hasSVE();
}
-static bool requiresSaveVG(const MachineFunction &MF) {
+bool AArch64FrameLowering::requiresSaveVG(const MachineFunction &MF) const {
const AArch64FunctionInfo *AFI = MF.getInfo<AArch64FunctionInfo>();
if (!AFI->needsDwarfUnwindInfo(MF) || !AFI->hasStreamingModeChanges())
return false;
@@ -1499,8 +1421,8 @@ static bool matchLibcall(const TargetLowering &TLI, const MachineOperand &MO,
StringRef(TLI.getLibcallName(LC)) == MO.getSymbolName();
}
-bool isVGInstruction(MachineBasicBlock::iterator MBBI,
- const TargetLowering &TLI) {
+bool AArch64FrameLowering::isVGInstruction(MachineBasicBlock::iterator MBBI,
+ const TargetLowering &TLI) const {
unsigned Opc = MBBI->getOpcode();
if (Opc == AArch64::CNTD_XPiI)
return true;
@@ -1514,15 +1436,12 @@ bool isVGInstruction(MachineBasicBlock::iterator MBBI,
return Opc == TargetOpcode::COPY;
}
-// Convert callee-save register save/restore instruction to do stack pointer
-// decrement/increment to allocate/deallocate the callee-save stack area by
-// converting store/load to use pre/post increment version.
-static MachineBasicBlock::iterator convertCalleeSaveRestoreToSPPrePostIncDec(
+MachineBasicBlock::iterator
+AArch64FrameLowering::convertCalleeSaveRestoreToSPPrePostIncDec(
MachineBasicBlock &MBB, MachineBasicBlock::iterator MBBI,
const DebugLoc &DL, const TargetInstrInfo *TII, int CSStackSizeInc,
bool NeedsWinCFI, bool *HasWinCFI, bool EmitCFI,
- MachineInstr::MIFlag FrameFlag = MachineInstr::FrameSetup,
- int CFAOffset = 0) {
+ MachineInstr::MIFlag FrameFlag, int CFAOffset) const {
unsigned NewOpc;
// If the function contains streaming mode changes, we expect instructions
@@ -1643,12 +1562,9 @@ static MachineBasicBlock::iterator convertCalleeSaveRestoreToSPPrePostIncDec(
return std::prev(MBB.erase(MBBI));
}
-// Fixup callee-save register save/restore instructions to take into account
-// combined SP bump by adding the local stack size to the stack offsets.
-static void fixupCalleeSaveRestoreStackOffset(MachineInstr &MI,
- uint64_t LocalStackSize,
- bool NeedsWinCFI,
- bool *HasWinCFI) {
+void AArch64FrameLowering::fixupCalleeSaveRestoreStackOffset(
+ MachineInstr &MI, uint64_t LocalStackSize, bool NeedsWinCFI,
+ bool *HasWinCFI) const {
if (AArch64InstrInfo::isSEHInstruction(MI))
return;
@@ -1703,7 +1619,8 @@ static unsigned getStackHazardSize(const MachineFunction &MF) {
}
// Convenience function to determine whether I is an SVE callee save.
-static bool IsSVECalleeSave(MachineBasicBlock::iterator I) {
+bool AArch64FrameLowering::isSVECalleeSave(
+ MachineBasicBlock::iterator I) const {
switch (I->getOpcode()) {
default:
return false;
@@ -1725,42 +1642,6 @@ static bool IsSVECalleeSave(MachineBasicBlock::iterator I) {
}
}
-static void emitShadowCallStackPrologue(const TargetInstrInfo &TII,
- MachineFunction &MF,
- MachineBasicBlock &MBB,
- MachineBasicBlock::iterator MBBI,
- const DebugLoc &DL, bool NeedsWinCFI,
- bool NeedsUnwindInfo) {
- // Shadow call stack prolog: str x30, [x18], #8
- BuildMI(MBB, MBBI, DL, TII.get(AArch64::STRXpost))
- .addReg(AArch64::X18, RegState::Define)
- .addReg(AArch64::LR)
- .addReg(AArch64::X18)
- .addImm(8)
- .setMIFlag(MachineInstr::FrameSetup);
-
- // This instruction also makes x18 live-in to the entry block.
- MBB.addLiveIn(AArch64::X18);
-
- if (NeedsWinCFI)
- BuildMI(MBB, MBBI, DL, TII.get(AArch64::SEH_Nop))
- .setMIFlag(MachineInstr::FrameSetup);
-
- if (NeedsUnwindInfo) {
- // Emit a CFI instruction that causes 8 to be subtracted from the value of
- // x18 when unwinding past this frame.
- static const char CFIInst[] = {
- dwarf::DW_CFA_val_expression,
- 18, // register
- 2, // length
- static_cast<char>(unsigned(dwarf::DW_OP_breg18)),
- static_cast<char>(-8) & 0x7f, // addend (sleb128)
- };
- CFIInstBuilder(MBB, MBBI, MachineInstr::FrameSetup)
- .buildEscape(StringRef(CFIInst, sizeof(CFIInst)));
- }
-}
-
static void emitShadowCallStackEpilogue(const TargetInstrInfo &TII,
MachineFunction &MF,
MachineBasicBlock &MBB,
@@ -1783,36 +1664,6 @@ static void emitShadowCallStackEpilogue(const TargetInstrInfo &TII,
.buildRestore(AArch64::X18);
}
-// Define the current CFA rule to use the provided FP.
-static void emitDefineCFAWithFP(MachineFunction &MF, MachineBasicBlock &MBB,
- MachineBasicBlock::iterator MBBI,
- unsigned FixedObject) {
- const AArch64Subtarget &STI = MF.getSubtarget<AArch64Subtarget>();
- const AArch64RegisterInfo *TRI = STI.getRegisterInfo();
- AArch64FunctionInfo *AFI = MF.getInfo<AArch64FunctionInfo>();
-
- const int OffsetToFirstCalleeSaveFromFP =
- AFI->getCalleeSaveBaseToFrameRecordOffset() -
- AFI->getCalleeSavedStackSize();
- Register FramePtr = TRI->getFrameRegister(MF);
- CFIInstBuilder(MBB, MBBI, MachineInstr::FrameSetup)
- .buildDefCFA(FramePtr, FixedObject - OffsetToFirstCalleeSaveFromFP);
-}
-
-#ifndef NDEBUG
-/// Collect live registers from the end of \p MI's parent up to (including) \p
-/// MI in \p LiveRegs.
-static void getLivePhysRegsUpTo(MachineInstr &MI, const TargetRegisterInfo &TRI,
- LivePhysRegs &LiveRegs) {
-
- MachineBasicBlock &MBB = *MI.getParent();
- LiveRegs.addLiveOuts(MBB);
- for (const MachineInstr &MI :
- reverse(make_range(MI.getIterator(), MBB.instr_end())))
- LiveRegs.stepBackward(MI);
-}
-#endif
-
void AArch64FrameLowering::emitPacRetPlusLeafHardening(
MachineFunction &MF) const {
const AArch64Subtarget &Subtarget = MF.getSubtarget<AArch64Subtarget>();
@@ -1848,616 +1699,8 @@ void AArch64FrameLowering::emitPacRetPlusLeafHardening(
void AArch64FrameLowering::emitPrologue(MachineFunction &MF,
MachineBasicBlock &MBB) const {
- MachineBasicBlock::iterator MBBI = MBB.begin();
- const MachineFrameInfo &MFI = MF.getFrameInfo();
- const Function &F = MF.getFunction();
- const AArch64Subtarget &Subtarget = MF.getSubtarget<AArch64Subtarget>();
- const AArch64RegisterInfo *RegInfo = Subtarget.getRegisterInfo();
- const TargetInstrInfo *TII = Subtarget.getInstrInfo();
-
- AArch64FunctionInfo *AFI = MF.getInfo<AArch64FunctionInfo>();
- bool EmitCFI = AFI->needsDwarfUnwindInfo(MF);
- bool EmitAsyncCFI = AFI->needsAsyncDwarfUnwindInfo(MF);
- bool HasFP = hasFP(MF);
- bool NeedsWinCFI = needsWinCFI(MF);
- bool HasWinCFI = false;
- auto Cleanup = make_scope_exit([&]() { MF.setHasWinCFI(HasWinCFI); });
-
- MachineBasicBlock::iterator End = MBB.end();
-#ifndef NDEBUG
- const TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
- // Collect live register from the end of MBB up to the start of the existing
- // frame setup instructions.
- MachineBasicBlock::iterator NonFrameStart = MBB.begin();
- while (NonFrameStart != End &&
- NonFrameStart->getFlag(MachineInstr::FrameSetup))
- ++NonFrameStart;
-
- LivePhysRegs LiveRegs(*TRI);
- if (NonFrameStart != MBB.end()) {
- getLivePhysRegsUpTo(*NonFrameStart, *TRI, LiveRegs);
- // Ignore registers used for stack management for now.
- LiveRegs.removeReg(AArch64::SP);
- LiveRegs.removeReg(AArch64::X19);
- LiveRegs.removeReg(AArch64::FP);
- LiveRegs.removeReg(AArch64::LR);
-
- // X0 will be clobbered by a call to __arm_get_current_vg in the prologue.
- // This is necessary to spill VG if required where SVE is unavailable, but
- // X0 is preserved around this call.
- if (requiresGetVGCall(MF))
- LiveRegs.removeReg(AArch64::X0);
- }
-
- auto VerifyClobberOnExit = make_scope_exit([&]() {
- if (NonFrameStart == MBB.end())
- return;
- // Check if any of the newly instructions clobber any of the live registers.
- for (MachineInstr &MI :
- make_range(MBB.instr_begin(), NonFrameStart->getIterator())) {
- for (auto &Op : MI.operands())
- if (Op.isReg() && Op.isDef())
- assert(!LiveRegs.contains(Op.getReg()) &&
- "live register clobbered by inserted prologue instructions");
- }
- });
-#endif
-
- bool IsFunclet = MBB.isEHFuncletEntry();
-
- // At this point, we're going to decide whether or not the function uses a
- // redzone. In most cases, the function doesn't have a redzone so let's
- // assume that's false and set it to true in the case that there's a redzone.
- AFI->setHasRedZone(false);
-
- // Debug location must be unknown since the first debug location is used
- // to determine the end of the prologue.
- DebugLoc DL;
-
- const auto &MFnI = *MF.getInfo<AArch64FunctionInfo>();
- if (MFnI.shouldSignReturnAddress(MF)) {
- // If pac-ret+leaf is in effect, PAUTH_PROLOGUE pseudo instructions
- // are inserted by emitPacRetPlusLeafHardening().
- if (!shouldSignReturnAddressEverywhere(MF)) {
- BuildMI(MBB, MBBI, DL, TII->get(AArch64::PAUTH_PROLOGUE))
- .setMIFlag(MachineInstr::FrameSetup);
- }
- // AArch64PointerAuth pass will insert SEH_PACSignLR
- HasWinCFI |= NeedsWinCFI;
- }
-
- if (MFnI.needsShadowCallStackPrologueEpilogue(MF)) {
- emitShadowCallStackPrologue(*TII, MF, MBB, MBBI, DL, NeedsWinCFI,
- MFnI.needsDwarfUnwindInfo(MF));
- HasWinCFI |= NeedsWinCFI;
- }
-
- if (EmitCFI && MFnI.isMTETagged()) {
- BuildMI(MBB, MBBI, DL, TII->get(AArch64::EMITMTETAGGED))
- .setMIFlag(MachineInstr::FrameSet...
[truncated]
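The visible hunks largely repeat one mechanical pattern: file-local `static` helpers (e.g. `needsWinCFI`, `requiresSaveVG`, `getSVEStackSize`) become `const` member functions of `AArch64FrameLowering`, so the new emitter declared in AArch64PrologueEpilogue.h can reach them through a shared reference instead of relying on file-scope statics. A minimal sketch of that transformation, with stand-in types rather than the real LLVM signatures:

```cpp
struct MachineFunction {}; // stand-in for llvm::MachineFunction

// Before: a file-local helper, callable only within this .cpp file.
// static bool needsWinCFI(const MachineFunction &MF);

// After: a const member declared on the frame-lowering class, so code in
// other files (here, the new prologue/epilogue emitter) can call it.
class FrameLoweringSketch {
public:
  bool needsWinCFI(const MachineFunction &MF) const;
};

bool FrameLoweringSketch::needsWinCFI(const MachineFunction &) const {
  // The real implementation checks the MCAsmInfo WinCFI flag and whether
  // the function needs an unwind table entry (see the hunk above).
  return false; // placeholder
}
```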
I think organizing like this makes sense. LGTM. (Note this conflicts with #156467.)
LLVM Buildbot has detected a new failure on a builder. Full details are available at: https://lab.llvm.org/buildbot/#/builders/88/builds/15941
LLVM Buildbot has detected a new failure on a builder. Full details are available at: https://lab.llvm.org/buildbot/#/builders/141/builds/11428
This is much smaller than llvm#157485 (as the epilogue code was already a more reasonable size); however, this change will allow some further tidy-up of methods shared between the prologue and epilogue code (in a follow-up patch).
I think we may be seeing an issue on Windows after this patch. I'm still bisecting, since 1) we've had other failures masking this, and 2) it's a very large blame range, but since this touches AArch64 and mentions Windows, I figured I'd ask if you've seen anything similar. Bot: https://ci.chromium.org/ui/p/fuchsia/builders/prod/clang-windows-x64/b8704080612626296337/overview
Thanks for letting me know. I've not seen that issue before, but I can investigate tomorrow.
Ah, bisection got a bit further, and I'm pretty sure it ruled out your patch already. I think it's something else, likely in the x86 codegen; it just happens to bork out on something for AArch64. I'll ping you again when I have something confirmed, but I think some of the changes that have forward fixes, like #158055, are more likely at this point. Anyway, sorry for the noise.