
Commit 4a9c26f

[move-function] Make MandatoryInlining always convert builtin.move -> mark_unresolved_move_addr.
To give a bit more information, the way the move function is currently implemented is:

1. SILGen emits a builtin "move" that is called within the function _move in the stdlib.
2. Today, if the final inlined type is address only, Mandatory Inlining inlines builtin "move" as mark_unresolved_move_addr; otherwise, if the inlined type is loadable, it performs a load [take] + move [diagnostic] + store [init].
3. In the diagnostic pipeline, before any mem optimizations have run, we run the move checker for addresses. This eliminates /all/ mark_unresolved_move_addr as part of emitting diagnostics. In order to make this work, we perform a small optimization before the checker runs that moves the mark_unresolved_move_addr from temporary alloc_stacks onto the true base underlying address we are trying to move. This optimization is necessary since _move is generic, so SILGen often emits a temporary that we do not want.
4. Then, after we have run the guaranteed mem optimizations, we run the object based move checker, emitting diagnostics.

This PR changes the scheme above to the following:

1. SILGen emits a builtin "move" that is called within the function _move in the stdlib.
2. Mandatory Inlining always inlines builtin "move" as mark_unresolved_move_addr (a rough sketch of this lowering follows this description).
3. In the diagnostic pipeline, before we have run any mem optimizations and before we have run the actual move address checker, we massage the IR as we do above, but in a separate pass where, in addition, we try to match this pattern:

```
%temporary = alloc_stack $LoadableType
store %1 to [init] %temporary : $*LoadableType
mark_unresolved_move_addr %temporary to %otherAddr : $*LoadableType
destroy_addr %temporary : $*LoadableType
```

and transform it into:

```
%temporary = alloc_stack $LoadableType
%2 = move_value [allows_diagnostics] %1 : $LoadableType
store %2 to [init] %temporary : $*LoadableType
destroy_addr %temporary : $*LoadableType
```

ensuring that the object move checker will handle this.
4. Then, after we have run the guaranteed mem optimizations, we run the object based move checker, emitting diagnostics.
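For context, here is a minimal C++ sketch of the step-2 lowering, i.e. rewriting an inlined builtin "move" into a mark_unresolved_move_addr. It is not code from this patch: the helper name is invented, and the builder entry point `createMarkUnresolvedMoveAddr` is assumed from the instruction's name.

```cpp
// Rough sketch only; not the actual MandatoryInlining code. The helper name is
// hypothetical, and createMarkUnresolvedMoveAddr is assumed to be the
// SILBuilder entry point for the instruction.
#include "swift/SIL/SILBuilder.h"
#include "swift/SIL/SILInstruction.h"

using namespace swift;

static void lowerInlinedBuiltinMove(BuiltinInst *builtinMove, SILValue srcAddr,
                                    SILValue destAddr) {
  SILBuilderWithScope builder(builtinMove);
  // mark_unresolved_move_addr behaves like a copy_addr [init] of srcAddr into
  // destAddr until the move checkers either resolve it or emit a diagnostic.
  builder.createMarkUnresolvedMoveAddr(builtinMove->getLoc(), srcAddr,
                                       destAddr);
  builtinMove->eraseFromParent();
}
```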

7 files changed: +266 −149 lines

include/swift/SIL/SILInstruction.h

Lines changed: 3 additions & 0 deletions

```diff
@@ -4028,6 +4028,9 @@ class StoreInst
   SILValue getSrc() const { return Operands[Src].get(); }
   SILValue getDest() const { return Operands[Dest].get(); }
 
+  void setSrc(SILValue V) { Operands[Src].set(V); }
+  void setDest(SILValue V) { Operands[Dest].set(V); }
+
   ArrayRef<Operand> getAllOperands() const { return Operands.asArray(); }
   MutableArrayRef<Operand> getAllOperands() { return Operands.asArray(); }
 
```
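The new setters are what let the canonicalization pass added in this commit retarget an existing store [init] in place instead of deleting and recreating it. Below is a minimal sketch of that use, mirroring the rewrite at the end of tryConvertSimpleMoveFromAllocStackTemporary later in this diff; the free-standing helper is illustrative, not part of the patch.

```cpp
// Illustrative helper (not part of the patch) showing the intended use of the
// new StoreInst::setSrc: wrap the store's original source in a
// move_value [allows_diagnostics] and point the store at the moved value.
#include "swift/SIL/SILBuilder.h"
#include "swift/SIL/SILInstruction.h"

using namespace swift;

static void rewriteStoreAsDiagnosticMove(StoreInst *si) {
  SILBuilderWithScope builder(si);
  // Create the move of the store's old source right before the store.
  auto *moved = builder.createMoveValue(si->getLoc(), si->getSrc());
  moved->setAllowsDiagnostics(true);
  // Retarget the Src operand in place; the store instruction itself is reused.
  si->setSrc(moved);
}
```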

include/swift/SILOptimizer/PassManager/Passes.def

Lines changed: 3 additions & 0 deletions

```diff
@@ -430,6 +430,9 @@ PASS(LexicalLifetimeEliminator, "sil-lexical-lifetime-eliminator",
 PASS(MoveKillsCopyableAddressesChecker, "sil-move-kills-copyable-addresses-checker",
      "Pass that checks that any copyable (non-move only) address that is passed "
      "to _move do not have any uses later than the _move")
+PASS(MoveFunctionCanonicalization, "sil-move-function-canon",
+     "Pass that canonicalizes certain parts of the IR before we perform move "
+     "function checking.")
 PASS(PruneVTables, "prune-vtables",
      "Mark class methods that do not require vtable dispatch")
 PASS_RANGE(AllPasses, AADumper, PruneVTables)
```
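The Passes.def entry only declares the pass; it still has to be scheduled in the mandatory diagnostic pipeline ahead of the address-based move checker, as the commit message describes. That scheduling is not shown in this diff, so the following is a hedged sketch: the wrapper function name and exact placement are assumptions, though each PASS entry does generate a matching add<Name>() helper on SILPassPipelinePlan.

```cpp
// Hypothetical pipeline excerpt; the surrounding function and exact ordering
// are assumptions based on the commit message, not part of this diff.
#include "swift/SILOptimizer/PassManager/PassPipeline.h"

using namespace swift;

static void addMoveCheckingPasses(SILPassPipelinePlan &P) {
  // Canonicalize mark_unresolved_move_addr placement first (this commit).
  P.addMoveFunctionCanonicalization();
  // The address-based move checker consumes the canonicalized IR.
  P.addMoveKillsCopyableAddressesChecker();
  // ...guaranteed mem optimizations run between the two checkers...
  // The object-based move checker runs afterwards.
  P.addMoveKillsCopyableValuesChecker();
}
```

Once the PASS entry exists, the pass can presumably also be exercised in isolation in SIL tests through its command-line tag, e.g. `sil-opt -sil-move-function-canon`.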

lib/SILOptimizer/Mandatory/CMakeLists.txt

Lines changed: 3 additions & 2 deletions

```diff
@@ -18,9 +18,10 @@ target_sources(swiftSILOptimizer PRIVATE
   LexicalLifetimeEliminator.cpp
   LowerHopToActor.cpp
   MandatoryInlining.cpp
-  MoveOnlyChecker.cpp
-  MoveKillsCopyableValuesChecker.cpp
+  MoveFunctionCanonicalization.cpp
   MoveKillsCopyableAddressesChecker.cpp
+  MoveKillsCopyableValuesChecker.cpp
+  MoveOnlyChecker.cpp
   NestedSemanticFunctionCheck.cpp
   OptimizeHopToExecutor.cpp
   PerformanceDiagnostics.cpp
```
lib/SILOptimizer/Mandatory/MoveFunctionCanonicalization.cpp (new file)

Lines changed: 248 additions & 0 deletions

```cpp
//===--- MoveFunctionCanonicalization.cpp ---------------------------------===//
//
// This source file is part of the Swift.org open source project
//
// Copyright (c) 2014 - 2021 Apple Inc. and the Swift project authors
// Licensed under Apache License v2.0 with Runtime Library Exception
//
// See https://swift.org/LICENSE.txt for license information
// See https://swift.org/CONTRIBUTORS.txt for the list of Swift project authors
//
//===----------------------------------------------------------------------===//

#define DEBUG_TYPE "sil-move-function-canonicalization"

#include "swift/AST/DiagnosticsSIL.h"
#include "swift/Basic/Defer.h"
#include "swift/Basic/FrozenMultiMap.h"
#include "swift/SIL/BasicBlockBits.h"
#include "swift/SIL/BasicBlockDatastructures.h"
#include "swift/SIL/Consumption.h"
#include "swift/SIL/DebugUtils.h"
#include "swift/SIL/InstructionUtils.h"
#include "swift/SIL/MemAccessUtils.h"
#include "swift/SIL/OwnershipUtils.h"
#include "swift/SIL/SILArgument.h"
#include "swift/SIL/SILBuilder.h"
#include "swift/SIL/SILFunction.h"
#include "swift/SIL/SILInstruction.h"
#include "swift/SIL/SILUndef.h"
#include "swift/SILOptimizer/Analysis/AliasAnalysis.h"
#include "swift/SILOptimizer/Analysis/ClosureScope.h"
#include "swift/SILOptimizer/PassManager/Transforms.h"
#include "swift/SILOptimizer/Utils/CanonicalOSSALifetime.h"
#include "llvm/ADT/PointerEmbeddedInt.h"
#include "llvm/ADT/PointerSumType.h"

using namespace swift;

//===----------------------------------------------------------------------===//
//                                  Utility
//===----------------------------------------------------------------------===//

/// Attempts to perform several small optimizations to set up both the address
/// and object checkers. Returns true if we made a change to the IR.
static bool tryConvertSimpleMoveFromAllocStackTemporary(
    MarkUnresolvedMoveAddrInst *markMoveAddr, AliasAnalysis *aa) {
  LLVM_DEBUG(llvm::dbgs() << "Trying to fix up: " << *markMoveAddr);

  // We need a non-lexical alloc_stack as our source.
  auto *asi = dyn_cast<AllocStackInst>(markMoveAddr->getSrc());
  if (!asi || asi->isLexical()) {
    LLVM_DEBUG(llvm::dbgs()
               << "    Source isn't an alloc_stack or is lexical... Bailing!\n");
    return false;
  }

  DestroyAddrInst *dai = nullptr;
  CopyAddrInst *cai = nullptr;
  StoreInst *si = nullptr;
  for (auto *use : asi->getUses()) {
    auto *user = use->getUser();
    LLVM_DEBUG(llvm::dbgs() << "    Visiting User: " << *user);

    // If we find our own instruction or a dealloc_stack, just skip.
    if (user == markMoveAddr || isa<DeallocStackInst>(user)) {
      LLVM_DEBUG(
          llvm::dbgs()
          << "        Found our original inst or a dealloc_stack... Ok!\n");
      continue;
    }

    if (auto *destroyAddrInst = dyn_cast<DestroyAddrInst>(user)) {
      if (dai)
        return false;
      dai = destroyAddrInst;
      continue;
    }

    if (auto *newCAI = dyn_cast<CopyAddrInst>(user)) {
      LLVM_DEBUG(llvm::dbgs()
                 << "        Found copy_addr... checking if legal...\n");
      // We require that our copy_addr be an init into our temp and in the same
      // block as markMoveAddr.
      if (newCAI->getDest() == asi && bool(newCAI->isInitializationOfDest()) &&
          !bool(newCAI->isTakeOfSrc()) &&
          newCAI->getParent() == markMoveAddr->getParent()) {
        if (cai || si)
          return false;
        cai = newCAI;
        continue;
      }
    }

    if (auto *newSI = dyn_cast<StoreInst>(user)) {
      LLVM_DEBUG(llvm::dbgs()
                 << "        Found store... checking if legal...\n");
      // We require that our store be an init into our temp and in the same
      // block as markMoveAddr.
      if (newSI->getDest() == asi &&
          newSI->getOwnershipQualifier() == StoreOwnershipQualifier::Init &&
          newSI->getParent() == markMoveAddr->getParent()) {
        if (cai || si)
          return false;
        si = newSI;
        continue;
      }
    }

    // If we find an instruction that we do not know about, we can't optimize,
    // so bail.
    LLVM_DEBUG(
        llvm::dbgs()
        << "        Found instruction we did not understand! Bailing!\n");
    return false;
  }

  // If we did not find an init (copy_addr or store) or a destroy_addr, just
  // bail.
  if (!(cai || si) || !dai) {
    LLVM_DEBUG(llvm::dbgs()
               << "        Did not find a single init! Bailing!\n");
    return false;
  }

  assert(bool(cai) != bool(si));

  // Otherwise, let's walk from the init to markMoveAddr and make sure there
  // aren't any instructions with side effects in between them.
  //
  // NOTE: We know that the init must be before the markMoveAddr in the block
  // since otherwise we would be moving from uninitialized memory.
  SILInstruction *init = nullptr;
  if (cai)
    init = cai;
  else
    init = si;

  auto range = llvm::make_range(std::next(init->getIterator()),
                                markMoveAddr->getIterator());
  if (llvm::any_of(range, [&](SILInstruction &iter) {
        if (!iter.mayHaveSideEffects()) {
          return false;
        }

        if (auto *dvi = dyn_cast<DestroyAddrInst>(&iter)) {
          if (aa->isNoAlias(dvi->getOperand(), asi)) {
            // We are going to be extending the lifetime of our underlying
            // value, not shrinking it, so we can ignore destroy_addr on other
            // non-aliasing values.
            return false;
          }
        }

        // Ignore end of scope markers with side effects.
        if (isEndOfScopeMarker(&iter)) {
          return false;
        }

        LLVM_DEBUG(llvm::dbgs()
                   << "        Found side-effect inst... Bailing!: " << iter);
        return true;
      }))
    return false;

  LLVM_DEBUG(llvm::dbgs() << "    Success! Performing optimization!\n");
  // Ok, we can perform our optimization! Change move_addr's source to be the
  // original copy_addr's src and add uses of the stack location to an
  // instruction deleter. We will eliminate them later.
  if (cai) {
    markMoveAddr->setSrc(cai->getSrc());
    return true;
  }

  // If we have a store [init], see if our src is a load [copy] from an
  // alloc_stack that is a lexical var. In this case, we want to move our
  // mark_unresolved_move_addr onto that lexical var. This pattern occurs due
  // to SILGen always loading loadable values from memory when retrieving an
  // RValue. Calling _move then, since _move is generic, forces the value to
  // be re-materialized into an alloc_stack. In that case, remembering that
  // mark_unresolved_move_addr acts like a copy_addr [init], we try to move
  // the MUMA onto the original lexical alloc_stack.
  //
  // TODO: Implement.

  // If we do not have a load [copy], transform this mark_unresolved_move_addr
  // into a move_value [diagnostic] + store [init]. Predictable mem opts is
  // smart enough to handle this case and promote away loads from the
  // allocation. This runs before the value move checker runs.
  SILBuilderWithScope builder(si);
  auto *newValue = builder.createMoveValue(si->getLoc(), si->getSrc());
  newValue->setAllowsDiagnostics(true);
  si->setSrc(newValue);
  markMoveAddr->eraseFromParent();
  return true;
}

//===----------------------------------------------------------------------===//
//                            Top Level Entrypoint
//===----------------------------------------------------------------------===//

namespace {

class MoveFunctionCanonicalization : public SILFunctionTransform {
  void run() override {
    auto *fn = getFunction();
    auto &astContext = fn->getASTContext();

    // If we do not have experimental move only enabled, do not emit
    // diagnostics.
    if (!astContext.LangOpts.EnableExperimentalMoveOnly)
      return;

    // Don't rerun diagnostics on deserialized functions.
    if (getFunction()->wasDeserializedCanonical())
      return;

    bool madeChange = false;

    assert(fn->getModule().getStage() == SILStage::Raw &&
           "Should only run on Raw SIL");

    auto *aa = getAnalysis<AliasAnalysis>(fn);

    for (auto &block : *fn) {
      for (auto ii = block.begin(), ie = block.end(); ii != ie;) {
        auto *inst = &*ii;
        ++ii;

        // See if we see a mark_unresolved_move_addr inst from a simple
        // temporary and move it onto the temporary's source. This ensures that
        // the mark_unresolved_move_addr is always on the operand regardless of
        // whether the caller materialized the address into a temporary.
        if (auto *markMoveAddr = dyn_cast<MarkUnresolvedMoveAddrInst>(inst)) {
          madeChange |=
              tryConvertSimpleMoveFromAllocStackTemporary(markMoveAddr, aa);
          continue;
        }
      }
    }

    if (madeChange) {
      invalidateAnalysis(SILAnalysis::InvalidationKind::Instructions);
    }
  }
};

} // anonymous namespace

SILTransform *swift::createMoveFunctionCanonicalization() {
  return new MoveFunctionCanonicalization();
}
```
