[SLP]Improve vectorization of gathered loads. #89129
Conversation
Created using spr 1.3.5
@llvm/pr-subscribers-llvm-transforms @llvm/pr-subscribers-backend-systemz

Author: Alexey Bataev (alexey-bataev)

Changes: When building the vectorization graph, the compiler may end up with consecutive loads in different branches, which end up being gathered. These loads can be scanned and loaded as a final vectorized load, then reshuffled between the branches to avoid extra scalar loads.

Part of D57059

Differential Revision: https://reviews.llvm.org/D105986

Patch is 430.05 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/89129.diff

53 Files Affected:
diff --git a/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp b/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp
index 806e8085038b35..6730d0c4db7fea 100644
--- a/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp
+++ b/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp
@@ -1133,6 +1133,8 @@ class BoUpSLP {
MultiNodeScalars.clear();
MustGather.clear();
EntryToLastInstruction.clear();
+ GatheredLoads.clear();
+ GatheredLoadsEntriesFirst = NoGatheredLoads;
ExternalUses.clear();
ExternalUsesAsGEPs.clear();
for (auto &Iter : BlocksSchedules) {
@@ -1170,8 +1172,9 @@ class BoUpSLP {
/// identity order is important, or the actual order.
/// \param TopToBottom If true, include the order of vectorized stores and
/// insertelement nodes, otherwise skip them.
- std::optional<OrdersType> getReorderingData(const TreeEntry &TE,
- bool TopToBottom);
+ std::optional<OrdersType> getReorderingData(
+ const TreeEntry &TE, bool TopToBottom,
+ DenseMap<const TreeEntry *, TreeEntry *> &ScatterVectorizeToReorder);
/// Reorders the current graph to the most profitable order starting from the
/// root node to the leaf nodes. The best order is chosen only from the nodes
@@ -2558,6 +2561,11 @@ class BoUpSLP {
/// be beneficial even the tree height is tiny.
bool isFullyVectorizableTinyTree(bool ForReduction) const;
+ /// Run through the list of all gathered loads in the graph and try to find
+ /// vector loads/masked gathers instead of regular gathers. Later these loads
+ /// are reshuffled to build final gathered nodes.
+ void tryToVectorizeGatheredLoads();
+
/// Reorder commutative or alt operands to get better probability of
/// generating vectorized code.
static void reorderInputsAccordingToOpcode(ArrayRef<Value *> VL,
@@ -3010,6 +3018,14 @@ class BoUpSLP {
CastMaxMinBWSizes =
std::make_pair(std::numeric_limits<unsigned>::max(), 1);
MustGather.insert(VL.begin(), VL.end());
+ if (GatheredLoadsEntriesFirst == NoGatheredLoads ||
+ Last->Idx < GatheredLoadsEntriesFirst || UserTreeIdx.UserTE ||
+ S.getOpcode() != Instruction::Load) {
+ // Build a map for gathered scalars to the nodes where they are used.
+ for (Value *V : VL)
+ if (!isConstant(V))
+ ValueToGatherNodes.try_emplace(V).first->getSecond().insert(Last);
+ }
}
if (UserTreeIdx.UserTE) {
@@ -3085,6 +3101,14 @@ class BoUpSLP {
DenseMap<Value *, SmallPtrSet<const TreeEntry *, 4>>;
ValueToGatherNodesMap ValueToGatherNodes;
+ /// A list of loads to be gathered during the vectorization process. We can
+ /// try to vectorize them at the end, if profitable.
+ SmallVector<SmallVector<std::pair<LoadInst *, int>>> GatheredLoads;
+
+ /// The index of the first gathered load entry in the VectorizeTree.
+ constexpr static int NoGatheredLoads = -1;
+ int GatheredLoadsEntriesFirst = NoGatheredLoads;
+
/// This POD struct describes one external user in the vectorized tree.
struct ExternalUser {
ExternalUser(Value *S, llvm::User *U, int L)
@@ -4604,12 +4628,17 @@ static bool areTwoInsertFromSameBuildVector(
return false;
}
-std::optional<BoUpSLP::OrdersType>
-BoUpSLP::getReorderingData(const TreeEntry &TE, bool TopToBottom) {
+std::optional<BoUpSLP::OrdersType> BoUpSLP::getReorderingData(
+ const TreeEntry &TE, bool TopToBottom,
+ DenseMap<const TreeEntry *, TreeEntry *> &ScatterVectorizeToReorder) {
// FIXME: Vectorizing is not supported yet for non-power-of-2 ops.
if (TE.isNonPowOf2Vec())
return std::nullopt;
+ if (GatheredLoadsEntriesFirst != NoGatheredLoads &&
+ TE.Idx >= GatheredLoadsEntriesFirst && TE.UserTreeIndices.empty() &&
+ &TE != VectorizableTree.front().get())
+ return std::nullopt;
// No need to reorder if need to shuffle reuses, still need to shuffle the
// node.
if (!TE.ReuseShuffleIndices.empty()) {
@@ -4781,6 +4810,7 @@ BoUpSLP::getReorderingData(const TreeEntry &TE, bool TopToBottom) {
return std::nullopt; // No need to reorder.
return std::move(ResOrder);
}
+ bool LoadsScatterVectorize = false;
if (TE.State == TreeEntry::NeedToGather && !TE.isAltShuffle() &&
allSameType(TE.Scalars)) {
// TODO: add analysis of other gather nodes with extractelement
@@ -4843,8 +4873,32 @@ BoUpSLP::getReorderingData(const TreeEntry &TE, bool TopToBottom) {
if (TE.Scalars.size() >= 4)
if (std::optional<OrdersType> Order = findPartiallyOrderedLoads(TE))
return Order;
- if (std::optional<OrdersType> CurrentOrder = findReusedOrderedScalars(TE))
+ if (TE.getOpcode() == Instruction::Load) {
+ SmallVector<Value *> PointerOps;
+ OrdersType CurrentOrder;
+ LoadsState Res = canVectorizeLoads(TE.Scalars, TE.Scalars.front(),
+ CurrentOrder, PointerOps);
+ if (Res == LoadsState::Vectorize || Res == LoadsState::StridedVectorize)
+ return std::move(CurrentOrder);
+ LoadsScatterVectorize = Res == LoadsState::ScatterVectorize;
+ }
+ if (std::optional<OrdersType> CurrentOrder = findReusedOrderedScalars(TE)) {
+ if (LoadsScatterVectorize) {
+ if (TreeEntry *ScatterVectorTE = getTreeEntry(TE.Scalars.front());
+ ScatterVectorTE &&
+ ScatterVectorTE->Idx >= GatheredLoadsEntriesFirst &&
+ ScatterVectorTE->UserTreeIndices.empty() &&
+ ScatterVectorTE->State == TreeEntry::ScatterVectorize &&
+ ScatterVectorTE->Scalars.size() == TE.Scalars.size() &&
+ all_of(TE.Scalars, [&](Value *V) {
+ return getTreeEntry(V) == ScatterVectorTE;
+ })) {
+ ScatterVectorizeToReorder.try_emplace(&TE, ScatterVectorTE);
+ return std::nullopt;
+ }
+ }
return CurrentOrder;
+ }
}
return std::nullopt;
}
@@ -4930,73 +4984,83 @@ void BoUpSLP::reorderTopToBottom() {
// Maps a TreeEntry to the reorder indices of external users.
DenseMap<const TreeEntry *, SmallVector<OrdersType, 1>>
ExternalUserReorderMap;
+ // Nodes with loads masked gathering built out of gathered loads that should
+ // be reordered to avoid extra shuffles.
+ DenseMap<const TreeEntry *, TreeEntry *> ScatterVectorizeToReorder;
// Find all reorderable nodes with the given VF.
// Currently the are vectorized stores,loads,extracts + some gathering of
// extracts.
- for_each(VectorizableTree, [&, &TTIRef = *TTI](
- const std::unique_ptr<TreeEntry> &TE) {
- // Look for external users that will probably be vectorized.
- SmallVector<OrdersType, 1> ExternalUserReorderIndices =
- findExternalStoreUsersReorderIndices(TE.get());
- if (!ExternalUserReorderIndices.empty()) {
- VFToOrderedEntries[TE->getVectorFactor()].insert(TE.get());
- ExternalUserReorderMap.try_emplace(TE.get(),
- std::move(ExternalUserReorderIndices));
- }
-
- // Patterns like [fadd,fsub] can be combined into a single instruction in
- // x86. Reordering them into [fsub,fadd] blocks this pattern. So we need
- // to take into account their order when looking for the most used order.
- if (TE->isAltShuffle()) {
- VectorType *VecTy =
- FixedVectorType::get(TE->Scalars[0]->getType(), TE->Scalars.size());
- unsigned Opcode0 = TE->getOpcode();
- unsigned Opcode1 = TE->getAltOpcode();
- // The opcode mask selects between the two opcodes.
- SmallBitVector OpcodeMask(TE->Scalars.size(), false);
- for (unsigned Lane : seq<unsigned>(0, TE->Scalars.size()))
- if (cast<Instruction>(TE->Scalars[Lane])->getOpcode() == Opcode1)
- OpcodeMask.set(Lane);
- // If this pattern is supported by the target then we consider the order.
- if (TTIRef.isLegalAltInstr(VecTy, Opcode0, Opcode1, OpcodeMask)) {
- VFToOrderedEntries[TE->getVectorFactor()].insert(TE.get());
- AltShufflesToOrders.try_emplace(TE.get(), OrdersType());
- }
- // TODO: Check the reverse order too.
- }
+ for_each(
+ ArrayRef(VectorizableTree)
+ .drop_back(GatheredLoadsEntriesFirst == NoGatheredLoads
+ ? 0
+ : VectorizableTree.size() - GatheredLoadsEntriesFirst),
+ [&, &TTIRef = *TTI](const std::unique_ptr<TreeEntry> &TE) {
+ // Look for external users that will probably be vectorized.
+ SmallVector<OrdersType, 1> ExternalUserReorderIndices =
+ findExternalStoreUsersReorderIndices(TE.get());
+ if (!ExternalUserReorderIndices.empty()) {
+ VFToOrderedEntries[TE->getVectorFactor()].insert(TE.get());
+ ExternalUserReorderMap.try_emplace(
+ TE.get(), std::move(ExternalUserReorderIndices));
+ }
- if (std::optional<OrdersType> CurrentOrder =
- getReorderingData(*TE, /*TopToBottom=*/true)) {
- // Do not include ordering for nodes used in the alt opcode vectorization,
- // better to reorder them during bottom-to-top stage. If follow the order
- // here, it causes reordering of the whole graph though actually it is
- // profitable just to reorder the subgraph that starts from the alternate
- // opcode vectorization node. Such nodes already end-up with the shuffle
- // instruction and it is just enough to change this shuffle rather than
- // rotate the scalars for the whole graph.
- unsigned Cnt = 0;
- const TreeEntry *UserTE = TE.get();
- while (UserTE && Cnt < RecursionMaxDepth) {
- if (UserTE->UserTreeIndices.size() != 1)
- break;
- if (all_of(UserTE->UserTreeIndices, [](const EdgeInfo &EI) {
- return EI.UserTE->State == TreeEntry::Vectorize &&
- EI.UserTE->isAltShuffle() && EI.UserTE->Idx != 0;
- }))
- return;
- UserTE = UserTE->UserTreeIndices.back().UserTE;
- ++Cnt;
- }
- VFToOrderedEntries[TE->getVectorFactor()].insert(TE.get());
- if (!(TE->State == TreeEntry::Vectorize ||
- TE->State == TreeEntry::StridedVectorize) ||
- !TE->ReuseShuffleIndices.empty())
- GathersToOrders.try_emplace(TE.get(), *CurrentOrder);
- if (TE->State == TreeEntry::Vectorize &&
- TE->getOpcode() == Instruction::PHI)
- PhisToOrders.try_emplace(TE.get(), *CurrentOrder);
- }
- });
+ // Patterns like [fadd,fsub] can be combined into a single instruction
+ // in x86. Reordering them into [fsub,fadd] blocks this pattern. So we
+ // need to take into account their order when looking for the most used
+ // order.
+ if (TE->isAltShuffle()) {
+ VectorType *VecTy = FixedVectorType::get(TE->Scalars[0]->getType(),
+ TE->Scalars.size());
+ unsigned Opcode0 = TE->getOpcode();
+ unsigned Opcode1 = TE->getAltOpcode();
+ // The opcode mask selects between the two opcodes.
+ SmallBitVector OpcodeMask(TE->Scalars.size(), false);
+ for (unsigned Lane : seq<unsigned>(0, TE->Scalars.size()))
+ if (cast<Instruction>(TE->Scalars[Lane])->getOpcode() == Opcode1)
+ OpcodeMask.set(Lane);
+ // If this pattern is supported by the target then we consider the
+ // order.
+ if (TTIRef.isLegalAltInstr(VecTy, Opcode0, Opcode1, OpcodeMask)) {
+ VFToOrderedEntries[TE->getVectorFactor()].insert(TE.get());
+ AltShufflesToOrders.try_emplace(TE.get(), OrdersType());
+ }
+ // TODO: Check the reverse order too.
+ }
+
+ if (std::optional<OrdersType> CurrentOrder = getReorderingData(
+ *TE, /*TopToBottom=*/true, ScatterVectorizeToReorder)) {
+ // Do not include ordering for nodes used in the alt opcode
+ // vectorization, better to reorder them during bottom-to-top stage.
+ // If follow the order here, it causes reordering of the whole graph
+ // though actually it is profitable just to reorder the subgraph that
+ // starts from the alternate opcode vectorization node. Such nodes
+ // already end-up with the shuffle instruction and it is just enough
+ // to change this shuffle rather than rotate the scalars for the whole
+ // graph.
+ unsigned Cnt = 0;
+ const TreeEntry *UserTE = TE.get();
+ while (UserTE && Cnt < RecursionMaxDepth) {
+ if (UserTE->UserTreeIndices.size() != 1)
+ break;
+ if (all_of(UserTE->UserTreeIndices, [](const EdgeInfo &EI) {
+ return EI.UserTE->State == TreeEntry::Vectorize &&
+ EI.UserTE->isAltShuffle() && EI.UserTE->Idx != 0;
+ }))
+ return;
+ UserTE = UserTE->UserTreeIndices.back().UserTE;
+ ++Cnt;
+ }
+ VFToOrderedEntries[TE->getVectorFactor()].insert(TE.get());
+ if (!(TE->State == TreeEntry::Vectorize ||
+ TE->State == TreeEntry::StridedVectorize) ||
+ !TE->ReuseShuffleIndices.empty())
+ GathersToOrders.try_emplace(TE.get(), *CurrentOrder);
+ if (TE->State == TreeEntry::Vectorize &&
+ TE->getOpcode() == Instruction::PHI)
+ PhisToOrders.try_emplace(TE.get(), *CurrentOrder);
+ }
+ });
// Reorder the graph nodes according to their vectorization factor.
for (unsigned VF = VectorizableTree.front()->getVectorFactor(); VF > 1;
@@ -5126,6 +5190,10 @@ void BoUpSLP::reorderTopToBottom() {
});
// Do an actual reordering, if profitable.
for (std::unique_ptr<TreeEntry> &TE : VectorizableTree) {
+ // Do not reorder gathered loads.
+ if (GatheredLoadsEntriesFirst != NoGatheredLoads &&
+ TE->Idx >= GatheredLoadsEntriesFirst)
+ continue;
// Just do the reordering for the nodes with the given VF.
if (TE->Scalars.size() != VF) {
if (TE->ReuseShuffleIndices.size() == VF) {
@@ -5242,12 +5310,20 @@ void BoUpSLP::reorderBottomToTop(bool IgnoreReorder) {
// Currently the are vectorized loads,extracts without alternate operands +
// some gathering of extracts.
SmallVector<TreeEntry *> NonVectorized;
- for (const std::unique_ptr<TreeEntry> &TE : VectorizableTree) {
+ // Nodes with loads masked gathering built out of gathered loads that should
+ // be reordered to avoid extra shuffles.
+ DenseMap<const TreeEntry *, TreeEntry *> ScatterVectorizeToReorder;
+ for (const std::unique_ptr<TreeEntry> &TE :
+ ArrayRef(VectorizableTree)
+ .drop_back(GatheredLoadsEntriesFirst == NoGatheredLoads
+ ? 0
+ : VectorizableTree.size() -
+ GatheredLoadsEntriesFirst)) {
if (TE->State != TreeEntry::Vectorize &&
TE->State != TreeEntry::StridedVectorize)
NonVectorized.push_back(TE.get());
- if (std::optional<OrdersType> CurrentOrder =
- getReorderingData(*TE, /*TopToBottom=*/false)) {
+ if (std::optional<OrdersType> CurrentOrder = getReorderingData(
+ *TE, /*TopToBottom=*/false, ScatterVectorizeToReorder)) {
OrderedEntries.insert(TE.get());
if (!(TE->State == TreeEntry::Vectorize ||
TE->State == TreeEntry::StridedVectorize) ||
@@ -5284,6 +5360,8 @@ void BoUpSLP::reorderBottomToTop(bool IgnoreReorder) {
// search. The graph currently does not provide this dependency directly.
for (EdgeInfo &EI : TE->UserTreeIndices) {
TreeEntry *UserTE = EI.UserTE;
+ if (!UserTE)
+ continue;
auto It = Users.find(UserTE);
if (It == Users.end())
It = Users.insert({UserTE, {}}).first;
@@ -5300,6 +5378,9 @@ void BoUpSLP::reorderBottomToTop(bool IgnoreReorder) {
return Data1.first->Idx > Data2.first->Idx;
});
for (auto &Data : UsersVec) {
+ if (GatheredLoadsEntriesFirst != NoGatheredLoads &&
+ Data.first->Idx >= GatheredLoadsEntriesFirst)
+ llvm_unreachable("Gathered loads nodes must not be reordered.");
// Check that operands are used only in the User node.
SmallVector<TreeEntry *> GatherOps;
if (!canReorderOperands(Data.first, Data.second, NonVectorized,
@@ -5327,7 +5408,8 @@ void BoUpSLP::reorderBottomToTop(bool IgnoreReorder) {
const auto Order = [&]() -> const OrdersType {
if (OpTE->State == TreeEntry::NeedToGather ||
!OpTE->ReuseShuffleIndices.empty())
- return getReorderingData(*OpTE, /*TopToBottom=*/false)
+ return getReorderingData(*OpTE, /*TopToBottom=*/false,
+ ScatterVectorizeToReorder)
.value_or(OrdersType(1));
return OpTE->ReorderIndices;
}();
@@ -5366,7 +5448,8 @@ void BoUpSLP::reorderBottomToTop(bool IgnoreReorder) {
return true;
if (TE->State == TreeEntry::NeedToGather) {
if (GathersToOrders.contains(TE))
- return !getReorderingData(*TE, /*TopToBottom=*/false)
+ return !getReorderingData(*TE, /*TopToBottom=*/false,
+ ScatterVectorizeToReorder)
.value_or(OrdersType(1))
.empty();
return true;
@@ -5515,6 +5598,71 @@ void BoUpSLP::reorderBottomToTop(bool IgnoreReorder) {
if (IgnoreReorder && !VectorizableTree.front()->ReorderIndices.empty() &&
VectorizableTree.front()->ReuseShuffleIndices.empty())
VectorizableTree.front()->ReorderIndices.clear();
+ // Reorder masked gather nodes built out of gathered loads.
+ SmallPtrSet<const TreeEntry *, 4> Processed;
+ for (const auto &SVLoadsData : ScatterVectorizeToReorder) {
+ if (!Processed.insert(SVLoadsData.second).second)
+ continue;
+ std::optional<OrdersType> CurrentOrder =
+ findReusedOrderedScalars(*SVLoadsData.first);
+ assert(CurrentOrder && "Expected order.");
+ if (CurrentOrder->empty() || !SVLoadsData.second->UserTreeIndices.empty())
+ continue;
+ SmallVector<TreeEntry *> Operands;
+ SmallVector<const TreeEntry *> Worklist(1, SVLoadsData.second);
+ while (!Worklist.empty()) {
+ const TreeEntry *CurrentEntry = Worklist.pop_back_val();
+ for (unsigned I = 0, E = CurrentEntry->getNumOperands(); I < E; ++I) {
+ if (CurrentEntry->getOpcode() == Instruction::ExtractElement)
+ continue;
+ if (CurrentEntry->getOpcode() == Instruction::Call) {
+ auto *CI = cast<CallInst>(CurrentEntry->getMainOp());
+ Intrinsic::ID ID = getVectorIntrinsicIDForCall(CI, TLI);
+ if (I >= CI->arg_size() ||
+ isVectorIntrinsicWithScalarOpAtArg(ID, I))
+ continue;
+ }
+ const TreeEntry *Op = getOperandEntry(CurrentEntry, I);
+ if (Op->ReuseShuffleIndices.empty())
+ Worklist.push_back(Op);
+ Operands.push_back(const_cast<TreeEntry *>(Op));
+ }
+ }
+ // If there are several users of the pointers tree entry, no need to
+ // reorder the scatter vectorize node, still have same number of shuffles.
+ if (any_of(Operands, [](const TreeEntry *TE) {
+ return TE->UserTreeIndices.size() > 1;
+ }))
+ continue;
+ // Reorder related masked gather node and its operands.
+ SmallVector<int> Mask(CurrentOrder->size(), PoisonMaskElem);
+ unsigned E = CurrentOrder->size();
+ transform(*CurrentOrder, Mask.begin(), [E](unsigned I) {
+ return I < E ? static_cast<int>(I) : PoisonMaskElem;
+ });
+ for (TreeEntry *OpTE : Operands) {
+ if (!OpTE->ReuseShuffleIndices.empty()) {
+ reorderReuses(OpTE->ReuseShuffleIndices, Mask);
+ } else if (OpTE->State == TreeEntry::NeedToGather) {
+ if (OpTE->ReorderIndices.empty())
+ reorderScalars(OpTE->Scalars, Mask);
+ else
+ reorderOrder(OpTE->ReorderIndices, Mask);
+ } else {
+ OpTE->reorderOperands(Mask);
+ if (OpTE->ReorderIndices.empty())
+ reorderScalars(OpTE->Scalars, Mask);
+ else
+ reorderOrder(OpTE->ReorderIndices, Mask);
+ }
+ }
+ SVLoadsData.second->reorderOperands(Mask);
+ ...
[truncated]
✅ With the latest revision this PR passed the C/C++ code formatter.
When building the vectorization graph, the compiler may end up with
consecutive loads in different branches, which end up being gathered.
We can scan these loads and try to load them as a final vectorized
load, then reshuffle between the branches to avoid extra scalar loads
in the code.
Enables x264 vectorization for RISC-V with the existing cost-model for
sifive-x280 and sifive-p670 cores.
Part of D57059
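As an illustration of the pattern this change targets (a hypothetical example, not taken from the patch's tests): consecutive loads that reach the SLP graph through different branches used to be emitted as scalar loads plus a gather; after this change they can be loaded once as a wide vector load, with each branch's lane order produced by a shuffle.

```cpp
// Hypothetical pattern: a[0..3] are consecutive in memory, but each
// branch consumes them in a different lane order. Previously, SLP
// gathered each branch's operands from four scalar loads; with gathered
// load vectorization, the four loads can become one vector load and
// the permuted branch only needs a lane shuffle of that load.
int sum_permuted(const int *a, bool flip) {
  int x0, x1, x2, x3;
  if (flip) {
    // Same four consecutive loads, permuted lane order.
    x0 = a[1]; x1 = a[0]; x2 = a[3]; x3 = a[2];
  } else {
    x0 = a[0]; x1 = a[1]; x2 = a[2]; x3 = a[3];
  }
  return x0 + x1 + x2 + x3;
}
```

Whether the vector load is actually emitted still depends on the target's cost model, as the size deltas below show.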
Metric: size..text
test-suite :: External/SPEC/CFP2006/433.milc/433.milc.test 138103.00 151779.00 9.9%
test-suite :: MultiSource/Benchmarks/ASCI_Purple/SMG2000/smg2000.test 242861.00 254269.00 4.7%
test-suite :: SingleSource/UnitTests/Vectorizer/VPlanNativePath/outer-loop-vect.test 25641.00 26329.00 2.7%
test-suite :: SingleSource/UnitTests/Vector/AVX512F/Vector-AVX512F-movedup.test 4044.00 4140.00 2.4%
test-suite :: SingleSource/UnitTests/Vector/AVX512BWVL/Vector-AVX512BWVL-psadbw.test 3028.00 3092.00 2.1%
test-suite :: SingleSource/Benchmarks/Shootout-C++/Shootout-C++-matrix.test 4617.00 4696.00 1.7%
test-suite :: SingleSource/UnitTests/Vector/SSE/Vector-sse.stepfft.test 10482.00 10658.00 1.7%
test-suite :: MultiSource/Benchmarks/DOE-ProxyApps-C++/miniFE/miniFE.test 92248.00 92840.00 0.6%
test-suite :: SingleSource/UnitTests/Vector/AVX512BWVL/Vector-AVX512BWVL-mask_set_bw.test 12805.00 12869.00 0.5%
test-suite :: External/SPEC/CINT2017speed/625.x264_s/625.x264_s.test 651525.00 654725.00 0.5%
test-suite :: External/SPEC/CINT2017rate/525.x264_r/525.x264_r.test 651525.00 654725.00 0.5%
test-suite :: External/SPEC/CINT2006/464.h264ref/464.h264ref.test 770360.00 774072.00 0.5%
test-suite :: External/SPEC/CFP2017speed/619.lbm_s/619.lbm_s.test 13215.00 13268.00 0.4%
test-suite :: SingleSource/Benchmarks/Misc/flops.test 8266.00 8298.00 0.4%
test-suite :: External/SPEC/CINT2017speed/631.deepsjeng_s/631.deepsjeng_s.test 97774.00 98142.00 0.4%
test-suite :: External/SPEC/CFP2006/470.lbm/470.lbm.test 15133.00 15186.00 0.4%
test-suite :: External/SPEC/CFP2017rate/519.lbm_r/519.lbm_r.test 15437.00 15490.00 0.3%
test-suite :: External/SPEC/CFP2017rate/526.blender_r/526.blender_r.test 12351116.00 12393212.00 0.3%
test-suite :: MultiSource/Benchmarks/tramp3d-v4/tramp3d-v4.test 1035983.00 1039375.00 0.3%
test-suite :: SingleSource/UnitTests/Vector/AVX512BWVL/Vector-AVX512BWVL-unpack_msasm.test 10097.00 10129.00 0.3%
test-suite :: SingleSource/Benchmarks/Misc-C++/Large/ray.test 5160.00 5176.00 0.3%
test-suite :: External/SPEC/CINT2017rate/531.deepsjeng_r/531.deepsjeng_r.test 97710.00 97982.00 0.3%
test-suite :: MultiSource/Applications/JM/ldecod/ldecod.test 388211.00 389091.00 0.2%
test-suite :: SingleSource/Benchmarks/Linpack/linpack-pc.test 16388.00 16420.00 0.2%
test-suite :: MultiSource/Benchmarks/Olden/power/power.test 6792.00 6803.00 0.2%
test-suite :: MultiSource/Benchmarks/Prolangs-C/football/football.test 49563.00 49627.00 0.1%
test-suite :: MultiSource/Benchmarks/FreeBench/pifft/pifft.test 82742.00 82822.00 0.1%
test-suite :: External/SPEC/CFP2017speed/638.imagick_s/638.imagick_s.test 1391873.00 1393201.00 0.1%
test-suite :: External/SPEC/CFP2017rate/538.imagick_r/538.imagick_r.test 1391873.00 1393201.00 0.1%
test-suite :: MultiSource/Benchmarks/DOE-ProxyApps-C/miniAMR/miniAMR.test 70587.00 70651.00 0.1%
test-suite :: MicroBenchmarks/LCALS/SubsetALambdaLoops/lcalsALambda.test 302342.00 302598.00 0.1%
test-suite :: MicroBenchmarks/LCALS/SubsetARawLoops/lcalsARaw.test 302598.00 302854.00 0.1%
test-suite :: MultiSource/Benchmarks/MiBench/consumer-lame/consumer-lame.test 203486.00 203614.00 0.1%
test-suite :: MultiSource/Benchmarks/TSVC/ControlFlow-dbl/ControlFlow-dbl.test 137432.00 137496.00 0.0%
test-suite :: MultiSource/Benchmarks/VersaBench/beamformer/beamformer.test 34487.00 34503.00 0.0%
test-suite :: MicroBenchmarks/ImageProcessing/Blur/blur.test 229510.00 229606.00 0.0%
test-suite :: MultiSource/Benchmarks/7zip/7zip-benchmark.test 1037804.00 1038236.00 0.0%
test-suite :: External/SPEC/CFP2006/453.povray/453.povray.test 1140956.00 1141388.00 0.0%
test-suite :: MultiSource/Benchmarks/DOE-ProxyApps-C/miniGMG/miniGMG.test 43368.00 43384.00 0.0%
test-suite :: MicroBenchmarks/ImageProcessing/Interpolation/Interpolation.test 226022.00 226102.00 0.0%
test-suite :: External/SPEC/CFP2017rate/511.povray_r/511.povray_r.test 1158619.00 1159019.00 0.0%
test-suite :: MicroBenchmarks/LCALS/SubsetBLambdaLoops/lcalsBLambda.test 283126.00 283222.00 0.0%
test-suite :: MultiSource/Applications/JM/lencod/lencod.test 849139.00 849395.00 0.0%
test-suite :: MultiSource/Benchmarks/TSVC/NodeSplitting-flt/NodeSplitting-flt.test 119755.00 119791.00 0.0%
test-suite :: External/SPEC/CFP2006/447.dealII/447.dealII.test 597883.00 598059.00 0.0%
test-suite :: MicroBenchmarks/LoopInterchange/LoopInterchange.test 222294.00 222358.00 0.0%
test-suite :: MultiSource/Applications/siod/siod.test 167296.00 167344.00 0.0%
test-suite :: MicroBenchmarks/ImageProcessing/AnisotropicDiffusion/AnisotropicDiffusion.test 224006.00 224070.00 0.0%
test-suite :: MicroBenchmarks/ImageProcessing/BilateralFiltering/BilateralFilter.test 224054.00 224118.00 0.0%
test-suite :: MicroBenchmarks/Builtins/Int128/Builtins.test 226422.00 226486.00 0.0%
test-suite :: MicroBenchmarks/ImageProcessing/Dilate/Dilate.test 226502.00 226566.00 0.0%
test-suite :: MicroBenchmarks/SLPVectorization/SLPVectorizationBenchmarks.test 228262.00 228326.00 0.0%
test-suite :: MicroBenchmarks/ImageProcessing/Dither/Dither.test 229606.00 229670.00 0.0%
test-suite :: External/SPEC/CFP2006/444.namd/444.namd.test 243555.00 243619.00 0.0%
test-suite :: MultiSource/Benchmarks/TSVC/LoopRestructuring-dbl/LoopRestructuring-dbl.test 132871.00 132903.00 0.0%
test-suite :: MicroBenchmarks/LoopVectorization/LoopInterleavingBenchmarks.test 276726.00 276790.00 0.0%
test-suite :: MicroBenchmarks/LoopVectorization/LoopVectorizationBenchmarks.test 352150.00 352214.00 0.0%
test-suite :: MicroBenchmarks/LCALS/SubsetCRawLoops/lcalsCRaw.test 356198.00 356262.00 0.0%
test-suite :: MicroBenchmarks/LCALS/SubsetCLambdaLoops/lcalsCLambda.test 356486.00 356550.00 0.0%
test-suite :: MicroBenchmarks/XRay/ReturnReference/retref-bench.test 360070.00 360134.00 0.0%
test-suite :: MicroBenchmarks/XRay/FDRMode/fdrmode-bench.test 361910.00 361974.00 0.0%
test-suite :: MicroBenchmarks/MemFunctions/MemFunctions.test 418182.00 418246.00 0.0%
test-suite :: MultiSource/Benchmarks/TSVC/StatementReordering-dbl/StatementReordering-dbl.test 127432.00 127451.00 0.0%
test-suite :: External/SPEC/CINT2006/400.perlbench/400.perlbench.test 1087068.00 1087228.00 0.0%
test-suite :: MultiSource/Benchmarks/TSVC/NodeSplitting-dbl/NodeSplitting-dbl.test 129112.00 129131.00 0.0%
test-suite :: MultiSource/Benchmarks/TSVC/CrossingThresholds-dbl/CrossingThresholds-dbl.test 131744.00 131760.00 0.0%
test-suite :: External/SPEC/CFP2006/482.sphinx3/482.sphinx3.test 163758.00 163774.00 0.0%
test-suite :: External/SPEC/CFP2017rate/510.parest_r/510.parest_r.test 2027278.00 2027406.00 0.0%
test-suite :: MicroBenchmarks/LCALS/SubsetBRawLoops/lcalsBRaw.test 283414.00 283430.00 0.0%
test-suite :: MultiSource/Applications/ClamAV/clamscan.test 578344.00 578376.00 0.0%
test-suite :: MultiSource/Benchmarks/TSVC/StatementReordering-flt/StatementReordering-flt.test 118011.00 118015.00 0.0%
test-suite :: MultiSource/Benchmarks/TSVC/Recurrences-dbl/Recurrences-dbl.test 127528.00 127531.00 0.0%
test-suite :: External/SPEC/CINT2017speed/602.gcc_s/602.gcc_s.test 9677164.00 9677260.00 0.0%
test-suite :: External/SPEC/CINT2017rate/502.gcc_r/502.gcc_r.test 9677164.00 9677260.00 0.0%
test-suite :: External/SPEC/CINT2017speed/600.perlbench_s/600.perlbench_s.test 2060869.00 2060853.00 -0.0%
test-suite :: External/SPEC/CINT2017rate/500.perlbench_r/500.perlbench_r.test 2060869.00 2060853.00 -0.0%
test-suite :: External/SPEC/CINT2006/403.gcc/403.gcc.test 3074189.00 3073901.00 -0.0%
test-suite :: MultiSource/Benchmarks/TSVC/Expansion-dbl/Expansion-dbl.test 132542.00 132529.00 -0.0%
test-suite :: MultiSource/Benchmarks/TSVC/Recurrences-flt/Recurrences-flt.test 117995.00 117983.00 -0.0%
test-suite :: SingleSource/UnitTests/matrix-types-spec.test 480780.00 480700.00 -0.0%
test-suite :: External/SPEC/CINT2017speed/623.xalancbmk_s/623.xalancbmk_s.test 2856678.00 2856102.00 -0.0%
test-suite :: External/SPEC/CINT2017rate/523.xalancbmk_r/523.xalancbmk_r.test 2856678.00 2856102.00 -0.0%
test-suite :: External/SPEC/CINT2006/483.xalancbmk/483.xalancbmk.test 2390155.00 2389659.00 -0.0%
test-suite :: MultiSource/Benchmarks/TSVC/LoopRestructuring-flt/LoopRestructuring-flt.test 122843.00 122811.00 -0.0%
test-suite :: MultiSource/Benchmarks/TSVC/CrossingThresholds-flt/CrossingThresholds-flt.test 122292.00 122260.00 -0.0%
test-suite :: SingleSource/Benchmarks/Adobe-C++/loop_unroll.test 414404.00 414292.00 -0.0%
test-suite :: MultiSource/Benchmarks/mafft/pairlocalalign.test 229070.00 229006.00 -0.0%
test-suite :: External/SPEC/CFP2017rate/508.namd_r/508.namd_r.test 775315.00 775075.00 -0.0%
test-suite :: MultiSource/Benchmarks/TSVC/Expansion-flt/Expansion-flt.test 123265.00 123205.00 -0.0%
test-suite :: MultiSource/Applications/oggenc/oggenc.test 192539.00 192443.00 -0.0%
test-suite :: MultiSource/Benchmarks/TSVC/ControlFlow-flt/ControlFlow-flt.test 128163.00 128083.00 -0.1%
test-suite :: MultiSource/Benchmarks/mediabench/g721/g721encode/encode.test 6594.00 6587.00 -0.1%
test-suite :: MultiSource/Benchmarks/MiBench/office-ispell/office-ispell.test 67871.00 67791.00 -0.1%
test-suite :: MultiSource/Benchmarks/FreeBench/fourinarow/fourinarow.test 22749.00 22717.00 -0.1%
test-suite :: MultiSource/Benchmarks/Prolangs-C/bison/mybison.test 57182.00 57086.00 -0.2%
test-suite :: MicroBenchmarks/harris/harris.test 231558.00 231110.00 -0.2%
test-suite :: External/SPEC/CINT2006/445.gobmk/445.gobmk.test 912693.00 910873.00 -0.2%
test-suite :: MultiSource/Benchmarks/Prolangs-C/TimberWolfMC/timberwolfmc.test 279407.00 278799.00 -0.2%
test-suite :: MultiSource/Benchmarks/MallocBench/gs/gs.test 167759.00 167391.00 -0.2%
test-suite :: MultiSource/Benchmarks/Bullet/bullet.test 310638.00 309902.00 -0.2%
test-suite :: MultiSource/Benchmarks/MiBench/telecomm-gsm/telecomm-gsm.test 40045.00 39949.00 -0.2%
test-suite :: MultiSource/Benchmarks/mediabench/gsm/toast/toast.test 40042.00 39946.00 -0.2%
test-suite :: SingleSource/Benchmarks/Misc/oourafft.test 19815.00 19767.00 -0.2%
test-suite :: MultiSource/Benchmarks/McCat/18-imp/imp.test 12398.00 12366.00 -0.3%
test-suite :: SingleSource/Benchmarks/BenchmarkGame/n-body.test 3874.00 3860.00 -0.4%
test-suite :: SingleSource/Benchmarks/Stanford/Puzzle.test 3577.00 3561.00 -0.4%
test-suite :: MultiSource/Benchmarks/Prolangs-C/gnugo/gnugo.test 31161.00 30969.00 -0.6%
test-suite :: SingleSource/Benchmarks/Stanford/Oscar.test 6279.00 6231.00 -0.8%
test-suite :: MultiSource/Benchmarks/MiBench/security-blowfish/security-blowfish.test 11512.00 11160.00 -3.1%
test-suite :: SingleSource/Benchmarks/Misc-C++/oopack_v1p8.test 10968.00 10568.00 -3.6%
test-suite :: MultiSource/Benchmarks/McCat/08-main/main.test 6718.00 6366.00 -5.2%
CFP2006/433.milc - better vector code, some functions are not inlined
ASCI_Purple/SMG2000 - extra code vectorized
VPlanNativePath/outer-loop-vect - better vector code
AVX512F/Vector-AVX512F-movedup - better vector code
AVX512BWVL/Vector-AVX512BWVL-psadbw - better vector code
Benchmarks/Shootout-C++ - better vector code
Vector/SSE/Vector-sse.stepfft - extra code vectorized
DOE-ProxyApps-C++/miniFE - extra code vectorized
AVX512BWVL/Vector-AVX512BWVL-mask_set_bw - small changes in vector code
CINT2017speed/625.x264_s
CINT2017rate/525.x264_r - extra code vectorized, better vector code
CINT2006/464.h264ref - better vector code
Misc/flops - extra code vectorized
CFP2006/470.lbm - extra code vectorized
CFP2017speed/619.lbm_s
CFP2017rate/519.lbm_r - extra code vectorized
CFP2017rate/526.blender_r - extra code vectorized
Benchmarks/tramp3d-v4 - extra code vectorized, better vector code
AVX512BWVL/Vector-AVX512BWVL-unpack_msasm - extra code vectorized
Large/ray - better vector code
CINT2017speed/631.deepsjeng_s
CINT2017rate/531.deepsjeng_r - better vector code, some extra code
vectorized
JM/ldecod - better vector code
Benchmarks/Linpack - better vector code
Olden/power - better vector code
Prolangs-C/football - better vector code
FreeBench/pifft - small changes in vector code
CFP2017speed/638.imagick_s
CFP2017rate/538.imagick_r - better vector code
DOE-ProxyApps-C/miniAMR - extra code vectorized
LCALS/SubsetALambdaLoops - extra code vectorized
LCALS/SubsetARawLoops - extra code vectorized
MiBench/consumer-lame - extra code vectorized, better vector code
TSVC/ControlFlow-dbl - extra code vectorized
VersaBench/beamformer - better vector code
ImageProcessing/Blur - extra code vectorized
Benchmarks/7zip - extra code vectorized, better vector code
CFP2006/453.povray - small variations in vector code, some extra
code vectorized
DOE-ProxyApps-C/miniGMG - small changes in vector code
ImageProcessing/Interpolation - extra code vectorized
CFP2017rate/511.povray_r - small variations in vector code, some extra
code vectorized
LCALS/SubsetBLambdaLoops - extra code vectorized
Applications/JM/lencod - extra code vectorized
TSVC/NodeSplitting-flt - extra code vectorized
CFP2006/447.dealII - better vector code
LoopInterchange/LoopInterchange - extra code vectorized
Applications/siod - small changes in the vectorized code
ImageProcessing/AnisotropicDiffusion - extra code vectorized
ImageProcessing/BilateralFiltering - extra code vectorized
Builtins/Int128 - extra code vectorized
ImageProcessing/Dilate - extra code vectorized
SLPVectorization/SLPVectorizationBenchmarks - extra code vectorized
ImageProcessing/Dither - extra code vectorized
CFP2006/444.namd - extra code vectorized
TSVC/LoopRestructuring-dbl - extra code vectorized
LoopVectorization/LoopInterleavingBenchmarks - extra code vectorized
LoopVectorization/LoopVectorizationBenchmarks - extra code vectorized
LCALS/SubsetCRawLoops - extra code vectorized
LCALS/SubsetCLambdaLoops - extra code vectorized
XRay/ReturnReference - extra code vectorized
XRay/FDRMode - extra code vectorized
MicroBenchmarks/MemFunctions - extra code vectorized
TSVC/StatementReordering-dbl - extra code gets vectorized
CINT2006/400.perlbench - extra code gets vectorized
TSVC/NodeSplitting-dbl - extra code gets vectorized
TSVC/CrossingThresholds-dbl - extra code gets vectorized
CFP2006/482.sphinx3 - better vector code
CFP2017rate/510.parest_r - better vector code
LCALS/SubsetBRawLoops - extra code gets vectorized
Applications/ClamAV - extra code gets vectorized
TSVC/StatementReordering-flt - extra code gets vectorized
TSVC/Recurrences-dbl - extra code gets vectorized
CINT2017speed/602.gcc_s
CINT2017rate/502.gcc_r - small variations in vector code
CINT2017speed/600.perlbench_s
CINT2017rate/500.perlbench_r - small changes in vector code
CINT2006/403.gcc - small variations in vector code
TSVC/Expansion-dbl - extra code gets vectorized
TSVC/Recurrences-flt - extra code gets vectorized
UnitTests/matrix-types-spec - small changes in vector code because of
better cost
CINT2017speed/623.xalancbmk_s
CINT2017rate/523.xalancbmk_r - small variations in vector code
CINT2006/483.xalancbmk - small variations in vector code
TSVC/LoopRestructuring-flt - extra vector code
TSVC/CrossingThresholds-flt - extra vector code
Adobe-C++/loop_unroll - small variations in vector code
mafft/pairlocalalign - small variations in vector code
CFP2017rate/508.namd_r - small variations in vector code
TSVC/Expansion-flt - extra code gets vectorized
Applications/oggenc - small changes in vector code
TSVC/ControlFlow-flt - extra code gets vectorized
mediabench/g721 - small changes in vector code
MiBench/office-ispell - more accurate cost estimation for vectorization
FreeBench/fourinarow - 32 x buildvector replaced by 7 4 x loads and
shuffles + 4 insertelements
Prolangs-C/bison - small changes in vector code
MicroBenchmarks/harris - extra code vectorized
CINT2006/445.gobmk - more cost-correct vectorization, some functions are
not inlined anymore, some extra code gets vectorized, better code with
non-power-of-2
Prolangs-C/TimberWolfMC - some extra code gets vectorized
MallocBench/gs - small variations in vector code
Bullet/bullet - small variations in vector code, some extra function is
inlined
MiBench/telecomm-gsm - small variations in vector code
mediabench/gsm/toast - same as MiBench/telecomm-gsm
Misc/oourafft - 2 x code replaced by 4 x
McCat/18-imp - better vector code
BenchmarkGame/n-body - small variation in vector code
Stanford/Puzzle - small variation in the logical reduction, will be
fixed with non-power-of-2
Prolangs-C/gnugo - better vector code
Stanford/Oscar - extra code vectorized
MiBench/security-blowfish - extra code gets vectorized
Misc-C++/oopack_v1p8 - extra code gets vectorized
McCat/08-main - better vectorization of 4 x instead of 2 x in several
loops.
Differential Revision: https://reviews.llvm.org/D105986