Commit a382b45

Browse files
committed
[VPlan] Get Addr computation cost with scalar type if it is uniform for gather/scatter.
This patch queries `getAddressComputationCost()` with a scalar type if the address is uniform, which makes the cost of gathers/scatters more accurate. Currently, LV accounts for the cost of address computation for non-consecutive VPWidenMemoryRecipes (gathers/scatters), but in some cases the address is uniform across lanes, so it can be calculated with a scalar type and broadcast. I have a follow-up optimization that converts gathers/scatters with uniform memory accesses into a scalar load/store plus broadcast; once that lands, this temporary change can be removed.
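
To illustrate the case the patch targets, here is a minimal hand-written example (the function name @store_uniform_addr and the loop shape are illustrative assumptions, not taken from the patch or its tests): the store address %p is loop-invariant, so if the loop vectorizer widens the store into a scatter, every lane of the scatter's address vector holds the same pointer and the address only needs a scalar computation plus a broadcast.

; Hypothetical input loop: every iteration stores to the same pointer %p, so a
; widened scatter would see an address that is uniform across all lanes.
define void @store_uniform_addr(ptr %p, i64 %n) {
entry:
  br label %loop

loop:
  %iv = phi i64 [ 0, %entry ], [ %iv.next, %loop ]
  %t = trunc i64 %iv to i8
  store i8 %t, ptr %p
  %iv.next = add nuw i64 %iv, 1
  %ec = icmp eq i64 %iv.next, %n
  br i1 %ec, label %exit, label %loop

exit:
  ret void
}

The patch only changes how the address computation of such a scatter is costed; the follow-up optimization mentioned above is what would actually rewrite the access as a scalar memory operation plus broadcast.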

2 files changed: 11 additions, 4 deletions

llvm/lib/Transforms/Vectorize/VPlanRecipes.cpp

Lines changed: 9 additions & 2 deletions
@@ -3126,10 +3126,17 @@ InstructionCost VPWidenMemoryRecipe::computeCost(ElementCount VF,
     // TODO: Using the original IR may not be accurate.
     // Currently, ARM will use the underlying IR to calculate gather/scatter
     // instruction cost.
-    const Value *Ptr = getLoadStorePointerOperand(&Ingredient);
-    Type *PtrTy = toVectorTy(Ptr->getType(), VF);
     assert(!Reverse &&
            "Inconsecutive memory access should not have the order.");
+
+    const Value *Ptr = getLoadStorePointerOperand(&Ingredient);
+    Type *PtrTy = Ptr->getType();
+
+    // If the address value is uniform across all lanes, then the address can
+    // be calculated with scalar type and broadcast.
+    if (!vputils::isSingleScalar(getAddr()))
+      PtrTy = toVectorTy(PtrTy, VF);
+
     return Ctx.TTI.getAddressComputationCost(PtrTy, nullptr, nullptr,
                                              Ctx.CostKind) +
            Ctx.TTI.getGatherScatterOpCost(Opcode, Ty, Ptr, IsMasked, Alignment,
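
For reference, a minimal sketch of what a uniform-address scatter looks like after widening, assuming a fixed VF of 4 (the function name @scatter_splat_addr and the fixed-width types are assumptions made for this sketch, not part of the patch): the scatter's pointer operand is a splat of one scalar pointer, which is the situation `vputils::isSingleScalar(getAddr())` is meant to recognize, so getAddressComputationCost is now queried with the scalar pointer type instead of the vector-of-pointers type.

; Hypothetical widened form: the address vector %addr is a splat of the single
; scalar pointer %p, i.e. uniform across all lanes.
declare void @llvm.masked.scatter.v4i8.v4p0(<4 x i8>, <4 x ptr>, i32 immarg, <4 x i1>)

define void @scatter_splat_addr(ptr %p, <4 x i8> %v, <4 x i1> %mask) {
  %addr.ins = insertelement <4 x ptr> poison, ptr %p, i64 0
  %addr = shufflevector <4 x ptr> %addr.ins, <4 x ptr> poison, <4 x i32> zeroinitializer
  call void @llvm.masked.scatter.v4i8.v4p0(<4 x i8> %v, <4 x ptr> %addr, i32 1, <4 x i1> %mask)
  ret void
}

Only the address-computation part of the cost changes here; the getGatherScatterOpCost query is unaffected.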

llvm/test/Transforms/LoopVectorize/RISCV/truncate-to-minimal-bitwidth-evl-crash.ll

Lines changed: 2 additions & 2 deletions
@@ -15,8 +15,8 @@ define void @truncate_to_minimal_bitwidths_widen_cast_recipe(ptr %src) {
 ; CHECK: [[VECTOR_BODY]]:
 ; CHECK-NEXT: [[EVL_BASED_IV:%.*]] = phi i64 [ 0, %[[VECTOR_PH]] ], [ [[INDEX_EVL_NEXT:%.*]], %[[VECTOR_BODY]] ]
 ; CHECK-NEXT: [[AVL:%.*]] = phi i64 [ 9, %[[VECTOR_PH]] ], [ [[AVL_NEXT:%.*]], %[[VECTOR_BODY]] ]
-; CHECK-NEXT: [[TMP7:%.*]] = call i32 @llvm.experimental.get.vector.length.i64(i64 [[AVL]], i32 2, i1 true)
-; CHECK-NEXT: call void @llvm.vp.scatter.nxv2i8.nxv2p0(<vscale x 2 x i8> zeroinitializer, <vscale x 2 x ptr> align 1 zeroinitializer, <vscale x 2 x i1> splat (i1 true), i32 [[TMP7]])
+; CHECK-NEXT: [[TMP7:%.*]] = call i32 @llvm.experimental.get.vector.length.i64(i64 [[AVL]], i32 8, i1 true)
+; CHECK-NEXT: call void @llvm.vp.scatter.nxv8i8.nxv8p0(<vscale x 8 x i8> zeroinitializer, <vscale x 8 x ptr> align 1 zeroinitializer, <vscale x 8 x i1> splat (i1 true), i32 [[TMP7]])
 ; CHECK-NEXT: [[TMP9:%.*]] = zext i32 [[TMP7]] to i64
 ; CHECK-NEXT: [[INDEX_EVL_NEXT]] = add nuw i64 [[TMP9]], [[EVL_BASED_IV]]
 ; CHECK-NEXT: [[AVL_NEXT]] = sub nuw i64 [[AVL]], [[TMP9]]
