25 changes: 22 additions & 3 deletions llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -1664,7 +1664,11 @@ RISCVTargetLowering::RISCVTargetLowering(const TargetMachine &TM,
PredictableSelectIsExpensive = Subtarget.predictableSelectIsExpensive();

MaxStoresPerMemsetOptSize = Subtarget.getMaxStoresPerMemset(/*OptSize=*/true);
MaxStoresPerMemset = Subtarget.getMaxStoresPerMemset(/*OptSize=*/false);
MaxStoresPerMemset = Subtarget.hasVInstructions()
? (Subtarget.getRealMinVLen() / 8 *
Subtarget.getMaxLMULForFixedLengthVectors() /
(Subtarget.is64Bit() ? 8 : 4))
Collaborator

Why is this based on is64Bit?

Contributor Author

If Op.size() exceeds Subtarget.getMaxLMULForFixedLengthVectors() * MinVLenInBytes, the memset should not be inlined. To decide whether inlining is profitable, LLVM checks how many scalar stores would be required using the widest scalar store type available on the target. On RV64, the widest scalar store type is i64, while on RV32 it is i32. Therefore, MaxStoresPerMemset should be computed differently depending on the target's XLEN.
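As a quick sanity check on the arithmetic (an illustration with assumed numbers, not values taken from the patch: VLEN = 128, so getRealMinVLen() / 8 = 16 bytes, and an LMUL cap of 8):

```c++
// Illustration only; assumes VLEN = 128 and MaxLMULForFixedLengthVectors() == 8.
constexpr unsigned MinVLenInBytes = 128 / 8;            // 16 bytes per vector register
constexpr unsigned MaxInlineBytes = MinVLenInBytes * 8; // 128 bytes reachable at LMUL=8
constexpr unsigned MaxStoresRV64 = MaxInlineBytes / 8;  // 16 i64 stores (XLEN = 64)
constexpr unsigned MaxStoresRV32 = MaxInlineBytes / 4;  // 32 i32 stores (XLEN = 32)
static_assert(MaxStoresRV64 == 16 && MaxStoresRV32 == 32, "matches the formula above");
```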

: Subtarget.getMaxStoresPerMemset(/*OptSize=*/false);

MaxGluedStoresPerMemcpy = Subtarget.getMaxGluedStoresPerMemcpy();
MaxStoresPerMemcpyOptSize = Subtarget.getMaxStoresPerMemcpy(/*OptSize=*/true);
@@ -23808,8 +23812,23 @@ EVT RISCVTargetLowering::getOptimalMemOpType(
// a large scalar constant and instead use vmv.v.x/i to do the
// broadcast. For everything else, prefer ELenVT to minimize VL and thus
// maximize the chance we can encode the size in the vsetvli.
MVT ELenVT = MVT::getIntegerVT(Subtarget.getELen());
MVT PreferredVT = (Op.isMemset() && !Op.isZeroMemset()) ? MVT::i8 : ELenVT;
// If Op.size() is greater than what an LMUL8 memory operation can cover, we
// don't support inlining the memset. Otherwise, return an EVT based on
// Op.size() to avoid redundant splitting and merging operations.
if (Op.isMemset()) {
if (!Op.isZeroMemset())
return EVT::getVectorVT(Context, MVT::i8, Op.size());
if (Op.size() >
Subtarget.getMaxLMULForFixedLengthVectors() * MinVLenInBytes)
return MVT::Other;
if (Subtarget.hasVInstructionsI64() && Op.size() % 8 == 0)
return EVT::getVectorVT(Context, MVT::i64, Op.size() / 8);
Collaborator

Do we need to check alignment for any of these types?

if (Op.size() % 4 == 0)
return EVT::getVectorVT(Context, MVT::i32, Op.size() / 4);
return EVT::getVectorVT(Context, MVT::i8, Op.size());
}

MVT PreferredVT = MVT::getIntegerVT(Subtarget.getELen());

// Do we have sufficient alignment for our preferred VT? If not, revert
// to largest size allowed by our alignment criteria.
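For readers following the new memset branch in getOptimalMemOpType above, a standalone sketch of the selection it performs (illustration only: memsetTypeFor is a hypothetical helper, and MinVLenInBytes and the LMUL cap are assumed constants rather than values queried from a real subtarget):

```c++
#include <cstdint>
#include <string>

// Sketch of the new memset type selection; the returned strings stand in for
// the EVTs the patch builds.
std::string memsetTypeFor(uint64_t Size, bool IsZeroMemset) {
  const uint64_t MinVLenInBytes = 16; // assumes VLEN = 128
  const uint64_t MaxLMUL = 8;         // assumed fixed-length LMUL cap
  if (!IsZeroMemset)
    return "v" + std::to_string(Size) + "i8"; // splat the byte value directly
  if (Size > MaxLMUL * MinVLenInBytes)
    return "Other"; // too large to inline as a single vector operation
  if (Size % 8 == 0) // the patch also requires i64 vector element support
    return "v" + std::to_string(Size / 8) + "i64";
  if (Size % 4 == 0)
    return "v" + std::to_string(Size / 4) + "i32";
  return "v" + std::to_string(Size) + "i8";
}
// e.g. memsetTypeFor(64, /*IsZeroMemset=*/true)  -> "v8i64"
//      memsetTypeFor(200, /*IsZeroMemset=*/true) -> "Other" (exceeds 8 * 16 bytes)
```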
39 changes: 14 additions & 25 deletions llvm/test/CodeGen/RISCV/pr135206.ll
@@ -13,8 +13,6 @@ define i1 @foo() nounwind "probe-stack"="inline-asm" "target-features"="+v" {
; CHECK-NEXT: sd ra, 2024(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s0, 2016(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s1, 2008(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s2, 2000(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s3, 1992(sp) # 8-byte Folded Spill
; CHECK-NEXT: lui a0, 7
; CHECK-NEXT: sub t1, sp, a0
; CHECK-NEXT: lui t2, 1
@@ -24,8 +22,9 @@ define i1 @foo() nounwind "probe-stack"="inline-asm" "target-features"="+v" {
; CHECK-NEXT: bne sp, t1, .LBB0_1
; CHECK-NEXT: # %bb.2:
; CHECK-NEXT: addi sp, sp, -2048
; CHECK-NEXT: addi sp, sp, -96
; CHECK-NEXT: addi sp, sp, -80
; CHECK-NEXT: csrr t1, vlenb
; CHECK-NEXT: slli t1, t1, 2
; CHECK-NEXT: lui t2, 1
; CHECK-NEXT: .LBB0_3: # =>This Inner Loop Header: Depth=1
; CHECK-NEXT: sub sp, sp, t2
@@ -34,45 +33,35 @@ define i1 @foo() nounwind "probe-stack"="inline-asm" "target-features"="+v" {
; CHECK-NEXT: bge t1, t2, .LBB0_3
; CHECK-NEXT: # %bb.4:
; CHECK-NEXT: sub sp, sp, t1
; CHECK-NEXT: li a0, 86
; CHECK-NEXT: addi s0, sp, 48
; CHECK-NEXT: addi s1, sp, 32
; CHECK-NEXT: addi s2, sp, 16
; CHECK-NEXT: lui a1, 353637
; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.x v8, a0
; CHECK-NEXT: li a0, 64
; CHECK-NEXT: li a1, 86
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, ma
; CHECK-NEXT: vmv.v.x v8, a1
; CHECK-NEXT: lui a0, 8
; CHECK-NEXT: addi a0, a0, 32
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: vs1r.v v8, (a0) # vscale x 8-byte Folded Spill
; CHECK-NEXT: addi a0, a1, 1622
; CHECK-NEXT: vse8.v v8, (s0)
; CHECK-NEXT: vs4r.v v8, (a0) # vscale x 32-byte Folded Spill
; CHECK-NEXT: li s0, 56
; CHECK-NEXT: addi s1, sp, 16
; CHECK-NEXT: vsetvli zero, s0, e8, m4, ta, ma
; CHECK-NEXT: vse8.v v8, (s1)
; CHECK-NEXT: vse8.v v8, (s2)
; CHECK-NEXT: slli a1, a0, 32
; CHECK-NEXT: add s3, a0, a1
; CHECK-NEXT: sd s3, 64(sp)
; CHECK-NEXT: call bar
; CHECK-NEXT: lui a0, 8
; CHECK-NEXT: addi a0, a0, 32
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: vl1r.v v8, (a0) # vscale x 8-byte Folded Reload
; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vse8.v v8, (s0)
; CHECK-NEXT: vl4r.v v8, (a0) # vscale x 32-byte Folded Reload
; CHECK-NEXT: vsetvli zero, s0, e8, m4, ta, ma
; CHECK-NEXT: vse8.v v8, (s1)
; CHECK-NEXT: vse8.v v8, (s2)
; CHECK-NEXT: sd s3, 64(sp)
; CHECK-NEXT: li a0, 0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 2
; CHECK-NEXT: add sp, sp, a1
; CHECK-NEXT: lui a1, 8
; CHECK-NEXT: addi a1, a1, -1952
; CHECK-NEXT: addi a1, a1, -1968
; CHECK-NEXT: add sp, sp, a1
; CHECK-NEXT: ld ra, 2024(sp) # 8-byte Folded Reload
; CHECK-NEXT: ld s0, 2016(sp) # 8-byte Folded Reload
; CHECK-NEXT: ld s1, 2008(sp) # 8-byte Folded Reload
; CHECK-NEXT: ld s2, 2000(sp) # 8-byte Folded Reload
; CHECK-NEXT: ld s3, 1992(sp) # 8-byte Folded Reload
; CHECK-NEXT: addi sp, sp, 2032
; CHECK-NEXT: ret
%1 = alloca %"buff", align 8