[AArch64] SIMD fpcvt codegen for rounding nodes #165546

Conversation
@llvm/pr-subscribers-backend-aarch64 @llvm/pr-subscribers-llvm-globalisel

Author: None (Lukacma)

Changes: This is a follow-up patch to #157680, which allows SIMD fpcvt instructions to be generated from l/llround and l/llrint nodes.

Patch is 23.23 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/165546.diff

6 Files Affected:
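As a minimal illustration of what the new patterns buy (a hand-written sketch, not taken from the patch's test files): when a rounding result is immediately bitcast back to a floating-point type, the conversion can stay in a SIMD/FP register instead of round-tripping through a GPR.

; Sketch: with +fprcvt and the patterns below this selects to a single
; "fcvtas d0, s0"; without them the value lands in a GPR first
; ("fcvtas x8, s0" followed by "fmov d0, x8").
define double @lround_stays_in_fpr(float %x) {
  %v = call i64 @llvm.lround.i64.f32(float %x)
  %f = bitcast i64 %v to double
  ret double %f
}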
diff --git a/llvm/lib/Target/AArch64/AArch64InstrInfo.td b/llvm/lib/Target/AArch64/AArch64InstrInfo.td
index b9e299ef37454..f765c6e037176 100644
--- a/llvm/lib/Target/AArch64/AArch64InstrInfo.td
+++ b/llvm/lib/Target/AArch64/AArch64InstrInfo.td
@@ -6799,6 +6799,79 @@ defm : FPToIntegerPats<fp_to_uint, fp_to_uint_sat, fp_to_uint_sat_gi, ftrunc, "F
defm : FPToIntegerPats<fp_to_sint, fp_to_sint_sat, fp_to_sint_sat_gi, fround, "FCVTAS">;
defm : FPToIntegerPats<fp_to_uint, fp_to_uint_sat, fp_to_uint_sat_gi, fround, "FCVTAU">;
+// For global-isel we can use register classes to determine
+// which FCVT instruction to use.
+let Predicates = [HasFPRCVT] in {
+def : Pat<(i64 (any_lround f32:$Rn)),
+ (FCVTASDSr f32:$Rn)>;
+def : Pat<(i64 (any_llround f32:$Rn)),
+ (FCVTASDSr f32:$Rn)>;
+}
+def : Pat<(i64 (any_lround f64:$Rn)),
+ (FCVTASv1i64 f64:$Rn)>;
+def : Pat<(i64 (any_llround f64:$Rn)),
+ (FCVTASv1i64 f64:$Rn)>;
+
+let Predicates = [HasFPRCVT] in {
+ def : Pat<(f32 (bitconvert (i32 (any_lround f16:$Rn)))),
+ (FCVTASSHr f16:$Rn)>;
+ def : Pat<(f64 (bitconvert (i64 (any_lround f16:$Rn)))),
+ (FCVTASDHr f16:$Rn)>;
+ def : Pat<(f64 (bitconvert (i64 (any_llround f16:$Rn)))),
+ (FCVTASDHr f16:$Rn)>;
+ def : Pat<(f64 (bitconvert (i64 (any_lround f32:$Rn)))),
+ (FCVTASDSr f32:$Rn)>;
+ def : Pat<(f32 (bitconvert (i32 (any_lround f64:$Rn)))),
+ (FCVTASSDr f64:$Rn)>;
+ def : Pat<(f64 (bitconvert (i64 (any_llround f32:$Rn)))),
+ (FCVTASDSr f32:$Rn)>;
+}
+def : Pat<(f32 (bitconvert (i32 (any_lround f32:$Rn)))),
+ (FCVTASv1i32 f32:$Rn)>;
+def : Pat<(f64 (bitconvert (i64 (any_lround f64:$Rn)))),
+ (FCVTASv1i64 f64:$Rn)>;
+def : Pat<(f64 (bitconvert (i64 (any_llround f64:$Rn)))),
+ (FCVTASv1i64 f64:$Rn)>;
+
+// For global-isel we can use register classes to determine
+// which FCVT instruction to use.
+let Predicates = [HasFPRCVT] in {
+def : Pat<(i64 (any_lrint f16:$Rn)),
+ (FCVTZSDHr (FRINTXHr f16:$Rn))>;
+def : Pat<(i64 (any_llrint f16:$Rn)),
+ (FCVTZSDHr (FRINTXHr f16:$Rn))>;
+def : Pat<(i64 (any_lrint f32:$Rn)),
+ (FCVTZSDSr (FRINTXSr f32:$Rn))>;
+def : Pat<(i64 (any_llrint f32:$Rn)),
+ (FCVTZSDSr (FRINTXSr f32:$Rn))>;
+}
+def : Pat<(i64 (any_lrint f64:$Rn)),
+ (FCVTZSv1i64 (FRINTXDr f64:$Rn))>;
+def : Pat<(i64 (any_llrint f64:$Rn)),
+ (FCVTZSv1i64 (FRINTXDr f64:$Rn))>;
+
+let Predicates = [HasFPRCVT] in {
+ def : Pat<(f32 (bitconvert (i32 (any_lrint f16:$Rn)))),
+ (FCVTZSSHr (FRINTXHr f16:$Rn))>;
+ def : Pat<(f64 (bitconvert (i64 (any_lrint f16:$Rn)))),
+ (FCVTZSDHr (FRINTXHr f16:$Rn))>;
+ def : Pat<(f64 (bitconvert (i64 (any_llrint f16:$Rn)))),
+ (FCVTZSDHr (FRINTXHr f16:$Rn))>;
+ def : Pat<(f64 (bitconvert (i64 (any_lrint f32:$Rn)))),
+ (FCVTZSDSr (FRINTXSr f32:$Rn))>;
+ def : Pat<(f32 (bitconvert (i32 (any_lrint f64:$Rn)))),
+ (FCVTZSSDr (FRINTXDr f64:$Rn))>;
+ def : Pat<(f64 (bitconvert (i64 (any_llrint f32:$Rn)))),
+ (FCVTZSDSr (FRINTXSr f32:$Rn))>;
+}
+def : Pat<(f32 (bitconvert (i32 (any_lrint f32:$Rn)))),
+ (FCVTZSv1i32 (FRINTXSr f32:$Rn))>;
+def : Pat<(f64 (bitconvert (i64 (any_lrint f64:$Rn)))),
+ (FCVTZSv1i64 (FRINTXDr f64:$Rn))>;
+def : Pat<(f64 (bitconvert (i64 (any_llrint f64:$Rn)))),
+ (FCVTZSv1i64 (FRINTXDr f64:$Rn))>;
+
+
// f16 -> s16 conversions
let Predicates = [HasFullFP16] in {
def : Pat<(i16(fp_to_sint_sat_gi f16:$Rn)), (FCVTZSv1f16 f16:$Rn)>;
diff --git a/llvm/lib/Target/AArch64/GISel/AArch64RegisterBankInfo.cpp b/llvm/lib/Target/AArch64/GISel/AArch64RegisterBankInfo.cpp
index 6d2d70511e894..8bd982898b8d6 100644
--- a/llvm/lib/Target/AArch64/GISel/AArch64RegisterBankInfo.cpp
+++ b/llvm/lib/Target/AArch64/GISel/AArch64RegisterBankInfo.cpp
@@ -858,7 +858,11 @@ AArch64RegisterBankInfo::getInstrMapping(const MachineInstr &MI) const {
case TargetOpcode::G_FPTOSI_SAT:
case TargetOpcode::G_FPTOUI_SAT:
case TargetOpcode::G_FPTOSI:
- case TargetOpcode::G_FPTOUI: {
+ case TargetOpcode::G_FPTOUI:
+ case TargetOpcode::G_INTRINSIC_LRINT:
+ case TargetOpcode::G_INTRINSIC_LLRINT:
+ case TargetOpcode::G_LROUND:
+ case TargetOpcode::G_LLROUND: {
LLT DstType = MRI.getType(MI.getOperand(0).getReg());
if (DstType.isVector())
break;
@@ -879,12 +883,6 @@ AArch64RegisterBankInfo::getInstrMapping(const MachineInstr &MI) const {
OpRegBankIdx = {PMI_FirstGPR, PMI_FirstFPR};
break;
}
- case TargetOpcode::G_INTRINSIC_LRINT:
- case TargetOpcode::G_INTRINSIC_LLRINT:
- if (MRI.getType(MI.getOperand(0).getReg()).isVector())
- break;
- OpRegBankIdx = {PMI_FirstGPR, PMI_FirstFPR};
- break;
case TargetOpcode::G_FCMP: {
// If the result is a vector, it must use a FPR.
AArch64GenRegisterBankInfo::PartialMappingIdx Idx0 =
@@ -1224,12 +1222,6 @@ AArch64RegisterBankInfo::getInstrMapping(const MachineInstr &MI) const {
}
break;
}
- case TargetOpcode::G_LROUND:
- case TargetOpcode::G_LLROUND: {
- // Source is always floating point and destination is always integer.
- OpRegBankIdx = {PMI_FirstGPR, PMI_FirstFPR};
- break;
- }
}
// Finally construct the computed mapping.
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/regbank-llround.mir b/llvm/test/CodeGen/AArch64/GlobalISel/regbank-llround.mir
index 420c7cfb07b74..16100f01017a6 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/regbank-llround.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/regbank-llround.mir
@@ -14,7 +14,7 @@ body: |
; CHECK: liveins: $d0
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: %fpr:fpr(s64) = COPY $d0
- ; CHECK-NEXT: %llround:gpr(s64) = G_LLROUND %fpr(s64)
+ ; CHECK-NEXT: %llround:fpr(s64) = G_LLROUND %fpr(s64)
; CHECK-NEXT: $d0 = COPY %llround(s64)
; CHECK-NEXT: RET_ReallyLR implicit $s0
%fpr:_(s64) = COPY $d0
@@ -35,7 +35,7 @@ body: |
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: %gpr:gpr(s64) = COPY $x0
; CHECK-NEXT: [[COPY:%[0-9]+]]:fpr(s64) = COPY %gpr(s64)
- ; CHECK-NEXT: %llround:gpr(s64) = G_LLROUND [[COPY]](s64)
+ ; CHECK-NEXT: %llround:fpr(s64) = G_LLROUND [[COPY]](s64)
; CHECK-NEXT: $d0 = COPY %llround(s64)
; CHECK-NEXT: RET_ReallyLR implicit $s0
%gpr:_(s64) = COPY $x0
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/regbank-lround.mir b/llvm/test/CodeGen/AArch64/GlobalISel/regbank-lround.mir
index 775c6ca773c68..5cb93f7c4646d 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/regbank-lround.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/regbank-lround.mir
@@ -14,7 +14,7 @@ body: |
; CHECK: liveins: $d0
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: %fpr:fpr(s64) = COPY $d0
- ; CHECK-NEXT: %lround:gpr(s64) = G_LROUND %fpr(s64)
+ ; CHECK-NEXT: %lround:fpr(s64) = G_LROUND %fpr(s64)
; CHECK-NEXT: $d0 = COPY %lround(s64)
; CHECK-NEXT: RET_ReallyLR implicit $s0
%fpr:_(s64) = COPY $d0
@@ -35,7 +35,7 @@ body: |
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: %gpr:gpr(s64) = COPY $x0
; CHECK-NEXT: [[COPY:%[0-9]+]]:fpr(s64) = COPY %gpr(s64)
- ; CHECK-NEXT: %lround:gpr(s64) = G_LROUND [[COPY]](s64)
+ ; CHECK-NEXT: %lround:fpr(s64) = G_LROUND [[COPY]](s64)
; CHECK-NEXT: $d0 = COPY %lround(s64)
; CHECK-NEXT: RET_ReallyLR implicit $s0
%gpr:_(s64) = COPY $x0
diff --git a/llvm/test/CodeGen/AArch64/arm64-cvt-simd-round-rint.ll b/llvm/test/CodeGen/AArch64/arm64-cvt-simd-round-rint.ll
new file mode 100644
index 0000000000000..000ff64131ccf
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/arm64-cvt-simd-round-rint.ll
@@ -0,0 +1,428 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc < %s -mtriple aarch64-unknown-unknown -mattr=+fprcvt,+fullfp16 | FileCheck %s --check-prefixes=CHECK,CHECK-SD
+; RUN: llc < %s -mtriple aarch64-unknown-unknown -global-isel -global-isel-abort=2 -mattr=+fprcvt,+fullfp16 2>&1 | FileCheck %s --check-prefixes=CHECK,CHECK-GI
+
+; CHECK-GI: warning: Instruction selection used fallback path for lround_i32_f16_simd
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i64_f16_simd
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i32_f64_simd
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i32_f32_simd
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for llround_i64_f16_simd
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i32_f16_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i64_f16_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i64_f32_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i32_f64_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i32_f32_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i64_f64_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for llround_i64_f16_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for llround_i64_f32_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for llround_i64_f64_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f16_simd
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f64_simd
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f32_simd
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f16_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i64_f16_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i64_f32_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f64_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f32_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i64_f64_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for llrint_i64_f16_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for llrint_i64_f32_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for llrint_i64_f64_simd_exp
+
+;
+; (L/LL)Round
+;
+
+define float @lround_i32_f16_simd(half %x) {
+; CHECK-LABEL: lround_i32_f16_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas s0, h0
+; CHECK-NEXT: ret
+ %val = call i32 @llvm.lround.i32.f16(half %x)
+ %sum = bitcast i32 %val to float
+ ret float %sum
+}
+
+define double @lround_i64_f16_simd(half %x) {
+; CHECK-LABEL: lround_i64_f16_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, h0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.lround.i64.f16(half %x)
+ %bc = bitcast i64 %val to double
+ ret double %bc
+}
+
+define double @lround_i64_f32_simd(float %x) {
+; CHECK-LABEL: lround_i64_f32_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, s0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.lround.i64.f32(float %x)
+ %bc = bitcast i64 %val to double
+ ret double %bc
+}
+
+define float @lround_i32_f64_simd(double %x) {
+; CHECK-LABEL: lround_i32_f64_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas s0, d0
+; CHECK-NEXT: ret
+ %val = call i32 @llvm.lround.i32.f64(double %x)
+ %bc = bitcast i32 %val to float
+ ret float %bc
+}
+
+define float @lround_i32_f32_simd(float %x) {
+; CHECK-LABEL: lround_i32_f32_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas s0, s0
+; CHECK-NEXT: ret
+ %val = call i32 @llvm.lround.i32.f32(float %x)
+ %bc = bitcast i32 %val to float
+ ret float %bc
+}
+
+define double @lround_i64_f64_simd(double %x) {
+; CHECK-LABEL: lround_i64_f64_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, d0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.lround.i64.f64(double %x)
+ %bc = bitcast i64 %val to double
+ ret double %bc
+}
+
+define double @llround_i64_f16_simd(half %x) {
+; CHECK-LABEL: llround_i64_f16_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, h0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.llround.i64.f16(half %x)
+ %sum = bitcast i64 %val to double
+ ret double %sum
+}
+
+define double @llround_i64_f32_simd(float %x) {
+; CHECK-LABEL: llround_i64_f32_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, s0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.llround.i64.f32(float %x)
+ %bc = bitcast i64 %val to double
+ ret double %bc
+}
+
+define double @llround_i64_f64_simd(double %x) {
+; CHECK-LABEL: llround_i64_f64_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, d0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.llround.i64.f64(double %x)
+ %bc = bitcast i64 %val to double
+ ret double %bc
+}
+
+
+;
+; (L/LL)Round experimental
+;
+
+define float @lround_i32_f16_simd_exp(half %x) {
+; CHECK-LABEL: lround_i32_f16_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas s0, h0
+; CHECK-NEXT: ret
+ %val = call i32 @llvm.experimental.constrained.lround.i32.f16(half %x, metadata !"fpexcept.strict")
+ %sum = bitcast i32 %val to float
+ ret float %sum
+}
+
+define double @lround_i64_f16_simd_exp(half %x) {
+; CHECK-LABEL: lround_i64_f16_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, h0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.experimental.constrained.lround.i64.f16(half %x, metadata !"fpexcept.strict")
+ %bc = bitcast i64 %val to double
+ ret double %bc
+}
+
+define double @lround_i64_f32_simd_exp(float %x) {
+; CHECK-LABEL: lround_i64_f32_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, s0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.experimental.constrained.lround.i64.f32(float %x, metadata !"fpexcept.strict")
+ %bc = bitcast i64 %val to double
+ ret double %bc
+}
+
+define float @lround_i32_f64_simd_exp(double %x) {
+; CHECK-LABEL: lround_i32_f64_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas s0, d0
+; CHECK-NEXT: ret
+ %val = call i32 @llvm.experimental.constrained.lround.i32.f64(double %x, metadata !"fpexcept.strict")
+ %bc = bitcast i32 %val to float
+ ret float %bc
+}
+
+define float @lround_i32_f32_simd_exp(float %x) {
+; CHECK-LABEL: lround_i32_f32_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas s0, s0
+; CHECK-NEXT: ret
+ %val = call i32 @llvm.experimental.constrained.lround.i32.f32(float %x, metadata !"fpexcept.strict")
+ %bc = bitcast i32 %val to float
+ ret float %bc
+}
+
+define double @lround_i64_f64_simd_exp(double %x) {
+; CHECK-LABEL: lround_i64_f64_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, d0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.experimental.constrained.lround.i64.f64(double %x, metadata !"fpexcept.strict")
+ %bc = bitcast i64 %val to double
+ ret double %bc
+}
+
+define double @llround_i64_f16_simd_exp(half %x) {
+; CHECK-LABEL: llround_i64_f16_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, h0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.experimental.constrained.llround.i64.f16(half %x, metadata !"fpexcept.strict")
+ %sum = bitcast i64 %val to double
+ ret double %sum
+}
+
+define double @llround_i64_f32_simd_exp(float %x) {
+; CHECK-LABEL: llround_i64_f32_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, s0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.experimental.constrained.llround.i64.f32(float %x, metadata !"fpexcept.strict")
+ %bc = bitcast i64 %val to double
+ ret double %bc
+}
+
+define double @llround_i64_f64_simd_exp(double %x) {
+; CHECK-LABEL: llround_i64_f64_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, d0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.experimental.constrained.llround.i64.f64(double %x, metadata !"fpexcept.strict")
+ %bc = bitcast i64 %val to double
+ ret double %bc
+}
+
+;
+; (L/LL)Rint
+;
+
+define float @lrint_i32_f16_simd(half %x) {
+; CHECK-LABEL: lrint_i32_f16_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx h0, h0
+; CHECK-NEXT: fcvtzs s0, h0
+; CHECK-NEXT: ret
+ %val = call i32 @llvm.lrint.i32.f16(half %x)
+ %sum = bitcast i32 %val to float
+ ret float %sum
+}
+
+define double @lrint_i64_f16_simd(half %x) {
+; CHECK-LABEL: lrint_i64_f16_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx h0, h0
+; CHECK-NEXT: fcvtzs d0, h0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.lrint.i53.f16(half %x)
+ %bc = bitcast i64 %val to double
+ ret double %bc
+}
+
+define double @lrint_i64_f32_simd(float %x) {
+; CHECK-LABEL: lrint_i64_f32_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx s0, s0
+; CHECK-NEXT: fcvtzs d0, s0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.lrint.i64.f32(float %x)
+ %bc = bitcast i64 %val to double
+ ret double %bc
+}
+
+define float @lrint_i32_f64_simd(double %x) {
+; CHECK-LABEL: lrint_i32_f64_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx d0, d0
+; CHECK-NEXT: fcvtzs s0, d0
+; CHECK-NEXT: ret
+ %val = call i32 @llvm.lrint.i32.f64(double %x)
+ %bc = bitcast i32 %val to float
+ ret float %bc
+}
+
+define float @lrint_i32_f32_simd(float %x) {
+; CHECK-LABEL: lrint_i32_f32_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx s0, s0
+; CHECK-NEXT: fcvtzs s0, s0
+; CHECK-NEXT: ret
+ %val = call i32 @llvm.lrint.i32.f32(float %x)
+ %bc = bitcast i32 %val to float
+ ret float %bc
+}
+
+define double @lrint_i64_f64_simd(double %x) {
+; CHECK-LABEL: lrint_i64_f64_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx d0, d0
+; CHECK-NEXT: fcvtzs d0, d0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.lrint.i64.f64(double %x)
+ %bc = bitcast i64 %val to double
+ ret double %bc
+}
+
+define double @llrint_i64_f16_simd(half %x) {
+; CHECK-LABEL: llrint_i64_f16_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx h0, h0
+; CHECK-NEXT: fcvtzs d0, h0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.llrint.i64.f16(half %x)
+ %sum = bitcast i64 %val to double
+ ret double %sum
+}
+
+define double @llrint_i64_f32_simd(float %x) {
+; CHECK-LABEL: llrint_i64_f32_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx s0, s0
+; CHECK-NEXT: fcvtzs d0, s0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.llrint.i64.f32(float %x)
+ %bc = bitcast i64 %val to double
+ ret double %bc
+}
+
+define double @llrint_i64_f64_simd(double %x) {
+; CHECK-LABEL: llrint_i64_f64_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx d0, d0
+; CHECK-NEXT: fcvtzs d0, d0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.llrint.i64.f64(double %x)
+ %bc = bitcast i64 %val to double
+ ret double %bc
+}
+
+;
+; (L/LL)Rint experimental
+;
+
+define float @lrint_i32_f16_simd_exp(half %x) {
+; CHECK-LABEL: lrint_i32_f16_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx h0, h0
+; CHECK-NEXT: fcvtzs s0, h0
+; CHECK-NEXT: ret
+ %val = call i32 @llvm.experimental.constrained.lrint.i32.f16(half %x, metadata !"round.tonearest", metadata !"fpexcept.strict")
+ %sum = bitcast i32 %val to float
+ ret float %sum
+}
+
+define double @lrint_i64_f16_simd_exp(half %x) {
+; CHECK-LABEL: lrint_i64_f16_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx h0, h0
+; CHECK-NEXT: fcvtzs d0, h0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.experimental.constrained.lrint.i53.f16(half %x, metadata !"round.tonearest", metadata !"fpexcept.strict")
+ %bc = bitcast i64 %val to double
+ ret double %bc
+}
+
+define double @lrint_i64_f32_simd_exp(float %x) {
+; CHECK-LABEL: lrint_i64_f32_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx s0, s0
+; CHECK-NEXT: fcvtzs d0, s0
+; CHECK-NEXT: ret
+ %val = call i64 @llvm.experimental.constrained.lrint.i64.f32(float %x, metadata !"round.tonearest", metadata !"fpexcept.strict")
+ %bc = bitcast i64 %val to double
+ ret double %bc
+}
+
+define float @lrint_i32_f64_simd_exp(double %x) {
+; CHECK-LABEL: lrint_i32_f64_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx d0, d0
+; CHECK-NEXT: fcvtzs s0, d0
+; CHECK-NEXT: ret
+ %val = call i32 @llvm.experimental.constrained.lrint.i32.f64(double %x, ...
[truncated]
CarolineConcatto left a comment:
Thank you Marian for the patch.
I am still puzzled by the GlobalISel patterns. Can you add examples for all the GlobalISel patterns too?
It looks to me that all the tests are checking only SelectionDAG, not your changes in GlobalISel. Is that correct, or am I mistaken?
> // For global-isel we can use register classes to determine
> // which FCVT instruction to use.
> let Predicates = [HasFPRCVT] in {
> def : Pat<(i64 (any_lround f32:$Rn)),
Should we not add patterns for GlobalISel too? It seems to me that all the tests are always falling back to SelectionDAG.
I am not exactly sure I understand your comment. If you are asking whether we should add patterns here for other types of lround, for example f16 to i64, I don't think that is necessary, as GlobalISel doesn't yet support those types for these nodes. That's why you see it fall back to SDAG.
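For illustration, a minimal sketch of a shape that falls back (a simplified variant of the forms in the test file): with no legal GISel rule for an f16 source to G_LROUND/G_LLROUND, selection reverts to SDAG.

; An f16 source makes GlobalISel bail out to the SelectionDAG fallback,
; assuming the current legalizer rules for these opcodes.
define i64 @lround_f16_falls_back(half %x) {
  %v = call i64 @llvm.lround.i64.f16(half %x)
  ret i64 %v
}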
arsenm left a comment:
What is the strictfp status on AArch64 with SelectionDAG? I thought it was mostly not implemented. I would expect a GlobalISel patch for this not to touch so many patterns, and not to combine changes to the strictfp and non-strictfp cases.
> ; (L/LL)Round experimental
> ;
>
> define float @lround_i32_f16_simd_exp(half %x) {
I don't think the strictfp and non-strictfp cases should coexist in the same test file.
@Lukacma could you add these tests to another file?
Were any patterns added just to improve these tests? I think that is the question @arsenm asked. If so, then maybe create another PR with those extra patterns.
But from your answer I understand that the patterns you added are not specific to these tests.
I added more general patterns which handle both strictfp and non-strictfp nodes, so in that sense I didn't add strictfp-specific patterns. I have split the tests, though.
> ; RUN: llc < %s -mtriple aarch64-unknown-unknown -global-isel -global-isel-abort=2 -mattr=+fprcvt,+fullfp16 2>&1 | FileCheck %s --check-prefixes=CHECK,CHECK-GI
>
> ; CHECK-GI: warning: Instruction selection used fallback path for lround_i32_f16_simd
> ; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i64_f16_simd
Do we need better legalization for G_LROUND? I don't know if anyone has looked at that one yet in GISel.
Yes. There is a TODO comment about enabling this for f16 types, and the other types should probably work as well, but they are not marked as legal in the code:

// TODO: Libcall support for s128.
// TODO: s16 should be legal with full FP16 support.
getActionDefinitionsBuilder({G_LROUND, G_LLROUND})
    .legalFor({{s64, s32}, {s64, s64}});
But I think this is beyond the scope of this work and should be handled separately.
Yeah, that looks like it could do with more work - it's not one of the intrinsics we got to in our sweep of FP. It makes some of the patterns added here untested, as they do not reach that far into the pipeline. I have put up #168427 to add some basic legalisation.
Thanks for the review @arsenm. I have no clue about the status, but based on the tests I think it works fine for the purposes of this patch, as both strict and non-strict IR nodes produce the same assembly in SDAG. Sorry if the name is confusing, but this patch adds patterns for both SDAG and GlobalISel; that's why so many patterns are added. Admittedly, strictfp doesn't work in GlobalISel, so the patterns added purely for GlobalISel could be restricted to the non-strict case, but I don't see any harm in keeping them general. It will save work when strictfp starts working.
I am not sure it would make sense to split the patch into non-strict and strict parts, as a single pattern works for both cases, but I can split the test file as you suggested.
I think you should drop any attempt to handle strictfp from this patch. It's making it a lot harder to review, especially since most of the changes only do anything for the non-strict case. I don't think any of this strictfp stuff actually works as-is. The tests also should not coexist in the same file.
Apologies if I am missing something obvious, but I don't think any of my code changes were added specifically to handle strictfp cases. I only added the tests for strictfp cases to show that the patterns added work for those nodes as well (at least with SDAG). Or are you suggesting that, instead of using any_* nodes to handle both cases with a single pattern, I just use lrint and lround, to prevent strictfp nodes from matching?
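For reference, the any_* selectors are pattern fragments that match either the strict or the non-strict node, so one Pat covers both. A paraphrased sketch of the upstream TargetSelectionDAG.td definition (exact wording may differ):

// Paraphrased sketch: any_lround matches both the plain and the
// constrained (strictfp) form of the node with a single fragment.
def any_lround : PatFrags<(ops node:$src),
                          [(strict_lround node:$src),
                           (lround node:$src)]>;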
Ping
🐧 Linux x64 Test Results
✅ The build succeeded and all tests passed. |
> // For global-isel we can use register classes to determine
> // which FCVT instruction to use.
> let Predicates = [HasFPRCVT] in {
I am not sure if it will help, but maybe we could split the PR into patterns only for GlobalISel and another for the standard lowering/SelectionDAG?
I imagine that the ones from lines 6810 to 6823 and 6848 to 6861 are only for GlobalISel, while the other ones are for SelectionDAG.
Yes, all the patterns without a bitconvert node are for GlobalISel. May I ask what the value would be in splitting the patch? As only a couple of GlobalISel patterns are added and the tests can be shared anyway, it makes sense to me to do it in one PR, but I can do that if you insist. The previous patches were also not split, and it was fine.
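To make the distinction concrete, here are the two flavours side by side, copied from the diff above:

// GlobalISel flavour: no bitconvert; the register-bank assignment of the
// i64 result (FPR) is what selects the FPR-destination FCVT.
def : Pat<(i64 (any_lround f32:$Rn)),
          (FCVTASDSr f32:$Rn)>;
// SelectionDAG flavour: the explicit bitconvert back to f64 encodes that
// the result stays in a SIMD/FP register.
def : Pat<(f64 (bitconvert (i64 (any_lround f32:$Rn)))),
          (FCVTASDSr f32:$Rn)>;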
The main reason was to address the issue raised by @arsenm about the complexity of reviewing the patch.
I have removed the GlobalISel work from this patch to aid the review and will submit it as a separate patch.
That does make things clearer, thanks. I think the GISel part is the part that worked better; I'm not sure these will be much use in practice, as the bitcast will often be optimized away. They don't look incorrect to me though, so long as A and X are the sensible conversions to use.
> ; RUN: llc < %s -mtriple aarch64-unknown-unknown -mattr=+fprcvt,+fullfp16 | FileCheck %s --check-prefixes=CHECK
>
> ;
> ; (L/LL)Round experimental
experimental -> constrained fp or strictfp.
Done
> ; CHECK-NEXT: frintx h0, h0
> ; CHECK-NEXT: fcvtzs d0, h0
> ; CHECK-NEXT: ret
> %val = call i64 @llvm.lrint.i53.f16(half %x)
i53->i64
Fixed. Thanks
> ret float %bc
> }
>
> define float @lround_i32_f32_simd(float %x) {
The order of these tests seems a bit random. It would be clearer if they were just in order, doing f32->i32, f32->i64, f64->i32, and f64->i64.
Done
> @@ -0,0 +1,199 @@
> ; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
> ; RUN: llc < %s -mtriple aarch64-unknown-unknown -mattr=+fprcvt,+fullfp16 | FileCheck %s --check-prefixes=CHECK
Can you add a RUN line without +fprcvt? It looks like some of the patterns apply generally.
Added
(Sorry, those were not submitted with the previous comment)
These were added to handle cases like these, where there is an extra unnecessary move. Though I couldn't generate this IR from a C source, so maybe there is an issue in the frontend preventing the rounding intrinsics from being generated, in which case I agree this will not help in practice until that is fixed.
davemgreen left a comment:
Thanks, LGTM.
> These were added to handle cases like these, where there is an extra unnecessary move. Though I couldn't generate this IR from a C source, so maybe there is an issue in the frontend preventing the rounding intrinsics from being generated, in which case I agree this will not help in practice until that is fixed.
I was thinking of things like store_i32(lrint), but if you wanted store_f32(bitcast(lrint)) then DAG would happily convert it back into store_i32(lrint) by optimizing the store+bitcast together. GISel has a nicer solution for this kind of problem, and I'm hoping it can be made more so in the future.
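Sketched in IR, the case being described (a hypothetical example, not from the test file):

; DAGCombine folds the bitcast into the store, turning it back into a plain
; i32 store of %v, so the new bitcast patterns never get a chance to fire.
define void @store_folds_bitcast(float %x, ptr %p) {
  %v = call i32 @llvm.lrint.i32.f32(float %x)
  %f = bitcast i32 %v to float
  store float %f, ptr %p
  ret void
}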
Yes, that will be an issue with SDAG in many cases. I am working on adding lowering for NEON intrinsic nodes, so at least those should work with the new patterns, but unless the work is extended to all nodes, the impact of this work will be limited.