
Commit 1032033

Add support for the Brain 16-bit floating-point vector multiplication intrinsics
Adds intrinsic support for the Brain 16-bit floating-point vector multiplication instructions introduced by the FEAT_SVE_BFSCALE feature in the 2024 dpISA:

* BFSCALE: BFloat16 adjust exponent by vector (predicated)
* BFSCALE (multiple and single vector): Multi-vector BFloat16 adjust exponent by vector
* BFSCALE (multiple vectors): Multi-vector BFloat16 adjust exponent
* BFMUL (multiple and single vector): Multi-vector BFloat16 floating-point multiply by vector
* BFMUL (multiple vectors): Multi-vector BFloat16 floating-point multiply
1 parent 8e36d16 commit 1032033

File tree

1 file changed (+51, -1 lines)


main/acle.md

Lines changed: 51 additions & 1 deletion
@@ -468,6 +468,7 @@ Armv8.4-A [[ARMARMv84]](#ARMARMv84). Support is added for the Dot Product intrin
 * Added support for FEAT_FPRCVT intrinsics and `__ARM_FEATURE_FPRCVT`.
 * Added support for modal 8-bit floating point matrix multiply-add widening intrinsics.
 * Added support for 16-bit floating point matrix multiply-add widening intrinsics.
+* Added support for Brain 16-bit floating-point vector multiplication intrinsics.
 
 ### References
 
@@ -2125,6 +2126,16 @@ are available. Specifically, if this macro is defined to `1`, then:
 for the FEAT_SME_B16B16 instructions and if their associated intrinsics
 are available.
 
+#### Brain 16-bit floating-point vector multiplication support
+
+`__ARM_FEATURE_SVE_BFSCALE` is defined to `1` if there is hardware
+support for the SVE BF16 vector multiplication extensions and if the
+associated ACLE intrinsics are available.
+
+See [Half-precision brain
+floating-point](#half-precision-brain-floating-point) for details
+of half-precision brain floating-point types.
+
 ### Cryptographic extensions
 
 #### “Crypto” extension
@@ -2668,6 +2679,7 @@ be found in [[BA]](#BA).
 | [`__ARM_FEATURE_SVE`](#scalable-vector-extension-sve) | Scalable Vector Extension (FEAT_SVE) | 1 |
 | [`__ARM_FEATURE_SVE_B16B16`](#non-widening-brain-16-bit-floating-point-support) | Non-widening brain 16-bit floating-point intrinsics (FEAT_SVE_B16B16) | 1 |
 | [`__ARM_FEATURE_SVE_BF16`](#brain-16-bit-floating-point-support) | SVE support for the 16-bit brain floating-point extension (FEAT_BF16) | 1 |
+| [`__ARM_FEATURE_SVE_BFSCALE`](#brain-16-bit-floating-point-vector-multiplication-support) | SVE support for the 16-bit brain floating-point vector multiplication extension (FEAT_SVE_BFSCALE) | 1 |
 | [`__ARM_FEATURE_SVE_BITS`](#scalable-vector-extension-sve) | The number of bits in an SVE vector, when known in advance | 256 |
 | [`__ARM_FEATURE_SVE_MATMUL_FP32`](#multiplication-of-32-bit-floating-point-matrices) | 32-bit floating-point matrix multiply extension (FEAT_F32MM) | 1 |
 | [`__ARM_FEATURE_SVE_MATMUL_FP64`](#multiplication-of-64-bit-floating-point-matrices) | 64-bit floating-point matrix multiply extension (FEAT_F64MM) | 1 |
@@ -11676,7 +11688,7 @@ Multi-vector floating-point fused multiply-add/subtract
     __arm_streaming __arm_inout("za");
 ```
 
-#### BFMLA. BFMLS, FMLA, FMLS (indexed)
+#### BFMLA, BFMLS, FMLA, FMLS (indexed)
 
 Multi-vector floating-point fused multiply-add/subtract
 
@@ -12769,6 +12781,44 @@ element types.
   svint8x4_t svuzpq[_s8_x4](svint8x4_t zn) __arm_streaming;
 ```
 
+#### FMUL, BFMUL
+
+Multi-vector floating-point multiply
+
+``` c
+// Variants are also available for:
+//   [_single_f32_x2]
+//   [_single_f64_x2]
+//   [_single_bf16_x2] (only if __ARM_FEATURE_SVE_BFSCALE != 0)
+
+//   [_single_f16_x4]
+//   [_single_f32_x4]
+//   [_single_f64_x4]
+//   [_single_bf16_x4] (only if __ARM_FEATURE_SVE_BFSCALE != 0)
+svfloat16x2_t svmul[_single_f16_x2](svfloat16x2_t zd, svfloat16_t zm) __arm_streaming;
+
+// Variants are also available for:
+//   [_f32_x2]
+//   [_f64_x2]
+//   [_bf16_x2] (only if __ARM_FEATURE_SVE_BFSCALE != 0)
+//   [_f16_x4]
+//   [_f32_x4]
+//   [_f64_x4]
+//   [_bf16_x4] (only if __ARM_FEATURE_SVE_BFSCALE != 0)
+svfloat16x2_t svmul[_f16_x2](svfloat16x2_t zd, svfloat16x2_t zm) __arm_streaming;
+```
+
+#### BFSCALE
+BFloat16 floating-point adjust exponent vectors.
+
+``` c
+// Only if __ARM_FEATURE_SVE_BFSCALE != 0
+svbfloat16x2_t svscale[_bf16_x2](svbfloat16x2_t zdn, svint16x2_t zm);
+svbfloat16x2_t svscale[_single_bf16_x2](svbfloat16x2_t zn, svint16_t zm);
+svbfloat16x4_t svscale[_bf16_x4](svbfloat16x4_t zdn, svint16x4_t zm);
+svbfloat16x4_t svscale[_single_bf16_x4](svbfloat16x4_t zn, svint16_t zm);
+```
+
 ### SME2.1 instruction intrinsics
 
 The specification for SME2.1 is in
