
Commit 461fbab

Add support for the Brain 16-bit floating-point vector multiplication intrinsics
Adds intrinsic support for the Brain 16-bit floating-point vector multiplication instructions introduced by the FEAT_SVE_BFSCALE feature in the 2024 dpISA:

* BFSCALE: BFloat16 adjust exponent by vector (predicated)
* BFSCALE (multiple and single vector): Multi-vector BFloat16 adjust exponent by vector
* BFSCALE (multiple vectors): Multi-vector BFloat16 adjust exponent
* BFMUL (multiple and single vector): Multi-vector BFloat16 floating-point multiply by vector
* BFMUL (multiple vectors): Multi-vector BFloat16 floating-point multiply
1 parent e102d32 commit 461fbab

File tree

1 file changed: +36 −1 lines


main/acle.md

Lines changed: 36 additions & 1 deletion
@@ -468,6 +468,7 @@ Armv8.4-A [[ARMARMv84]](#ARMARMv84). Support is added for the Dot Product intrin
 * Added support for FEAT_FPRCVT intrinsics and `__ARM_FEATURE_FPRCVT`.
 * Added support for modal 8-bit floating point matrix multiply-accumulate widening intrinsics.
 * Added support for 16-bit floating point matrix multiply-accumulate widening intrinsics.
+* Added support for Brain 16-bit floating-point vector multiplication intrinsics.
 
 ### References
 
@@ -2125,6 +2126,16 @@ are available. Specifically, if this macro is defined to `1`, then:
 for the FEAT_SME_B16B16 instructions and if their associated intrinsics
 are available.
 
+#### Brain 16-bit floating-point vector multiplication support
+
+`__ARM_FEATURE_SVE_BFSCALE` is defined to `1` if there is hardware
+support for the SVE BF16 vector multiplication extensions and if the
+associated ACLE intrinsics are available.
+
+See [Half-precision brain
+floating-point](#half-precision-brain-floating-point) for details
+of half-precision brain floating-point types.
+
 ### Cryptographic extensions
 
 #### “Crypto” extension
@@ -2665,6 +2676,7 @@ be found in [[BA]](#BA).
 | [`__ARM_FEATURE_SVE`](#scalable-vector-extension-sve) | Scalable Vector Extension (FEAT_SVE) | 1 |
 | [`__ARM_FEATURE_SVE_B16B16`](#non-widening-brain-16-bit-floating-point-support) | Non-widening brain 16-bit floating-point intrinsics (FEAT_SVE_B16B16) | 1 |
 | [`__ARM_FEATURE_SVE_BF16`](#brain-16-bit-floating-point-support) | SVE support for the 16-bit brain floating-point extension (FEAT_BF16) | 1 |
+| [`__ARM_FEATURE_SVE_BFSCALE`](#brain-16-bit-floating-point-vector-multiplication-support) | SVE support for the 16-bit brain floating-point vector multiplication extension (FEAT_SVE_BFSCALE) | 1 |
 | [`__ARM_FEATURE_SVE_BITS`](#scalable-vector-extension-sve) | The number of bits in an SVE vector, when known in advance | 256 |
 | [`__ARM_FEATURE_SVE_MATMUL_FP32`](#multiplication-of-32-bit-floating-point-matrices) | 32-bit floating-point matrix multiply extension (FEAT_F32MM) | 1 |
 | [`__ARM_FEATURE_SVE_MATMUL_FP64`](#multiplication-of-64-bit-floating-point-matrices) | 64-bit floating-point matrix multiply extension (FEAT_F64MM) | 1 |
@@ -11698,7 +11710,7 @@ Multi-vector floating-point fused multiply-add/subtract
     __arm_streaming __arm_inout("za");
 ```
 
-#### BFMLA. BFMLS, FMLA, FMLS (indexed)
+#### BFMLA, BFMLS, FMLA, FMLS (indexed)
 
 Multi-vector floating-point fused multiply-add/subtract
 
@@ -12791,6 +12803,29 @@ element types.
 svint8x4_t svuzpq[_s8_x4](svint8x4_t zn) __arm_streaming;
 ```
 
+#### BFMUL
+
+BFloat16 Multi-vector floating-point multiply
+
+``` c
+// Only if __ARM_FEATURE_SVE_BFSCALE != 0
+svbfloat16x2_t svmul[_bf16_x2](svbfloat16x2_t zd, svbfloat16x2_t zm) __arm_streaming;
+svbfloat16x2_t svmul[_single_bf16_x2](svbfloat16x2_t zd, svbfloat16_t zm) __arm_streaming;
+svbfloat16x4_t svmul[_bf16_x4](svbfloat16x4_t zd, svbfloat16x4_t zm) __arm_streaming;
+svbfloat16x4_t svmul[_single_bf16_x4](svbfloat16x4_t zd, svbfloat16_t zm) __arm_streaming;
+```
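The `svmul` forms added above apply BFMUL lane-wise across two or four vectors. As a rough reference for the per-lane arithmetic, here is a hypothetical scalar model, not ACLE API: a bfloat16 value is the high 16 bits of a binary32, so a lane multiply can be modelled by widening to `float`, multiplying, and narrowing with round-to-nearest-even. The hardware's rounding and exception behaviour follow FPCR, so this sketch is only an approximation.

``` c
#include <stdint.h>
#include <string.h>

/* Widen a bfloat16 bit pattern (high 16 bits of a binary32) to float. */
static float bf16_to_f32(uint16_t b) {
    uint32_t u = (uint32_t)b << 16;
    float f;
    memcpy(&f, &u, sizeof f);
    return f;
}

/* Narrow binary32 to bfloat16 with round-to-nearest-even.
   NaN payloads and FPCR modes are not handled in this sketch. */
static uint16_t f32_to_bf16(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    u += 0x7FFFu + ((u >> 16) & 1u);
    return (uint16_t)(u >> 16);
}

/* Hypothetical model of one BFMUL lane: zd[i] * zm[i]. */
static uint16_t bfmul_lane(uint16_t zd, uint16_t zm) {
    return f32_to_bf16(bf16_to_f32(zd) * bf16_to_f32(zm));
}
```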
+
+#### BFSCALE
+BFloat16 floating-point adjust exponent vectors.
+
+``` c
+// Only if __ARM_FEATURE_SVE_BFSCALE != 0
+svbfloat16x2_t svscale[_bf16_x2](svbfloat16x2_t zdn, svint16x2_t zm);
+svbfloat16x2_t svscale[_single_bf16_x2](svbfloat16x2_t zn, svint16_t zm);
+svbfloat16x4_t svscale[_bf16_x4](svbfloat16x4_t zdn, svint16x4_t zm);
+svbfloat16x4_t svscale[_single_bf16_x4](svbfloat16x4_t zn, svint16_t zm);
+```
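Per the commit message, BFSCALE adjusts the exponent: each result lane is `zdn[i] * 2^zm[i]`. A hypothetical scalar model, not ACLE API: since bfloat16 shares binary32's 8-bit exponent, the adjustment can be modelled with `ldexpf` on the widened value, and truncating back loses no mantissa bits while the result stays normal. Overflow, underflow, and FPCR-controlled behaviour are ignored in this sketch.

``` c
#include <math.h>
#include <stdint.h>
#include <string.h>

/* Widen a bfloat16 bit pattern (high 16 bits of a binary32) to float. */
static float bfs_bf16_to_f32(uint16_t b) {
    uint32_t u = (uint32_t)b << 16;
    float f;
    memcpy(&f, &u, sizeof f);
    return f;
}

/* Truncate binary32 back to bfloat16; exact for an exponent-only change
   on a normal value, since the mantissa bits are untouched. */
static uint16_t bfs_f32_to_bf16(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return (uint16_t)(u >> 16);
}

/* Hypothetical model of one BFSCALE lane: zdn[i] * 2^zm[i]. */
static uint16_t bfscale_lane(uint16_t zdn, int16_t zm) {
    return bfs_f32_to_bf16(ldexpf(bfs_bf16_to_f32(zdn), zm));
}
```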
+
 ### SME2.1 instruction intrinsics
 
 The specification for SME2.1 is in
