
Commit 8e36d16
Add floating point matrix multiply-add widening intrinsics
Adds intrinsic support for the FMMLA matrix multiply instructions introduced by the 2024 dpISA:

- FEAT_F8F32MM: Neon FP8 to single-precision
- FEAT_F8F16MM: Neon FP8 to half-precision
- FEAT_SVE_F16F32MM: SVE half-precision to single-precision
- FEAT_SSVE_F8F32MM: SVE FP8 to single-precision
- FEAT_SSVE_F8F16MM: SVE FP8 to half-precision
1 parent a4bc412 commit 8e36d16

File tree

4 files changed (+69, -0 lines)

main/acle.md

Lines changed: 52 additions & 0 deletions
@@ -466,6 +466,8 @@ Armv8.4-A [[ARMARMv84]](#ARMARMv84). Support is added for the Dot Product intrin
  * Added feature test macro for FEAT_SSVE_FEXPA.
  * Added feature test macro for FEAT_CSSC.
  * Added support for FEAT_FPRCVT intrinsics and `__ARM_FEATURE_FPRCVT`.
+ * Added support for modal 8-bit floating point matrix multiply-add widening intrinsics.
+ * Added support for 16-bit floating point matrix multiply-add widening intrinsics.

  ### References

@@ -2354,6 +2356,29 @@ is hardware support for the SVE forms of these instructions and if the
  associated ACLE intrinsics are available. This implies that
  `__ARM_FEATURE_MATMUL_INT8` and `__ARM_FEATURE_SVE` are both nonzero.

+ ##### Multiplication of 8-bit floating-point matrices
+
+ This section is in
+ [**Beta** state](#current-status-and-anticipated-changes) and might change or be
+ extended in the future.
+
+ `__ARM_FEATURE_SSVE_F8F32MM` is defined to `1` if there is hardware support
+ for the SVE 8-bit floating-point matrix multiply (FEAT_SSVE_F8F32MM)
+ instructions and if the associated ACLE intrinsics are available.
+ This implies that `__ARM_FEATURE_SSVE_FP8DOT4` is nonzero.
+
+ `__ARM_FEATURE_SSVE_F8F16MM` is defined to `1` if there is hardware support
+ for the SVE 8-bit floating-point matrix multiply (FEAT_SSVE_F8F16MM)
+ instructions and if the associated ACLE intrinsics are available.
+ This implies that `__ARM_FEATURE_SSVE_FP8DOT4` and `__ARM_FEATURE_SSVE_F8F32MM`
+ are nonzero.
+
+ ##### Multiplication of 16-bit floating-point matrices
+
+ `__ARM_FEATURE_SVE_F16F32MM` is defined to `1` if there is hardware support
+ for the SVE 16-bit floating-point to 32-bit floating-point matrix multiply and add
+ (FEAT_SVE_F16F32MM) instructions and if the associated ACLE intrinsics are available.
+ This implies that `__ARM_FEATURE_SVE2p1` is nonzero.

  ##### Multiplication of 32-bit floating-point matrices

  `__ARM_FEATURE_SVE_MATMUL_FP32` is defined to `1` if there is hardware support
@@ -2646,6 +2671,9 @@ be found in [[BA]](#BA).
  | [`__ARM_FEATURE_SVE_BITS`](#scalable-vector-extension-sve) | The number of bits in an SVE vector, when known in advance | 256 |
  | [`__ARM_FEATURE_SVE_MATMUL_FP32`](#multiplication-of-32-bit-floating-point-matrices) | 32-bit floating-point matrix multiply extension (FEAT_F32MM) | 1 |
  | [`__ARM_FEATURE_SVE_MATMUL_FP64`](#multiplication-of-64-bit-floating-point-matrices) | 64-bit floating-point matrix multiply extension (FEAT_F64MM) | 1 |
+ | [`__ARM_FEATURE_SVE_F16F32MM`](#multiplication-of-16-bit-floating-point-matrices) | 16-bit floating-point matrix multiply extension (FEAT_SVE_F16F32MM) | 1 |
+ | [`__ARM_FEATURE_SSVE_F8F16MM`](#multiplication-of-8-bit-floating-point-matrices) | Modal 8-bit floating-point matrix multiply extension (FEAT_SSVE_F8F16MM) | 1 |
+ | [`__ARM_FEATURE_SSVE_F8F32MM`](#multiplication-of-8-bit-floating-point-matrices) | Modal 8-bit floating-point matrix multiply extension (FEAT_SSVE_F8F32MM) | 1 |
  | [`__ARM_FEATURE_SVE_MATMUL_INT8`](#multiplication-of-8-bit-integer-matrices) | SVE support for the integer matrix multiply extension (FEAT_I8MM) | 1 |
  | [`__ARM_FEATURE_SVE_PREDICATE_OPERATORS`](#scalable-vector-extension-sve) | Level of support for C and C++ operators on SVE predicate types | 1 |
  | [`__ARM_FEATURE_SVE_VECTOR_OPERATORS`](#scalable-vector-extension-sve) | Level of support for C and C++ operators on SVE vector types | 1 |
@@ -13676,6 +13704,30 @@ Single-precision convert, narrow, and interleave to 8-bit floating-point (top an
      uint64_t imm0_15, fpm_t fpm);
  ```

+
+ #### FMMLA (widening, FP8 to FP16)
+
+ 8-bit floating-point matrix multiply-add to half-precision.
+ ```c
+ // Only if (__ARM_FEATURE_SVE2 && __ARM_FEATURE_F8F16MM) || __ARM_FEATURE_SSVE_F8F16MM
+ svfloat16_t svmmla[_f16_mf8](svfloat16_t zda, svmfloat8_t zn, svmfloat8_t zm);
+ ```
+
+ #### FMMLA (widening, FP8 to FP32)
+
+ 8-bit floating-point matrix multiply-add to single-precision.
+ ```c
+ // Only if (__ARM_FEATURE_SVE2 && __ARM_FEATURE_F8F32MM) || __ARM_FEATURE_SSVE_F8F32MM
+ svfloat32_t svmmla[_f32_mf8](svfloat32_t zda, svmfloat8_t zn, svmfloat8_t zm);
+ ```
+
+ #### FMMLA (widening, FP16 to FP32)
+
+ 16-bit floating-point matrix multiply-add to single-precision.
+ ```c
+ // Only if (__ARM_FEATURE_SVE2 && __ARM_FEATURE_SVE_F16F32MM) || __ARM_FEATURE_SME_FA64
+ svfloat32_t svmmla[_f32_f16](svfloat32_t zda, svfloat16_t zn, svfloat16_t zm);
+ ```

  ### SME2 modal 8-bit floating-point intrinsics

  The intrinsics in this section are defined by the header file

neon_intrinsics/advsimd.md

Lines changed: 11 additions & 0 deletions
@@ -6202,3 +6202,14 @@ The intrinsics in this section are guarded by the macro ``__ARM_NEON``.
  | <code>float32x4_t <a href="https://developer.arm.com/architectures/instruction-sets/intrinsics/vmlalltbq_laneq_f32_mf8_fpm" target="_blank">vmlalltbq_laneq_f32_mf8_fpm</a>(<br>&nbsp;&nbsp;&nbsp;&nbsp; float32x4_t vd,<br>&nbsp;&nbsp;&nbsp;&nbsp; mfloat8x16_t vn,<br>&nbsp;&nbsp;&nbsp;&nbsp; mfloat8x16_t vm,<br>&nbsp;&nbsp;&nbsp;&nbsp; const int lane,<br>&nbsp;&nbsp;&nbsp;&nbsp; fpm_t fpm)</code> | `vd -> Vd.4S`<br>`vn -> Vn.16B`<br>`vm -> Vm.B`<br>`0 <= lane <= 15` | `FMLALLTB Vd.4S, Vn.16B, Vm.B[lane]` | `Vd.4S -> result` | `A64` |
  | <code>float32x4_t <a href="https://developer.arm.com/architectures/instruction-sets/intrinsics/vmlallttq_lane_f32_mf8_fpm" target="_blank">vmlallttq_lane_f32_mf8_fpm</a>(<br>&nbsp;&nbsp;&nbsp;&nbsp; float32x4_t vd,<br>&nbsp;&nbsp;&nbsp;&nbsp; mfloat8x16_t vn,<br>&nbsp;&nbsp;&nbsp;&nbsp; mfloat8x8_t vm,<br>&nbsp;&nbsp;&nbsp;&nbsp; const int lane,<br>&nbsp;&nbsp;&nbsp;&nbsp; fpm_t fpm)</code> | `vd -> Vd.4S`<br>`vn -> Vn.16B`<br>`vm -> Vm.B`<br>`0 <= lane <= 7` | `FMLALLTT Vd.4S, Vn.16B, Vm.B[lane]` | `Vd.4S -> result` | `A64` |
  | <code>float32x4_t <a href="https://developer.arm.com/architectures/instruction-sets/intrinsics/vmlallttq_laneq_f32_mf8_fpm" target="_blank">vmlallttq_laneq_f32_mf8_fpm</a>(<br>&nbsp;&nbsp;&nbsp;&nbsp; float32x4_t vd,<br>&nbsp;&nbsp;&nbsp;&nbsp; mfloat8x16_t vn,<br>&nbsp;&nbsp;&nbsp;&nbsp; mfloat8x16_t vm,<br>&nbsp;&nbsp;&nbsp;&nbsp; const int lane,<br>&nbsp;&nbsp;&nbsp;&nbsp; fpm_t fpm)</code> | `vd -> Vd.4S`<br>`vn -> Vn.16B`<br>`vm -> Vm.B`<br>`0 <= lane <= 15` | `FMLALLTT Vd.4S, Vn.16B, Vm.B[lane]` | `Vd.4S -> result` | `A64` |
+
+ ## Matrix multiplication intrinsics from Armv9.6-A
+
+ ### Vector arithmetic
+
+ #### Matrix multiply
+
+ | Intrinsic | Argument preparation | AArch64 Instruction | Result | Supported architectures |
+ |-----------|----------------------|---------------------|--------|--------------------------|
+ | <code>float16x4_t <a href="https://developer.arm.com/architectures/instruction-sets/intrinsics/vmmlaq_f16" target="_blank">vmmlaq_f16</a>(<br>&nbsp;&nbsp;&nbsp;&nbsp; float16x4_t r,<br>&nbsp;&nbsp;&nbsp;&nbsp; mfloat8x16_t a,<br>&nbsp;&nbsp;&nbsp;&nbsp; mfloat8x16_t b)</code> | `r -> Vd.4H`<br>`a -> Vn.16B`<br>`b -> Vm.16B` | `FMMLA Vd.4H, Vn.16B, Vm.16B` | `Vd.4H -> result` | `A64` |
+ | <code>float32x4_t <a href="https://developer.arm.com/architectures/instruction-sets/intrinsics/vmmlaq_f32" target="_blank">vmmlaq_f32</a>(<br>&nbsp;&nbsp;&nbsp;&nbsp; float32x4_t r,<br>&nbsp;&nbsp;&nbsp;&nbsp; mfloat8x16_t a,<br>&nbsp;&nbsp;&nbsp;&nbsp; mfloat8x16_t b)</code> | `r -> Vd.4S`<br>`a -> Vn.16B`<br>`b -> Vm.16B` | `FMMLA Vd.4S, Vn.16B, Vm.16B` | `Vd.4S -> result` | `A64` |

tools/intrinsic_db/advsimd.csv

Lines changed: 4 additions & 0 deletions
@@ -4830,3 +4830,7 @@ float32x4_t vmlalltbq_lane_f32_mf8_fpm(float32x4_t vd, mfloat8x16_t vn, mfloat8x
  float32x4_t vmlalltbq_laneq_f32_mf8_fpm(float32x4_t vd, mfloat8x16_t vn, mfloat8x16_t vm, __builtin_constant_p(lane), fpm_t fpm) vd -> Vd.4S;vn -> Vn.16B; vm -> Vm.B; 0 <= lane <= 15 FMLALLTB Vd.4S, Vn.16B, Vm.B[lane] Vd.4S -> result A64
  float32x4_t vmlallttq_lane_f32_mf8_fpm(float32x4_t vd, mfloat8x16_t vn, mfloat8x8_t vm, __builtin_constant_p(lane), fpm_t fpm) vd -> Vd.4S;vn -> Vn.16B; vm -> Vm.B; 0 <= lane <= 7 FMLALLTT Vd.4S, Vn.16B, Vm.B[lane] Vd.4S -> result A64
  float32x4_t vmlallttq_laneq_f32_mf8_fpm(float32x4_t vd, mfloat8x16_t vn, mfloat8x16_t vm, __builtin_constant_p(lane), fpm_t fpm) vd -> Vd.4S;vn -> Vn.16B; vm -> Vm.B; 0 <= lane <= 15 FMLALLTT Vd.4S, Vn.16B, Vm.B[lane] Vd.4S -> result A64
+
+ <SECTION> Matrix multiplication intrinsics from Armv9.6-A
+ float16x4_t vmmlaq_f16(float16x4_t r, mfloat8x16_t a, mfloat8x16_t b) r -> Vd.4H;a -> Vn.16B;b -> Vm.16B FMMLA Vd.4H, Vn.16B, Vm.16B Vd.4H -> result A64
+ float32x4_t vmmlaq_f32(float32x4_t r, mfloat8x16_t a, mfloat8x16_t b) r -> Vd.4S;a -> Vn.16B;b -> Vm.16B FMMLA Vd.4S, Vn.16B, Vm.16B Vd.4S -> result A64

tools/intrinsic_db/advsimd_classification.csv

Lines changed: 2 additions & 0 deletions
@@ -4717,3 +4717,5 @@ vmlalltbq_lane_f32_mf8_fpm Vector arithmetic|Multiply|Multiply-accumulate and wi
  vmlalltbq_laneq_f32_mf8_fpm Vector arithmetic|Multiply|Multiply-accumulate and widen
  vmlallttq_lane_f32_mf8_fpm Vector arithmetic|Multiply|Multiply-accumulate and widen
  vmlallttq_laneq_f32_mf8_fpm Vector arithmetic|Multiply|Multiply-accumulate and widen
+ vmmlaq_f16 Vector arithmetic|Matrix multiply
+ vmmlaq_f32 Vector arithmetic|Matrix multiply
