@amilendra amilendra commented Sep 15, 2025




Thank you for submitting a pull request!

If this PR is about a bugfix:

Please use the bugfix label and make sure to go through the checklist below.

If this PR is about a proposal:

We look forward to evaluating your proposal and, if possible, to
making it part of the Arm C Language Extension (ACLE) specifications.

We would like to encourage you to read through the contribution
guidelines, in particular the section on submitting a proposal.

Please use the proposal label.

As for any pull request, please make sure to go through the below
checklist.

Checklist: (mark with X those which apply)

  • If an issue reporting the bug exists, I have mentioned it in the
    PR (do not bother creating the issue if all you want to do is
    fix the bug yourself).
  • I have added/updated the SPDX-FileCopyrightText lines at the top
    of any file I have edited. The format is
    SPDX-FileCopyrightText: Copyright {year} {entity or name} <{contact information}>
    (Please update existing copyright lines if applicable. You can
    specify year ranges with a hyphen, as in 2017-2019, and use
    commas to separate gaps, as in 2018-2020, 2022.)
  • I have updated the Copyright section of the sources of the
    specification I have edited (this will show up in the text
    rendered in the PDF and the other supported output formats). The
    format is the same as described in the previous item.
  • I have run the CI scripts (if applicable, as they might be
    tricky to set up on non-*nix machines). The sequence can be
    found in the contribution guidelines. Don't worry if you cannot
    run these scripts on your machine; your patch will be checked
    automatically by the Actions of the pull request.
  • I have added an item that describes the changes I have
    introduced in this PR to the section Changes for next release
    of the section Change Control/Document history of the document.
    Create Changes for next release if it does not exist. Note that
    changes that do not modify the content or rendering of the
    specifications (both HTML and PDF) do not need to be listed.
  • When modifying content and/or its rendering, I have checked the
    correctness of the result in the PDF output (please refer to the
    instructions on how to build the PDFs locally).
  • The variable draftversion is set to true in the YAML header
    of the sources of the specifications I have modified.
  • Please DO NOT add my GitHub profile to the list of contributors
    in the README page of the project.

FEAT_FPRCVT adds four new variants of each of the FCVTAS, FCVTAU, FCVTMS,
FCVTMU, FCVTNS, FCVTNU, FCVTPS, FCVTPU, FCVTZS, and FCVTZU instructions:
1) Half Precision to 32-bit
2) Half Precision to 64-bit
3) Single Precision to 64-bit
4) Double Precision to 32-bit

For the Single Precision to 64-bit and Double Precision to 32-bit variants,
this patch adds two new intrinsics that reduce to
- Single Precision to 64-bit : <INST> Dd,Sn
- Double Precision to 32-bit : <INST> Sd,Dn

The intrinsics for conversions from Half Precision are already defined.
However, they are documented as reducing to the incorrect instruction
format, <INST> Hd,Hn; this patch fixes them to
- Half Precision to 32-bit   : <INST> Sd,Hn
- Half Precision to 64-bit   : <INST> Dd,Hn
main/acle.md Outdated
svfloat32_t svmmmla[_f32_mf8](svfloat32_t zda, svmfloat8_t zn, svmfloat8_t zm);
```
#### FMMLA (widening, FP16 to FP32)

Contributor

This should be in different section as it doesn't operate on modal 8-bit type

Contributor Author

This is under the section SVE2 floating-point matrix multiply-accumulate instructions, which says nothing specific to the floating-point size, so is it okay as it is?

@amilendra amilendra force-pushed the 2025-acle-fmmla branch 2 times, most recently from 17749f2 to 4303ef0 Compare September 24, 2025 14:02
main/acle.md Outdated
##### Multiplication of modal 8-bit floating-point matrices

This section is in
[**Beta** state](#current-status-and-anticipated-changes) and might change or be
Contributor

Just in case, we are still on Alpha

main/acle.md Outdated
Modal 8-bit floating-point matrix multiply-accumulate to half-precision.
```c
// Only if (__ARM_FEATURE_SVE2 && __ARM_FEATURE_F8F16MM)
svfloat16_t svmmla[_f16_mf8]_fpm(svfloat16_t zda, svmfloat8_t zn, svmfloat8_t zm, fpm_t fpmd);
```
Contributor

s/fpmd/fpm/g

main/acle.md Outdated
Modal 8-bit floating-point matrix multiply-accumulate to single-precision.
```c
// Only if (__ARM_FEATURE_SVE2 && __ARM_FEATURE_F8F32MM)
svfloat32_t svmmla[_f32_mf8]_fpm(svfloat32_t zda, svmfloat8_t zn, svmfloat8_t zm, fpm_t fpmd);
```
Contributor

s/fpmd/fpm/g

Adds intrinsic support for the FMMLA matrix multiply-add widening instructions
introduced by the 2024 dpISA.

FEAT_F8F32MM: Neon/SVE2 FP8 to single-precision
FEAT_F8F16MM: Neon/SVE2 FP8 to half-precision
FEAT_SVE_F16F32MM: SVE half-precision to single-precision
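For background, each FMMLA performs a small widening matrix multiply-accumulate. The sketch below assumes the 2x4 by 4x2 into 2x2 accumulator shape used by BFMMLA (an assumption for illustration; the exact layout of these new forms is defined by the architecture). The narrow inputs are shown as float for portability:

```c
/* Hedged reference model of a widening matrix multiply-accumulate:
 * acc (2x2, wide) += a (2x4, narrow) * b (4x2, narrow), with each
 * element product widened before accumulation. The 2x4 x 4x2 shape
 * mirrors BFMMLA and is assumed here for the new widening FMMLA forms. */
static void fmmla_ref(float acc[2][2], const float a[2][4], const float b[4][2]) {
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            for (int k = 0; k < 4; k++)
                acc[i][j] += a[i][k] * b[k][j];
}
```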
Contributor

@Lukacma Lukacma left a comment

LGTM

@vhscampos vhscampos merged commit 56fb677 into ARM-software:main Oct 3, 2025
3 of 4 checks passed
@vhscampos
Member

@all-contributors please add @amilendra for content.

@allcontributors
Contributor

@vhscampos

I've put up a pull request to add @amilendra! 🎉

vhscampos pushed a commit that referenced this pull request Oct 3, 2025
Adds @amilendra as a contributor for content.

This was requested by vhscampos [in this
comment](#409 (comment))

[skip ci]

---------

Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
vhscampos added a commit that referenced this pull request Oct 3, 2025
…nsics" (#415)

Reverts #409

I've accidentally merged the PR before full approval.
@vhscampos
Member

I've mistakenly merged the PR too soon, before full approval. I am really sorry for that.

It's been reverted since.

float32x4_t vmlallttq_laneq_f32_mf8_fpm(float32x4_t vd, mfloat8x16_t vn, mfloat8x16_t vm, __builtin_constant_p(lane), fpm_t fpm) vd -> Vd.4S; vn -> Vn.16B; vm -> Vm.B; 0 <= lane <= 15 FMLALLBB Vd.4S, Vn.16B, Vm.B[lane] Vd.4S -> result A64

<SECTION> Matrix multiplication intrinsics from Armv9.6-A
float16x4_t vmmlaq_f16_mf8(float16x4_t r, mfloat8x16_t a, mfloat8x16_t b) r -> Vd.4H;a -> Vn.16B;b -> Vm.16B FMMLA Vd.4H, Vn.16B, Vm.16B Vd.4H -> result A64
Contributor

These intrinsics are missing the FPMR parameter, I think.


Should be float16x8_t?

Suggested change
float16x4_t vmmlaq_f16_mf8(float16x4_t r, mfloat8x16_t a, mfloat8x16_t b) r -> Vd.4H;a -> Vn.16B;b -> Vm.16B FMMLA Vd.4H, Vn.16B, Vm.16B Vd.4H -> result A64
float16x8_t vmmlaq_f16_mf8(float16x8_t r, mfloat8x16_t a, mfloat8x16_t b) r -> Vd.4H;a -> Vn.16B;b -> Vm.16B FMMLA Vd.4H, Vn.16B, Vm.16B Vd.4H -> result A64

vhscampos pushed a commit that referenced this pull request Oct 24, 2025
Adds intrinsic support for the FMMLA matrix multiply-add widening
instructions introduced by the 2024 dpISA.

FEAT_F8F32MM: Neon/SVE2 FP8 to single-precision
FEAT_F8F16MM: Neon/SVE2 FP8 to half-precision
FEAT_SVE_F16F32MM: SVE half-precision to single-precision

Relands PR #409 that was approved, mistakenly merged and subsequently
reverted.
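As background on the "modal" in modal 8-bit floating-point: the interpretation of an mfloat8 value depends on the mode configured through the fpm_t argument (which populates FPMR), selecting between the OFP8 formats E4M3 and E5M2. A minimal plain-C decoder for the E5M2 interpretation, written here purely for illustration:

```c
#include <math.h>

/* Decode one OFP8 E5M2 value: 1 sign bit, 5 exponent bits (bias 15),
 * 2 mantissa bits. Exponent 0 is subnormal; exponent 31 is inf/NaN. */
static double fp8_e5m2_to_double(unsigned char v) {
    int sign = (v >> 7) & 1;
    int exp  = (v >> 2) & 0x1F;
    int man  = v & 0x3;
    double mag;
    if (exp == 0)                                    /* subnormal */
        mag = ldexp((double)man / 4.0, -14);
    else if (exp == 31)                              /* inf or NaN */
        mag = man ? NAN : INFINITY;
    else                                             /* normal */
        mag = ldexp(1.0 + (double)man / 4.0, exp - 15);
    return sign ? -mag : mag;
}
```

Under this encoding, 0x3C decodes to 1.0 and 0x40 to 2.0; the E4M3 interpretation of the same byte would differ, which is why the fpm mode matters.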
Modal 8-bit floating-point matrix multiply-accumulate to half-precision.
```c
// Only if (__ARM_FEATURE_SVE2 && __ARM_FEATURE_F8F16MM)
svfloat16_t svmmla[_f16_mf8]_fpm(svfloat16_t zda, svmfloat8_t zn, svmfloat8_t zm, fpm_t fpm);
Contributor

Why does it need to have _f16_mf8? Does it conflict with other svmmla intrinsics?
Could it be only svmmla[_f16]_fpm?

Contributor Author

@amilendra amilendra Nov 14, 2025

No. I don't think it would conflict with existing intrinsics.
So I suppose similarly svmmla[_f32_mf8]_fpm can be svmmla[_f32]_fpm ?
@AlfieRichardsArm FYI and do you agree? I understand you already have a draft based on the merged #418. Would these changes cause any problems with that?

Contributor

Hmm, this will require a bit of reworking, but it is certainly doable. It will require special-casing some of our logic, as currently, if a set of intrinsics (same mnemonic) differs by two argument types, we put both in the suffix.

It does seem inconsistent with other intrinsics (like svfloat32_t svmlalltt[_f32_mf8]_fpm) so I would be gently against the change, but not enough to strongly oppose it if @CarolineConcatto prefers it.

Contributor

I will note, though, that I would need a decision quite quickly, as support for this is quite urgent and we would like it to be in GCC 16, which closes to contributions imminently.

Contributor

@Lukacma Lukacma Nov 14, 2025

I guess since we do it that way everywhere else, this ship has sailed; we should stay consistent and keep both types.
