
Conversation


@bbeckca bbeckca commented Oct 15, 2025

Summary:
Moving the float8 CUTLASS semi-sparse layout into its own class:
https://github.com/pytorch/ao/blob/main/torchao/dtypes/floatx/cutlass_semi_sparse_layout.py

Differential Revision: D84467190

pytorch-bot bot commented Oct 15, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3182


❌ 8 New Failures

As of commit fc80e43 with merge base 30082cb:

NEW FAILURES - The following jobs have failed:


@meta-cla meta-cla bot added the CLA Signed label on Oct 15, 2025
needed for the rest of the system to understand the specific format that's adopted.
"""
OPAQUE = "opaque"
# todo: add semi-sparse
Author

@jerryzh168 It seems we may want to add a packing format for sparse. Is there a preference between adding it here or in a separate file for float8 (similar to int4)?
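
A rough sketch of what that addition could look like, based only on the snippet quoted above; the class name (PackingFormat) and the new member name (SEMI_SPARSE) are assumptions for illustration, not the repo's actual API:

```python
# Hypothetical sketch: extend the packing-format enum quoted above with a
# semi-sparse member. Only OPAQUE = "opaque" appears in the quoted snippet;
# everything else here is assumed.
from enum import Enum


class PackingFormat(str, Enum):
    """Packing format of the quantized weight, needed for the rest of the
    system to understand the specific format that's adopted."""

    OPAQUE = "opaque"
    # Proposed addition: 2:4 (semi-structured) sparse packing for float8 weights.
    SEMI_SPARSE = "semi_sparse"
```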

Contributor

Do we need a packing format if we have a separate config? It looks like the packing format mostly exists to support the different Int4WeightOnlyConfig kernel options (tinygemm, sparse marlin, etc.).

Author

Good point. I noticed that we seem to replace the dense weight with a quantized semi-sparse one in the transform. Would it make more sense to integrate Float8SemiSparseTensor here rather than gating on packing format as I proposed previously? cc @jerryzh168
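
A minimal sketch of that alternative, under stated assumptions: the import path, the transform signature, and the from_hp constructor name are placeholders, not the repo's actual API:

```python
# Hypothetical transform: swap the module's dense weight for the new tensor
# subclass directly, instead of gating on a packing format.
import torch.nn as nn

from torchao.quantization import Float8SemiSparseTensor  # assumed export path


def _float8_semi_sparse_transform(module: nn.Module, config) -> nn.Module:
    # Quantize + compress the high-precision weight to fp8 2:4 semi-sparse form.
    sparse_weight = Float8SemiSparseTensor.from_hp(module.weight)  # assumed ctor
    module.weight = nn.Parameter(sparse_weight, requires_grad=False)
    return module
```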

from torchao.testing.utils import skip_if_rocm
from torchao.utils import torch_version_at_least

BF16_ACT_CONFIG = Float8WeightOnlyConfig(
Contributor

I don't think this config makes sense; it's not something we support. From what I understand this is a bf16 activation + fp8 sparse weight? We only have kernel support for fp8 x fp8 + 2:4 sparse matmul, with no support for mixed input dtypes currently.

Author

You're right, it seems I should be mirroring test_fp8_cutlass_sparse (from test_sparse_api.py) instead, with the difference being that it would use the new flag/config exposing the tensor subclass being added?
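
A minimal sketch of that mirrored test, assuming the new config is exported as Float8DynamicActivationFloat8SemiSparseWeightConfig (the config name, import path, shapes, and tolerances are assumptions for illustration):

```python
import copy

import torch
from torchao.quantization import quantize_

# Assumed config name exposing the new Float8SemiSparseTensor subclass.
from torchao.quantization import Float8DynamicActivationFloat8SemiSparseWeightConfig


def test_fp8_cutlass_semi_sparse_linear():
    model = torch.nn.Linear(256, 512, dtype=torch.bfloat16, device="cuda")

    # Prune the weight to 2:4 sparsity up front (simple magnitude mask) so the
    # semi-structured kernel has a valid pattern to compress.
    with torch.no_grad():
        w = model.weight.view(-1, 4)
        keep = w.abs().topk(2, dim=-1).indices
        mask = torch.zeros_like(w).scatter_(-1, keep, 1.0)
        model.weight.copy_((w * mask).view_as(model.weight))

    reference = copy.deepcopy(model)

    # fp8 activation x fp8 2:4 sparse weight, matching the supported kernel path.
    quantize_(model, Float8DynamicActivationFloat8SemiSparseWeightConfig())

    x = torch.randn(32, 256, dtype=torch.bfloat16, device="cuda")

    # Loose tolerance: fp8 quantization changes the numerics vs. the bf16 baseline.
    torch.testing.assert_close(model(x), reference(x), rtol=1e-1, atol=1e-1)
```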

implements_torch_function = Float8SemiSparseTensor.implements_torch_function


@implements(aten.linear.default)
Contributor

@jcaip jcaip Oct 16, 2025

We'll also need to make sure mm and addmm are supported ops. The arg order is different from linear, but it should be the same logic.
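
A sketch of those follow-up registrations, reusing the implements decorator and aten from the snippet above; _float8_semi_sparse_mm is a hypothetical stand-in for whatever the aten.linear.default implementation dispatches to, and only the argument reordering relative to linear is the point:

```python
# Hypothetical follow-up: register mm/addmm next to linear. Arg order differs:
#   linear(input, weight, bias) -> input @ weight.t() + bias
#   mm(input, mat2)             -> input @ mat2
#   addmm(bias, input, mat2)    -> bias + input @ mat2
@implements(aten.mm.default)
def _(func, types, args, kwargs):
    input_tensor, weight_tensor = args
    return _float8_semi_sparse_mm(input_tensor, weight_tensor)


@implements(aten.addmm.default)
def _(func, types, args, kwargs):
    bias, input_tensor, weight_tensor = args
    return _float8_semi_sparse_mm(input_tensor, weight_tensor) + bias
```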

Author

Sounds good, I'm on board with that. Mind if I add those ops in a follow-up diff after this lands?
