Add Float8Tensor #2463


Merged: 1 commit merged into main on Aug 6, 2025

Conversation

@jerryzh168 (Contributor) commented on Jun 30, 2025

Stacked PRs:

• Add Float8Tensor

Summary:

  • Added Float8Tensor, which uses fbgemm kernels and torch._scaled_mm:
    • per-row activation + per-row weight linear, calling the torch._scaled_mm op (for compatibility with SM 8.9)
    • per-tensor activation + per-tensor weight quantized linear, calling the torch._scaled_mm op (for compatibility with SM 8.9)
    • per-row activation + per-row weight bmm, calling the torch.ops.fbgemm.f8f8bf16_rowwise_batched kernel (only works on SM 9.0+); this can switch to PyTorch's batched scaled mm once it is supported: [RFC]: PyTorch Low-Precision GEMMs Public API pytorch#157950
  • Dynamic quantization kwargs are stored on the Float8Tensor directly
  • Added QuantizeTensorKwargs and QuantizeTensorToFloat8Kwargs to store keyword args for Float8Tensor.to_float8
  • Updated Float8DynamicActivationFloat8WeightConfig and Float8WeightOnlyConfig to use Float8Tensor (see the usage sketch after this list)
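
As a rough illustration of the workflow-level entry point, here is a minimal usage sketch based on the config-based quantize_ API; the toy model and the PerRow granularity choice are illustrative, and this assumes a CUDA device with fp8 support:

```python
import torch
from torchao.quantization import (
    Float8DynamicActivationFloat8WeightConfig,
    PerRow,
    quantize_,
)

# Toy model (illustrative); any nn.Module containing nn.Linear layers works.
model = torch.nn.Sequential(torch.nn.Linear(1024, 1024)).to(torch.bfloat16).cuda()

# Per-row activation + per-row weight float8 dynamic quantization: weights are
# replaced with Float8Tensor subclasses and linears dispatch to torch._scaled_mm.
quantize_(model, Float8DynamicActivationFloat8WeightConfig(granularity=PerRow()))

out = model(torch.randn(16, 1024, dtype=torch.bfloat16, device="cuda"))
```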

Test Plan:
python test/dtypes/test_affine_quantized_float.py
python test/quantization/quantize_/workflows/float8/test_float8_tensor.py

pytorch-bot (bot) commented on Jun 30, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2463

Note: Links to docs will display an error until the docs builds have completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 1 New Failure

As of commit b0c2cf3 with merge base b757fb9:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

jerryzh168 added a commit that referenced this pull request Jun 30, 2025
Summary:
Splits out the float8 rowwise quantized path (both activation and weight) of AQT into Float8RowwiseTensor

Next: could potentially incorporate the per-tensor activation path there as well
Next: we can split the per-tensor weight path into another Tensor as well, so we can deprecate the AQT path for float8

Test Plan:
python test/dtypes/test_affine_quantized_float.py
python test/quantization/quantize_/test_float8_rowwise_tensor.py

stack-info: PR: #2463, branch: jerryzh168/stack/9
jerryzh168 force-pushed the jerryzh168/stack/9 branch from da79207 to 5cae4d0 on June 30, 2025.
facebook-github-bot added the "CLA Signed" label on Jun 30, 2025.
jerryzh168 added the "topic: new feature" label on Jun 30, 2025.
Between July 2 and July 3, 2025, jerryzh168 force-pushed the branch several more times (5cae4d0 → 33ca58e → 897ec7e → 7897dcf → 99a1bb1 → 7e9f224 → 442bd6c), repeatedly switched the base branch between main and the stack branches jerryzh168/stack/4 and jerryzh168/stack/11, and changed the title from "Add Float8RowwiseTensor" to "Add Float8Tensor" on July 2, 2025.
Summary:
* Added Float8Tensor, which uses fbgemm kernels and torch._scaled_mm (see the kernel-level sketch after this list):
    * per-row activation + per-row weight linear, calling the torch._scaled_mm op (for compatibility with SM 8.9)
    * per-tensor activation + per-tensor weight quantized linear, calling the torch._scaled_mm op (for compatibility with SM 8.9)
    * per-row activation + per-row weight bmm, calling the torch.ops.fbgemm.f8f8bf16_rowwise_batched kernel (only works on SM 9.0+); this can switch to PyTorch's batched scaled mm once it is supported: pytorch/pytorch#157950
* Dynamic quantization kwargs are stored on the Float8Tensor directly
* Added QuantizeTensorKwargs and QuantizeTensorToFloat8Kwargs to store keyword args for Float8Tensor.to_float8
* Updated Float8DynamicActivationFloat8WeightConfig and Float8WeightOnlyConfig to use Float8Tensor
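
For the per-row linear paths above, the compute boils down to a rowwise-scaled torch._scaled_mm call. Below is a minimal sketch of that pattern, not the exact dispatch code in this PR: torch._scaled_mm is a private PyTorch op whose argument handling may differ across versions, and running it requires fp8-capable hardware (SM 8.9+). Shapes, names, and the e4m3 dtype choice here are illustrative.

```python
import torch

M, K, N = 16, 32, 64
a = torch.randn(M, K, device="cuda")  # activation
b = torch.randn(N, K, device="cuda")  # weight (out_features x in_features)

# Per-row scales: one scale per row of the activation and per row of the weight.
fp8_max = torch.finfo(torch.float8_e4m3fn).max
a_scale = a.abs().amax(dim=1, keepdim=True) / fp8_max  # (M, 1)
b_scale = b.abs().amax(dim=1, keepdim=True) / fp8_max  # (N, 1)

a_fp8 = (a / a_scale).to(torch.float8_e4m3fn)
b_fp8 = (b / b_scale).to(torch.float8_e4m3fn)

# scaled_mm wants the second operand column-major; rowwise scales must be
# float32 with shapes (M, 1) and (1, N).
out = torch._scaled_mm(
    a_fp8,
    b_fp8.t(),                    # (K, N), column-major
    scale_a=a_scale.float(),
    scale_b=b_scale.t().float(),  # (1, N)
    out_dtype=torch.bfloat16,
)
```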

Test Plan:
python test/dtypes/test_affine_quantized_float.py
python test/quantization/quantize_/workflows/float8/test_float8_tensor.py

stack-info: PR: #2463, branch: jerryzh168/stack/9
@clee2000 (Contributor) commented on Aug 6, 2025

/easycla

jerryzh168 merged commit 3b4bc98 into main on Aug 6, 2025 (18 of 20 checks passed).
jerryzh168 added a commit to jerryzh168/ao that referenced this pull request Aug 8, 2025
Summary:
We have recently updated our design for structuring tensor subclasses in torchao
to remove unnecessary abstractions, reduce indirection, and provide a structure that
aligns better with people's intuitive understanding of the different quantization use cases.
Examples using the new design: pytorch#2463, pytorch#2687

Test Plan:
check generated doc

Between Aug 8 and Aug 12, 2025, jerryzh168 added seven more commits to jerryzh168/ao referencing this pull request, each with the same message.