Update torchao requirement from ==0.15.* to ==0.17.* in /requirements/full #7470

Closed
dependabot[bot] wants to merge 1 commit into dev from dependabot/pip/requirements/full/dev/torchao-eq-0.17.star

Conversation


dependabot[bot] commented on behalf of GitHub on Apr 6, 2026

Updates the requirements on torchao to permit the latest version.

Release notes

Sourced from torchao's releases.

v0.17.0

Highlights

We are excited to announce the 0.17 release of torchao! This release adds support for CuteDSL MXFP8 MoE kernels, per-head FP8 quantized low precision attention, ABI stability, and more!

CuteDSL MXFP8 MoE Kernels

We added a new CuteDSL MXFP8 quantization kernel for 3d expert weights that writes scale factors directly to the blocked layout for tensor cores: pytorch/ao#4090

  • Used for scaling along dim1 in the backward pass of MoE training with grouped GEMMs.
  • ~12% speedup over the previous two-kernel “quantize then scale layout transformation” approach! (A conceptual sketch follows below.)
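
For intuition, the following is a minimal pure-PyTorch sketch of the quantization step being fused here: MXFP8 groups 32 consecutive elements along the scaled dimension and assigns each group a shared power-of-two (E8M0-style) scale. The function name and the 2D simplification of the 3d expert weights are illustrative assumptions, and the blocked tensor-core layout for the scales, the part the CuteDSL kernel writes directly, is omitted.

```python
import torch

BLOCK = 32  # MX spec: 32 elements share one power-of-two (E8M0) scale
FP8_MAX = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3

def mxfp8_quantize_dim1_reference(w: torch.Tensor):
    """Hypothetical, unoptimized reference for MXFP8 quantization along dim1."""
    rows, cols = w.shape
    assert cols % BLOCK == 0, "dim1 must be a multiple of the MX block size"
    blocks = w.reshape(rows, cols // BLOCK, BLOCK)
    # Per-block absolute maximum along the scaled dimension.
    amax = blocks.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12)
    # Round the required scale up to the next power of two (E8M0 has no mantissa).
    scale = torch.exp2(torch.ceil(torch.log2(amax / FP8_MAX)))
    q = (blocks / scale).to(torch.float8_e4m3fn)
    return q.reshape(rows, cols), scale.squeeze(-1)
```

The ~12% figure above comes from fusing this computation with the scale-layout transformation into a single kernel, instead of quantizing first and reshuffling the scales in a second pass.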

Per-Head FP8 Quantized Low Precision Attention

We added a new API for per-head FP8 quantized attention with FA3 as the backend (pytorch/ao#3959 and pytorch/ao#3857).

  • Users can use the elementary blocks as direct replacements for `F.scaled_dot_product_attention`, or use the high-level wrapper, which replaces all F.SDPA calls within a module with the low precision attention variant.
  • Running torch.compile on a wrapped module will enable RoPE fusion where appropriate.
  • Results show a 1.84x speedup on Wan2.1-T2V-1.3B, a 1.23x speedup on LLaMA 3 prefill with high sequence lengths (131k), and a 1.07x speedup on flux.1-schnell with a 2048x2048 image size.

Example Usage of Direct Replacement:

```python
from torchao.prototype.attention.fp8_fa3 import fp8_fa3_sdpa, fp8_fa3_rope_sdpa

out = fp8_fa3_sdpa(q, k, v)
```
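
For context, a full invocation might look like the sketch below. The tensor shapes, dtype, and device are illustrative assumptions (FA3 targets Hopper-class GPUs), not values taken from the release notes.

```python
import torch
from torchao.prototype.attention.fp8_fa3 import fp8_fa3_sdpa

# Assumed layout: (batch, num_heads, seq_len, head_dim), matching
# F.scaled_dot_product_attention; these shapes are made up for illustration.
q = torch.randn(2, 16, 4096, 128, device="cuda", dtype=torch.bfloat16)
k = torch.randn_like(q)
v = torch.randn_like(q)

out = fp8_fa3_sdpa(q, k, v)  # drop-in for F.scaled_dot_product_attention
```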

Example Usage of Wrapper:

```python
from torchao.prototype.attention import (
    AttentionBackend,
    LowPrecisionAttentionConfig,
    apply_low_precision_attention,
)

# Instantiate any nn.Module()
model = MyModel()

# Simple SDPA replacement
config = LowPrecisionAttentionConfig(backend=AttentionBackend.FP8_FA3)
model = apply_low_precision_attention(model, config)

# Flash activation is handled internally by the wrapper
output = model(inputs)

# torch.compile will enable RoPE fusion
model = torch.compile(model)
```

PyTorch ABI stability

... (truncated)

Commits
  • 02105d4 [mxfp8 training] add cutedsl kernel for mxfp8 quantation along dim0 (#4156)
  • d17c61b clean up unused rocm references in test_training.py (#4170)
  • 136cacb Remove tensor parallel test for v1 of Int8DynamicActivationInt8WeightConfig (...
  • 8fca033 [xpu][test] Skip WIP config for Intel GPU in test_safetensors_support.py and ...
  • 6a2f643 Fix rocm CI (#4167)
  • a927712 Move bitpacking.py to prototype and add uintx_utils.py (#4152)
  • 9ea1e67 Skip test_fsdp2 if PyTorch version is 2.11.0 or higher (#4168)
  • 3330d29 [reland][xpu] INT8 quantization on Intel XPU (#3782)
  • ac0b820 Fix test_sparse_api failures for builds without hipSPARSELt (#4125) (#4125)
  • 1f90b4d Delete deprecated PackedLinearInt8DynamicActivationIntxWeightLayout and relat...
  • Additional commits viewable in compare view

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Updates the requirements on [torchao](https://github.com/pytorch/ao) to permit the latest version.
- [Release notes](https://github.com/pytorch/ao/releases)
- [Commits](pytorch/ao@v0.15.0-rc1...v0.17.0)

---
updated-dependencies:
- dependency-name: torchao
  dependency-version: 0.17.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] added the dependencies and python labels on Apr 6, 2026
oobabooga closed this on Apr 15, 2026

dependabot[bot] commented on behalf of GitHub on Apr 15, 2026

OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting @dependabot ignore this major version or @dependabot ignore this minor version. You can also ignore all major, minor, or patch releases for a dependency by adding an ignore condition with the desired update_types to your config file.

If you change your mind, just re-open this PR and I'll resolve any conflicts on it.

dependabot[bot] deleted the dependabot/pip/requirements/full/dev/torchao-eq-0.17.star branch on April 15, 2026 at 01:42

Labels

  • dependencies: Pull requests that update a dependency file
  • python: Pull requests that update python code
