chore(deps): update dependency accelerate to v1.12.0 #469

Open
konflux-internal-p02[bot] wants to merge 1 commit into rhoai-3.4-ea.1 from konflux/mintmaker/rhoai-3.4-ea.1/accelerate-1.x

Conversation

@konflux-internal-p02

Note: This PR body was truncated due to platform limits.

This PR contains the following updates:

| Package | Change | Age | Confidence |
| --- | --- | --- | --- |
| accelerate | `==1.0.1` -> `==1.12.0` | age | confidence |

Release Notes

huggingface/accelerate (accelerate)

v1.12.0: Deepspeed Ulysses/ALST

Compare Source

Deepspeed Ulysses/ALST integration

Deepspeed Ulysses/ALST is an efficient way of training on long sequences by employing sequence parallelism and attention head parallelism. You can learn more about this technology in this paper https://arxiv.org/abs/2506.13996 or this deepspeed tutorial https://www.deepspeed.ai/tutorials/ulysses-alst-sequence-parallelism/.


To enable DeepSpeed Ulysses, you first need to create a ParallelismConfig and set the SP-related args:

parallelism_config = ParallelismConfig(
    sp_backend="deepspeed",
    sp_size=2,
    sp_handler=DeepSpeedSequenceParallelConfig(...),
)

Then, you need to make sure to compute the correct loss as described in our docs:

...
# gather per-rank losses and valid-token counts across the sequence-parallel group
losses_per_rank = torch.distributed.nn.functional.all_gather(loss, group=sp_group)
good_tokens = (shift_labels != -100).view(-1).sum()
good_tokens_per_rank = torch.distributed.nn.functional.all_gather(good_tokens, group=sp_group)
# weight each rank's loss by its number of non-ignored tokens
total_loss = sum(
    losses_per_rank[rank] * good_tokens_per_rank[rank]
    for rank in range(sp_world_size)
    if good_tokens_per_rank[rank] > 0
)
total_good_tokens = sum(good_tokens_per_rank)
loss = total_loss / max(total_good_tokens, 1)

Thanks @S1ro1 for starting this work and @stas00 for finishing it. Also thanks @kashif for adding docs and reviewing/testing this PR!

This feature will also be available in the HF Trainer thanks to this PR from @stas00: huggingface/transformers#41832

Minor changes

New Contributors

Full Changelog: huggingface/accelerate@v1.11.0...v1.12.0

v1.11.0: TE MXFP8, FP16/BF16 with MPS, Python 3.10

Compare Source

TE MXFP8 support

We've added support for MXFP8 in our TransformerEngine integration. To use it, you need to set use_mxfp8_block_scaling in fp8_config. See the NVIDIA docs [here](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/examples/fp8_primer.html#MXFP8-and-block-scaling).
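
The notes above don't show the wiring; as a rough, unverified sketch, assuming the flag is exposed through accelerate's TransformerEngine FP8 recipe kwargs (the TERecipeKwargs class name and the kwargs_handlers placement are assumptions, and model is a placeholder):

from accelerate import Accelerator
from accelerate.utils import TERecipeKwargs

# use_mxfp8_block_scaling comes from the release notes; the class and keyword placement
# are assumptions, so check the accelerate FP8 docs for the exact API
fp8_config = TERecipeKwargs(use_mxfp8_block_scaling=True)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[fp8_config])

model = ...  # your torch.nn.Module (placeholder)
model = accelerator.prepare(model)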

FP16/BF16 Training for MPS devices

BF16 and FP16 support for MPS devices is finally here. You can now pass mixed_precision="fp16" or mixed_precision="bf16" when training on a Mac (fp16 requires torch 2.8 and bf16 requires torch 2.6).
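
For example, a minimal sketch of enabling bf16 on a Mac (the model, optimizer, and dataloader names are placeholders):

from accelerate import Accelerator

# bf16 on the MPS backend needs torch >= 2.6; use mixed_precision="fp16" with torch >= 2.8
accelerator = Accelerator(mixed_precision="bf16")
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)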

FSDP updates

The following PRs add support for ignored_params and no_sync() for FSDPv2, respectively:

Mixed precision can now be passed as a dtype string via the accelerate CLI flag or fsdp_config in the accelerate config file:

Nd-parallel updates

Some minor updates concerning nd-parallelism.

Bump to Python 3.10

We've dropped support for Python 3.9 as it reached EOL in October.

Lots of minor fixes:

New Contributors

Full Changelog: huggingface/accelerate@v1.10.1...v1.11.0

v1.10.1: Patchfix

Compare Source

Full Changelog: huggingface/accelerate@v1.10.0...v1.10.1

v1.10.0: N-D Parallelism

Compare Source

N-D Parallelism

Training large models across multiple GPUs can be complex, especially when combining different parallelism strategies (e.g., TP, CP, DP). To simplify this process, we've collaborated with Axolotl to introduce an easy-to-use integration that allows you to apply any combination of parallelism strategies directly in your training script. Just pass a ParallelismConfig specifying the size of each parallelism type; it's that simple.
Learn more about how it works in our latest blogpost.

parallelism_config = ParallelismConfig(
    dp_shard_size=2,
    dp_replicate_size=2,
    cp_size=2,
    tp_size=2,
)
accelerator = Accelerator(
    parallelism_config=parallelism_config,
    ...
)
model = AutoModelForCausalLM.from_pretrained("your-model-name", device_mesh=accelerator.torch_device_mesh)
model = accelerator.prepare(model)

FSDP improvements

We've fixed the ignored modules attribute. With this, it is now possible to train a PEFT model whose MoE layers contain q_proj and v_proj parameters. This is especially important for fine-tuning the gpt-oss model.

Minor improvements

New Contributors

Full Changelog: huggingface/accelerate@v1.9.0...v1.10.0

v1.9.0: Trackio support, Model loading speedup, Minor distributed improvements

Compare Source

Trackio tracker support

We've added support for trackio, a lightweight, 💯 free experiment tracking Python library built on top of 🤗 Datasets and Spaces.


Main features are:

  • Local-first design: dashboard runs locally by default. You can also host it on Spaces by specifying a space_id.
  • Persists logs locally (or in a private Hugging Face Dataset)
  • Visualize experiments with a Gradio dashboard locally (or on Hugging Face Spaces)
  • Everything here, including hosting on Hugging Face Spaces, is free!

To use it with accelerate, you need to set log_with and initialize the trackers:

accelerator = Accelerator(log_with="trackio")
config = {"learning_rate": 0.001, "batch_size": 32}

# pass init_kwargs in order to host the dashboard on Spaces
init_kwargs = {"trackio": {"space_id": "hf_username/space_name"}}
accelerator.init_trackers("example_project", config=config, init_kwargs=init_kwargs)

Thanks @pcuenca for the integration!

Model loading speedup when relying on set_module_tensor_to_device

Setting a tensor while clearing the cache is very slow, so we added the clear_device option to disable it.
Another small optimization is using non_blocking everywhere and syncing just before returning control to the user. This makes loading slightly faster.

FSDP, DeepSpeed, FP8 minor improvements
🚨🚨🚨 Breaking changes 🚨🚨🚨

find_executable_batch_size() no longer halves the batch size after every OOM. Instead, we multiply the batch size by 0.9. This should help users avoid wasting GPU capacity.
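
For reference, a minimal sketch of the decorator pattern this change affects (get_dataloader and the loop body are hypothetical placeholders):

from accelerate import Accelerator
from accelerate.utils import find_executable_batch_size

accelerator = Accelerator()

@find_executable_batch_size(starting_batch_size=128)
def inner_training_loop(batch_size):
    # on a CUDA OOM the decorator now retries with batch_size * 0.9 instead of halving it
    accelerator.free_memory()
    dataloader = get_dataloader(batch_size)  # hypothetical helper
    for batch in dataloader:
        ...

inner_training_loop()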

What's Changed
New Contributors

Full Changelog: huggingface/accelerate@v1.8.1...v1.9.0

v1.8.1: Patchfix

Compare Source

Full Changelog: huggingface/accelerate@v1.8.0...v1.8.1

v1.8.0: FSDPv2 + FP8, Regional Compilation for DeepSpeed, Faster Distributed Training on Intel CPUs, ipex.optimize deprecation

Compare Source

FSDPv2 refactor + FP8 support

We've simplified how to prepare FSDPv2 models, as there were too many ways to compose FSDP2 with other features (e.g., FP8, torch.compile, activation checkpointing, etc.). Although the setup is now more restrictive, it leads to fewer errors and a more performant user experience. We’ve also added support for FP8. You can read about the results here. Thanks to @​S1ro1 for this contribution!

Faster Distributed Training on Intel CPUs

We updated the CCL_WORKER_COUNT variable and added KMP parameters for Intel CPU users. This significantly improves distributed training performance (e.g., Tensor Parallelism), with up to a 40% speed-up on Intel 4th Gen Xeon when training transformer TP models.

Regional Compilation for DeepSpeed

We added support for regional compilation with the DeepSpeed engine. DeepSpeed’s .compile() modifies models in-place using torch.nn.Module.compile(...), rather than the out-of-place torch.compile(...), so we had to account for that. Thanks @​IlyasMoutawwakil for this feature!

ipex.optimize deprecation

ipex.optimize is being deprecated. Most optimizations have been upstreamed to PyTorch, and future improvements will land there directly. For users without PyTorch 2.8, we’ll continue to rely on IPEX for now.

Better XPU Support

We've greatly expanded and stabilized support for Intel XPUs:

Trackers

We've added support for SwanLab as an experiment tracking backend. Huge thanks to @ShaohonChen for this contribution! We also deferred all tracker initializations to prevent premature setup of distributed environments.
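
A minimal sketch, assuming SwanLab is selected by name through log_with like the other built-in trackers:

from accelerate import Accelerator

accelerator = Accelerator(log_with="swanlab")  # tracker name is an assumption here
accelerator.init_trackers("example_project", config={"learning_rate": 1e-3, "batch_size": 32})
accelerator.log({"train_loss": 0.42}, step=1)
accelerator.end_training()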

What's Changed
New Contributors

Full Changelog: huggingface/accelerate@v1.7.0...v1.8.0

v1.7.0: Regional compilation, Layerwise casting hook, FSDPv2 + QLoRA

Compare Source

Regional compilation

Instead of compiling the entire model at once, regional compilation targets repeated blocks (such as decoder layers) first. This allows the compiler to cache and reuse optimized code for subsequent blocks, significantly reducing the cold-start compilation time typically seen during the first inference. Thanks @IlyasMoutawwakil for the feature! You can view the full benchmark here, and check out our updated compilation guide for more details!


To enable this feature, set use_regional_compilation=True in the TorchDynamoPlugin configuration.

# Configure the compilation backend
dynamo_plugin = TorchDynamoPlugin(
    use_regional_compilation=True,
    ... # other parameters
)

# Initialize accelerator with the plugin
accelerator = Accelerator(dynamo_plugin=dynamo_plugin)

# This will apply compile_regions to your model
model = accelerator.prepare(model)

Layerwise casting hook

We've introduced a new hook that enables per-layer upcasting and downcasting (e.g., for Linear layers) during inference. This allows users to run models with separate storage and compute dtypes, resulting in memory savings. The concept was first implemented in diffusers, where downcasting models to FP8 proved effective without major quality degradation. Contributed by @​sayakpaul in #​3427

import torch
from accelerate.hooks import attach_layerwise_casting_hooks

model = ...
storage_dtype = torch.float8_e4m3fn
compute_dtype = torch.bfloat16
attach_layerwise_casting_hooks(
    model,
    storage_dtype=storage_dtype,
    compute_dtype=compute_dtype,
)

Better FSDP2 support

This release includes numerous new features and bug fixes. Notably, we’ve added support for FULL_STATE_DICT, a widely used option in FSDP, now enabling .save_pretrained() in transformers to work with FSDP2 wrapped models. QLoRA training is now supported as well but more testing is needed. We have also resolved a backend issue related to parameter offloading to CPU. Additionally, a significant memory spike that occurred when cpu_ram_efficient_loading=True was enabled has been fixed. Several other minor improvements and fixes are also included—see the What’s Changed section for full details.
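
As a rough sketch of the FULL_STATE_DICT path with an FSDP2-wrapped transformers model (the string value for state_dict_type and the model/output names are assumptions here):

from accelerate import Accelerator, FullyShardedDataParallelPlugin

fsdp_plugin = FullyShardedDataParallelPlugin(fsdp_version=2, state_dict_type="FULL_STATE_DICT")
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)

model = ...  # placeholder transformers model
model = accelerator.prepare(model)
# ... training ...

# gather a full state dict so .save_pretrained() writes complete weights
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
    "output_dir",
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
    state_dict=accelerator.get_state_dict(model),
)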

Better HPU support

We have added documentation for Intel Gaudi hardware!
Support has been available since v1.5.0 through this PR.

Torch.compile breaking change for dynamic argument

We've updated the logic for setting self.dynamic to explicitly preserve None rather than defaulting to False when the USE_DYNAMIC environment variable is unset. This change aligns the behavior with the PyTorch documentation for torch.compile. Thanks to @​yafshar for contributing this improvement in #​3567.
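
In practice, leaving USE_DYNAMIC unset now keeps dynamic=None so torch.compile decides shape specialization itself; a minimal sketch (the backend choice is just illustrative):

from accelerate import Accelerator
from accelerate.utils import TorchDynamoPlugin

# dynamic=None defers to torch.compile's default; set True/False (or USE_DYNAMIC) to force it
dynamo_plugin = TorchDynamoPlugin(backend="inductor", dynamic=None)
accelerator = Accelerator(dynamo_plugin=dynamo_plugin)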

What's Changed
New Contributors

Full Changelog: huggingface/accelerate@v1.6.0...v1.7.0

v1.6.0: FSDPv2, DeepSpeed TP and XCCL backend support

Compare Source

FSDPv2 support

This release introduces support for FSDPv2, thanks to @S1ro1.

If you are using Python code, you need to set fsdp_version=2 in FullyShardedDataParallelPlugin:

from accelerate import FullyShardedDataParallelPlugin, Accelerator

fsdp_plugin = FullyShardedDataParallelPlugin(
    fsdp_version=2,
    # other options...
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)

If you want to convert a YAML config that contains an FSDPv1 config to FSDPv2, use our conversion tool:

accelerate to-fsdp2 --config_file config.yaml --output_file new_config.yaml

To learn more about the difference between FSDPv1 and FSDPv2, read the following documentation.

DeepSpeed TP support

We have added initial support for DeepSpeed + TP. Not many changes were required as the DeepSpeed API was already compatible. We only needed to make sure that the dataloader was compatible with TP and that we were able to save the TP weights. Thanks @inkcherry for the work! #​3390.

To use TP with DeepSpeed, you need to update the settings in the DeepSpeed config file by including the tensor_parallel key:

    ...
    "tensor_parallel": {
        "autotp_size": ${autotp_size}
    },
    ...

More details in this deepspeed PR.

Support for XCCL distributed backend

We've added support for XCCL, an Intel distributed backend that can be used with XPU devices. More details in this torch PR. Thanks @dvrogozh for the [integration](https://redirect.github.com/hugg


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • If you want to rebase/retry this PR, check this box

To execute skipped test pipelines, write the comment /ok-to-test.


Documentation

Find out how to configure dependency updates in MintMaker documentation or see all available configuration options in Renovate documentation.

Signed-off-by: konflux-internal-p02 <170854209+konflux-internal-p02[bot]@users.noreply.github.com>