43125: Ensure correct checkpoint saving behavior by simplifying Trainer.save_model parallelism logic #43314
base: main
Conversation
src/transformers/trainer.py (outdated diff)
    self._save(output_dir)
# If we drop to here, we're in 1D parallelism, so all ranks need to go to `save_pretrained`
elif (tp_size := getattr(self.model, "_tp_size", 0)) is not None and tp_size > 1:
    self._save(output_dir)
remove this also
I’ve removed the TP branch and pushed the update.
I didn’t drop it earlier because I wasn’t fully sure whether _tp_size > 1 still required all ranks to go through _save() for correct shard handling, and I wanted to avoid removing something that might still be relied on implicitly.
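For reference, the removed guard follows the pattern below. This is a standalone illustration only: the DummyModel class and the print statements are invented for demonstration, while the `_tp_size` attribute and the walrus-operator check mirror the diff above.

# Standalone illustration of the removed tensor-parallel guard.
# DummyModel is a stand-in for self.model; only the `_tp_size` attribute
# and the guard expression come from the diff above.
class DummyModel:
    _tp_size = 4  # pretend the model is sharded across 4 tensor-parallel ranks


model = DummyModel()

if (tp_size := getattr(model, "_tp_size", 0)) is not None and tp_size > 1:
    # The removed branch routed every rank through Trainer._save() here.
    print(f"TP active (tp_size={tp_size}): every rank would call _save()")
else:
    print("No TP: only the main process would save")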
SunMarc left a comment:
Thanks, just a nit
Thanks for the review @SunMarc, updated the PR.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
The CI is failing on the following test, which seems unrelated to this change; I verified a few things on my local machine.
View the CircleCI Test Summary for this PR: https://huggingface.co/spaces/transformers-community/circle-ci-viz?pr=43314&sha=df6f55
What does this PR do?
This PR improves the correctness and robustness of model checkpoint saving by removing a redundant parallelism_config branch from Trainer.save_model().
The removed logic attempted to special-case checkpoint saving whenever accelerator.parallelism_config was present. In practice, this routing was unnecessary and error-prone, as checkpointing behavior for supported parallelism strategies (FSDP, DeepSpeed, tensor parallelism) is already handled by their respective, dedicated code paths.
As discussed in the linked issue, the presence of this branch could intercept the save flow prematurely, leading to incorrect save semantics—particularly in configurations involving FSDP—by invoking a generic _save() path instead of the required backend-specific logic.
Fixes #43125
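To make the intended behavior concrete, here is a minimal sketch of the dispatch order the description implies. It is illustrative only, not the upstream implementation: `is_fsdp_enabled`, `is_deepspeed_enabled`, `args.should_save`, and `_save` follow existing Trainer attribute names, while `_save_fsdp_checkpoint` and `_save_deepspeed_checkpoint` are hypothetical helper names standing in for the real backend-specific save paths.

# Illustrative sketch of the simplified save_model dispatch, assuming the
# existing Trainer attributes; the backend-specific helpers are hypothetical.
def save_model(self, output_dir=None, _internal_call=False):
    if output_dir is None:
        output_dir = self.args.output_dir

    if self.is_fsdp_enabled:
        # FSDP needs its own gather/save path; a generic _save() here could
        # write an incomplete or incorrectly sharded checkpoint.
        self._save_fsdp_checkpoint(output_dir)  # hypothetical helper
    elif self.is_deepspeed_enabled:
        self._save_deepspeed_checkpoint(output_dir)  # hypothetical helper
    elif self.args.should_save:
        # Plain or 1D-parallel training falls through to the default path;
        # no separate parallelism_config branch is needed to intercept it.
        self._save(output_dir)

The point of the fix is the absence of the intermediate parallelism_config check: each supported backend is reached through its own dedicated branch, and everything else goes straight to the default save path.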
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
@SunMarc
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.