
Conversation

@meichangsu1
Contributor

PR type

  • Bug Fix
  • New Feature
  • Document Updates
  • More Models or Datasets Support

PR information

Write the detailed information belonging to this PR.

Experiment results

Paste your experiment results here (if needed).

kevssim and others added 3 commits December 23, 2025 14:14
lint fix

update docs
- Add CCE support to tuner.py and trainers
- Add documentation and examples
- Add test for CCE functionality
- Fix code-assist bot issues
- Use Axolotl fork to support more models
- Limit CCE model mapping to Liger-supported types
[feat] support activation cpu offload in FSDP and FSDP2; lint fix

feat(examples): update FSDP2 config and training script for activation CPU offload

- Refactor FSDP2 JSON config to include detailed documentation and recommended settings
- Add activation_cpu_offload parameter to FSDP config for memory optimization
- Update training script to use new Swift CLI syntax and adjusted hyperparameters

feat(plugin): update FSDP version key and add gradient requirement for checkpointing

- Change key from 'fsdp_version' to 'version' in fsdp_config for consistency
- Add call to model.enable_input_require_grads() when activation checkpointing is enabled to ensure proper gradient computation during CPU offloading

feat(plugin): fix whitespace in activation CPU offload callback

docs: remove activation_cpu_offload parameter documentation
@gemini-code-assist
Contributor

Summary of Changes

Hello @meichangsu1, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances memory efficiency and training acceleration by introducing three major features: Cut-Cross-Entropy (CCE) fused operators, Tiled MLP for long sequence processing, and Activation CPU Offloading for FSDP. These additions aim to reduce GPU memory consumption and speed up training, especially for large models and long sequences, making the training process more scalable and resource-efficient.

Highlights

  • Cut-Cross-Entropy (CCE) Integration: Added support for cut-cross-entropy fused operators to optimize GPU memory usage and accelerate training, with a new command-line parameter use_cce.
  • Tiled MLP Implementation: Introduced Tiled MLP for memory-efficient training of long sequences. This feature replaces standard MLP layers with a tiled version that processes sequences in chunks, supporting both FSDP2 (custom implementation) and other distributed strategies like DeepSpeed (via Liger kernel) using use_tiled_mlp and tiled_mlp_num_shards parameters.
  • Activation CPU Offloading: Implemented asynchronous CPU offloading for activations, specifically designed for FSDP strategies, to further reduce GPU memory footprint during the backward pass. This is controlled via the activation_cpu_offload setting within the FSDP configuration.
  • Documentation and Examples: Updated both Chinese and English documentation for command-line parameters to include the new use_cce, use_tiled_mlp, and tiled_mlp_num_shards options. New example scripts have been added for CCE, Tiled MLP, and Activation CPU Offloading to demonstrate their usage.
  • UI Integration: Integrated the use_cce option into the UI for LLM training, LLM GRPO, and LLM RLHF modes, allowing users to easily enable this feature.
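
To make these highlights concrete, below is a minimal, hypothetical sketch of how the new options fit together; the key and flag names are taken from the summary above rather than from the diff, so verify them against the merged documentation before relying on them.

# Hypothetical sketch; key and flag names assumed from the PR summary above.
fsdp_config = {
    'fsdp_version': 2,                 # PyTorch native FSDP2
    'activation_checkpointing': True,  # with FSDP2, prefer this over --gradient_checkpointing
    'activation_cpu_offload': True,    # new: offload activations to CPU during the backward pass
}

# New CLI parameters described in the summary (spelling assumed):
#   --use_cce true                # Cut-Cross-Entropy fused loss kernel
#   --use_tiled_mlp true          # tiled MLP for long-sequence training
#   --tiled_mlp_num_shards 4      # number of chunks each MLP forward is split into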




gemini-code-assist bot left a comment


Code Review

This pull request introduces significant new features to improve training efficiency, including support for CCE, Tiled MLP, and activation CPU offload. The changes are extensive, adding new modules for these features, updating documentation, and providing example usage scripts. The code is generally well-structured. My review focuses on improving the clarity of examples, fixing a bug in the activation offload logic, and enhancing code quality and maintainability.

# Check if fsdp_config is a dictionary and has activation_cpu_offload enabled
if isinstance(fsdp_config, dict) and fsdp_config.get('activation_cpu_offload', False):
    # Get FSDP version from fsdp_config
    strategy = fsdp_config.get('version', None)

high

The FSDP configuration key for the version is fsdp_version, not version. Using the wrong key will prevent the activation offloading from being correctly configured based on the FSDP version.

Suggested change
strategy = fsdp_config.get('version', None)
strategy = fsdp_config.get('fsdp_version', None)
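
If configs in the wild already use the shorter key, a tolerant lookup (illustrative only; not part of the bot's suggestion) could accept both spellings:

strategy = fsdp_config.get('fsdp_version', fsdp_config.get('version', None))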

{
"_description": "FSDP2 configuration for distributed training (PyTorch native FSDP v2)",
"_requires": "torch>=2.4.0",
"_note": "This is the recommended configuration for multi-GPU training without CPU offloading. NOTE: When using FSDP2, do NOT use --gradient_checkpointing, use activation_checkpointing in fsdp_config instead.",

medium

The _note field is a bit confusing. It states that this configuration is for training "without CPU offloading", but the file is in a directory named activation_cpu_offload and the configuration itself enables activation_cpu_offload. This should be corrected to avoid confusion.

Suggested change
"_note": "This is the recommended configuration for multi-GPU training without CPU offloading. NOTE: When using FSDP2, do NOT use --gradient_checkpointing, use activation_checkpointing in fsdp_config instead.",
"_note": "This is the recommended configuration for multi-GPU training with CPU offloading. NOTE: When using FSDP2, do NOT use --gradient_checkpointing, use activation_checkpointing in fsdp_config instead.",

CUDA_VISIBLE_DEVICES=0,1 \
swift sft \
--model 'Qwen/Qwen3-0.6B' \
--dataset 'swift/self-cognition#1000' \ \

medium

There is an extra backslash \ at the end of this line, which is unnecessary and could potentially cause issues in some shell environments. It's best to remove it for clarity and correctness.

Suggested change
--dataset 'swift/self-cognition#1000' \ \
--dataset 'swift/self-cognition#1000' \

"fsdp_auto_wrap_policy": "TRANSFORMER_BASED_WRAP",
"fsdp_cpu_ram_efficient_loading": true,
"fsdp_reshard_after_forward": true,
"fsdp_state_dict_type": "FULL_STATE_DICT",

medium

Using FULL_STATE_DICT can lead to high memory usage on rank 0, potentially causing out-of-memory errors, especially with large models. It's generally recommended to use SHARDED_STATE_DICT with FSDP to save memory by having each rank save only its own shard.

Suggested change
"fsdp_state_dict_type": "FULL_STATE_DICT",
"fsdp_state_dict_type": "SHARDED_STATE_DICT",

self.model_parameters_storage = new_storage


def get_torch_device() -> any:

medium

The type hint any is not a standard Python type. It should be Any from the typing module, which is already imported in this file. Using the correct type hint improves code quality and static analysis.

Suggested change
def get_torch_device() -> any:
def get_torch_device() -> Any:

Comment on lines +128 to +136
        raise NotImplementedError(
            '`tensor_push is not implented in OffloadHandler class. Inherit this class and implement your '
            'custom tensor_push.')

    def tensor_pop(self, tensor_tag: Any, **kwargs):
        """Tensor pop."""
        raise NotImplementedError(
            '`tensor_pop is not implented in OffloadHandler class. Inherit this class and implement your '
            'custom tensor_pop.')

medium

There are a couple of typos ('implented' should be 'implemented') in the error messages. This can be confusing for developers who might encounter these errors.

Suggested change
        raise NotImplementedError(
            '`tensor_push is not implented in OffloadHandler class. Inherit this class and implement your '
            'custom tensor_push.')

    def tensor_pop(self, tensor_tag: Any, **kwargs):
        """Tensor pop."""
        raise NotImplementedError(
            '`tensor_pop is not implented in OffloadHandler class. Inherit this class and implement your '
            'custom tensor_pop.')

        raise NotImplementedError(
            '`tensor_push` is not implemented in OffloadHandler class. Inherit this class and implement your '
            'custom tensor_push.')

    def tensor_pop(self, tensor_tag: Any, **kwargs):
        """Tensor pop."""
        raise NotImplementedError(
            '`tensor_pop` is not implemented in OffloadHandler class. Inherit this class and implement your '
            'custom tensor_pop.')
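
For context on the pattern being reviewed, here is a minimal hypothetical subclass (not the PR's actual offload handler; names and behavior are illustrative) that overrides both hooks, copying saved activations to CPU on push and restoring them to the original device on pop:

from typing import Any, Dict, Tuple

import torch


class SimpleCpuOffloadHandler:  # in the PR this would inherit from OffloadHandler
    def __init__(self) -> None:
        # tag -> (cpu copy, original device)
        self._store: Dict[Any, Tuple[torch.Tensor, torch.device]] = {}

    def tensor_push(self, tensor: torch.Tensor, **kwargs) -> Any:
        tag = id(tensor)
        # Detach and move the activation to CPU so GPU memory can be freed.
        self._store[tag] = (tensor.detach().to('cpu', non_blocking=True), tensor.device)
        return tag

    def tensor_pop(self, tensor_tag: Any, **kwargs) -> torch.Tensor:
        cpu_copy, device = self._store.pop(tensor_tag)
        # Bring the activation back to its original device for the backward pass.
        return cpu_copy.to(device, non_blocking=True)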

else:
    wrapped_module = child
    if isinstance(child, FSDP):
        wrapped_module = child._fsdp_wrapped_module

medium

Accessing the private attribute _fsdp_wrapped_module is brittle and might break in future versions of PyTorch. It's better to use a public API if available. If not, a comment explaining this dependency would be helpful for future maintenance.
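
One defensive variant (illustrative; not something the bot suggested) is to fall back to the wrapper itself if the private attribute ever disappears, and to document the private-API dependency in a comment:

# '_fsdp_wrapped_module' is a private PyTorch FSDP attribute and may change between
# releases; fall back to the wrapper itself if it is missing.
wrapped_module = getattr(child, '_fsdp_wrapped_module', child)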

Comment on lines +188 to +189
if self.use_cce and self.use_liger_kernel:
logger.warning('Enabling both use_cce and use_liger_kernel may lead to duplicated kernel patches.')

medium

It's good that you've added a warning for when use_cce and use_liger_kernel are enabled simultaneously. This helps prevent unexpected behavior from conflicting kernel patches.
