
feat(dflash): add checkpoint resume support and clean up comments#464

Merged
sleepcoo merged 1 commit into sgl-project:main from xiaomin-D:feat/dflash-training-improvements
Feb 9, 2026

Conversation

@xiaomin-D
Contributor

Motivation

Add --ckpt-dir and --resume flags for DFlash training checkpoint resume, aligned with eagle3 training script pattern.

  • --ckpt-dir: specify checkpoint directory explicitly
  • --resume: auto-detect latest checkpoint from output-dir
  • Restores model weights and scheduler state (lr position)
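The two flags above can be sketched with `argparse`; the flag names follow the PR description, while the defaults and help text here are illustrative assumptions rather than the script's actual definitions:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of the two new flags; only the flag names are
    # taken from the PR description.
    parser = argparse.ArgumentParser(description="DFlash training (sketch)")
    parser.add_argument("--output-dir", type=str, default="out")
    parser.add_argument(
        "--ckpt-dir",
        type=str,
        default=None,
        help="Explicit checkpoint directory to resume from.",
    )
    parser.add_argument(
        "--resume",
        action="store_true",
        help="Auto-detect the latest checkpoint under --output-dir.",
    )
    return parser

# Example invocation mirroring the auto-resume use case.
args = build_parser().parse_args(["--resume", "--output-dir", "ckpts"])
```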

Modifications

Related Issues

Accuracy Test

Benchmark & Profiling

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @xiaomin-D, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the DFlash training script by integrating robust checkpoint resume functionality. Users can now specify a checkpoint directory or automatically resume from the latest saved state, which includes restoring model weights and the learning rate scheduler. This improves the fault tolerance and flexibility of long-running training jobs. Additionally, the PR refactors the training script by removing unnecessary comments, making the codebase cleaner and easier to maintain. The accompanying example script has also been updated to reflect current best practices and dataset configurations.

Highlights

  • Checkpoint Resume Support: Introduced --ckpt-dir and --resume flags to enable checkpointing and resuming DFlash training, allowing for continuation from a specified directory or auto-detection of the latest checkpoint.
  • State Restoration: Implemented logic to restore draft model weights and the optimizer's learning rate scheduler state when resuming training, ensuring seamless continuation.
  • Code Readability Improvements: Cleaned up the train_dflash.py script by removing numerous redundant and outdated comments, enhancing code clarity.
  • Example Script Updates: Updated the run_qwen3_8b_dflash_online.sh example script to use 8 GPUs by default, switch to a new training dataset (perfectblend), and adjust output and Weights & Biases project names accordingly.


Changelog
  • examples/run_qwen3_8b_dflash_online.sh
    • Updated default NUM_GPUS from 1 to 8.
    • Changed training data path from sharegpt_train.jsonl to perfectblend_qwen3-8b_regen.jsonl.
    • Modified output directory from qwen3-8b-dflash-sharegpt to qwen3-8b-perfectblend.
    • Updated Weights & Biases project name from qwen3-8b-dflash-sharegpt to qwen3-8b-dflash-perfectblend.
  • scripts/train_dflash.py
    • Imported get_last_checkpoint utility for automatic checkpoint detection.
    • Added --ckpt-dir argument to specify a checkpoint directory for resuming.
    • Added --resume flag to enable automatic detection of the latest checkpoint in the output directory.
    • Implemented logic to load draft model weights from a specified or detected checkpoint.
    • Added functionality to restore the optimizer's scheduler state from a checkpoint.
    • Adjusted the training loop to correctly resume from the epoch and step indicated by the loaded checkpoint.
    • Removed various comments related to logging configuration, FSDP warnings, dataloader caching, dflash config, and target embeddings to improve code clarity.
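The resume behavior described in the changelog — continuing from the epoch and step recorded in the checkpoint — can be sketched as follows. `run_epochs` and its skip logic are illustrative assumptions, not the script's actual training loop:

```python
# Hypothetical sketch of resuming a training loop from a restored
# (start_epoch, global_step); names follow the PR summary, details assumed.
def run_epochs(num_epochs, steps_per_epoch, start_epoch=0, global_step=0):
    executed = []
    for epoch in range(start_epoch, num_epochs):
        for step in range(steps_per_epoch):
            # In the first resumed epoch, skip steps already covered by the
            # checkpoint so the data and optimizer positions stay aligned.
            if epoch == start_epoch and step < global_step % steps_per_epoch:
                continue
            executed.append((epoch, step))
    return executed
```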
Activity
  • No human activity (comments, reviews, or progress updates) has been recorded on this pull request yet.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@sleepcoo sleepcoo merged commit 73a27ea into sgl-project:main Feb 9, 2026
2 checks passed

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces checkpoint resume functionality for DFlash training and cleans up some comments. The implementation of the resume logic has a couple of critical issues. First, the checkpoint discovery logic is flawed and will not correctly identify the latest checkpoint, preventing the resume feature from working as intended. Second, when resuming, the optimizer's state is not restored, which can negatively impact training convergence. I've provided specific comments and code suggestions to address these critical problems. The comment cleanup is a good improvement for code readability.

Comment on lines +364 to +366
draft_model_last_checkpoint = get_last_checkpoint(
    args.output_dir, prefix=r"epoch_\d+_step"
)

critical

The call to get_last_checkpoint is incorrect. The prefix argument is not treated as a regular expression, so r"epoch_\d+_step" will not match checkpoint directories like epoch_0_step_1000. Additionally, the get_last_checkpoint function's sorting logic is not suitable for checkpoint names with both epoch and step numbers, as it only sorts by a single trailing number. This will cause the auto-resume functionality to fail or pick the wrong checkpoint. A more robust method is needed to parse and sort checkpoint directories by both epoch and step.

Suggested change
-draft_model_last_checkpoint = get_last_checkpoint(
-    args.output_dir, prefix=r"epoch_\d+_step"
-)
+checkpoint_dirs = [
+    d
+    for d in os.listdir(args.output_dir)
+    if d.startswith("epoch_") and os.path.isdir(os.path.join(args.output_dir, d))
+]
+if checkpoint_dirs:
+    latest_checkpoint_dir = max(
+        checkpoint_dirs, key=lambda d: [int(s) for s in d.split("_") if s.isdigit()]
+    )
+    draft_model_last_checkpoint = os.path.join(args.output_dir, latest_checkpoint_dir)

start_epoch = 0
global_step = 0
if resume_state is not None:
    optimizer.scheduler.load_state_dict(resume_state["scheduler_state_dict"])

critical

When resuming from a checkpoint, only the learning rate scheduler's state is being restored. The optimizer's state (e.g., momentum buffers for Adam) is not loaded, which can negatively impact training convergence. The BF16Optimizer class provides a load_state_dict method that correctly restores both the optimizer and scheduler states.

Suggested change
-    optimizer.scheduler.load_state_dict(resume_state["scheduler_state_dict"])
+    optimizer.load_state_dict(resume_state)
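The distinction the reviewer draws — restoring only the scheduler versus restoring the full optimizer state — can be illustrated with a minimal pure-Python stand-in. `BF16Optimizer`'s real interface is only assumed here; `TinyOptimizer` and `TinyScheduler` are hypothetical:

```python
class TinyScheduler:
    """Stand-in for an LR scheduler whose position must survive a resume."""
    def __init__(self):
        self.last_step = 0
    def state_dict(self):
        return {"last_step": self.last_step}
    def load_state_dict(self, state):
        self.last_step = state["last_step"]

class TinyOptimizer:
    """Stand-in optimizer whose load_state_dict restores BOTH pieces."""
    def __init__(self):
        self.momentum = {}  # stand-in for Adam's exp_avg buffers
        self.scheduler = TinyScheduler()
    def state_dict(self):
        return {
            "optimizer_state_dict": dict(self.momentum),
            "scheduler_state_dict": self.scheduler.state_dict(),
        }
    def load_state_dict(self, state):
        # Restoring only the scheduler would silently reset momentum,
        # which is exactly the convergence hazard the review points out.
        self.momentum = dict(state["optimizer_state_dict"])
        self.scheduler.load_state_dict(state["scheduler_state_dict"])

# Round-trip: state saved at a checkpoint survives into a fresh optimizer.
src = TinyOptimizer()
src.momentum["w"] = 0.9
src.scheduler.last_step = 1000
resume_state = src.state_dict()

dst = TinyOptimizer()
dst.load_state_dict(resume_state)
```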

        f"Provided ckpt dir {args.ckpt_dir} is not a valid directory."
    )

if args.resume and os.path.isdir(args.output_dir):

high

If both --ckpt-dir and --resume are specified, the auto-detection from --resume will override the explicit path provided via --ckpt-dir. The explicit path should have higher precedence. Using elif here will ensure that auto-detection is only attempted if --ckpt-dir is not provided.

Suggested change
-if args.resume and os.path.isdir(args.output_dir):
+elif args.resume and os.path.isdir(args.output_dir):
