Conversation

@yefei12 (Contributor) commented Dec 29, 2025

Motivation

This PR enables training support for EAGLE-3 speculative decoding tailored to the Ling-flash-2.0 model. The implementation aligns with SGLang v0.5.6 and incorporates the changes from SGLang PR #15119.

Modifications

  • Pipeline: Added a <role> chat template and a regex-based parser for robust data ingestion (an illustrative rendering appears after this list).
  • Config: Defined ling-flash-2.0-eagle3.json (4096 hidden / 157k vocab) for architecture alignment.
  • Fixes: Enabled trust_remote_code; resolved tokenization and loss masking mismatches.
  • Testing: Added dedicated preprocessing tests, isolated for focused verification.
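
For reference, here is a minimal sketch of how a multi-turn exchange renders under the <role> template. The headers and the <|role_end|> token follow the ChatTemplate definition reviewed below; the message text and the system-prompt placement are illustrative assumptions, not data from this PR.

    # Illustrative rendering of the <role> chat template; the headers and
    # <|role_end|> come from this PR's ChatTemplate, the content is made up.
    turns = [
        ("<role>HUMAN</role>", "What is EAGLE-3?"),
        ("<role>ASSISTANT</role>", "A speculative-decoding draft model."),
    ]
    rendered = "".join(f"{header}{text}<|role_end|>" for header, text in turns)
    print(rendered)
    # <role>HUMAN</role>What is EAGLE-3?<|role_end|><role>ASSISTANT</role>...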

Accuracy Test

(screenshot: accuracy test results)

Benchmark & Profiling

Benchmarking was conducted using FP8 precision, TP = 1, on the H200 series hardware.

(screenshots: benchmark and profiling results)

@gemini-code-assist (bot) commented:

Summary of Changes

Hello @yefei12, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates EAGLE-3 speculative decoding training capabilities for the Ling-flash-2.0 model. It introduces a new model configuration, enhances data parsing to support a <role>-based chat template, and ensures proper model loading by enabling trust_remote_code. The changes also include example training scripts and dedicated tests to validate the new data preprocessing logic.

Highlights

  • EAGLE-3 Support for Ling-flash-2.0: Enabled training support for EAGLE-3 speculative decoding specifically for the Ling-flash-2.0 model, aligning with SGLang v0.5.6 and SGLang PR #15119.
  • New Configuration: Introduced ling-flash-2.0-eagle3.json to define the architecture for the new model, including vocab_size and draft_vocab_size.
  • Chat Template and Parser Enhancements: Added a <role> chat template and updated the regex-based parser in specforge/data/parse.py for robust data ingestion, specifically handling <role> tokens and resolving tokenization/loss masking mismatches.
  • trust_remote_code Integration: Enabled the trust_remote_code flag across the model and tokenizer loading functions (AutoConfig, AutoModelForCausalLM, AutoTokenizer, TargetHead) to enhance flexibility and compatibility; see the loading sketch after this list.
  • Training Scripts: Provided new example shell scripts (run_ling_flash_2.0_eagle3_offline.sh, run_ling_flash_2.0_eagle3_online.sh) for both offline and online training of the EAGLE-3 model.
  • Preprocessing Tests: Added dedicated preprocessing tests for the new ling-flash-2.0 chat template to ensure correct tokenization and loss masking.
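
For context on the trust_remote_code changes, a minimal sketch of the loading pattern with Hugging Face Transformers; the model path is a placeholder rather than the exact one used in this PR's scripts.

    # Sketch of trust_remote_code across the Hugging Face entry points;
    # the flag allows the repository's custom modeling/tokenizer code to run.
    from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

    model_path = "inclusionAI/Ling-flash-2.0"  # placeholder model path

    config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)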



@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces support for EAGLE-3 speculative decoding training for the ling-flash-2.0 model. The changes are comprehensive, including a new model configuration, training scripts for both offline and online modes, a new chat template with corresponding parsing logic, and the necessary test cases. A trust_remote_code flag has also been added so that models shipping custom code can be loaded. My review found a critical issue in the new chat template definition that would lead to a runtime error, and a minor inconsistency in a training script. I've provided suggestions to address these points.

Comment on lines 271 to 277
template=ChatTemplate(
    system_header="<role>SYSTEM</role>",
    system_prompt="You are a helpful assistant.",
    user_header="<role>HUMAN</role>",
    assistant_header="<role>ASSISTANT</role>",
    end_of_turn_token="<|role_end|>",
),
@gemini-code-assist (bot):

critical

The ChatTemplate class does not have a system_header field. Including it here will cause a pydantic.ValidationError at runtime when this template is registered, preventing the application from starting correctly. The system prompt is already handled by the system_prompt field and the logic in GeneralParser. Please remove the system_header line.

    template=ChatTemplate(
        system_prompt="You are a helpful assistant.",
        user_header="<role>HUMAN</role>",
        assistant_header="<role>ASSISTANT</role>",
        end_of_turn_token="<|role_end|>",
    ),
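
To make the failure mode concrete, a minimal sketch assuming ChatTemplate is a pydantic model that forbids extra fields; the field set follows the suggestion above, and the extra="forbid" configuration is an assumption about SpecForge's actual definition.

    # Reproduces the ValidationError for the unknown system_header field,
    # assuming a pydantic v2 model configured with extra="forbid".
    from pydantic import BaseModel, ConfigDict, ValidationError

    class ChatTemplate(BaseModel):
        model_config = ConfigDict(extra="forbid")
        system_prompt: str
        user_header: str
        assistant_header: str
        end_of_turn_token: str

    try:
        ChatTemplate(
            system_header="<role>SYSTEM</role>",  # unknown field
            system_prompt="You are a helpful assistant.",
            user_header="<role>HUMAN</role>",
            assistant_header="<role>ASSISTANT</role>",
            end_of_turn_token="<|role_end|>",
        )
    except ValidationError as e:
        print(e)  # reports "Extra inputs are not permitted" for system_header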

--output-path $ROOT_DIR/cache/hidden_states/perfect-blend-ling-flash-2.0 \
--chat-template ling-flash-2.0 \
--max-length 2048 \
--tp-size 8 \
@gemini-code-assist (bot):

medium

For consistency and to avoid potential bugs when command-line arguments are provided, it's better to use the $TP_SIZE variable here instead of a hardcoded value. The TP_SIZE variable is defined at the top of the script but is not used here, which could lead to unexpected behavior if a user specifies a different tensor parallelism size.

Suggested change
--tp-size 8 \
--tp-size $TP_SIZE \

self.tokenizer.pad_token_id = self.tokenizer.unk_token_id

assistant_pattern = f"{re.escape(self.assistant_message_separator)}(.*?(?:{re.escape(self.chat_template.end_of_turn_token)}|$))"
sep = re.escape(self.assistant_message_separator)
A collaborator asked:

What specific situation caused the previous code to parse incorrectly?

@yefei12 (author) replied:

Apologies for the oversight. This adjustment was originally designed for a custom tokenizer and may not be strictly necessary for the open-source Ling model, so I have reverted the changes for now.
At the time, my primary consideration was using the lookahead as a safety measure: in multi-turn segments, if an end-of-turn token is missing or malformed, the first pattern tends to 'bleed' into subsequent user messages. By enforcing data integrity at the role boundary, the second pattern is more robust.
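
To illustrate the described "bleeding", a minimal sketch; the boundary-aware variant below is a hypothetical reconstruction for illustration, not the reverted code from this PR.

    import re

    EOT = "<|role_end|>"
    SEP = "<role>ASSISTANT</role>"

    # One assistant turn is missing its end-of-turn token.
    text = (
        "<role>HUMAN</role>hi" + EOT
        + SEP + "hello"  # <- missing <|role_end|>
        + "<role>HUMAN</role>next question" + EOT
        + SEP + "answer" + EOT
    )

    # Original pattern: the lazy match runs to the NEXT end-of-turn token,
    # so the malformed assistant span absorbs the following user message.
    bleed = f"{re.escape(SEP)}(.*?(?:{re.escape(EOT)}|$))"
    print(re.findall(bleed, text, re.DOTALL))
    # ['hello<role>HUMAN</role>next question<|role_end|>', 'answer<|role_end|>']

    # Hypothetical lookahead variant: also stop at the next <role> header,
    # so a malformed turn ends at the role boundary instead of bleeding.
    bounded = f"{re.escape(SEP)}(.*?(?:{re.escape(EOT)}|(?=<role>)|$))"
    print(re.findall(bounded, text, re.DOTALL))
    # ['hello', 'answer<|role_end|>']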
