feat: support EAGLE-3 speculative decoding training in ling-flash-2.0 #398
base: main
Conversation
Co-authored-by: GeLee-Q <[email protected]> Co-authored-by: Gao016 <[email protected]>
Summary of Changes: Hello @yefei12, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request integrates EAGLE-3 speculative decoding training capabilities for the Ling-flash-2.0 model. It introduces a new model configuration and enhances data parsing to support the model's chat format.
Code Review
This pull request introduces support for EAGLE-3 speculative decoding training for the ling-flash-2.0 model. The changes are comprehensive, including a new model configuration, training scripts for both offline and online modes, a new chat template with corresponding parsing logic, and necessary test cases. A trust_remote_code flag has also been added to enhance security when loading models. My review found a critical issue in the new chat template definition that would lead to a runtime error, and a minor inconsistency in a training script. I've provided suggestions to address these points.
template=ChatTemplate(
    system_header="<role>SYSTEM</role>",
    system_prompt="You are a helpful assistant.",
    user_header="<role>HUMAN</role>",
    assistant_header="<role>ASSISTANT</role>",
    end_of_turn_token="<|role_end|>",
),
The ChatTemplate class does not have a system_header field. Including it here will cause a pydantic.ValidationError at runtime when this template is registered, preventing the application from starting correctly. The system prompt is already handled by the system_prompt field and the logic in GeneralParser. Please remove the system_header line.
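For illustration, a minimal stand-in reproduces the failure mode. Here `ChatTemplate` is sketched as a plain dataclass with the four fields named above (the real class is a pydantic model, which raises a `ValidationError` rather than a `TypeError`, but the effect is the same: construction fails on the unknown `system_header` keyword):

```python
from dataclasses import dataclass


@dataclass
class ChatTemplate:
    # Reduced, hypothetical stand-in for SpecForge's ChatTemplate fields.
    system_prompt: str
    user_header: str
    assistant_header: str
    end_of_turn_token: str


try:
    template = ChatTemplate(
        system_header="<role>SYSTEM</role>",  # this field does not exist
        system_prompt="You are a helpful assistant.",
        user_header="<role>HUMAN</role>",
        assistant_header="<role>ASSISTANT</role>",
        end_of_turn_token="<|role_end|>",
    )
except TypeError as exc:
    # Mirrors the pydantic.ValidationError the real template registration hits.
    print(f"rejected: {exc}")
```

Dropping the `system_header` line lets construction succeed, which is exactly the fix suggested below.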
template=ChatTemplate(
system_prompt="You are a helpful assistant.",
user_header="<role>HUMAN</role>",
assistant_header="<role>ASSISTANT</role>",
end_of_turn_token="<|role_end|>",
),

--output-path $ROOT_DIR/cache/hidden_states/perfect-blend-ling-flash-2.0 \
--chat-template ling-flash-2.0 \
--max-length 2048 \
--tp-size 8 \
For consistency and to avoid potential bugs when command-line arguments are provided, it's better to use the $TP_SIZE variable here instead of a hardcoded value. The TP_SIZE variable is defined at the top of the script but is not used here, which could lead to unexpected behavior if a user specifies a different tensor parallelism size.
--tp-size $TP_SIZE \
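A minimal sketch of the pattern being suggested: define the parallelism size once at the top of the script and interpolate it everywhere a `--tp-size` flag is emitted. The `build_args` helper and the override example are hypothetical, for illustration only:

```shell
#!/usr/bin/env bash
# Single source of truth; overridable from the environment,
# e.g. `TP_SIZE=4 ./prepare_hidden_states.sh` (script name hypothetical).
TP_SIZE=${TP_SIZE:-8}

build_args() {
  # Assemble the flags so every consumer sees the same TP_SIZE value.
  echo "--max-length 2048 --tp-size ${TP_SIZE}"
}

build_args
```

A hardcoded `--tp-size 8` would silently ignore any `TP_SIZE` the user sets, which is the inconsistency the review points out.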
specforge/data/parse.py
self.tokenizer.pad_token_id = self.tokenizer.unk_token_id

assistant_pattern = f"{re.escape(self.assistant_message_separator)}(.*?(?:{re.escape(self.chat_template.end_of_turn_token)}|$))"
sep = re.escape(self.assistant_message_separator)
What specific situation caused the previous code to parse incorrectly?
Apologies for the oversight. This adjustment was originally designed for a custom tokenizer and may not be strictly necessary for the open-source Ling model, so I have reverted the change for now.
At the time, my main goal was to use the lookahead as a safety measure: in multi-turn segments, if an EOS token is missing or malformed, the first pattern tends to "bleed" into subsequent user messages. By also stopping at the role boundary, the second pattern is more robust to such malformed data.
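The "bleed" is easy to reproduce with a small, self-contained sketch. The token strings below come from the template in this PR, but the boundary-aware pattern is an illustrative variant rather than the exact code under review:

```python
import re

EOT = "<|role_end|>"
ASSISTANT = "<role>ASSISTANT</role>"
HUMAN = "<role>HUMAN</role>"

# Multi-turn transcript where the first assistant turn lost its end token.
text = (
    f"{HUMAN}hi{EOT}"
    f"{ASSISTANT}answer one"              # malformed: missing EOT
    f"{HUMAN}second question{EOT}"
    f"{ASSISTANT}answer two{EOT}"
)

# Pattern 1: stop only at EOT or end-of-string -> bleeds across the boundary.
naive = re.compile(f"{re.escape(ASSISTANT)}(.*?(?:{re.escape(EOT)}|$))", re.DOTALL)

# Pattern 2: additionally stop (via lookahead) at the next user header, so a
# missing EOT cannot swallow the following user turn.
bounded = re.compile(
    f"{re.escape(ASSISTANT)}(.*?)(?:{re.escape(EOT)}|(?={re.escape(HUMAN)})|$)",
    re.DOTALL,
)

print(naive.findall(text)[0])    # swallows the second user message
print(bounded.findall(text)[0])  # -> 'answer one'
```

With a well-formed transcript both patterns agree; the lookahead only changes behavior when the EOS token is missing at a role boundary.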
Motivation
The goal of this PR is to enable training support for EAGLE-3 speculative decoding specifically tailored for the Ling-flash-2.0 model. This implementation aligns with SGLang v0.5.6 and incorporates the changes from SGLang PR #15119.
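As background on what this training pipeline feeds: in speculative decoding, a small draft model proposes several tokens and the target model verifies them in one pass, keeping the longest matching prefix. A toy greedy-verification sketch is below (all names are hypothetical; EAGLE-3 additionally conditions the draft on the target model's hidden states, which this sketch omits):

```python
def verify_draft(draft_tokens, target_next_token):
    """Greedy verification: accept draft tokens while they match the target
    model's own greedy choice, then emit the target's correction token."""
    accepted, prefix = [], []
    for tok in draft_tokens:
        if target_next_token(prefix) == tok:
            accepted.append(tok)
            prefix.append(tok)
        else:
            break
    # The target's token at the first mismatch (or one past an all-accepted
    # draft) is always emitted, so every verification step yields >= 1 token.
    correction = target_next_token(prefix)
    return accepted, correction


# Toy deterministic "target model" that always continues 1, 2, 3, 4, ...
target = lambda prefix: len(prefix) + 1

print(verify_draft([1, 2, 9], target))  # -> ([1, 2], 3): diverges at token 3
```

The better the draft model is trained (the subject of this PR), the longer the accepted prefix per step, and the larger the decoding speedup.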
Modifications
- Added a <role>-based chat template and regex-based parser for robust data ingestion.
- Added ling-flash-2.0-eagle3.json (4096 hidden / 157k vocab) for architecture alignment.
- Added trust_remote_code support; resolved tokenization and loss masking mismatches.

Accuracy Test
Benchmark & Profiling
Benchmarking was conducted with FP8 precision and TP = 1 on H200 hardware.
Checklist