
Qwen3 H5 exporting to HF safetensor format #2571

Open
LakshmiKalaKadali wants to merge 1 commit into keras-team:master from LakshmiKalaKadali:safetensor-qwen3-br

Conversation

@LakshmiKalaKadali
Contributor

This PR extends the KerasHub export pipeline to support the Qwen3 model family. It enables users to convert KerasHub Qwen3Backbone models into the Hugging Face safetensors format, allowing for seamless integration with the transformers ecosystem.
The following changes have been made:
New Conversion Logic (qwen3.py)
Config Mapping: Implemented get_qwen3_config which maps KerasHub configuration to Hugging Face's Qwen2Config.

Weight Mapping: Implemented get_qwen3_weights_map to handle:
Transposition: Converting Keras EinsumDense kernels (Input, Output) to Hugging Face Linear weights (Output, Input).
Bias Flattening: Reshaping Attention biases from (Heads, Dim) to (Hidden,).
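The two transformations above can be sketched with NumPy. The shapes below are illustrative only; the actual get_qwen3_weights_map operates on the real Keras variables with Qwen3's dimensions:

```python
import numpy as np

# Illustrative shapes for a small attention projection; real Qwen3
# dimensions differ.
num_heads, head_dim, hidden_dim = 4, 8, 32

# A Keras EinsumDense kernel is stored as (input, output), while a
# Hugging Face nn.Linear weight is (output, input), so each kernel
# is transposed on export.
keras_kernel = np.ones((hidden_dim, num_heads * head_dim))
hf_weight = keras_kernel.T
assert hf_weight.shape == (num_heads * head_dim, hidden_dim)

# Per-head attention biases of shape (heads, dim) are flattened to a
# single (hidden,) vector to match the Linear layer's bias.
keras_bias = np.ones((num_heads, head_dim))
hf_bias = keras_bias.reshape(-1)
assert hf_bias.shape == (num_heads * head_dim,)
```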

@gemini-code-assist
Contributor

Summary of Changes

Hello @LakshmiKalaKadali, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the KerasHub export pipeline by adding full support for the Qwen3 model family. It allows users to convert KerasHub Qwen3 models, including their configurations and weights, into the Hugging Face safetensors format, facilitating interoperability with the broader Hugging Face ecosystem. The changes include new conversion logic for model parameters and tokenizer settings, along with robust testing to ensure accurate transformation.

Highlights

  • Qwen3 Model Export Logic: Introduced new Python module qwen3.py containing functions to convert KerasHub Qwen3 model configurations and weights to Hugging Face's Qwen2Config and safetensors format.
  • Integration with HF Exporter: Updated hf_exporter.py to register Qwen3Backbone and Qwen3Tokenizer for configuration, weight mapping, and tokenizer export, enabling seamless integration into the existing export pipeline.
  • Comprehensive Export Testing: Added qwen3_test.py with an end-to-end test case that verifies the correctness of the Qwen3 model and tokenizer export by comparing logits between the KerasHub model and the exported Hugging Face model.
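The config mapping and registry described above can be sketched as follows. The attribute and key names come from this summary; the dispatch mechanism and everything else about hf_exporter.py internals is an assumption for illustration:

```python
# Minimal sketch of the config-mapping and registry pattern, assuming
# the exporter dispatches on the backbone's class name; the real
# hf_exporter.py and get_qwen3_config are more complete.
def get_qwen3_config(backbone):
    # Map KerasHub attribute names onto Hugging Face Qwen2Config keys.
    return {
        "vocab_size": backbone.vocabulary_size,
        "hidden_size": backbone.hidden_dim,
        "num_hidden_layers": backbone.num_layers,
    }

MODEL_CONFIGS = {"Qwen3Backbone": get_qwen3_config}

class Qwen3Backbone:  # stand-in for the real KerasHub backbone
    vocabulary_size = 151936
    hidden_dim = 1024
    num_layers = 28

backbone = Qwen3Backbone()
config = MODEL_CONFIGS[type(backbone).__name__](backbone)
print(config["num_hidden_layers"])  # 28
```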


Changelog
  • keras_hub/src/utils/transformers/export/hf_exporter.py
    • Imported Qwen3-specific utility functions (get_qwen3_config, get_qwen3_tokenizer_config, get_qwen3_weights_map).
    • Registered Qwen3Backbone and Qwen3Tokenizer in MODEL_CONFIGS, MODEL_EXPORTERS, and MODEL_TOKENIZER_CONFIGS for automatic export handling.
    • Modified the export_tokenizer function to recognize and process Qwen3Tokenizer instances.
  • keras_hub/src/utils/transformers/export/qwen3.py
    • New file dedicated to Qwen3 export utilities.
    • Implemented get_qwen3_config to map KerasHub Qwen3 model parameters (e.g., vocabulary_size, hidden_dim, num_layers) to their corresponding Hugging Face Qwen2Config attributes.
    • Developed get_qwen3_weights_map to handle the complex weight transformations required, including transposing Keras EinsumDense kernels and flattening attention biases, to match Hugging Face's Linear layer expectations.
    • Provided get_qwen3_tokenizer_config to define the Hugging Face compatible tokenizer configuration for Qwen3.
  • keras_hub/src/utils/transformers/export/qwen3_test.py
    • New file containing TestQwen3Export class.
    • Added test_export_to_hf method which performs a full export and verification flow:
    • Creates dummy tokenizer assets and a Qwen3Tokenizer.
    • Instantiates a small Qwen3Backbone and Qwen3CausalLM model.
    • Randomizes model weights for a realistic test.
    • Exports the Keras model to Hugging Face safetensors format using export_to_safetensors.
    • Loads the exported model and tokenizer using Hugging Face AutoModelForCausalLM and AutoTokenizer.
    • Asserts that the Hugging Face model configuration matches the Keras backbone's parameters.
    • Compares the output logits of the Keras and Hugging Face models for a given input, ensuring numerical equivalence after export.
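The final step of the test flow reduces to an allclose comparison of the two logit tensors. A minimal sketch with toy arrays (the tolerances mirror the atol=1e-3, rtol=1e-3 used in the test; the arrays stand in for real model outputs):

```python
import numpy as np

def assert_logits_close(keras_logits, hf_logits, atol=1e-3, rtol=1e-3):
    # Numerical-equivalence check mirroring the test's final assertion.
    np.testing.assert_allclose(keras_logits, hf_logits, atol=atol, rtol=rtol)

# Toy logits standing in for the Keras and exported HF model outputs.
keras_logits = np.array([[0.10, -1.20, 0.30]])
hf_logits = keras_logits + 5e-4  # within the 1e-3 absolute tolerance
assert_logits_close(keras_logits, hf_logits)
```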
Activity
  • No specific activity (comments, reviews, etc.) was provided in the context for this pull request.


@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces support for exporting KerasHub Qwen3 models to the Hugging Face safetensors format. The implementation is clean and follows the existing structure for model exporters. The new conversion logic in keras_hub/src/utils/transformers/export/qwen3.py is well-written, and the addition of keras_hub/src/utils/transformers/export/qwen3_test.py provides a thorough integration test that validates the end-to-end export process. The changes are a valuable extension to the KerasHub export pipeline. I have a few minor suggestions to improve code style and consistency.

-# 2. BPE Models (Qwen)
-elif tokenizer_type == "QwenTokenizer":
+# 2. BPE Models (Qwen/Qwen3)
+elif tokenizer_type in ["QwenTokenizer","Qwen3Tokenizer"]:

Severity: medium

For better readability and to adhere to standard Python formatting conventions, please add a space after the comma in the list. This is a minor style issue that a formatter like ruff would typically correct.

Suggested change
elif tokenizer_type in ["QwenTokenizer","Qwen3Tokenizer"]:
elif tokenizer_type in ["QwenTokenizer", "Qwen3Tokenizer"]:
References
  1. The style guide requires using ruff for code formatting. This change aligns with common ruff configurations for list formatting. (link)

"unk_token": None,
"model_max_length": 32768,
}

No newline at end of file

Severity: medium

Please add a newline at the end of the file. It's a standard convention for text files and is often enforced by formatters like ruff to prevent issues with file concatenation and some tools.

References
  1. The style guide requires using ruff for code formatting. Standard ruff configurations enforce a final newline in files. (link)

}
keras_logits = keras_model(keras_inputs)

import torch

Severity: medium

For better code organization and readability, it's a best practice to place all imports at the top of the file. Since torch is a requirement for this test to run the Hugging Face model, moving this import to the top makes the dependency explicit.

keras_logits_np = ops.convert_to_numpy(keras_logits)
hf_logits_np = hf_logits.detach().cpu().numpy()

self.assertAllClose(keras_logits_np, hf_logits_np, atol=1e-3, rtol=1e-3) No newline at end of file

Severity: medium

Please add a newline at the end of the file. It's a standard convention for text files and is often enforced by formatters like ruff to prevent issues with file concatenation and some tools.

References
  1. The style guide requires using ruff for code formatting. Standard ruff configurations enforce a final newline in files. (link)
