support z-image with lora#872

Merged
gushiqiao merged 1 commit into main from gsq/z-fix on Feb 4, 2026
Conversation

@gushiqiao (Contributor)

No description provided.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @gushiqiao, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the ZImageRunner by integrating Low-Rank Adaptation (LoRA) capabilities into the ZImageTransformerModel. This allows for more efficient and flexible fine-tuning of the Z-image generation process, enabling users to adapt models with smaller, task-specific LoRA weights without modifying the full base model. The changes streamline the application of LoRA, supporting different deployment strategies and improving the model's adaptability.

Highlights

  • LoRA Integration: Introduced LoraAdapter and a new helper function build_z_image_model_with_lora to enable Low-Rank Adaptation (LoRA) support for the ZImageTransformerModel.
  • Flexible LoRA Application: Implemented support for both dynamic LoRA application (passing parameters during model initialization) and merging LoRA weights post-initialization using LoraAdapter, with checks for compatibility with quantized and lazy-load modes.
  • Conditional Transformer Loading: Updated the ZImageRunner.load_transformer method to conditionally apply LoRA configurations when loading the ZImageTransformerModel based on the presence of lora_configs in the runner's configuration.
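
The two LoRA paths described above can be sketched as follows. This is a minimal, hypothetical reconstruction of the branching in build_z_image_model_with_lora, not the PR's actual implementation: the stub ZImageTransformerModel and LoraAdapter classes, the lora_dynamic_apply key, and the default strength of 1.0 are all assumptions made so the control flow runs standalone.

```python
# Hypothetical sketch of the dynamic-vs-merge branching in
# build_z_image_model_with_lora. The stub classes below are
# placeholders standing in for the real LightX2V types.

class ZImageTransformerModel:
    def __init__(self, **kwargs):
        self.kwargs = kwargs  # record constructor args for inspection

class LoraAdapter:
    def __init__(self, model):
        self.model = model
    def merge(self, path, strength):
        # Placeholder: the real adapter folds LoRA weights into the base model.
        self.model.kwargs.setdefault("merged", []).append((path, strength))

def build_z_image_model_with_lora(config, lora_configs, **model_kwargs):
    if config.get("lora_dynamic_apply", False):
        # Dynamic path: pass the first LoRA's parameters into the constructor.
        lora = lora_configs[0]
        model_kwargs["lora_path"] = lora["path"]
        model_kwargs["lora_strength"] = lora.get("strength", 1.0)
        return ZImageTransformerModel(**model_kwargs)
    # Merge path: build the base model, then fold each LoRA's weights in.
    assert not config.get("dit_quantized", False), \
        "Merging LoRA is not supported for quantized models."
    model = ZImageTransformerModel(**model_kwargs)
    adapter = LoraAdapter(model)
    for lora in lora_configs:
        adapter.merge(lora["path"], lora.get("strength", 1.0))
    return model
```

Note that the dynamic path consumes only the first entry of lora_configs, while the merge path iterates over all of them; this asymmetry is what the review comment below the fold flags.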


Changelog
  • lightx2v/models/runners/z_image/z_image_runner.py
    • Added LoraAdapter import.
    • Implemented build_z_image_model_with_lora function for conditional LoRA model construction.
    • Modified load_transformer method to utilize the new LoRA building logic.
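The call-site change can be sketched as a simple dispatch: the builder function names and the lora_configs key follow the changelog above, but the signature below is a hypothetical simplification, not the runner's actual method.

```python
# Hypothetical sketch of the conditional dispatch in load_transformer:
# use the LoRA-aware builder only when lora_configs is present.
def load_transformer(config, build_plain, build_with_lora):
    lora_configs = config.get("lora_configs")
    if lora_configs:
        return build_with_lora(config, lora_configs)
    return build_plain(config)
```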
Activity
  • No specific activity (comments, reviews, or progress updates) has been recorded for this pull request yet.

@gushiqiao gushiqiao merged commit 1d08137 into main Feb 4, 2026
2 checks passed
@gushiqiao gushiqiao deleted the gsq/z-fix branch February 4, 2026 03:26
@gemini-code-assist (bot) left a comment

Code Review

This pull request adds support for LoRA to the Z-Image runner. It introduces a new helper function build_z_image_model_with_lora to handle both dynamic application and merging of LoRA weights. The load_transformer method is updated to use this new functionality.

My review focuses on improving the robustness and clarity of the new LoRA handling logic. I've suggested changes to handle LoRA configurations more safely and to provide clearer error messages to the user. Specifically, I've pointed out that the dynamic LoRA application path only uses the first LoRA configuration and should warn the user if more are provided, and that an assertion message could be more explicit about why LoRA merging is not supported in certain cases.

Comment on lines +38 to +41

    lora_path = lora_configs[0]["path"]
    lora_strength = lora_configs[0]["strength"]
    model_kwargs["lora_path"] = lora_path
    model_kwargs["lora_strength"] = lora_strength

Severity: medium

The current implementation for dynamic LoRA application has a couple of issues:

  1. It only considers the first LoRA configuration and silently ignores others if multiple are provided.
  2. It accesses lora_configs[0]['strength'] directly, which can cause a KeyError if the key is missing.

The suggested change addresses these by using .get() for safe access to 'strength' and adding a warning when more than one LoRA is configured for dynamic application, as this path appears to support only one.

Suggested change

Before:

    lora_path = lora_configs[0]["path"]
    lora_strength = lora_configs[0]["strength"]
    model_kwargs["lora_path"] = lora_path
    model_kwargs["lora_strength"] = lora_strength

After:

    lora_config = lora_configs[0]
    if len(lora_configs) > 1:
        logger.warning("Dynamic LoRA apply only supports one LoRA configuration, using the first one.")
    model_kwargs["lora_path"] = lora_config["path"]
    model_kwargs["lora_strength"] = lora_config.get("strength", 1.0)

Comment on the merge branch:

        model_kwargs["lora_strength"] = lora_strength
        model = z_image_module(**model_kwargs)
    else:
        assert not config.get("dit_quantized", False), "Online LoRA only for quantized models; merging LoRA is unsupported."

Severity: medium

The assertion message is a bit confusing. It correctly prevents LoRA merging for quantized models, but the message 'Online LoRA only for quantized models; merging LoRA is unsupported.' could be clearer. A more direct message explaining that merging is not supported for quantized models and suggesting the alternative would be more helpful for the user.

Suggested change

Before:

    assert not config.get("dit_quantized", False), "Online LoRA only for quantized models; merging LoRA is unsupported."

After:

    assert not config.get("dit_quantized", False), "Merging LoRA is not supported for quantized models. Use `lora_dynamic_apply=True` instead."
