
fix: handle LoKr format keys in Z-image LoRA conversion#13250

Open
s-zx wants to merge 1 commit into huggingface:main from s-zx:fix/13221-zimage-lora-lokr

Conversation


@s-zx s-zx commented Mar 10, 2026

Summary

The _convert_non_diffusers_z_image_lora_to_diffusers function did not consume LoKr-format keys (.lokr_w1, .lokr_w2, .alpha) from external Z-image LoRA checkpoints (e.g. those produced by Kohya/LyCORIS trainers). The leftover keys meant the state_dict was not empty after conversion, which triggered a ValueError.
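The failure mode can be illustrated with a minimal sketch; the function name and error message below are hypothetical stand-ins for the actual post-conversion check in diffusers:

```python
def raise_on_leftover_keys(state_dict):
    """Hypothetical stand-in for the post-conversion check:
    any keys the converter did not consume raise a ValueError."""
    if len(state_dict) > 0:
        raise ValueError(f"Unhandled state_dict keys: {sorted(state_dict)}")

# A LoKr checkpoint leaves its keys unconsumed, so the check fires:
leftover = {
    "blocks.0.attn.qkv.lokr_w1": "...",
    "blocks.0.attn.qkv.lokr_w2": "...",
}
try:
    raise_on_leftover_keys(leftover)
except ValueError as exc:
    print(exc)
```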

Root Cause

The conversion function handled the standard LoRA naming schemes (lora_down/lora_up, lora_A/lora_B, lora.down/lora.up) but had no handling for the LoKr decomposition format emitted by some trainers.

Fix

Add handling that converts the LoKr decomposition to the standard lora_A/lora_B format: for linear layers, lokr_w1 @ lokr_w2 maps to lora_B @ lora_A, and the standard alpha/rank scaling used by the other formats applies unchanged.
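The key mapping can be sketched as follows. This is a simplified illustration, not the actual diffusers code: the helper name is hypothetical, and the real conversion in _convert_non_diffusers_z_image_lora_to_diffusers also renames module prefixes and applies the alpha/rank scaling to the tensors themselves.

```python
def convert_lokr_keys(state_dict):
    """Hypothetical sketch: consume LoKr keys and emit standard LoRA keys.

    Following lokr_w1 @ lokr_w2 == lora_B @ lora_A, lokr_w1 plays the
    role of lora_B and lokr_w2 the role of lora_A. Alpha keys are carried
    over so the usual alpha/rank scaling can be applied afterwards.
    """
    converted = {}
    for key in list(state_dict):
        if key.endswith(".lokr_w1"):
            converted[key.replace(".lokr_w1", ".lora_B.weight")] = state_dict.pop(key)
        elif key.endswith(".lokr_w2"):
            converted[key.replace(".lokr_w2", ".lora_A.weight")] = state_dict.pop(key)
        elif key.endswith(".alpha"):
            converted[key] = state_dict.pop(key)
    return converted
```

Because every LoKr key is popped from the input dict, the state_dict is empty after conversion and the "not empty" ValueError no longer triggers.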

Fixes #13221



Development

Successfully merging this pull request may close these issues.

Zimage lora support issue too
