Describe the bug
I'm running the diffusers pipeline using the standard process with a custom LoRA trained with diffusers:
import torch
from diffusers import WanPipeline

# vae, scheduler, and model_id are defined earlier (elided here)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.scheduler = scheduler
pipe.load_lora_weights("lora/customdiffusers_lora.safetensors")
# etc...
It runs without error, but the LoRA has no effect on the output, and I see the following warning:
No LoRA keys associated to WanTransformer3DModel found with the prefix='transformer'. This is safe to ignore if LoRA state dict didn't originally have any WanTransformer3DModel related params. You can also try specifying prefix=None to resolve the warning. Otherwise, open an issue if you think it's unexpected: https://github.com/huggingface/diffusers/issues/new
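The warning suggests passing prefix=None. If I understand it correctly, that would look like the line below (a sketch only; I haven't verified whether this actually applies the LoRA or just silences the warning):

# Load without key-prefix filtering, per the warning's suggestion
pipe.load_lora_weights("lora/customdiffusers_lora.safetensors", prefix=None)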
Is there any config file I need to change for this to work? Thanks
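For reference, here is a minimal sketch (assuming safetensors is installed) for listing the parameter names stored in the LoRA file, to check whether they carry the transformer prefix that diffusers expects:

from safetensors import safe_open

# Print the keys stored in the LoRA checkpoint so they can be compared
# against the "transformer." prefix that WanTransformer3DModel looks for.
with safe_open("lora/customdiffusers_lora.safetensors", framework="pt") as f:
    for key in f.keys():
        print(key)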
Reproduction
N/A, since this involves a custom LoRA; the loading code above is the relevant part.
Logs
System Info
diffusers 0.33, Linux, Python 3.10
Who can help?
No response