
Commit eb2ad02

sayakpaul and a-r-r-o-w authored
Apply suggestions from code review
Co-authored-by: Aryan <[email protected]>
1 parent 4e68e84 commit eb2ad02

2 files changed: +5 −2 lines changed

src/diffusers/loaders/lora_pipeline.py
Lines changed: 4 additions & 1 deletion

@@ -2357,7 +2357,10 @@ def _maybe_expand_transformer_param_shape_or_error_(
                     expanded_module = torch.nn.Linear(
                         in_features, out_features, bias=bias, dtype=module_weight.dtype
                     )
-                    # Only weights are expanded and biases are not.
+                    # Only weights are expanded and biases are not. This is because only the input dimensions
+                    # are changed while the output dimensions remain the same. The shape of the weight tensor
+                    # is (out_features, in_features), while the shape of bias tensor is (out_features,), which
+                    # explains the reason why only weights are expanded.
                     new_weight = torch.zeros_like(
                         expanded_module.weight.data, device=module_weight.device, dtype=module_weight.dtype
                     )
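The new comment describes the expansion in prose. As a minimal, self-contained sketch of the same idea, assuming hypothetical sizes (64 → 128 input features, 32 output features) rather than anything the pipeline actually uses:

import torch

# Hypothetical sizes for illustration: grow a Linear layer's input
# dimension from 64 to 128 while keeping the output dimension at 32.
module = torch.nn.Linear(64, 32, bias=True)
expanded_module = torch.nn.Linear(128, 32, bias=True, dtype=module.weight.dtype)

# weight has shape (out_features, in_features), so only its second (input)
# dimension grows; bias has shape (out_features,) and carries over unchanged.
new_weight = torch.zeros_like(expanded_module.weight.data)
new_weight[:, : module.in_features] = module.weight.data
expanded_module.weight.data.copy_(new_weight)
expanded_module.bias.data.copy_(module.bias.data)

Zero-initializing the new input columns means the expanded layer behaves identically to the original on the first 64 channels.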

tests/lora/test_lora_layers_flux.py
Lines changed: 1 addition & 1 deletion

@@ -583,7 +583,7 @@ def test_fuse_expanded_lora_with_regular_lora(self):
         self.assertTrue(np.allclose(lora_output_3, lora_output_4, atol=1e-3, rtol=1e-3))

     def test_load_regular_lora(self):
-        # This test checks if a regular lora (think of one trained Flux.1 Dev for example) can be loaded
+        # This test checks if a regular lora (think of one trained on Flux.1 Dev for example) can be loaded
         # into the transformer with more input channels than Flux.1 Dev, for example. Some examples of those
         # transformers include Flux Fill, Flux Control, etc.
         components, _, _ = self.get_dummy_components(FlowMatchEulerDiscreteScheduler)
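For context on what this test exercises: loading a regular LoRA into a transformer with more input channels works because the LoRA's down-projection matrix can itself be zero-padded along the input dimension. A rough sketch of that idea, with made-up shapes (the real handling lives in the pipeline's LoRA-loading path):

import torch

# Made-up shapes: a LoRA down-projection (A) trained against 64 input
# channels, loaded into a layer that now takes 128 input channels.
rank, small_in, big_in = 4, 64, 128
lora_A = torch.randn(rank, small_in)

# Zero-pad the input dimension; the extra channels contribute nothing to
# the LoRA update, so behaviour on the first `small_in` channels is preserved.
expanded_A = torch.zeros(rank, big_in, dtype=lora_A.dtype)
expanded_A[:, :small_in] = lora_A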
