
Commit acba7b7

casted -> cast
1 parent c2b1ec5 commit acba7b7

File tree

1 file changed: +1 −1 lines changed

src/diffusers/hooks/layerwise_casting.py

Lines changed: 1 addition & 1 deletion
@@ -92,7 +92,7 @@ class PeftInputAutocastDisableHook(ModelHook):
     1. Making forward implementations independent of device/dtype casting operations as much as possible.
     2. Peforming inference without losing information from casting to different precisions. With the current
        PEFT implementation (as linked in the reference above), and assuming running layerwise casting inference
-       with storage_dtype=torch.float8_e4m3fn and compute_dtype=torch.bfloat16, inputs are casted to
+       with storage_dtype=torch.float8_e4m3fn and compute_dtype=torch.bfloat16, inputs are cast to
        torch.float8_e4m3fn in the lora layer. We will then upcast back to torch.bfloat16 when we continue the
        forward pass in PEFT linear forward or Diffusers layer forward, with a `send_to_dtype` operation from
        LayerwiseCastingHook. This will be a lossy operation and result in poorer generation quality.
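For context (not part of this commit), a minimal sketch of why the cast described in the docstring is lossy: round-tripping a bfloat16 input through torch.float8_e4m3fn drops precision that the later upcast via `send_to_dtype` cannot recover. The tensor values below are illustrative only.

```python
import torch

# Illustrative sketch (assumes PyTorch >= 2.1 for float8 dtypes): downcasting
# an input to the storage dtype and upcasting back to the compute dtype, as
# described in the docstring above, loses information.
compute_dtype = torch.bfloat16
storage_dtype = torch.float8_e4m3fn

x = torch.randn(4, dtype=compute_dtype)

# Cast to float8 (what would happen inside the lora layer), then upcast back
# to bfloat16 (what a later send_to_dtype-style operation would do).
round_tripped = x.to(storage_dtype).to(compute_dtype)

print("original:     ", x)
print("round-tripped:", round_tripped)
print("max abs error:", (x - round_tripped).abs().max().item())
```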
