
Commit c813353

Changing the way we infer dtype to avoid forcing evaluation of lazy tensors
1 parent 7853bfb commit c813353

File tree

1 file changed (+1, -1 lines changed)
  • src/diffusers/models/autoencoders

src/diffusers/models/autoencoders/vae.py

Lines changed: 1 addition & 1 deletion
@@ -286,7 +286,7 @@ def forward(
 
         sample = self.conv_in(sample)
 
-        upscale_dtype = next(iter(self.up_blocks.parameters())).dtype
+        upscale_dtype = self.conv_out.weight.dtype
         if torch.is_grad_enabled() and self.gradient_checkpointing:
             # middle
             sample = self._gradient_checkpointing_func(self.mid_block, sample, latent_embeds)
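
For context (this sketch is not part of the commit), the change swaps how the decoder infers the dtype it upscales to: instead of pulling the first parameter out of `up_blocks` via an iterator, it reads the dtype of `conv_out.weight` directly, which, per the commit message, avoids forcing evaluation of lazy tensors. Below is a minimal, hedged illustration of the two lookups; `DummyDecoder` and its submodule layout are assumptions for the sketch, not the real diffusers `Decoder`.

import torch
import torch.nn as nn


class DummyDecoder(nn.Module):
    """Toy stand-in for the VAE decoder in vae.py (illustrative only)."""

    def __init__(self):
        super().__init__()
        self.conv_in = nn.Conv2d(4, 8, kernel_size=3, padding=1)
        self.up_blocks = nn.ModuleList([nn.Conv2d(8, 8, kernel_size=3, padding=1)])
        self.conv_out = nn.Conv2d(8, 3, kernel_size=3, padding=1)

    def upscale_dtype_old(self) -> torch.dtype:
        # Before the commit: build a parameter iterator over up_blocks and
        # read the first parameter's dtype. Per the commit message, this
        # lookup can force evaluation of lazy tensors.
        return next(iter(self.up_blocks.parameters())).dtype

    def upscale_dtype_new(self) -> torch.dtype:
        # After the commit: read the dtype from a directly addressed weight.
        return self.conv_out.weight.dtype


decoder = DummyDecoder().to(torch.float16)
# On an eagerly materialized module both lookups agree.
assert decoder.upscale_dtype_old() == decoder.upscale_dtype_new() == torch.float16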
