
Commit c02f72d

committed: fix
1 parent 5359222 commit c02f72d

File tree

1 file changed: +1, -1 lines changed


src/diffusers/pipelines/faster_cache_utils.py

Lines changed: 1 addition & 1 deletion
@@ -440,7 +440,7 @@ def new_forward(self, module: nn.Module, *args, **kwargs) -> Any:
             # TODO(aryan): remove later
             logger.debug("Skipping attention")

-            if torch.is_tensor(state.cache):
+            if torch.is_tensor(state.cache[-1]):
                 t_2_output, t_output = state.cache
                 weight = state.weight_callback(module)
                 output = self._compute_approximated_attention_output(t_2_output, t_output, weight, batch_size)
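For context on why the check changed: the very next line unpacks state.cache into two values (t_2_output and t_output), so the cache container itself is a pair rather than a single tensor, and torch.is_tensor on the container would never be True. The minimal sketch below (not part of the commit; the `cache` variable is a hypothetical stand-in for state.cache, modeled here as a tuple of tensors) shows the difference between the old and new checks.

# Minimal sketch, assuming state.cache holds the two most recent attention outputs as a pair.
import torch

t_2_output = torch.randn(1, 4)
t_output = torch.randn(1, 4)
cache = (t_2_output, t_output)  # stand-in for state.cache, which the diff unpacks into two values

print(torch.is_tensor(cache))      # False: the pair itself is not a tensor, so the old check never fired
print(torch.is_tensor(cache[-1]))  # True: the most recent cached output is a tensor, matching the new check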
