
Commit a901420

Commit message: more
1 parent 4cd5a3c commit a901420

1 file changed: 1 addition, 1 deletion

src/diffusers/pipelines/pipeline_utils.py

Lines changed: 1 addition & 1 deletion
@@ -1044,7 +1044,7 @@ def enable_model_cpu_offload(self, gpu_id: Optional[int] = None, device: Union[t
         device_mod = getattr(torch, device.type, None)
         if hasattr(device_mod, "empty_cache") and device_mod.is_available():
             device_mod.empty_cache()  # otherwise we don't see the memory savings (but they probably exist)
-
+            print("Empty cache called.")
         all_model_components = {k: v for k, v in self.components.items() if isinstance(v, torch.nn.Module)}

         self._all_hooks = []
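
For context only (not part of this commit): below is a minimal sketch of the device-agnostic cache-clearing pattern this hunk touches. It resolves the backend module (torch.cuda, torch.mps, ...) from the device type and calls empty_cache() only when that backend exposes it and reports itself available. The helper name _maybe_empty_cache is hypothetical, not from the diffusers source.

# Illustrative sketch only; mirrors the guarded empty_cache() call above.
import torch

def _maybe_empty_cache(device: torch.device) -> None:
    # Resolve the backend module for this device type, e.g. torch.cuda for "cuda".
    device_mod = getattr(torch, device.type, None)
    # Clear the allocator cache only if the backend supports it and is available.
    if hasattr(device_mod, "empty_cache") and device_mod.is_available():
        device_mod.empty_cache()

# Example: _maybe_empty_cache(torch.device("cuda")) releases cached blocks on CUDA.

Guarding on hasattr keeps the call a no-op on backends (such as plain CPU) that have no cache to clear.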

0 commit comments

Comments
 (0)