
Commit 4ccde32

fix style

Signed-off-by: YAO Matrix <[email protected]>

1 parent 74dbfe4

2 files changed: 2 additions, 1 deletion

src/diffusers/pipelines/pipeline_utils.py

Lines changed: 1 addition & 1 deletion
@@ -1205,7 +1205,7 @@ def enable_sequential_cpu_offload(self, gpu_id: Optional[int] = None, device: Un
         r"""
         Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state
         dicts of all `torch.nn.Module` components (except those in `self._exclude_from_cpu_offload`) are saved to CPU
-        and then moved to `torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward`
+        and then moved to `torch.device('meta')` and loaded to accelerator only when their specific submodule has its `forward`
         method called. Offloading happens on a submodule basis. Memory savings are higher than with
         `enable_model_cpu_offload`, but performance is lower.
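For context, a minimal usage sketch of the API this docstring describes (the checkpoint ID is illustrative; any `DiffusionPipeline` checkpoint works):

import torch
from diffusers import StableDiffusionPipeline

# Load in half precision; do not call .to("cuda") before sequential offload,
# since the offload hooks manage device placement themselves.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Every torch.nn.Module submodule is kept on CPU (state dict saved, weights
# parked on the meta device) and moved to the accelerator only while its own
# forward() runs; this maximizes memory savings at the cost of throughput.
pipe.enable_sequential_cpu_offload()

image = pipe("a photo of an astronaut riding a horse").images[0]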

src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

Lines changed: 1 addition & 0 deletions
@@ -380,6 +380,7 @@ def encode_prompt(
                 adjust_lora_scale_text_encoder(self.text_encoder, lora_scale)
             else:
                 scale_lora_layers(self.text_encoder, lora_scale)
+
         if prompt is not None and isinstance(prompt, str):
             batch_size = 1
         elif prompt is not None and isinstance(prompt, list):
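For context, the `lora_scale` consumed in this branch reaches `encode_prompt` through `cross_attention_kwargs` at call time. A minimal sketch, assuming a hypothetical LoRA repo id:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("some-user/some-lora")  # hypothetical repo id

# "scale" is read out of cross_attention_kwargs as lora_scale and passed to
# encode_prompt(), which then takes the scale_lora_layers(...) branch shown
# in the hunk above when the PEFT backend is active.
image = pipe(
    "a photo of an astronaut riding a horse",
    cross_attention_kwargs={"scale": 0.7},
).images[0]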

0 commit comments