
Commit e736b09: update docstring
1 parent ea446b1 commit e736b09

1 file changed: 5 additions, 4 deletions


src/diffusers/models/model_loading_utils.py

Lines changed: 5 additions & 4 deletions
@@ -565,10 +565,11 @@ def _expand_device_map(device_map, param_names):
 
 # Adapted from: https://github.com/huggingface/transformers/blob/0687d481e2c71544501ef9cb3eef795a6e79b1de/src/transformers/modeling_utils.py#L5859
 def _caching_allocator_warmup(model, expanded_device_map: Dict[str, torch.device], dtype: torch.dtype) -> None:
-    """This function warm-ups the caching allocator based on the size of the model tensors that will reside on each
-    device. It allows to have one large call to Malloc, instead of recursively calling it later when loading the model,
-    which is actually the loading speed botteneck. Calling this function allows to cut the model loading time by a very
-    large margin.
+    """
+    This function warm-ups the caching allocator based on the size of the model tensors that will reside on each
+    device. It allows to have one large call to Malloc, instead of recursively calling it later when loading
+    the model, which is actually the loading speed bottleneck.
+    Calling this function allows to cut the model loading time by a very large margin.
     """
     # Remove disk and cpu devices, and cast to proper torch.device
     accelerator_device_map = {
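The docstring above describes the warmup strategy: sum the bytes that each accelerator device will eventually hold, then make one large allocation per device up front so the caching allocator can serve later per-tensor loads from already-reserved memory. A minimal, dependency-free sketch of the size-aggregation step is below; the function name `bytes_per_device` and the shape/device dictionaries are illustrative assumptions, not the actual diffusers implementation:

```python
# Hypothetical sketch of the warmup bookkeeping (illustrative names, not
# the diffusers API): group the total byte count per device, skipping
# "cpu" and "disk" entries, exactly as the real warmup only targets
# accelerator devices.

DTYPE_SIZES = {"float32": 4, "float16": 2, "bfloat16": 2}  # bytes per element

def bytes_per_device(param_shapes, device_map, dtype="float16"):
    """Sum the bytes that will land on each accelerator device.

    param_shapes: dict mapping parameter name -> tuple of dimensions
    device_map:   dict mapping parameter name -> device string
    """
    itemsize = DTYPE_SIZES[dtype]
    totals = {}
    for name, shape in param_shapes.items():
        device = device_map[name]
        if device in ("cpu", "disk"):  # only warm up accelerator devices
            continue
        numel = 1
        for dim in shape:
            numel *= dim
        totals[device] = totals.get(device, 0) + numel * itemsize
    return totals
```

With the totals in hand, the real warmup would then issue a single large allocation per device (e.g. one `torch.empty` of the computed size), which is the "one large call to Malloc" the docstring refers to.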
