
Commit 67a35b8

make style : 0.1.5 version ruff
1 parent b2dcacb commit 67a35b8

1 file changed: +30 −27 lines changed

src/diffusers/models/adapter.py

Lines changed: 30 additions & 27 deletions
```diff
@@ -30,8 +30,8 @@ class MultiAdapter(ModelMixin):
     MultiAdapter is a wrapper model that contains multiple adapter models and merges their outputs according to
     user-assigned weighting.
 
-    This model inherits from [`ModelMixin`]. Check the superclass documentation for common methods such as
-    downloading or saving.
+    This model inherits from [`ModelMixin`]. Check the superclass documentation for common methods such as downloading
+    or saving.
 
     Args:
         adapters (`List[T2IAdapter]`, *optional*, defaults to None):
```
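For context, the docstring above describes wrapping several `T2IAdapter` instances. A minimal construction sketch, not part of this commit; the two adapters and their settings are illustrative:

```python
from diffusers import MultiAdapter, T2IAdapter

# Two adapters with default settings; each expects a 3-channel control image.
adapter_1 = T2IAdapter(in_channels=3)
adapter_2 = T2IAdapter(in_channels=3)

# MultiAdapter merges the adapters' outputs using per-adapter weights.
multi_adapter = MultiAdapter(adapters=[adapter_1, adapter_2])
```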
```diff
@@ -77,14 +77,13 @@ def forward(self, xs: torch.Tensor, adapter_weights: Optional[List[float]] = Non
         r"""
         Args:
             xs (`torch.Tensor`):
-                A tensor of shape (batch, channel, height, width) representing input images for multiple adapter models,
-                concatenated along dimension 1(channel dimension).
-                The `channel` dimension should be equal to `num_adapter` * number of channel per image.
+                A tensor of shape (batch, channel, height, width) representing input images for multiple adapter
+                models, concatenated along dimension 1(channel dimension). The `channel` dimension should be equal to
+                `num_adapter` * number of channel per image.
 
             adapter_weights (`List[float]`, *optional*, defaults to None):
-                A list of floats representing the weights which will be multiplied by each adapter's output before summing
-                them together.
-                If `None`, equal weights will be used for all adapters.
+                A list of floats representing the weights which will be multiplied by each adapter's output before
+                summing them together. If `None`, equal weights will be used for all adapters.
         """
         if adapter_weights is None:
             adapter_weights = torch.tensor([1 / self.num_adapter] * self.num_adapter)
```
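The `forward` docstring above says `xs` is built by concatenating one control image per adapter along the channel axis. A sketch under that reading, continuing the `multi_adapter` from the previous snippet; the image sizes and weights are made up:

```python
import torch

# One 3-channel control image per adapter, batch of 1, 512x512 pixels.
depth_map = torch.randn(1, 3, 512, 512)
sketch = torch.randn(1, 3, 512, 512)

# Channel dim must equal num_adapter * channels per image: 2 * 3 = 6 here.
xs = torch.cat([depth_map, sketch], dim=1)

# Weight the first adapter's output more heavily; passing None instead
# weights all adapters equally (1 / num_adapter each).
features = multi_adapter(xs, adapter_weights=[0.7, 0.3])
```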
```diff
@@ -119,14 +118,15 @@ def save_pretrained(
             save_directory (`str` or `os.PathLike`):
                 The directory where the model will be saved. If the directory does not exist, it will be created.
             is_main_process (`bool`, optional, defaults=True):
-                Indicates whether current process is the main process or not.
-                Useful for distributed training (e.g., TPUs) and need to call this function on all processes.
-                In this case, set `is_main_process=True` only for the main process to avoid race conditions.
+                Indicates whether current process is the main process or not. Useful for distributed training (e.g.,
+                TPUs) and need to call this function on all processes. In this case, set `is_main_process=True` only
+                for the main process to avoid race conditions.
             save_function (`Callable`):
-                Function used to save the state dictionary. Useful for distributed training (e.g., TPUs) to replace `torch.save` with another method. Can also be configured using`DIFFUSERS_SAVE_MODE` environment variable.
+                Function used to save the state dictionary. Useful for distributed training (e.g., TPUs) to replace
+                `torch.save` with another method. Can also be configured using`DIFFUSERS_SAVE_MODE` environment
+                variable.
             safe_serialization (`bool`, optional, defaults=True):
-                If `True`, save the model using `safetensors`.
-                If `False`, save the model with `pickle`.
+                If `True`, save the model using `safetensors`. If `False`, save the model with `pickle`.
             variant (`str`, *optional*):
                 If specified, weights are saved in the format `pytorch_model.<variant>.bin`.
         """
```
```diff
@@ -153,8 +153,9 @@ def from_pretrained(cls, pretrained_model_path: Optional[Union[str, os.PathLike]
         the model, set it back to training mode using `model.train()`.
 
         Warnings:
-            *Weights from XXX not initialized from pretrained model* means that the weights of XXX are not pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning.
-            *Weights from XXX not used in YYY* means that the layer XXX is not used by YYY, so those weights are discarded.
+            *Weights from XXX not initialized from pretrained model* means that the weights of XXX are not pretrained
+            with the rest of the model. It is up to you to train those weights with a downstream fine-tuning. *Weights
+            from XXX not used in YYY* means that the layer XXX is not used by YYY, so those weights are discarded.
 
         Args:
             pretrained_model_path (`os.PathLike`):
@@ -174,20 +175,20 @@ def from_pretrained(cls, pretrained_model_path: Optional[Union[str, os.PathLike]
                 more information about each option see [designing a device
                 map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).
             max_memory (`Dict`, *optional*):
-                A dictionary mapping device identifiers to their maximum memory. Default to the maximum memory available for each
-                GPU and the available CPU RAM if unset.
+                A dictionary mapping device identifiers to their maximum memory. Default to the maximum memory
+                available for each GPU and the available CPU RAM if unset.
             low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`):
                 Speed up model loading by not initializing the weights and only loading the pre-trained weights. This
                 also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the
                 model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch,
                 setting this argument to `True` will raise an error.
             variant (`str`, *optional*):
-                If specified, load weights from a `variant` file (*e.g.* pytorch_model.<variant>.bin). `variant` will be
-                ignored when using `from_flax`.
+                If specified, load weights from a `variant` file (*e.g.* pytorch_model.<variant>.bin). `variant` will
+                be ignored when using `from_flax`.
             use_safetensors (`bool`, *optional*, defaults to `None`):
-                If `None`, the `safetensors` weights will be downloaded if available **and** if`safetensors` library is installed.
-                If `True`, the model will be forcibly loaded from`safetensors` weights.
-                If `False`, `safetensors` is not used.
+                If `None`, the `safetensors` weights will be downloaded if available **and** if`safetensors` library is
+                installed. If `True`, the model will be forcibly loaded from`safetensors` weights. If `False`,
+                `safetensors` is not used.
         """
         idx = 0
         adapters = []
```
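A sketch of reloading with `from_pretrained` using the options listed above; the path is illustrative, and the `torch_dtype` keyword is assumed to be forwarded to the underlying `ModelMixin` loading logic:

```python
import torch

from diffusers import MultiAdapter

# Load from the directory written by save_pretrained; with the default
# use_safetensors=None, safetensors weights are preferred when available.
multi_adapter = MultiAdapter.from_pretrained(
    "./my_multi_adapter",
    torch_dtype=torch.float16,
)

# Per the docstring, the model is in eval mode by default; switch back
# before fine-tuning.
multi_adapter.train()
```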
```diff
@@ -222,14 +223,16 @@ class T2IAdapter(ModelMixin, ConfigMixin):
     and
     [AdapterLight](https://github.com/TencentARC/T2I-Adapter/blob/686de4681515662c0ac2ffa07bf5dda83af1038a/ldm/modules/encoders/adapter.py#L235).
 
-    This model inherits from [`ModelMixin`]. Check the superclass documentation for the common methods, such as downloading or saving.
+    This model inherits from [`ModelMixin`]. Check the superclass documentation for the common methods, such as
+    downloading or saving.
 
     Args:
         in_channels (`int`, *optional*, defaults to `3`):
-            The number of channels in the adapter's input (*control image*). Set it to 1 if you're using a gray scale image.
+            The number of channels in the adapter's input (*control image*). Set it to 1 if you're using a gray scale
+            image.
         channels (`List[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`):
-            The number of channels in each downsample block's output hidden state. The `len(block_out_channels)` determines
-            the number of downsample blocks in the adapter.
+            The number of channels in each downsample block's output hidden state. The `len(block_out_channels)`
+            determines the number of downsample blocks in the adapter.
         num_res_blocks (`int`, *optional*, defaults to `2`):
             Number of ResNet blocks in each downsample block.
         downscale_factor (`int`, *optional*, defaults to `8`):
```
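Finally, a construction sketch for `T2IAdapter` matching the argument docs above; the grayscale setting is just one example, and the remaining values are the documented defaults written out explicitly:

```python
from diffusers import T2IAdapter

# A grayscale control-image adapter: 1 input channel, four downsample
# blocks (len(channels)), 2 ResNet blocks per block, 8x initial downscaling.
adapter = T2IAdapter(
    in_channels=1,
    channels=[320, 640, 1280, 1280],
    num_res_blocks=2,
    downscale_factor=8,
)
```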
