@@ -386,8 +386,8 @@ def load_ip_adapter(
  image_encoder_pretrained_model_name_or_path (`str`, *optional*, defaults to `./image_encoder`):
      Can be either:

-         - A string, the *model id* (for example `openai/clip-vit-large-patch14`) of a pretrained model hosted on
-           the Hub.
+         - A string, the *model id* (for example `openai/clip-vit-large-patch14`) of a pretrained model
+           hosted on the Hub.
      - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved
        with [`ModelMixin.save_pretrained`].
  cache_dir (`Union[str, os.PathLike]`, *optional*):
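The docstring above says the argument is either a Hub *model id* or a local *directory*. A minimal sketch of how a loader can distinguish the two cases (illustrative only, with a hypothetical helper name, not the actual diffusers implementation):

```python
import os


def resolve_image_encoder_source(name_or_path: str) -> str:
    """Classify the image-encoder argument: an existing local directory
    (e.g. one written by `ModelMixin.save_pretrained`) is loaded from disk,
    anything else is treated as a model id to resolve on the Hub.
    Hypothetical helper for illustration."""
    if os.path.isdir(name_or_path):
        return "local_directory"
    return "hub_model_id"


# A Hub-style id is not a directory on disk, so it resolves to the Hub.
print(resolve_image_encoder_source("openai/clip-vit-large-patch14"))  # hub_model_id
```

Real loaders add more checks (cached snapshots, revision pinning), but the directory-versus-id branch is the core of the contract the docstring describes.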
@@ -756,7 +756,8 @@ def __call__(
      Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
      IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
      provided, embeddings are computed from the `ip_adapter_image` input argument.
- negative_ip_adapter_image: (`PipelineImageInput`, *optional*): Optional image input to work with IP Adapters.
+ negative_ip_adapter_image:
+     (`PipelineImageInput`, *optional*): Optional image input to work with IP Adapters.
  negative_ip_adapter_image_embeds (`List[torch.Tensor]`, *optional*):
      Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
      IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
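The docstrings in this hunk state a concrete contract: a list with one `(batch_size, num_images, emb_dim)` tensor per loaded IP-Adapter. A small validation sketch of that contract, using a hypothetical helper name and NumPy arrays standing in for the `torch.Tensor` values the pipeline actually expects:

```python
import numpy as np


def check_ip_adapter_embeds(embeds_list, num_ip_adapters):
    """Validate pre-generated IP-Adapter embeddings against the documented
    contract: one tensor per loaded IP-Adapter, each with three dimensions
    (batch_size, num_images, emb_dim). Hypothetical helper for illustration."""
    if len(embeds_list) != num_ip_adapters:
        raise ValueError(
            f"expected {num_ip_adapters} embedding tensors, got {len(embeds_list)}"
        )
    for i, emb in enumerate(embeds_list):
        if emb.ndim != 3:
            raise ValueError(
                f"element {i} must have shape (batch_size, num_images, emb_dim), "
                f"got {emb.shape}"
            )
    return True


# Two IP-Adapters, batch size 1, one reference image each, 768-dim embeddings.
embeds = [np.zeros((1, 1, 768)), np.zeros((1, 1, 768))]
check_ip_adapter_embeds(embeds, num_ip_adapters=2)
```

The same shape rule applies to `negative_ip_adapter_image_embeds`; when either list is omitted, the pipeline computes embeddings from the corresponding image argument instead.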