Error when trying to use BLIP in Embeddings. SD 1.5 #1283
Unanswered
Idamarinella asked this question in Q&A
Replies: 1 comment · 1 reply
-
try this:
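(The reply's original code block did not survive extraction, so the exact suggestion is lost. As an assumption, and not necessarily what the replier posted, a common workaround for this ValueError is to roll transformers back to the release the webui was written against, since the failing validation only exists in newer versions:)

# Assumption: the installed transformers is newer than the webui's pin, and
# _validate_model_kwargs (which raises the ValueError in the traceback below)
# does not exist in the pinned release. Run in a Colab cell, then restart
# the runtime so the downgraded package is picked up.
!pip install transformers==4.19.2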
-
hi,
when I try to preprocess images for an embedding in the Train tab, I get the error below if I flag "Use BLIP for caption".
I use the Automatic1111 Colab.
Thanks in advance
Error completing request
Arguments: ('/content/gdrive/MyDrive/sd/inversion ferri /input', '/content/gdrive/MyDrive/sd/inversion ferri /output', 512, 512, 'ignore', True, False, True, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False) {}
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 45, in f
res = list(func(*args, **kwargs))
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 28, in f
res = func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/ui.py", line 19, in preprocess
modules.textual_inversion.preprocess.preprocess(*args)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/preprocess.py", line 23, in preprocess
preprocess_work(process_src, process_dst, process_width, process_height, preprocess_txt_action, process_flip, process_split, process_caption, process_caption_deepbooru, split_threshold, overlap_ratio, process_focal_crop, process_focal_crop_face_weight, process_focal_crop_entropy_weight, process_focal_crop_edges_weight, process_focal_crop_debug)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/preprocess.py", line 194, in preprocess_work
save_pic(img, index, params, existing_caption=existing_caption)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/preprocess.py", line 83, in save_pic
save_pic_with_caption(image, index, params, existing_caption=existing_caption)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/preprocess.py", line 52, in save_pic_with_caption
caption += shared.interrogator.generate_caption(image)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/interrogate.py", line 133, in generate_caption
caption = self.blip_model.generate(gpu_image, sample=False, num_beams=shared.opts.interrogate_clip_num_beams, min_length=shared.opts.interrogate_clip_min_length, max_length=shared.opts.interrogate_clip_max_length)
File "/content/gdrive/MyDrive/sd/stablediffusion/src/blip/models/blip.py", line 156, in generate
outputs = self.text_decoder.generate(input_ids=input_ids,
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_utils.py", line 1268, in generate
self._validate_model_kwargs(model_kwargs.copy())
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_utils.py", line 964, in _validate_model_kwargs
raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['encoder_hidden_states', 'encoder_attention_mask'] (note: typos in the generate arguments will also show up in this list)
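(For context, an addition rather than part of the original thread: the traceback shows blip.py forwarding encoder_hidden_states and encoder_attention_mask to self.text_decoder.generate(), and transformers' _validate_model_kwargs rejecting them because the decoder does not declare those kwargs. A quick diagnostic sketch, assuming the webui's expected version is 4.19.2, the pin in requirements_versions.txt at the time to my knowledge:)

import transformers
from packaging import version

# Diagnostic sketch (assumption, not from the thread): _validate_model_kwargs
# postdates the transformers release the webui was written against, so a newer
# installed transformers will reject BLIP's cross-attention kwargs with the
# ValueError shown above.
PINNED = "4.19.2"  # hypothetical pin, matching the webui requirements of the era
installed = transformers.__version__
if version.parse(installed) > version.parse(PINNED):
    print(f"transformers {installed} is newer than {PINNED}; "
          "BLIP captioning is likely to fail as above.")
else:
    print(f"transformers {installed} is at or below the expected pin.")

(If the printed version is newer, the rollback sketched after the reply above is the usual way out.)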