ReActor with GFPGAN and CodeFormer not working in Stable Diffusion #112
Unanswered
Kiki32134142 asked this question in Q&A
Replies: 1 comment
Error in some SFW postprocess.
-
From https://github.com/AUTOMATIC1111/stable-diffusion-webui
Already up to date.
venv "venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Version: 1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
CUDA 12.1
Launching Web UI with arguments: --opt-sdp-attention --autolaunch --medvram
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
CHv1.8.13: Get Custom Model Folder
ControlNet preprocessor location: E:\staibl\stable-diffusion-portable-main\extensions\sd-webui-controlnet\annotator\downloads
2025-05-01 14:56:35,165 - ControlNet - INFO - ControlNet v1.1.455
14:56:35 - ReActor - STATUS - Running v0.7.1-b3 on Device: CUDA
Loading weights [135bb58293] from E:\staibl\stable-diffusion-portable-main\models\Stable-diffusion\revAnimated_reva1.safetensors
CHv1.8.13: Set Proxy:
2025-05-01 14:56:36,165 - ControlNet - INFO - ControlNet UI callback registered.
Creating model from config: E:\staibl\stable-diffusion-portable-main\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
Thanks for being a Gradio user! If you have questions or feedback, please join our Discord server and chat with us: https://discord.gg/feTf9x3ZSB
To create a public link, set `share=True` in `launch()`.
Startup time: 17.2s (prepare environment: 5.7s, import torch: 4.9s, import gradio: 1.0s, setup paths: 0.7s, initialize shared: 0.3s, other imports: 0.5s, load scripts: 2.9s, create ui: 0.7s, gradio launch: 0.4s).
Loading VAE weights specified in settings: E:\staibl\stable-diffusion-portable-main\models\VAE\vae-ft-mse-840000-ema-pruned.ckpt
Applying attention optimization: sdp... done.
Model loaded in 5.1s (load weights from disk: 0.5s, create model: 0.6s, apply weights to model: 2.8s, apply half(): 0.4s, load VAE: 0.2s, calculate empty prompt: 0.5s).
5%|████▏ | 1/20 [00:15<04:51, 15.35s/it]
*** Error completing request
*** Arguments: ('task(yqkjfdqqm6cgcb4)', <gradio.routes.Request object at 0x000001E7D29DF640>, 'woman, cyberpunk, brownhair', '(worst quality:1.4), (low quality:1.4), (monochrome:1.1), bad_prompt_version2, bad_artist_anime, (loli: 1.5), (shota:1.5), (child:1.4), ((disfigured)), ((bad art)), vignette, cinematic, grayscale, bokeh, blurred, depth of field, (bad-hands-5:1.3)', [], 1, 1, 8, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.UNKNOWN: 'Unknown'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, 
hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.UNKNOWN: 'Unknown'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.UNKNOWN: 'Unknown'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), <PIL.Image.Image image mode=RGB size=1434x2160 at 0x1E7D14035B0>, True, '0', '0', 'inswapper_128.onnx', 'GFPGAN', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, True, False, 'CUDA', False, 0, 'None', '', None, False, False, 0.5, 0, 'tab_single', False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "E:\staibl\stable-diffusion-portable-main\modules\call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "E:\staibl\stable-diffusion-portable-main\modules\call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "E:\staibl\stable-diffusion-portable-main\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "E:\staibl\stable-diffusion-portable-main\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "E:\staibl\stable-diffusion-portable-main\modules\processing.py", line 847, in process_images
res = process_images_inner(p)
File "E:\staibl\stable-diffusion-portable-main\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "E:\staibl\stable-diffusion-portable-main\modules\processing.py", line 998, in process_images_inner
devices.test_for_nans(samples_ddim, "unet")
File "E:\staibl\stable-diffusion-portable-main\modules\devices.py", line 265, in test_for_nans
raise NansException(message)
modules.devices.NansException: A tensor with NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
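The NansException message itself names the remedies. A sketch of how the `--no-half` suggestion might be applied to the launch flags already visible in this log (the exact file, `webui-user.sh` vs. `webui-user.bat`, depends on the install; this portable setup may differ):

```shell
# Hypothetical launch-flag edit, following the NansException message's own advice.
# --no-half runs the model in float32 (uses more VRAM); alternatively, enable
# "Upcast cross attention layer to float32" in Settings > Stable Diffusion.
export COMMANDLINE_ARGS="--opt-sdp-attention --autolaunch --medvram --no-half"
```

If float32 is too heavy for the card, the upcast-cross-attention setting alone is the lighter first thing to try.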
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [01:31<00:00, 4.56s/it]
15:00:54 - ReActor - STATUS - Working: source face index [0], target face index [0]
15:00:54 - ReActor - STATUS - Checking for any unsafe content
*** Error running postprocess_image: E:\staibl\stable-diffusion-portable-main\extensions\sd-webui-reactor-sfw\scripts\reactor_faceswap.py
Traceback (most recent call last):
File "E:\staibl\stable-diffusion-portable-main\modules\scripts.py", line 912, in postprocess_image
script.postprocess_image(p, pp, *script_args)
File "E:\staibl\stable-diffusion-portable-main\extensions\sd-webui-reactor-sfw\scripts\reactor_faceswap.py", line 465, in postprocess_image
result, output, swapped = swap_face(
File "E:\staibl\stable-diffusion-portable-main\extensions\sd-webui-reactor-sfw\scripts\reactor_swapper.py", line 391, in swap_face
if check_sfw_image(result_image) is None:
File "E:\staibl\stable-diffusion-portable-main\extensions\sd-webui-reactor-sfw\scripts\reactor_swapper.py", line 359, in check_sfw_image
if not sfw.nsfw_image(tmp_img, NSFWDET_MODEL_PATH):
File "E:\staibl\stable-diffusion-portable-main\extensions\sd-webui-reactor-sfw\scripts\reactor_sfw.py", line 15, in nsfw_image
result = predict(img)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\transformers\pipelines\image_classification.py", line 100, in __call__
return super().__call__(images, **kwargs)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\transformers\pipelines\base.py", line 1120, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\transformers\pipelines\base.py", line 1127, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\transformers\pipelines\base.py", line 1026, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\transformers\pipelines\image_classification.py", line 108, in _forward
model_outputs = self.model(**model_inputs)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\transformers\models\vit\modeling_vit.py", line 804, in forward
outputs = self.vit(
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\transformers\models\vit\modeling_vit.py", line 583, in forward
embedding_output = self.embeddings(
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\transformers\models\vit\modeling_vit.py", line 122, in forward
embeddings = self.patch_embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\transformers\models\vit\modeling_vit.py", line 181, in forward
embeddings = self.projection(pixel_values).flatten(2).transpose(1, 2)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\staibl\stable-diffusion-portable-main\extensions-builtin\Lora\networks.py", line 599, in network_Conv2d_forward
return originals.Conv2d_forward(self, input)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
return self._conv_forward(input, self.weight, self.bias)
File "E:\staibl\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
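This RuntimeError means the SFW classifier received a CPU tensor (`torch.FloatTensor`) while its conv weights live on CUDA (`torch.cuda.FloatTensor`): the image pipeline's input was never moved to the GPU before the conv ran. A minimal sketch of the mismatch and the usual fix (move the input onto the weight's device before the op), using lightweight stand-in objects rather than real torch tensors; all names here are illustrative, not ReActor's actual API:

```python
# Stand-in for torch.Tensor: just a device tag and a .to() that returns
# a copy on the requested device.
class FakeTensor:
    def __init__(self, device="cpu"):
        self.device = device

    def to(self, device):
        return FakeTensor(device)

def conv_forward(inp, weight):
    # Like F.conv2d: raises when input and weight live on different devices.
    if inp.device != weight.device:
        raise RuntimeError(
            f"Input type ({inp.device}) and weight type ({weight.device}) "
            "should be the same"
        )
    return "ok"

def safe_forward(inp, weight):
    # The fix: move the input onto the weight's device before the conv.
    if inp.device != weight.device:
        inp = inp.to(weight.device)
    return conv_forward(inp, weight)

weight = FakeTensor("cuda")   # classifier weights on the GPU
pixels = FakeTensor("cpu")    # preprocessed image left on the CPU
safe_forward(pixels, weight)  # succeeds; conv_forward(pixels, weight) would raise
```

In the real pipeline the equivalent would be `pixel_values.to(model.device)` before the forward pass; note the traceback also shows the Lora extension's `network_Conv2d_forward` hook intercepting the classifier's conv, so disabling extensions one by one is a reasonable way to isolate which side drops the device.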
Total progress: 21it [02:43, 7.78s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [01:13<00:00, 3.69s/it]
15:02:45 - ReActor - STATUS - Working: source face index [0], target face index [0]
15:02:45 - ReActor - STATUS - Checking for any unsafe content
*** Error running postprocess_image: E:\staibl\stable-diffusion-portable-main\extensions\sd-webui-reactor-sfw\scripts\reactor_faceswap.py
(identical traceback to the one above, ending in the same RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same)
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [01:15<00:00, 3.76s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:58<00:00, 2.92s/it]
15:11:49 - ReActor - STATUS - Working: source face index [0], target face index [0]
15:11:49 - ReActor - STATUS - Checking for any unsafe content
*** Error running postprocess_image: E:\staibl\stable-diffusion-portable-main\extensions\sd-webui-reactor-sfw\scripts\reactor_faceswap.py
(identical traceback to the one above, ending in the same RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same)
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [01:14<00:00, 3.72s/it]