Describe the bug
I minimally modified the working example in https://huggingface.co/docs/diffusers/main/en/optimization/tgate?pipelines=Stable+Diffusion+XL to use the img2img pipeline, which produces the first error below. I also tried the SD img2img pipeline, which gives the second error. Both pipelines work when called without tgate, and text2img does work for me with tgate.
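For reference, the unmodified text2img usage from that docs page, which works for me with tgate, is roughly the following (paraphrased from memory, so the exact snippet may differ slightly):

import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler
from tgate import TgateSDXLLoader

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

gate_step = 10
inference_step = 25
pipe = TgateSDXLLoader(
    pipe,
    gate_step=gate_step,
    num_inference_steps=inference_step,
).to("cuda")

# text2img call: no image argument, and this runs without errors
image = pipe.tgate(
    "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k.",
    gate_step=gate_step,
    num_inference_steps=inference_step,
).images[0]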
Reproduction
sdxl:
!pip install tgate

import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers import DPMSolverMultistepScheduler
from tgate import TgateSDXLLoader
from PIL import Image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

gate_step = 10
inference_step = 25
pipe = TgateSDXLLoader(
    pipe,
    gate_step=gate_step,
    num_inference_steps=inference_step,
).to("cuda")

image = pipe.tgate(
    "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k.",
    image=Image.new('RGB', (1024, 1024)),
    gate_step=gate_step,
    num_inference_steps=inference_step,
).images[0]

sd:
!pip install tgate

import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers import DPMSolverMultistepScheduler
from tgate import TgateSDLoader
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

gate_step = 10
inference_step = 25
pipe = TgateSDLoader(
    pipe,
    gate_step=gate_step,
    num_inference_steps=inference_step,
).to("cuda")

image = pipe.tgate(
    "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k.",
    image=Image.new('RGB', (512, 512)),
    gate_step=gate_step,
    num_inference_steps=inference_step,
).images[0]

Logs
SDXL
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipython-input-800193126.py in <cell line: 0>()
23 ).to("cuda")
24
---> 25 image = pipe.tgate(
26 "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k.",
27 image = Image.new('RGB', (1024, 1024)),
2 frames
/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py in decorate_context(*args, **kwargs)
118 def decorate_context(*args, **kwargs):
119 with ctx_factory():
--> 120 return func(*args, **kwargs)
121
122 return decorate_context
/usr/local/lib/python3.12/dist-packages/tgate/SDXL.py in tgate(self, prompt, prompt_2, height, width, num_inference_steps, timesteps, sigmas, denoising_end, guidance_scale, negative_prompt, negative_prompt_2, num_images_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds, ip_adapter_image, ip_adapter_image_embeds, output_type, return_dict, cross_attention_kwargs, guidance_rescale, original_size, crops_coords_top_left, target_size, negative_original_size, negative_crops_coords_top_left, negative_target_size, clip_skip, callback_on_step_end, callback_on_step_end_tensor_inputs, gate_step, sp_interval, fi_interval, warm_up, lcm, **kwargs)
251
252 # 0. Default height and width to unet
--> 253 height = height or self.default_sample_size * self.vae_scale_factor
254 width = width or self.default_sample_size * self.vae_scale_factor
255
/usr/local/lib/python3.12/dist-packages/diffusers/configuration_utils.py in __getattr__(self, name)
142 return self._internal_dict[name]
143
--> 144 raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
145
146 def save_config(self, save_directory: Union[str, os.PathLike], push_to_hub: bool = False, **kwargs):
AttributeError: 'StableDiffusionXLImg2ImgPipeline' object has no attribute 'default_sample_size'
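From this traceback it looks like tgate's SDXL.py reads self.default_sample_size to fill in default height/width, and the img2img pipeline apparently does not define that attribute. As an unverified idea, passing height and width explicitly (the tgate signature above does accept them) might get past this particular line, though later steps in the wrapper may still assume the text2img pipeline:

# unverified workaround sketch: supply height/width so the failing
# default_sample_size lookup is never reached
image = pipe.tgate(
    "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k.",
    image=Image.new('RGB', (1024, 1024)),
    height=1024,
    width=1024,
    gate_step=gate_step,
    num_inference_steps=inference_step,
).images[0]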
--------
SD:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/tmp/ipython-input-3990070211.py in <cell line: 0>()
23 ).to("cuda")
24
---> 25 image = pipe.tgate(
26 "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k.",
27 image = Image.new('RGB', (512, 512)),
1 frames
/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py in decorate_context(*args, **kwargs)
118 def decorate_context(*args, **kwargs):
119 with ctx_factory():
--> 120 return func(*args, **kwargs)
121
122 return decorate_context
/usr/local/lib/python3.12/dist-packages/tgate/SD.py in tgate(self, prompt, height, width, num_inference_steps, timesteps, sigmas, guidance_scale, negative_prompt, num_images_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, ip_adapter_image, ip_adapter_image_embeds, output_type, return_dict, cross_attention_kwargs, guidance_rescale, clip_skip, callback_on_step_end, callback_on_step_end_tensor_inputs, gate_step, sp_interval, fi_interval, warm_up, **kwargs)
174
175 # 1. Check inputs. Raise error if not correct
--> 176 self.check_inputs(
177 prompt,
178 height,
TypeError: StableDiffusionImg2ImgPipeline.check_inputs() takes from 4 to 10 positional arguments but 11 were given
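As a quick diagnostic (not part of the reproduction), comparing the two check_inputs signatures directly should show where the positional-argument count diverges:

import inspect
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

# the text2img and img2img pipelines validate inputs differently,
# which is presumably why tgate's call with 11 positional args fails
print(inspect.signature(StableDiffusionPipeline.check_inputs))
print(inspect.signature(StableDiffusionImg2ImgPipeline.check_inputs))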
System Info
- 🤗 Diffusers version: 0.35.2
- Platform: Linux-6.6.105+-x86_64-with-glibc2.35
- Running on Google Colab?: Yes
- Python version: 3.12.12
- PyTorch version (GPU?): 2.8.0+cu126 (True)
- Flax version (CPU?/GPU?/TPU?): 0.10.6 (gpu)
- Jax version: 0.5.3
- JaxLib version: 0.5.3
- Huggingface_hub version: 0.35.3
- Transformers version: 4.57.1
- Accelerate version: 1.10.1
- PEFT version: 0.17.1
- Bitsandbytes version: not installed
- Safetensors version: 0.6.2
- xFormers version: not installed
- Accelerator: Tesla T4, 15360 MiB
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
Who can help?