Description
Describe the bug
I get an error like the one in the title when I load a FLUX.1 Fill GGUF-format file as a Flux transformer.
Reproduction
from huggingface_hub import hf_hub_download
import os

def download_model():
    # Model information
    repo_id = "YarvixPA/FLUX.1-Fill-dev-gguf"
    filename = "flux1-fill-dev-Q4_1.gguf"
    try:
        # Download the file
        model_path = hf_hub_download(
            repo_id=repo_id,
            filename=filename,
            resume_download=True
        )
        print(f"Download succeeded! File saved at: {model_path}")
        return model_path
    except Exception as e:
        print(f"Download failed: {str(e)}")
        return None

model_path = download_model()

import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

transformer = FluxTransformer2DModel.from_single_file(
    model_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=False,
    ignore_mismatched_sizes=True,
)
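For reference, the step that would normally follow this load is wiring the quantized transformer into a fill pipeline. The sketch below is not part of the original report; it assumes the from_single_file call above succeeds, uses diffusers' FluxFillPipeline with the standard black-forest-labs/FLUX.1-Fill-dev repository, and the image/mask URLs are placeholders:

import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

# Assumed follow-up step: build the fill pipeline around the GGUF-quantized
# transformer loaded above (variable `transformer` from the reproduction).
pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

# Placeholder inpainting inputs; replace with real image/mask URLs or local files.
image = load_image("https://example.com/cup.png")
mask = load_image("https://example.com/cup_mask.png")

result = pipe(
    prompt="a white paper cup",
    image=image,
    mask_image=mask,
    guidance_scale=30,
    num_inference_steps=50,
).images[0]
result.save("flux-fill-gguf.png")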
Logs
No response
System Info
- 🤗 Diffusers version: 0.32.0.dev0
- Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.12.2
- PyTorch version (GPU?): 2.5.1+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.25.0
- Transformers version: 4.47.0.dev0
- Accelerate version: 1.1.1
- PEFT version: 0.13.2
- Bitsandbytes version: 0.44.1
- Safetensors version: 0.4.5
- xFormers version: 0.0.28.post3
- Accelerator: NVIDIA GeForce RTX 4090, 24564 MiB
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
Who can help?