Using SYCL runs the computation on an Intel GPU. Please make sure you have installed the related driver and the [Intel® oneAPI Base Toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html) before starting. For more details, refer to [llama.cpp SYCL backend](https://github.com/ggml-org/llama.cpp/blob/master/docs/backend/SYCL.md#linux).
```bash
# Export relevant ENV variables
source /opt/intel/oneapi/setvars.sh
```
Below is a short example demonstrating how to use the high-level API to generate images:
```python
    # (excerpt; earlier arguments of the generation call omitted)
    # wtype="default",  # Weight type (e.g. "q8_0", "f16"); "default" reads the weight type from the model file
    # seed=1337,        # Uncomment to set a specific seed (use -1 for a random seed)
    preview_method="proj",
    preview_interval=2,  # Call the preview callback every 2 steps
    preview_callback=preview_callback,
)
output[0].save("output.png")  # Output is returned as a list of PIL Images
```
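The `preview_callback` passed above has to be defined beforehand. A minimal sketch, assuming the bindings invoke the callback with a step index and a PIL image (the exact callback signature may differ between binding versions, so treat this as an illustration):

```python
saved_steps = []

def preview_callback(step, image):
    # Assumed signature: a step index and a PIL.Image preview frame.
    # Save each intermediate preview so you can watch denoising progress.
    filename = f"preview_{step:03d}.png"
    image.save(filename)
    saved_steps.append(step)
```

With `preview_interval=2`, this writes `preview_002.png`, `preview_004.png`, and so on as sampling proceeds.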
Download the weights from the links below:
- Otherwise, download Chroma's safetensors from [lodestones/Chroma1-Flash](https://huggingface.co/lodestones/Chroma1-Flash), [lodestones/Chroma1-Base](https://huggingface.co/lodestones/Chroma1-Base) or [lodestones/Chroma1-HD](https://huggingface.co/lodestones/Chroma1-HD) ([lodestones/Chroma](https://huggingface.co/lodestones/Chroma) is DEPRECATED).
- The `vae` and `t5xxl` models are the same as for FLUX image generation linked above (`clip_l` is not required).
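With the files in place, generation can be run from the command line. A hedged sketch of the invocation (the weight file names are placeholders, and flag availability depends on your build; check `sd --help`):

```shell
# Illustrative invocation; file names below are placeholders, not exact releases.
./sd --diffusion-model chroma1-hd.safetensors \
     --vae ae.safetensors \
     --t5xxl t5xxl_fp16.safetensors \
     -p "a lighthouse at dusk" \
     -o output.png
```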
# Running distilled models: SSD1B and SDx.x with tiny U-Nets
## Preface
These models feature a reduced U-Net architecture. Unlike standard SDXL models, the SSD-1B U-Net contains only one middle block and fewer attention layers in its up- and down-blocks, resulting in significantly smaller file sizes. Using these models can reduce inference time by more than 33%. For more details, refer to Segmind's paper: https://arxiv.org/abs/2401.02677v1.
Similarly, SD1.x- and SD2.x-style models with a tiny U-Net consist of only 6 U-Net blocks, leading to very small files and time savings of up to 50%. For more information, see the paper: https://arxiv.org/pdf/2305.15798.pdf.
## SSD1B
Note that not all of these models follow the standard parameter naming conventions; still, several useful SSD-1B models are available online.

These models also require conversion, partly because some tensors are stored non-contiguously. To create a usable checkpoint file, follow these simple steps:
##### Download the model using Python on your computer, for example this way:
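One possible way to fetch the weights — an illustrative sketch, not the original snippet. It assumes the `huggingface_hub` package is installed and uses the `segmind/tiny-sd` repository id as an example:

```python
from huggingface_hub import snapshot_download

def download_tiny_sd(local_dir="tiny-sd"):
    # Download the diffusers-format weights into a local folder.
    # The repository id is an assumption; adjust it to the model you want.
    return snapshot_download(repo_id="segmind/tiny-sd", local_dir=local_dir)
```

Calling `download_tiny_sd()` places the model files in the `tiny-sd` directory, ready for the conversion step below.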
The file `segmind_tiny-sd.ckpt` will be generated and is now ready for use with sd.cpp. You can follow a similar process for the other models mentioned above.