2 changes: 1 addition & 1 deletion PHILOSOPHY.md
@@ -65,7 +65,7 @@ Pipelines are designed to be easy to use (therefore do not follow [*Simple over
The following design principles are followed:
- Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as it’s done for [`src/diffusers/pipelines/stable-diffusion`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/stable_diffusion). If pipelines share similar functionality, one can make use of the [# Copied from mechanism](https://github.com/huggingface/diffusers/blob/125d783076e5bd9785beb05367a2d2566843a271/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L251).
- Pipelines all inherit from [`DiffusionPipeline`].
- Every pipeline consists of different model and scheduler components, that are documented in the [`model_index.json` file](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/model_index.json), are accessible under the same name as attributes of the pipeline and can be shared between pipelines with [`DiffusionPipeline.components`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.components) function.
- Every pipeline consists of different model and scheduler components that are documented in the [`model_index.json` file](https://huggingface.co/Lykon/dreamshaper-8/blob/main/model_index.json), are accessible under the same name as attributes of the pipeline, and can be shared between pipelines with the [`DiffusionPipeline.components`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.components) function.
- Every pipeline should be loadable via the [`DiffusionPipeline.from_pretrained`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained) function.
- Pipelines should be used **only** for inference.
- Pipelines should be very readable, self-explanatory, and easy to tweak.
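
To illustrate the component-sharing principle above, a minimal sketch (assuming a Stable-Diffusion-style checkpoint such as the one this PR switches to) might look like:

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

# Load a text-to-image pipeline; its components are those listed in model_index.json.
txt2img = StableDiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8", torch_dtype=torch.float16
)

# Reuse the exact same model and scheduler components in an img2img pipeline,
# without downloading or allocating them a second time.
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components)
```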
6 changes: 3 additions & 3 deletions README.md
@@ -73,7 +73,7 @@ Generating outputs is super easy with 🤗 Diffusers. To generate an image from
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipeline = DiffusionPipeline.from_pretrained("Lykon/dreamshaper-8", torch_dtype=torch.float16)
pipeline.to("cuda")
pipeline("An image of a squirrel in Picasso style").images[0]
```
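
For reproducible outputs, the call can be seeded with a `torch.Generator`. A small sketch building on the snippet above (the seed value and file name are arbitrary assumptions):

```python
import torch

# Fixing the seed makes the sampled latents, and therefore the image, repeatable.
generator = torch.Generator("cuda").manual_seed(31)
image = pipeline(
    "An image of a squirrel in Picasso style", generator=generator
).images[0]
image.save("squirrel.png")
```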
@@ -144,7 +144,7 @@ Also, say 👋 in our public Discord channel <a href="https://discord.gg/G7tWnz9
<tr style="border-top: 2px solid black">
<td>Text-to-Image</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/text2img">Stable Diffusion Text-to-Image</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5"> runwayml/stable-diffusion-v1-5 </a></td>
<td><a href="https://huggingface.co/Lykon/dreamshaper-8"> Lykon/dreamshaper-8 </a></td>
</tr>
<tr>
<td>Text-to-Image</td>
@@ -174,7 +174,7 @@ Also, say 👋 in our public Discord channel <a href="https://discord.gg/G7tWnz9
<tr>
<td>Text-guided Image-to-Image</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/img2img">Stable Diffusion Image-to-Image</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5"> runwayml/stable-diffusion-v1-5 </a></td>
<td><a href="https://huggingface.co/Lykon/dreamshaper-8"> Lykon/dreamshaper-8 </a></td>
</tr>
<tr style="border-top: 2px solid black">
<td>Text-guided Image Inpainting</td>
2 changes: 1 addition & 1 deletion docs/source/en/api/models/controlnet.md
@@ -29,7 +29,7 @@ from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path
controlnet = ControlNetModel.from_single_file(url)

url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path
url = "https://huggingface.co/Lykon/dreamshaper-8/blob/main/v1-5-pruned.safetensors" # can also be a local path
pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet)
```
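
A hedged sketch of running the assembled pipeline follows; the Canny preprocessing, the example image URL, and the prompt are illustrative assumptions, not part of this diff:

```python
import cv2
import numpy as np
from diffusers.utils import load_image
from PIL import Image

# Build a Canny edge map to condition the ControlNet on.
source = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
edges = cv2.Canny(np.array(source), 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))  # single channel -> RGB

pipe = pipe.to("cuda")
image = pipe("a detailed portrait painting", image=canny_image).images[0]
```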

2 changes: 1 addition & 1 deletion docs/source/en/api/pipelines/stable_diffusion/inpaint.md
@@ -19,7 +19,7 @@ The Stable Diffusion model can also be applied to inpainting which lets you edit
It is recommended to use this pipeline with checkpoints that have been specifically fine-tuned for inpainting, such
as [runwayml/stable-diffusion-inpainting](https://huggingface.co/runwayml/stable-diffusion-inpainting). Default
text-to-image Stable Diffusion checkpoints, such as
[runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) are also compatible but they might be less performant.
[Lykon/dreamshaper-8](https://huggingface.co/Lykon/dreamshaper-8), are also compatible, but they might be less performant.
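
A minimal inpainting sketch using the dedicated checkpoint recommended above; the image and mask URLs are the standard example assets, and the prompt is an assumption:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# Only the white region of the mask is repainted; the rest of the image is preserved.
init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
)
mask_image = load_image(
    "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
)

image = pipe(
    prompt="a yellow cat, high resolution", image=init_image, mask_image=mask_image
).images[0]
```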

<Tip>

2 changes: 1 addition & 1 deletion docs/source/en/api/pipelines/stable_diffusion/overview.md
@@ -203,7 +203,7 @@ from diffusers import StableDiffusionImg2ImgPipeline
import gradio as gr


pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("Lykon/dreamshaper-8")

gr.Interface.from_pipeline(pipe).launch()
```
6 changes: 3 additions & 3 deletions docs/source/en/api/pipelines/text_to_video_zero.md
@@ -41,7 +41,7 @@ To generate a video from prompt, run the following Python code:
import torch
from diffusers import TextToVideoZeroPipeline

model_id = "runwayml/stable-diffusion-v1-5"
model_id = "Lykon/dreamshaper-8"
pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "A panda is playing guitar on times square"
@@ -63,7 +63,7 @@ import torch
from diffusers import TextToVideoZeroPipeline
import numpy as np

model_id = "runwayml/stable-diffusion-v1-5"
model_id = "Lykon/dreamshaper-8"
pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
seed = 0
video_length = 24  # 24 frames ÷ 4 fps = 6 seconds
@@ -137,7 +137,7 @@ To generate a video from prompt with additional pose control
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor

model_id = "runwayml/stable-diffusion-v1-5"
model_id = "Lykon/dreamshaper-8"
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
model_id, controlnet=controlnet, torch_dtype=torch.float16
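
Since the hunks above are truncated in this view, here is a hedged end-to-end sketch; the `imageio` export step is an assumption consistent with the pipeline returning a list of frames:

```python
import imageio
import torch
from diffusers import TextToVideoZeroPipeline

model_id = "Lykon/dreamshaper-8"
pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "A panda is playing guitar on times square"
# The pipeline returns frames as float arrays in [0, 1]; convert to uint8 for export.
result = pipe(prompt=prompt).images
result = [(r * 255).astype("uint8") for r in result]
imageio.mimsave("video.mp4", result, fps=4)
```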
8 changes: 4 additions & 4 deletions docs/source/en/conceptual/evaluation.md
@@ -92,7 +92,7 @@ images = sd_pipeline(sample_prompts, num_images_per_prompt=1, generator=generato

![parti-prompts-14](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/parti-prompts-14.png)

We can also set `num_images_per_prompt` accordingly to compare different images for the same prompt. Running the same pipeline but with a different checkpoint ([v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)), yields:
We can also set `num_images_per_prompt` to compare several images for the same prompt. Running the same pipeline with a different checkpoint ([Lykon/dreamshaper-8](https://huggingface.co/Lykon/dreamshaper-8)) yields:

![parti-prompts-15](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/parti-prompts-15.png)

@@ -177,10 +177,10 @@ generator = torch.manual_seed(seed)
images = sd_pipeline(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images
```

Then we load the [v1-5 checkpoint](https://huggingface.co/runwayml/stable-diffusion-v1-5) to generate images:
Then we load the [Lykon/dreamshaper-8 checkpoint](https://huggingface.co/Lykon/dreamshaper-8) to generate images:

```python
model_ckpt_1_5 = "runwayml/stable-diffusion-v1-5"
model_ckpt_1_5 = "Lykon/dreamshaper-8"
sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=weight_dtype).to(device)

images_1_5 = sd_pipeline_1_5(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images
@@ -198,7 +198,7 @@ print(f"CLIP Score with v-1-5: {sd_clip_score_1_5}")
# CLIP Score with v-1-5: 36.2137
```

It seems like the [v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) checkpoint performs better than its predecessor. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be way higher, and the prompts should be diverse.
It seems like the [Lykon/dreamshaper-8](https://huggingface.co/Lykon/dreamshaper-8) checkpoint performs better than the first one. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be considerably higher, and the prompts should be more diverse.
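
The `calculate_clip_score` helper used above is defined earlier in this document; a sketch consistent with it, built on `torchmetrics` (the CLIP model choice is an assumption), looks like:

```python
from functools import partial

import torch
from torchmetrics.functional.multimodal import clip_score

# CLIP score measures image-text compatibility; higher is better.
clip_score_fn = partial(clip_score, model_name_or_path="openai/clip-vit-base-patch16")

def calculate_clip_score(images, prompts):
    # images: float array in [0, 1] of shape (N, H, W, C), as returned with output_type="np"
    images_int = (images * 255).astype("uint8")
    score = clip_score_fn(torch.from_numpy(images_int).permute(0, 3, 1, 2), prompts).detach()
    return round(float(score), 4)
```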

<Tip warning={true}>

2 changes: 1 addition & 1 deletion docs/source/en/conceptual/philosophy.md
@@ -65,7 +65,7 @@ Pipelines are designed to be easy to use (therefore do not follow [*Simple over
The following design principles are followed:
- Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as it’s done for [`src/diffusers/pipelines/stable-diffusion`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/stable_diffusion). If pipelines share similar functionality, one can make use of the [# Copied from mechanism](https://github.com/huggingface/diffusers/blob/125d783076e5bd9785beb05367a2d2566843a271/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L251).
- Pipelines all inherit from [`DiffusionPipeline`].
- Every pipeline consists of different model and scheduler components, that are documented in the [`model_index.json` file](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/model_index.json), are accessible under the same name as attributes of the pipeline and can be shared between pipelines with [`DiffusionPipeline.components`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.components) function.
- Every pipeline consists of different model and scheduler components that are documented in the [`model_index.json` file](https://huggingface.co/Lykon/dreamshaper-8/blob/main/model_index.json), are accessible under the same name as attributes of the pipeline, and can be shared between pipelines with the [`DiffusionPipeline.components`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.components) function.
- Every pipeline should be loadable via the [`DiffusionPipeline.from_pretrained`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained) function.
- Pipelines should be used **only** for inference.
- Pipelines should be very readable, self-explanatory, and easy to tweak.
4 changes: 2 additions & 2 deletions docs/source/en/optimization/coreml.md
@@ -102,10 +102,10 @@ Pass the path of the downloaded checkpoint with `-i` flag to the script. `--comp

The inference script assumes you're using the original version of the Stable Diffusion model, `CompVis/stable-diffusion-v1-4`. If you use another model, you *have* to specify its Hub id in the inference command line, using the `--model-version` option. This works for models already supported and custom models you trained or fine-tuned yourself.

For example, if you want to use [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5):
For example, if you want to use [`Lykon/dreamshaper-8`](https://huggingface.co/Lykon/dreamshaper-8):

```shell
python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" --compute-unit ALL -o output --seed 93 -i models/coreml-stable-diffusion-v1-5_original_packages --model-version runwayml/stable-diffusion-v1-5
python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" --compute-unit ALL -o output --seed 93 -i models/coreml-stable-diffusion-v1-5_original_packages --model-version Lykon/dreamshaper-8
```

## Core ML inference in Swift
2 changes: 1 addition & 1 deletion docs/source/en/optimization/deepcache.md
@@ -23,7 +23,7 @@ Then load and enable the [`DeepCacheSDHelper`](https://github.com/horseee/DeepCa
```diff
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained('runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16).to("cuda")
pipe = StableDiffusionPipeline.from_pretrained('Lykon/dreamshaper-8', torch_dtype=torch.float16).to("cuda")

+ from DeepCache import DeepCacheSDHelper
+ helper = DeepCacheSDHelper(pipe=pipe)
```
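
The hunk cuts off here; based on the DeepCache project's README, enabling the helper plausibly continues as follows (the parameter values and prompt are assumptions):

```python
# Cache high-level UNet features and reuse them every `cache_interval` steps.
helper.set_params(cache_interval=3, cache_branch_id=0)
helper.enable()

image = pipe("a photo of an astronaut on the moon").images[0]

helper.disable()  # restore the original, uncached forward pass
```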
2 changes: 1 addition & 1 deletion docs/source/en/optimization/fp16.md
@@ -47,7 +47,7 @@ import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
"Lykon/dreamshaper-8",
torch_dtype=torch.float16,
use_safetensors=True,
)
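
The snippet above is truncated in this view; a hedged completion consistent with the surrounding docs (the prompt is an assumption):

```python
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
# Weights and activations run in fp16 here, roughly halving memory use versus fp32.
image = pipe(prompt).images[0]
```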
2 changes: 1 addition & 1 deletion docs/source/en/optimization/habana.md
@@ -61,7 +61,7 @@ For more information, check out 🤗 Optimum Habana's [documentation](https://hu

We benchmarked Habana's first-generation Gaudi and Gaudi2 with the [Habana/stable-diffusion](https://huggingface.co/Habana/stable-diffusion) and [Habana/stable-diffusion-2](https://huggingface.co/Habana/stable-diffusion-2) Gaudi configurations (mixed precision bf16/fp32) to demonstrate their performance.

For [Stable Diffusion v1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5) on 512x512 images:
For [Stable Diffusion v1.5](https://huggingface.co/Lykon/dreamshaper-8) on 512x512 images:

| | Latency (batch size = 1) | Throughput |
| ---------------------- |:------------------------:|:---------------------------:|
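
For context, loading this checkpoint through 🤗 Optimum Habana might look like the sketch below; the scheduler choice and exact arguments are assumptions based on Optimum Habana's documentation:

```python
from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline

model_name = "Lykon/dreamshaper-8"
scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")

pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    model_name,
    scheduler=scheduler,
    use_habana=True,
    use_hpu_graphs=True,  # HPU graphs reduce host-side launch overhead
    gaudi_config="Habana/stable-diffusion",
)
image = pipeline("a photo of an astronaut riding a horse on mars").images[0]
```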
14 changes: 7 additions & 7 deletions docs/source/en/optimization/memory.md
@@ -41,7 +41,7 @@ import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
"Lykon/dreamshaper-8",
torch_dtype=torch.float16,
use_safetensors=True,
)
@@ -66,7 +66,7 @@ import torch
from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
"Lykon/dreamshaper-8",
torch_dtype=torch.float16,
use_safetensors=True,
)
@@ -92,7 +92,7 @@ import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
"Lykon/dreamshaper-8",
torch_dtype=torch.float16,
use_safetensors=True,
)
@@ -140,7 +140,7 @@ import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
"Lykon/dreamshaper-8",
torch_dtype=torch.float16,
use_safetensors=True,
)
@@ -201,7 +201,7 @@ def generate_inputs():


pipe = StableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
"Lykon/dreamshaper-8",
torch_dtype=torch.float16,
use_safetensors=True,
).to("cuda")
@@ -265,7 +265,7 @@ class UNet2DConditionOutput:


pipe = StableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
"Lykon/dreamshaper-8",
torch_dtype=torch.float16,
use_safetensors=True,
).to("cuda")
@@ -315,7 +315,7 @@ from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
"Lykon/dreamshaper-8",
torch_dtype=torch.float16,
use_safetensors=True,
).to("cuda")
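
Each hunk above shows only the pipeline construction; as a hedged summary, the memory-saving switches discussed on this page are toggled roughly like so:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8",
    torch_dtype=torch.float16,
    use_safetensors=True,
)

pipe.enable_attention_slicing()   # compute attention in slices to cap peak memory
pipe.enable_vae_slicing()         # decode large batches one image at a time
pipe.enable_model_cpu_offload()   # keep idle submodels on the CPU, moving each to the GPU on demand
```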
6 changes: 3 additions & 3 deletions docs/source/en/optimization/mps.md
@@ -24,7 +24,7 @@ The `mps` backend uses PyTorch's `.to()` interface to move the Stable Diffusion
```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = DiffusionPipeline.from_pretrained("Lykon/dreamshaper-8")
pipe = pipe.to("mps")

# Recommended if your computer has < 64 GB of RAM
@@ -46,7 +46,7 @@ If you're using **PyTorch 1.13**, you need to "prime" the pipeline with an addit
```diff
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("mps")
pipe = DiffusionPipeline.from_pretrained("Lykon/dreamshaper-8").to("mps")
pipe.enable_attention_slicing()

prompt = "a photo of an astronaut riding a horse on mars"
@@ -67,7 +67,7 @@ To prevent this from happening, we recommend *attention slicing* to reduce memor
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True).to("mps")
pipeline = DiffusionPipeline.from_pretrained("Lykon/dreamshaper-8", torch_dtype=torch.float16, variant="fp16", use_safetensors=True).to("mps")
pipeline.enable_attention_slicing()
```

4 changes: 2 additions & 2 deletions docs/source/en/optimization/onnx.md
@@ -27,7 +27,7 @@ To load and run inference, use the [`~optimum.onnxruntime.ORTStableDiffusionPipe
```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"
model_id = "Lykon/dreamshaper-8"
pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
@@ -44,7 +44,7 @@ To export the pipeline in the ONNX format offline and use it later for inference
use the [`optimum-cli export`](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli) command:

```bash
optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd_v15_onnx/
optimum-cli export onnx --model Lykon/dreamshaper-8 sd_v15_onnx/
```

Then to perform inference (you don't have to specify `export=True` again):
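
The inference snippet that follows is cut off in this view; it plausibly loads the exported directory along these lines:

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

# Load the ONNX model exported by the optimum-cli command above; no export=True needed.
pipeline = ORTStableDiffusionPipeline.from_pretrained("sd_v15_onnx/")
image = pipeline("sailing ship in storm by Leonardo da Vinci").images[0]
```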
2 changes: 1 addition & 1 deletion docs/source/en/optimization/open_vino.md
@@ -29,7 +29,7 @@ To load and run inference, use the [`~optimum.intel.OVStableDiffusionPipeline`].
```python
from optimum.intel import OVStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"
model_id = "Lykon/dreamshaper-8"
pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt).images[0]
```
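
To speed up repeated OpenVINO inference, the pipeline can be reshaped to static input shapes and recompiled. A sketch based on 🤗 Optimum Intel's documented API (the shape values are assumptions):

```python
# Statically reshape for a fixed resolution and batch size, then recompile.
batch_size, num_images, height, width = 1, 1, 512, 512
pipeline.reshape(batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images)
pipeline.compile()

image = pipeline(
    "sailing ship in storm by Rembrandt", height=height, width=width, num_images_per_prompt=num_images
).images[0]
```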
2 changes: 1 addition & 1 deletion docs/source/en/optimization/tome.md
@@ -28,7 +28,7 @@ You can use ToMe from the [`tomesd`](https://github.com/dbolya/tomesd) library w
import tomesd

pipeline = StableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True,
"Lykon/dreamshaper-8", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")
+ tomesd.apply_patch(pipeline, ratio=0.5)

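
`tomesd` also exposes an inverse operation; a hedged usage note on the trade-off (prompt is an assumption):

```python
# Higher ratios merge more tokens: faster inference, but more detail loss.
image = pipeline("a photo of an astronaut riding a horse on mars").images[0]

# Remove the ToMe patch to restore the original attention behavior.
tomesd.remove_patch(pipeline)
```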