Commit 8cb5e51

doc-builder
1 parent f71062f commit 8cb5e51

2 files changed: +14 additions, -12 deletions

src/diffusers/pipelines/animemory/pipeline_animemory.py

Lines changed: 12 additions & 10 deletions
@@ -71,19 +71,18 @@
         >>> import torch
         >>> from diffusers import AniMemoryPipeline

-        >>> pipe = AniMemoryPipeline.from_pretrained(
-        ...     "animEEEmpire/AniMemory-alpha", torch_dtype=torch.bfloat16
-        ... )
+        >>> pipe = AniMemoryPipeline.from_pretrained("animEEEmpire/AniMemory-alpha", torch_dtype=torch.bfloat16)
         >>> pipe = pipe.to("cuda")

-        >>> prompt = '一只凶恶的狼,猩红的眼神,在午夜咆哮,月光皎洁'
-        >>> negative_prompt = 'nsfw, worst quality, low quality, normal quality, low resolution, monochrome, blurry, wrong, Mutated hands and fingers, text, ugly faces, twisted, jpeg artifacts, watermark, low contrast, realistic'
+        >>> prompt = "一只凶恶的狼,猩红的眼神,在午夜咆哮,月光皎洁"
+        >>> negative_prompt = "nsfw, worst quality, low quality, normal quality, low resolution, monochrome, blurry, wrong, Mutated hands and fingers, text, ugly faces, twisted, jpeg artifacts, watermark, low contrast, realistic"
         >>> image = pipe(
         ...     prompt=prompt,
         ...     negative_prompt=negative_prompt,
         ...     num_inference_steps=40,
-        ...     height=1024, width=1024,
-        ...     guidance_scale=6.0
+        ...     height=1024,
+        ...     width=1024,
+        ...     guidance_scale=6.0,
         ... ).images[0]
         >>> image.save("output.png")
         ```
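For readers trying the reformatted example above on limited GPU memory, here is a minimal sketch of an offload variant. It assumes AniMemoryPipeline inherits the standard DiffusionPipeline offload API (`enable_model_cpu_offload`) and that `accelerate` is installed; the prompt is taken from the example, and the negative prompt is abridged.

```python
import torch
from diffusers import AniMemoryPipeline

pipe = AniMemoryPipeline.from_pretrained(
    "animEEEmpire/AniMemory-alpha", torch_dtype=torch.bfloat16
)
# Assumption: the standard DiffusionPipeline offload hook is available. It moves
# submodules to the GPU only while they are needed, instead of pipe.to("cuda").
pipe.enable_model_cpu_offload()

image = pipe(
    # Prompt from the example above (a fierce wolf with crimson eyes,
    # howling at midnight under a bright moon).
    prompt="一只凶恶的狼,猩红的眼神,在午夜咆哮,月光皎洁",
    # Abridged negative prompt; the full string appears in the example above.
    negative_prompt="nsfw, worst quality, low quality, blurry, watermark",
    num_inference_steps=40,
    height=1024,
    width=1024,
    guidance_scale=6.0,
).images[0]
image.save("output.png")
```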
@@ -359,11 +358,14 @@ class AniMemoryPipeline(
 
     Args:
         vae ([`MoVQ`]):
-            Variational Auto-Encoder (VAE) Model. AniMemory uses [MoVQ](https://github.com/ai-forever/Kandinsky-3/blob/main/kandinsky3/movq.py)
+            Variational Auto-Encoder (VAE) Model. AniMemory uses
+            [MoVQ](https://github.com/ai-forever/Kandinsky-3/blob/main/kandinsky3/movq.py)
         text_encoder ([`AniMemoryT5`]):
-            Frozen text-encoder. AniMemory builds based on [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel).
+            Frozen text-encoder. AniMemory builds based on
+            [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel).
         text_encoder_2 ([`AniMemoryAltCLip`]):
-            Second frozen text-encoder. AniMemory builds based on [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection).
+            Second frozen text-encoder. AniMemory builds based on
+            [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection).
         tokenizer (`XLMRobertaTokenizerFast`):
             Tokenizer of class
             [XLMRobertaTokenizerFast](https://huggingface.co/docs/transformers/v4.46.3/en/model_doc/xlm-roberta#transformers.XLMRobertaTokenizerFast).
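As a quick illustration of the components listed in the Args section above, the sketch below inspects them after loading. It assumes the pipeline registers its components under the standard diffusers attribute names matching the documented arguments.

```python
import torch
from diffusers import AniMemoryPipeline

pipe = AniMemoryPipeline.from_pretrained(
    "animEEEmpire/AniMemory-alpha", torch_dtype=torch.bfloat16
)

# Assumption: components are exposed as attributes named after the documented args.
print(type(pipe.vae).__name__)             # MoVQ-style VAE
print(type(pipe.text_encoder).__name__)    # AniMemoryT5 (T5-based)
print(type(pipe.text_encoder_2).__name__)  # AniMemoryAltCLip (CLIP-based)
print(type(pipe.tokenizer).__name__)       # XLMRobertaTokenizerFast
```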

src/diffusers/schedulers/scheduling_euler_ancestral_discrete_x_pred.py

Lines changed: 2 additions & 2 deletions
@@ -31,8 +31,8 @@
 
 class EulerAncestralDiscreteXPredScheduler(EulerAncestralDiscreteScheduler):
     """
-    Ancestral sampling with Euler method steps. This model inherits from [`EulerAncestralDiscreteScheduler`]. Check the superclass
-    documentation for the args and returns.
+    Ancestral sampling with Euler method steps. This model inherits from [`EulerAncestralDiscreteScheduler`]. Check the
+    superclass documentation for the args and returns.
 
     For more details, see the original paper: https://arxiv.org/abs/2403.08381
     """
