
Commit 82aa5cd

hybrid_inference/vae_decode
1 parent 6c2f123 commit 82aa5cd

3 files changed: +368 -1 lines

docs/source/en/_toctree.yml

Lines changed: 2 additions & 0 deletions
@@ -79,6 +79,8 @@
 - sections:
   - local: hybrid_inference/overview
     title: Overview
+  - local: hybrid_inference/vae_decode
+    title: VAE Decode
   title: Hybrid Inference
 - sections:
   - local: using-diffusers/cogvideox

docs/source/en/hybrid_inference/overview.md

Lines changed: 1 addition & 1 deletion
@@ -48,5 +48,5 @@ Hybrid Inference offers a fast and simple way to offload local generation requir

 The documentation is organized into two sections:

-* **Getting Started** Learn the basics of how to use Hybrid Inference.
+* **VAE Decode** Learn the basics of how to use VAE Decode with Hybrid Inference.
 * **API Reference** Dive into task-specific settings and parameters.
docs/source/en/hybrid_inference/vae_decode.md

Lines changed: 365 additions & 0 deletions
# Getting Started: VAE Decode with Hybrid Inference

VAE decode is an essential component of diffusion models: it turns latent representations into images or videos.

## Memory

These tables demonstrate the VRAM requirements for VAE decode with SD v1.5 and SDXL on different GPUs.

For the majority of these GPUs, the memory usage means that other models (text encoders, UNet/Transformer) must be offloaded, or that tiled decoding has to be used, which increases the time taken and impacts quality. A sketch of enabling tiled decoding locally follows the tables.

<details><summary>SD v1.5</summary>

<p>

| GPU | Resolution | Time (seconds) | Memory (%) | Tiled Time (seconds) | Tiled Memory (%) |
| --- | --- | --- | --- | --- | --- |
| NVIDIA GeForce RTX 4090 | 512x512 | 0.031 | 5.60% | 0.031 (0%) | 5.60% |
| NVIDIA GeForce RTX 4090 | 1024x1024 | 0.148 | 20.00% | 0.301 (+103%) | 5.60% |
| NVIDIA GeForce RTX 4080 | 512x512 | 0.05 | 8.40% | 0.050 (0%) | 8.40% |
| NVIDIA GeForce RTX 4080 | 1024x1024 | 0.224 | 30.00% | 0.356 (+59%) | 8.40% |
| NVIDIA GeForce RTX 4070 Ti | 512x512 | 0.066 | 11.30% | 0.066 (0%) | 11.30% |
| NVIDIA GeForce RTX 4070 Ti | 1024x1024 | 0.284 | 40.50% | 0.454 (+60%) | 11.40% |
| NVIDIA GeForce RTX 3090 | 512x512 | 0.062 | 5.20% | 0.062 (0%) | 5.20% |
| NVIDIA GeForce RTX 3090 | 1024x1024 | 0.253 | 18.50% | 0.464 (+83%) | 5.20% |
| NVIDIA GeForce RTX 3080 | 512x512 | 0.07 | 12.80% | 0.070 (0%) | 12.80% |
| NVIDIA GeForce RTX 3080 | 1024x1024 | 0.286 | 45.30% | 0.466 (+63%) | 12.90% |
| NVIDIA GeForce RTX 3070 | 512x512 | 0.102 | 15.90% | 0.102 (0%) | 15.90% |
| NVIDIA GeForce RTX 3070 | 1024x1024 | 0.421 | 56.30% | 0.746 (+77%) | 16.00% |

</p>
</details>

<details><summary>SDXL</summary>

<p>

| GPU | Resolution | Time (seconds) | Memory (%) | Tiled Time (seconds) | Tiled Memory (%) |
| --- | --- | --- | --- | --- | --- |
| NVIDIA GeForce RTX 4090 | 512x512 | 0.057 | 10.00% | 0.057 (0%) | 10.00% |
| NVIDIA GeForce RTX 4090 | 1024x1024 | 0.256 | 35.50% | 0.257 (+0.4%) | 35.50% |
| NVIDIA GeForce RTX 4080 | 512x512 | 0.092 | 15.00% | 0.092 (0%) | 15.00% |
| NVIDIA GeForce RTX 4080 | 1024x1024 | 0.406 | 53.30% | 0.406 (0%) | 53.30% |
| NVIDIA GeForce RTX 4070 Ti | 512x512 | 0.121 | 20.20% | 0.120 (-0.8%) | 20.20% |
| NVIDIA GeForce RTX 4070 Ti | 1024x1024 | 0.519 | 72.00% | 0.519 (0%) | 72.00% |
| NVIDIA GeForce RTX 3090 | 512x512 | 0.107 | 10.50% | 0.107 (0%) | 10.50% |
| NVIDIA GeForce RTX 3090 | 1024x1024 | 0.459 | 38.00% | 0.460 (+0.2%) | 38.00% |
| NVIDIA GeForce RTX 3080 | 512x512 | 0.121 | 25.60% | 0.121 (0%) | 25.60% |
| NVIDIA GeForce RTX 3080 | 1024x1024 | 0.524 | 93.00% | 0.524 (0%) | 93.00% |
| NVIDIA GeForce RTX 3070 | 512x512 | 0.183 | 31.80% | 0.183 (0%) | 31.80% |
| NVIDIA GeForce RTX 3070 | 1024x1024 | 0.794 | 96.40% | 0.794 (0%) | 96.40% |

</p>
</details>
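
For comparison, tiled decoding can be enabled locally on the VAE itself. Below is a minimal sketch of that fallback for SD v1.5, assuming the standard `enable_tiling` method on the diffusers VAE; the remote endpoints described next avoid this time/quality trade-off entirely.

```python
# Local baseline (sketch, not part of Hybrid Inference): tiled VAE decoding
# trades extra decode time for lower peak VRAM, matching the "Tiled" columns above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Decode latents in overlapping tiles instead of one full-resolution pass.
pipe.vae.enable_tiling()

image = pipe("strawberry ice cream in a modern glass", height=1024, width=1024).images[0]
image.save("tiled_decode.jpg")
```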
## Available VAEs

|   | **Endpoint** | **Model** |
|:-:|:------------:|:---------:|
| **Stable Diffusion v1** | [https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud](https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud) | [`stabilityai/sd-vae-ft-mse`](https://hf.co/stabilityai/sd-vae-ft-mse) |
| **Stable Diffusion XL** | [https://x2dmsqunjd6k9prw.us-east-1.aws.endpoints.huggingface.cloud](https://x2dmsqunjd6k9prw.us-east-1.aws.endpoints.huggingface.cloud) | [`madebyollin/sdxl-vae-fp16-fix`](https://hf.co/madebyollin/sdxl-vae-fp16-fix) |
| **Flux** | [https://whhx50ex1aryqvw6.us-east-1.aws.endpoints.huggingface.cloud](https://whhx50ex1aryqvw6.us-east-1.aws.endpoints.huggingface.cloud) | [`black-forest-labs/FLUX.1-schnell`](https://hf.co/black-forest-labs/FLUX.1-schnell) |
| **HunyuanVideo** | [https://o7ywnmrahorts457.us-east-1.aws.endpoints.huggingface.cloud](https://o7ywnmrahorts457.us-east-1.aws.endpoints.huggingface.cloud) | [`hunyuanvideo-community/HunyuanVideo`](https://hf.co/hunyuanvideo-community/HunyuanVideo) |

> [!TIP]
> Model support can be requested [here](https://github.com/huggingface/diffusers/issues/new?template=remote-vae-pilot-feedback.yml).

## Code

> [!NOTE]
> Install `diffusers` from `main` to run the code:
> `pip install git+https://github.com/huggingface/diffusers@main`

A helper method simplifies interacting with Hybrid Inference.

```python
from diffusers.utils.remote_utils import remote_decode
```
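
For convenience when following the examples below, the endpoints and decode parameters used in this guide can be grouped into one lookup. This is purely illustrative: `VAE_CONFIGS` is not a `diffusers` API, and the values simply mirror the table and snippets on this page.

```python
# Illustrative only: a plain dict collecting the endpoints and decode
# parameters used in this guide. `VAE_CONFIGS` is not part of diffusers.
VAE_CONFIGS = {
    "sd-v1": {
        "endpoint": "https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud/",
        "scaling_factor": 0.18215,
    },
    "flux": {
        "endpoint": "https://whhx50ex1aryqvw6.us-east-1.aws.endpoints.huggingface.cloud/",
        "scaling_factor": 0.3611,
        "shift_factor": 0.1159,
    },
    "hunyuanvideo": {
        "endpoint": "https://o7ywnmrahorts457.us-east-1.aws.endpoints.huggingface.cloud/",
        "output_type": "mp4",
    },
}

# Example usage with a latent produced by a pipeline:
# image = remote_decode(tensor=latent, **VAE_CONFIGS["sd-v1"])
```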
### Basic example

Here, we show how to use the remote VAE on random tensors.

<details><summary>Code</summary>
<p>

```python
import torch

image = remote_decode(
    endpoint="https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=torch.randn([1, 4, 64, 64], dtype=torch.float16),
    scaling_factor=0.18215,
)
```

</p>
</details>

<figure class="image flex flex-col items-center text-center m-0 w-full">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/remote_vae/output.png"/>
</figure>

Usage for Flux is slightly different. Flux latents are packed, so we also need to send the `height` and `width`.

<details><summary>Code</summary>
<p>

```python
image = remote_decode(
    endpoint="https://whhx50ex1aryqvw6.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=torch.randn([1, 4096, 64], dtype=torch.float16),
    height=1024,
    width=1024,
    scaling_factor=0.3611,
    shift_factor=0.1159,
)
```

</p>
</details>

<figure class="image flex flex-col items-center text-center m-0 w-full">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/remote_vae/flux_random_latent.png"/>
</figure>

Finally, an example for HunyuanVideo.

<details><summary>Code</summary>
<p>

```python
video = remote_decode(
    endpoint="https://o7ywnmrahorts457.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=torch.randn([1, 16, 3, 40, 64], dtype=torch.float16),
    output_type="mp4",
)
with open("video.mp4", "wb") as f:
    f.write(video)
```

</p>
</details>

<figure class="image flex flex-col items-center text-center m-0 w-full">
  <video
    alt="video_1.mp4"
    autoplay loop autobuffer muted playsinline
  >
    <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/remote_vae/video_1.mp4" type="video/mp4">
  </video>
</figure>
### Generation
155+
156+
But we want to use the VAE on an actual pipeline to get an actual image, not random noise. The example below shows how to do it with SD v1.5.
157+
158+
<details><summary>Code</summary>
159+
<p>
160+
161+
```python
162+
from diffusers import StableDiffusionPipeline
163+
164+
pipe = StableDiffusionPipeline.from_pretrained(
165+
"stable-diffusion-v1-5/stable-diffusion-v1-5",
166+
torch_dtype=torch.float16,
167+
variant="fp16",
168+
vae=None,
169+
).to("cuda")
170+
171+
prompt = "Strawberry ice cream, in a stylish modern glass, coconut, splashing milk cream and honey, in a gradient purple background, fluid motion, dynamic movement, cinematic lighting, Mysterious"
172+
173+
latent = pipe(
174+
prompt=prompt,
175+
output_type="latent",
176+
).images
177+
image = remote_decode(
178+
endpoint="https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud/",
179+
tensor=latent,
180+
scaling_factor=0.18215,
181+
)
182+
image.save("test.jpg")
183+
```
184+
185+
</p>
186+
</details>
187+
188+
<figure class="image flex flex-col items-center text-center m-0 w-full">
189+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/remote_vae/test.jpg"/>
190+
</figure>
Here’s another example with Flux.

<details><summary>Code</summary>
<p>

```python
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
    vae=None,
).to("cuda")

prompt = "Strawberry ice cream, in a stylish modern glass, coconut, splashing milk cream and honey, in a gradient purple background, fluid motion, dynamic movement, cinematic lighting, Mysterious"

latent = pipe(
    prompt=prompt,
    guidance_scale=0.0,
    num_inference_steps=4,
    output_type="latent",
).images
image = remote_decode(
    endpoint="https://whhx50ex1aryqvw6.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=latent,
    height=1024,
    width=1024,
    scaling_factor=0.3611,
    shift_factor=0.1159,
)
image.save("test.jpg")
```

</p>
</details>

<figure class="image flex flex-col items-center text-center m-0 w-full">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/remote_vae/test_1.jpg"/>
</figure>

Here’s an example with HunyuanVideo.

<details><summary>Code</summary>
<p>

```python
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel

model_id = "hunyuanvideo-community/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id, transformer=transformer, vae=None, torch_dtype=torch.float16
).to("cuda")

latent = pipe(
    prompt="A cat walks on the grass, realistic",
    height=320,
    width=512,
    num_frames=61,
    num_inference_steps=30,
    output_type="latent",
).frames

video = remote_decode(
    endpoint="https://o7ywnmrahorts457.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=latent,
    output_type="mp4",
)

if isinstance(video, bytes):
    with open("video.mp4", "wb") as f:
        f.write(video)
```

</p>
</details>

<figure class="image flex flex-col items-center text-center m-0 w-full">
  <video
    alt="video.mp4"
    autoplay loop autobuffer muted playsinline
  >
    <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/remote_vae/video.mp4" type="video/mp4">
  </video>
</figure>
### Queueing

One of the great benefits of using a remote VAE is that we can queue multiple generation requests. While the current latent is being decoded remotely, the pipeline can already start denoising the next prompt, which improves overall throughput.

<details><summary>Code</summary>
<p>

```python
import queue
import threading

import torch
from IPython.display import display
from diffusers import StableDiffusionPipeline

def decode_worker(q: queue.Queue):
    while True:
        item = q.get()
        if item is None:
            break
        image = remote_decode(
            endpoint="https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud/",
            tensor=item,
            scaling_factor=0.18215,
        )
        display(image)
        q.task_done()

q = queue.Queue()
thread = threading.Thread(target=decode_worker, args=(q,), daemon=True)
thread.start()

def decode(latent: torch.Tensor):
    q.put(latent)

prompts = [
    "Blueberry ice cream, in a stylish modern glass , ice cubes, nuts, mint leaves, splashing milk cream, in a gradient purple background, fluid motion, dynamic movement, cinematic lighting, Mysterious",
    "Lemonade in a glass, mint leaves, in an aqua and white background, flowers, ice cubes, halo, fluid motion, dynamic movement, soft lighting, digital painting, rule of thirds composition, Art by Greg rutkowski, Coby whitmore",
    "Comic book art, beautiful, vintage, pastel neon colors, extremely detailed pupils, delicate features, light on face, slight smile, Artgerm, Mary Blair, Edmund Dulac, long dark locks, bangs, glowing, fashionable style, fairytale ambience, hot pink.",
    "Masterpiece, vanilla cone ice cream garnished with chocolate syrup, crushed nuts, choco flakes, in a brown background, gold, cinematic lighting, Art by WLOP",
    "A bowl of milk, falling cornflakes, berries, blueberries, in a white background, soft lighting, intricate details, rule of thirds, octane render, volumetric lighting",
    "Cold Coffee with cream, crushed almonds, in a glass, choco flakes, ice cubes, wet, in a wooden background, cinematic lighting, hyper realistic painting, art by Carne Griffiths, octane render, volumetric lighting, fluid motion, dynamic movement, muted colors,",
]

pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8",
    torch_dtype=torch.float16,
    vae=None,
).to("cuda")

pipe.unet = pipe.unet.to(memory_format=torch.channels_last)
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

# Warm up the compiled UNet before running the real prompts.
_ = pipe(
    prompt=prompts[0],
    output_type="latent",
)

# Generate latents locally and hand each one to the decode worker,
# then signal the worker to stop and wait for it to finish.
for prompt in prompts:
    latent = pipe(
        prompt=prompt,
        output_type="latent",
    ).images
    decode(latent)

q.put(None)
thread.join()
```

</p>
</details>
<figure class="image flex flex-col items-center text-center m-0 w-full">
  <video
    alt="queue.mp4"
    autoplay loop autobuffer muted playsinline
  >
    <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/remote_vae/queue.mp4" type="video/mp4">
  </video>
</figure>
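
The same overlap can be expressed more compactly with a thread pool. This is a minimal sketch under the same assumptions as the example above (a pipeline `pipe` loaded with `vae=None` and the `prompts` list defined there); it is simply an alternative pattern, not part of the diffusers API.

```python
# Compact alternative to the explicit queue/worker above: submit each remote
# decode to a small thread pool so it overlaps with the next local generation.
# Assumes `pipe` (loaded with vae=None) and `prompts` from the previous example.
from concurrent.futures import ThreadPoolExecutor

from diffusers.utils.remote_utils import remote_decode

def decode_latent(latent):
    return remote_decode(
        endpoint="https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud/",
        tensor=latent,
        scaling_factor=0.18215,
    )

with ThreadPoolExecutor(max_workers=2) as executor:
    futures = []
    for prompt in prompts:
        latent = pipe(prompt=prompt, output_type="latent").images
        # Queue the decode and immediately move on to the next prompt.
        futures.append(executor.submit(decode_latent, latent))
    images = [future.result() for future in futures]

for i, image in enumerate(images):
    image.save(f"image_{i}.jpg")
```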
## Integrations

* **[SD.Next](https://github.com/vladmandic/sdnext):** All-in-one UI with direct support for Hybrid Inference.
* **[ComfyUI-HFRemoteVae](https://github.com/kijai/ComfyUI-HFRemoteVae):** ComfyUI node for Hybrid Inference.
