Commit 3712dc3

init docs
1 parent 217e161 commit 3712dc3

2 files changed: +46 -0 lines changed

docs/source/en/_toctree.yml

Lines changed: 4 additions & 0 deletions
@@ -112,6 +112,10 @@
   - local: using-diffusers/marigold_usage
     title: Marigold Computer Vision
   title: Specific pipeline examples
+- sections:
+  - local: hybrid_inference/overview
+    title: Overview
+  title: Hybrid Inference
 - sections:
   - local: training/overview
     title: Overview

docs/source/en/hybrid_inference/overview.md

Lines changed: 42 additions & 0 deletions
@@ -0,0 +1,42 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Hybrid Inference

**Empowering local AI builders with Hybrid Inference**

---

## Why use Hybrid Inference?

Hybrid Inference offers a fast and simple way to offload local generation requirements.

* **VAE Decode:** Quickly decode latents to images without compromising quality or slowing down your workflow (see the sketch after this list).
* **VAE Encode (coming soon):** Encode images to latents for generation or training.
* **Text Encoders (coming soon):** Compute text embeddings for your prompts without compromising quality or slowing down your workflow.
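
To make the VAE Decode use case concrete, here is a minimal sketch of offloading decoding to a remote endpoint. It is illustrative only and not part of this commit: the `remote_decode` helper (assumed to live in `diffusers.utils.remote_utils`), the checkpoint name, and the placeholder `ENDPOINT` URL are assumptions.

```python
# Minimal sketch: generate latents locally, decode them on a remote endpoint.
# Assumptions (not from this commit): `remote_decode` from
# `diffusers.utils.remote_utils`, the checkpoint, and the placeholder URL.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils.remote_utils import remote_decode

ENDPOINT = "https://<your-vae-decode-endpoint>"  # hypothetical placeholder

# Load the pipeline without a local VAE; decoding happens remotely.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    vae=None,
).to("cuda")

# Ask the pipeline for latents instead of a decoded image.
latents = pipe("a photo of an astronaut riding a horse", output_type="latent").images

# Decode remotely; the VAE forward pass runs on the endpoint, not locally.
image = remote_decode(
    endpoint=ENDPOINT,
    tensor=latents,
    scaling_factor=0.18215,  # SD v1.x VAE scaling factor
)
image.save("astronaut.png")
```

The idea is that only the denoising loop stays local, so the VAE weights never have to be downloaded or held in local memory.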

---

## Key Benefits

- 🚀 **Reduced Requirements:** Access powerful models without expensive hardware.
- 🎯 **Diverse Use Cases:** Fully compatible with Diffusers 🧨 and the wider community.
- 🔧 **Developer-Friendly:** Simple requests, fast responses.

---

## Contents

The documentation is organized into two sections:

* **Getting Started:** Learn the basics of how to use Hybrid Inference.
* **API Reference:** Dive into task-specific settings and parameters.

0 commit comments
