docs/source/en/hybrid_inference/overview.md (11 additions & 15 deletions)
@@ -15,25 +15,28 @@ specific language governing permissions and limitations under the License.

> [!TIP]
> This is currently an experimental feature, and if you have any feedback, please feel free to leave it [here](https://github.com/huggingface/diffusers/issues/new?template=remote-vae-pilot-feedback.yml).

-Remote inference offloads the decoding and encoding process to a remote endpoint to relax the memory requirements for local inference with large models. This feature is powered by [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index).
+Remote inference offloads the decoding and encoding process to a remote endpoint to relax the memory requirements for local inference with large models. This feature is powered by [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index). Refer to the table below for the supported models and endpoints.

Pass an image to [`~utils.remote_encode`] to encode it. The specific `scaling_factor` and `shift_factor` values for each model can be found in the [Remote inference](../hybrid_inference/api_reference) API reference.
```py
import torch
from diffusers import FluxPipeline
from diffusers.utils import load_image
-from diffusers.utils.remote_utils import remote_decode, remote_encode
+from diffusers.utils.remote_utils import remote_encode

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
```
@@ -59,14 +62,7 @@ init_latent = remote_encode(
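The diff truncates the encode snippet at the hunk boundary above. For reference, here is a minimal end-to-end encoding sketch, not the exact file contents: the endpoint URL is a placeholder, and the `scaling_factor`/`shift_factor` values assume the Flux VAE entry in the API reference.

```py
import torch
from diffusers import FluxPipeline
from diffusers.utils import load_image
from diffusers.utils.remote_utils import remote_encode

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Load and resize the input image to encode.
init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
)
init_image = init_image.resize((768, 512))

# Encode on the remote endpoint instead of a local VAE.
# The endpoint URL is a placeholder; use the one listed in the API reference.
init_latent = remote_encode(
    endpoint="https://<your-endpoint>.endpoints.huggingface.cloud/",
    image=init_image,
    scaling_factor=0.3611,
    shift_factor=0.1159,
)
# The encoded latents can then be fed back into the pipeline
# (e.g. via its `latents` argument) for image-to-image-style generation.
```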
Decoding converts latent representations back into images or videos. Refer to the table below for the available and supported VAEs.

-Set the output type to `"latent"` in the pipeline and set the `vae` to `None`. Pass the latents to the [`~utils.remote_decode`] function. For Flux, the latents are packed so the `height` and `width` also need to be passed. The specific `scaling_factor` and `shift_factor` values for each model can be found in the API reference.
+Set the output type to `"latent"` in the pipeline and set the `vae` to `None`. Pass the latents to the [`~utils.remote_decode`] function. For Flux, the latents are packed so the `height` and `width` also need to be passed. The specific `scaling_factor` and `shift_factor` values for each model can be found in the [Remote inference](../hybrid_inference/api_reference) API reference.
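Similarly, a minimal decoding sketch under the same assumptions (placeholder endpoint URL; Flux VAE `scaling_factor` and `shift_factor` taken from the API reference):

```py
import torch
from diffusers import FluxPipeline
from diffusers.utils.remote_utils import remote_decode

# Drop the local VAE and return latents instead of decoded images.
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
    vae=None,
).to("cuda")

latent = pipeline(
    prompt="A photo of an astronaut riding a horse on Mars",
    height=1024,
    width=1024,
    num_inference_steps=4,
    output_type="latent",
).images

# Flux latents are packed, so height and width must be passed explicitly.
# The endpoint URL is a placeholder; use the one listed in the API reference.
image = remote_decode(
    endpoint="https://<your-endpoint>.endpoints.huggingface.cloud/",
    tensor=latent,
    height=1024,
    width=1024,
    scaling_factor=0.3611,
    shift_factor=0.1159,
)
image.save("decoded.png")
```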