| CogVideoX DDIM Inversion Pipeline | Implementation of DDIM inversion and guided attention-based editing denoising process on CogVideoX. |[CogVideoX DDIM Inversion Pipeline](#cogvideox-ddim-inversion-pipeline)| - |[LittleNyima](https://github.com/LittleNyima)|
| FaithDiff Stable Diffusion XL Pipeline | Implementation of [(CVPR 2025) FaithDiff: Unleashing Diffusion Priors for Faithful Image Super-resolution](https://huggingface.co/papers/2411.18824) - FaithDiff is a faithful image super-resolution method that leverages latent diffusion models by actively adapting the diffusion prior and jointly fine-tuning its components (encoder and diffusion model) with an alignment module to ensure high fidelity and structural consistency. |[FaithDiff Stable Diffusion XL Pipeline](#faithdiff-stable-diffusion-xl-pipeline)|[FaithDiff](https://huggingface.co/jychen9811/FaithDiff)|[Junyang Chen, Jinshan Pan, Jiangxin Dong, IMAG Lab, (Adapted by Eliseu Silva)](https://github.com/JyChen9811/FaithDiff)|
| Stable Diffusion 3 InstructPix2Pix Pipeline | Implementation of Stable Diffusion 3 InstructPix2Pix Pipeline |[Stable Diffusion 3 InstructPix2Pix Pipeline](#stable-diffusion-3-instructpix2pix-pipeline)|[SD3_UltraEdit_freeform](https://huggingface.co/BleachNick/SD3_UltraEdit_freeform) [sd3-instructpix2pix](https://huggingface.co/CaptainZZZ/sd3-instructpix2pix)|[Jiayu Zhang](https://github.com/xduzhangjiayu) and [Haozhe Zhao](https://github.com/HaozheZhao)|
| Flux Kontext multiple images | Allows calling Flux Kontext with several input images. Each image is encoded separately in the latent space, and the resulting latent vectors are concatenated. |[Flux Kontext multiple input Pipeline](#flux-kontext-multiple-images)| - |[Net-Mist](https://github.com/Net-Mist)|
To load a custom pipeline, just pass the `custom_pipeline` argument to `DiffusionPipeline`, naming one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines; we will merge them quickly.
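
For example, a minimal sketch of loading a community pipeline (the base checkpoint and community file name below are illustrative; any file from `diffusers/examples/community` works the same way):

```python
import torch
from diffusers import DiffusionPipeline

# Pass the community file name (without the .py extension) as `custom_pipeline`.
pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # illustrative base checkpoint
    custom_pipeline="lpw_stable_diffusion",          # illustrative community file
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
```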
This model is trained on 512x512 images, so results are best with 512x512 inputs.
For better editing performance, please refer to the powerful model [SD3_UltraEdit_freeform](https://huggingface.co/BleachNick/SD3_UltraEdit_freeform) and the paper "UltraEdit: Instruction-based Fine-Grained Image Editing at Scale". Many thanks for their contribution!
# Flux Kontext multiple images
This is an implementation of Flux Kontext allowing the user to pass multiple reference images.
These images are encoded separately, and the resulting latent vectors are concatenated.
As explained in section 3 of [the paper](https://arxiv.org/pdf/2506.15742), the model's sequence concatenation mechanism can extend its capabilities to several images (note, however, that the current version of Flux Kontext was not trained for this). Currently, stacking on the first axis does not seem to give correct results, but stacking on the other two axes works, as the sketch below illustrates.
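
A schematic sketch of the idea with dummy tensors (the shapes, names, and offset value are illustrative, not the pipeline's actual code):

```python
import torch

# Each reference image is VAE-encoded and packed into a token sequence of
# shape (batch, tokens, channels); the sequences are then concatenated
# along the token axis.
tokens_a = torch.randn(1, 1024, 64)  # hypothetical latent tokens for image A
tokens_b = torch.randn(1, 1024, 64)  # hypothetical latent tokens for image B
tokens = torch.cat([tokens_a, tokens_b], dim=1)  # (1, 2048, 64)

# Every token also carries a 3D position id (one component per RoPE axis).
# Offsetting the ids of the second image on one of the two spatial axes,
# rather than on the first axis, is what gives correct results here.
ids_a = torch.zeros(1024, 3)
ids_b = torch.zeros(1024, 3)
ids_b[:, 2] += 64  # hypothetical offset along the third (width) axis
ids = torch.cat([ids_a, ids_b], dim=0)  # (2048, 3)
```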
## Example Usage
This pipeline loads two reference images and generates an image using them.
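
A minimal sketch of what that looks like (the community file name and the call arguments are assumptions; check the pipeline source for the exact signature):

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# Load the community pipeline on top of the Flux Kontext checkpoint.
# The `custom_pipeline` file name is assumed; adjust it to the actual file.
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    custom_pipeline="pipeline_flux_kontext_multiple_images",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Two hypothetical reference images.
image_1 = load_image("https://example.com/reference_1.png")
image_2 = load_image("https://example.com/reference_2.png")

# Passing a list to `image` is an assumption based on the description above;
# each image is encoded separately and the latents are concatenated inside
# the pipeline.
result = pipe(
    image=[image_1, image_2],
    prompt="Combine the two reference subjects in a single scene",
    num_inference_steps=28,
    guidance_scale=2.5,
).images[0]
result.save("output.png")
```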