Commit ae96aa8

[Model]: add FLUX.1-dev model (vllm-project#853)

1 parent d64bbde commit ae96aa8
File tree

7 files changed: +1376 −1 lines changed

docs/models/supported_models.md

Lines changed: 1 addition & 0 deletions

@@ -33,6 +33,7 @@ th {
 |`LongCatImageEditPipeline` | LongCat-Image-Edit | `meituan-longcat/LongCat-Image-Edit` |
 |`StableDiffusion3Pipeline` | Stable-Diffusion-3 | `stabilityai/stable-diffusion-3.5-medium` |
 |`Flux2KleinPipeline` | FLUX.2-klein | `black-forest-labs/FLUX.2-klein-4B`, `black-forest-labs/FLUX.2-klein-9B` |
+|`FluxPipeline` | FLUX.1-dev | `black-forest-labs/FLUX.1-dev` |
 |`StableAudioPipeline` | Stable-Audio-Open | `stabilityai/stable-audio-open-1.0` |
 |`Qwen3TTSForConditionalGeneration` | Qwen3-TTS-12Hz-1.7B-CustomVoice | `Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice` |
 |`Qwen3TTSForConditionalGeneration` | Qwen3-TTS-12Hz-1.7B-VoiceDesign | `Qwen/Qwen3-TTS-12Hz-1.7B-VoiceDesign` |

docs/user_guide/diffusion/parallelism_acceleration.md

Lines changed: 1 addition & 0 deletions

@@ -30,6 +30,7 @@ The following table shows which models are currently supported by parallelism me
 | **Z-Image** | `Tongyi-MAI/Z-Image-Turbo` |||| ✅ (TP=2 only) |
 | **Stable-Diffusion3.5** | `stabilityai/stable-diffusion-3.5` |||||
 | **FLUX.2-klein** | `black-forest-labs/FLUX.2-klein-4B` |||||
+| **FLUX.1-dev** | `black-forest-labs/FLUX.1-dev` |||||
 
 
 !!! note "TP Limitations for Diffusion Models"

vllm_omni/diffusion/cache/cache_dit_backend.py

Lines changed: 1 addition & 1 deletion

@@ -791,7 +791,7 @@ def refresh_cache_context(pipeline: Any, num_inference_steps: int, verbose: bool
 CUSTOM_DIT_ENABLERS.update(
     {
         "WanPipeline": enable_cache_for_wan22,
-        "FluxPipeline": enable_cache_for_flux,
+        # "FluxPipeline": enable_cache_for_flux,
         "LongCatImagePipeline": enable_cache_for_longcat_image,
         "LongCatImageEditPipeline": enable_cache_for_longcat_image,
         "StableDiffusion3Pipeline": enable_cache_for_sd3,
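The hunk above comments out the `FluxPipeline` entry, which disables cache-dit acceleration for that pipeline while leaving the others registered. A minimal, self-contained sketch of the name-keyed registry pattern this implies (all names here are stand-ins, not the real vllm_omni implementations):

```python
# Hypothetical sketch of a class-name-keyed enabler registry, mirroring how
# CUSTOM_DIT_ENABLERS appears to dispatch: commenting an entry out means the
# lookup misses and that pipeline simply runs without cache acceleration.
from typing import Any, Callable, Dict, Optional

def enable_cache_for_wan22(pipeline: Any) -> str:
    # A real enabler would patch the pipeline's transformer blocks; stubbed here.
    return "cache enabled"

CUSTOM_DIT_ENABLERS: Dict[str, Callable[[Any], str]] = {}
CUSTOM_DIT_ENABLERS.update(
    {
        "WanPipeline": enable_cache_for_wan22,
        # "FluxPipeline": enable_cache_for_flux,  # disabled, as in the commit
    }
)

def maybe_enable_cache(pipeline: Any) -> Optional[str]:
    # Dispatch on the pipeline's class name; a miss means no caching.
    enabler = CUSTOM_DIT_ENABLERS.get(type(pipeline).__name__)
    return enabler(pipeline) if enabler is not None else None

class WanPipeline: ...
class FluxPipeline: ...

print(maybe_enable_cache(WanPipeline()))   # "cache enabled"
print(maybe_enable_cache(FluxPipeline()))  # None: Flux falls back to no caching
```

The upside of this shape is that enabling or disabling a pipeline is a one-line registry change, with no conditionals scattered through the execution path.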
Lines changed: 17 additions & 0 deletions

@@ -0,0 +1,17 @@
+# SPDX-License-Identifier: Apache-2.0
+# SPDX-FileCopyrightText: Copyright contributors to the vLLM project
+"""FLUX.1-dev diffusion model components."""
+
+from vllm_omni.diffusion.models.flux.flux_transformer import (
+    FluxTransformer2DModel,
+)
+from vllm_omni.diffusion.models.flux.pipeline_flux import (
+    FluxPipeline,
+    get_flux_post_process_func,
+)
+
+__all__ = [
+    "FluxPipeline",
+    "FluxTransformer2DModel",
+    "get_flux_post_process_func",
+]
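This new file re-exports the deep module symbols at the package level, so callers can import `FluxPipeline` from the package root without knowing the submodule layout. A self-contained sketch of that re-export pattern, using stub modules rather than the real vllm_omni code:

```python
# Demonstrates the package-level re-export pattern used above, with in-memory
# stub modules standing in for the real submodules (names are illustrative).
import sys
import types

# Stub submodule, playing the role of pipeline_flux.
pipeline_flux = types.ModuleType("flux_demo.pipeline_flux")

class FluxPipeline:  # stand-in for the real pipeline class
    pass

pipeline_flux.FluxPipeline = FluxPipeline

# Stub package "__init__": re-export the symbol and advertise it via __all__.
pkg = types.ModuleType("flux_demo")
pkg.FluxPipeline = pipeline_flux.FluxPipeline
pkg.__all__ = ["FluxPipeline"]

sys.modules["flux_demo"] = pkg
sys.modules["flux_demo.pipeline_flux"] = pipeline_flux

# Callers now use the short path:
from flux_demo import FluxPipeline as Exported
print(Exported is FluxPipeline)  # True: same object, shorter import path
```

Listing the names in `__all__` also pins down what `from package import *` exposes, keeping the public surface explicit.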
