v0.9.0

Highlights

This release features 649 commits from 215 contributors (82 new contributors!).

  • vLLM has upgraded to PyTorch 2.7! (#16859) This is a breaking change for environment dependencies.
    • The default wheel has been upgraded from CUDA 12.4 to CUDA 12.8. We will distribute CUDA 12.6 wheels as GitHub artifacts.
    • As a general rule of thumb, our CUDA version policy follows PyTorch's CUDA version policy.
  • Enhanced NVIDIA Blackwell support. vLLM now ships with an initial set of optimized attention and MLP kernels for NVIDIA Blackwell.
    • You can use our Docker image, or install the FlashInfer nightly wheel (pip install https://download.pytorch.org/whl/cu128/flashinfer/flashinfer_python-0.2.5%2Bcu128torch2.7-cp38-abi3-linux_x86_64.whl) and set VLLM_ATTENTION_BACKEND=FLASHINFER for better performance (see the sketch after this list).
    • Upgraded support for the new FlashInfer main branch. (#15777)
    • Please check out #18153 for the full roadmap.
  • Initial data parallel (DP), expert parallel (EP), and prefill-decode disaggregation (PD) support for large-scale inference
    • EP:
      • Permute and unpermute kernel for moe optimization (#14568)
      • Modularize fused experts and integrate PPLX kernels (#15956)
      • Refactor pplx init logic to make it modular (prepare for deepep) (#18200)
      • Add ep group and all2all interface (#18077)
    • DP:
      • Decouple engine process management and comms (#15977)
    • PD:
      • NIXL Integration (#17751)
      • Local attention optimization for NIXL (#18170)
      • Support multiple kv connectors (#17564)
  • Migrate docs from Sphinx to MkDocs (#18145, #18610, #18614, #18616, #18622, #18626, #18627, #18635, #18637, #18657, #18663, #18666, #18713)
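
A minimal sketch of the Blackwell setup above, assuming the offline LLM API: install the FlashInfer wheel as shown, then select the backend through the environment variable before constructing the engine. The model name is a placeholder.

    # Select the FlashInfer attention backend; this must be set before vLLM
    # chooses a backend, i.e. before the engine is constructed.
    import os
    os.environ["VLLM_ATTENTION_BACKEND"] = "FLASHINFER"

    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")  # placeholder model name
    outputs = llm.generate(["Hello, Blackwell!"], SamplingParams(max_tokens=32))
    print(outputs[0].outputs[0].text)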

Notable Changes

  • Removal of CUDA 12.4 support due to PyTorch upgrade to 2.7.
  • top_k is now disabled by setting it to 0 (-1 is still accepted for now) (#17773)
  • The seed is now set to 0 by default for the V1 engine, meaning that different vLLM runs now yield the same outputs even when temperature > 0. This does not modify the random state in user code, since workers run in separate processes unless VLLM_USE_V1_MULTIPROCESSING=0. (#17929, #18741) See the sketch below for both changes.
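
A minimal sketch of both changes above, assuming the offline LLM API; the model name is a placeholder.

    from vllm import LLM, SamplingParams

    # The V1 engine now seeds sampling with 0 by default, so repeated runs yield
    # identical outputs even with temperature > 0. Pass an explicit seed for a
    # different, but still reproducible, stream.
    llm = LLM(model="facebook/opt-125m", seed=1234)  # placeholder model name

    # top_k is now disabled with 0 (-1 is still accepted for now).
    params = SamplingParams(temperature=0.8, top_k=0, max_tokens=32)
    print(llm.generate(["The capital of France is"], params)[0].outputs[0].text)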

Model Enhancements

  • Support MiMo-7B (#17433), MiniMax-VL-01 (#16328), Ovis 1.6 (#17861), Ovis 2 (#15826), GraniteMoeHybrid 4.0 (#17497), FalconH1* (#18406), LlamaGuard4 (#17315)
    • Please install the development version of transformers (from source) to use Falcon-H1.
  • Embedding models: nomic-embed-text-v2-moe (#17785), new class of gte models (#17986)
  • Progress in Hybrid Memory Allocator (#17394, #17479, #17474, #17483, #17193, #17946, #17945, #17999, #18001, #18593)
  • DeepSeek: performance enhancements by moving more calls into the CUDA-graph region (#17484, #17668), Function Call (#17784), MTP in V1 (#18435)
  • Qwen2.5-1M: Implements dual-chunk-flash-attn backend for dual chunk attention with sparse attention support (#11844)
  • Qwen2.5-VL speed enhancement via rotary_emb optimization (#17973)
  • InternVL models with Qwen2.5 backbone now support video inputs (#18499)

Performance, Production and Scaling

  • Support full cuda graph in v1 (#16072)
  • Pipeline Parallelism: MultiprocExecutor support (#14219), torchrun (#17827) (see the sketch after this list)
  • Support sequence parallelism combined with pipeline parallelism (#18243)
  • Async tensor parallelism using compilation pass (#17882)
  • Perf: Use small max_num_batched_tokens for A100 (#17885)
  • Fast Model Loading: Tensorizer support for V1 and LoRA (#17926)
  • Multi-modality: Automatically cast multi-modal input dtype before transferring to the device (#18756)
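
A minimal sketch combining tensor parallelism with the newly supported offline pipeline parallelism (MultiprocExecutor), assuming a single node with 4 GPUs; the model name is a placeholder and the sizes should match your topology.

    from vllm import LLM

    # 2-way tensor parallelism x 2-way pipeline parallelism = 4 GPUs in total.
    llm = LLM(
        model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model name
        tensor_parallel_size=2,
        pipeline_parallel_size=2,
    )
    print(llm.generate(["Hello"])[0].outputs[0].text)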

Security

  • Prevent side-channel attacks via cache salting (#17045)
  • Fix image hash collision in certain edge cases (#17378)
  • Add VLLM_ALLOW_INSECURE_SERIALIZATION env var (#17490)
  • Migrate to the regex library to prevent catastrophic backtracking (#18454, #18750)

Features

  • CLI: deprecated=True (#17426)
  • Frontend: progress bar for adding requests (#17525), chat_template_kwargs in LLM.chat (#17356) (see the sketch after this list), /classify endpoint (#17032), truncation control for embedding models (#14776), cached_tokens in response usage (#18149)
  • LoRA: default local directory LoRA resolver plugin. (#16855)
  • Metrics: kv event publishing (#16750), API for accessing in-memory Prometheus metrics (#17010)
  • Quantization: nvidia/DeepSeek-R1-FP4 (#16362), Quark MXFP4 format (#16943), AutoRound (#17850), torchao models with AOPerModuleConfig (#17826), CUDA Graph support for GGUF in V1 (#18646)
  • Reasoning: deprecate --enable-reasoning (#17452)
  • Spec Decode: EAGLE share input embedding (#17326), torch.compile & cudagraph to EAGLE (#17211), EAGLE3 (#17504), log accumulated metrics (#17913), Medusa (#17956)
  • Structured Outputs: Thinking compatibility (#16577), Spec Decoding (#14702), Qwen3 reasoning parser (#17466), tool_choice: required for Xgrammar (#17845), Structural Tag with Guidance backend (#17333)
  • Transformers backend: named parameters (#16868), interleaved sliding window attention (#18494)
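
A minimal sketch of the new chat_template_kwargs argument to LLM.chat; the model name and the enable_thinking key (a Qwen3-style chat-template kwarg) are assumptions for illustration.

    from vllm import LLM, SamplingParams

    llm = LLM(model="Qwen/Qwen3-8B")  # placeholder model name
    messages = [{"role": "user", "content": "Summarize vLLM in one sentence."}]

    # chat_template_kwargs is forwarded to the tokenizer's chat template;
    # enable_thinking=False suppresses the reasoning trace for Qwen3-style templates.
    outputs = llm.chat(
        messages,
        SamplingParams(max_tokens=64),
        chat_template_kwargs={"enable_thinking": False},
    )
    print(outputs[0].outputs[0].text)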

Hardware

  • NVIDIA: cutlass support for blackwell fp8 blockwise gemm (#14383)
  • TPU: Multi-LoRA implementation (#14238), default max-num-batched-tokens (#17508), V1 backend by default (#17673), top-logprobs (#17072)
  • Neuron: NeuronxDistributedInference support (#15970), Speculative Decoding, Dynamic on-device sampling (#16357), Mistral Model (#18222), Multi-LoRA (#18284)
  • AMD: Enable FP8 KV cache on V1 (#17870), Tuned fused moe config for Qwen3 MoE on MI300X (#17535, #17530), AITER biased group topk (#17955), Block-Scaled GEMM (#14968), MLA (#17523), Radeon GPUs use Custom Paged Attention (#17004), reduce the number of environment variables on the command line (#17229)
  • Extensibility: Make PiecewiseBackend pluggable and extendable (#18076)

Documentation

  • Update quickstart and install for cu128 using --torch-backend=auto (#18505)
  • NVIDIA TensorRT Model Optimizer (#17561)
  • Usage of Qwen3 thinking (#18291)

Developer Facing

What's Changed

New Contributors

Full Changelog: v0.8.5.post1...v0.9.0