diff --git a/docs/configuration/conserving_memory.md b/docs/configuration/conserving_memory.md
index 5ce43c798405..0aa89a89eae5 100644
--- a/docs/configuration/conserving_memory.md
+++ b/docs/configuration/conserving_memory.md
@@ -49,9 +49,6 @@ llm = LLM(model="adept/fuyu-8b", max_model_len=2048, max_num_seqs=2)
 
 By default, we optimize model inference using CUDA graphs which take up extra memory in the GPU.
 
-!!! warning
-    CUDA graph capture takes up more memory in V1 than in V0.
-
 You can adjust `compilation_config` to achieve a better balance between inference speed and memory usage:
 
 ??? code
diff --git a/docs/configuration/optimization.md b/docs/configuration/optimization.md
index b0d390d7e1cb..fdd9c317b022 100644
--- a/docs/configuration/optimization.md
+++ b/docs/configuration/optimization.md
@@ -31,9 +31,7 @@ In vLLM V1, the default preemption mode is `RECOMPUTE` rather than `SWAP`, as re
 
 Chunked prefill allows vLLM to process large prefills in smaller chunks and batch them together with decode requests. This feature helps improve both throughput and latency by better balancing compute-bound (prefill) and memory-bound (decode) operations.
 
-In vLLM V1, **chunked prefill is always enabled by default**. This is different from vLLM V0, where it was conditionally enabled based on model characteristics.
-
-With chunked prefill enabled, the scheduling policy prioritizes decode requests. It batches all pending decode requests before scheduling any prefill operations. When there are available tokens in the `max_num_batched_tokens` budget, it schedules pending prefills. If a pending prefill request cannot fit into `max_num_batched_tokens`, it automatically chunks it.
+In V1, **chunked prefill is enabled by default whenever possible**. With chunked prefill enabled, the scheduling policy prioritizes decode requests. It batches all pending decode requests before scheduling any prefill operations. When there are available tokens in the `max_num_batched_tokens` budget, it schedules pending prefills. If a pending prefill request cannot fit into `max_num_batched_tokens`, it automatically chunks it.
 
 This policy has two benefits:
 
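To make the chunked-prefill scheduling described in the `docs/configuration/optimization.md` change above concrete, here is a minimal offline sketch of tuning the token budget. It is only an illustration: the model name and the numeric values are placeholders, not recommendations.

```python
from vllm import LLM

# Chunked prefill is enabled by default in V1 whenever possible; these knobs only
# control how the per-step token budget is shared between prefills and decodes.
llm = LLM(
    model="facebook/opt-125m",    # placeholder model
    max_num_batched_tokens=8192,  # token budget per scheduling step
    max_num_seqs=128,             # cap on concurrently scheduled requests
)
```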
diff --git a/docs/usage/reproducibility.md b/docs/usage/reproducibility.md
index afc25b63902e..a8e49d0a3398 100644
--- a/docs/usage/reproducibility.md
+++ b/docs/usage/reproducibility.md
@@ -1,21 +1,23 @@
 # Reproducibility
 
 vLLM does not guarantee the reproducibility of the results by default, for the sake of performance. To achieve
-reproducible results, you need to turn off multiprocessing to make the scheduling deterministic by setting `VLLM_ENABLE_V1_MULTIPROCESSING=0`.
+reproducible results:
+
+- In offline mode, you can either set `VLLM_ENABLE_V1_MULTIPROCESSING=0`, which makes scheduling deterministic,
+  or enable [batch invariance](../features/batch_invariance.md) to make the outputs insensitive to scheduling.
+- In online mode, you can only enable [batch invariance](../features/batch_invariance.md).
 
 Example: [examples/offline_inference/reproducibility.py](../../examples/offline_inference/reproducibility.py)
 
 !!! warning
 
-    Applying the above settings [changes the random state in user code](#locality-of-random-state).
+    Setting `VLLM_ENABLE_V1_MULTIPROCESSING=0` will change the random state of user code
+    (i.e. the code that constructs the [LLM][vllm.LLM] class).
 
 !!! note
 
     Even with the above settings, vLLM only provides reproducibility
     when it runs on the same hardware and the same vLLM version.
-    Also, the online serving API (`vllm serve`) does not support reproducibility
-    because it is almost impossible to make the scheduling deterministic in the
-    online setting.
 
 ## Setting the global seed
 
@@ -23,25 +25,17 @@ The `seed` parameter in vLLM is used to control the random states for various ra
 
 If a specific seed value is provided, the random states for `random`, `np.random`, and `torch.manual_seed` will be set accordingly.
 
-However, in some cases, setting the seed will also [change the random state in user code](#locality-of-random-state).
-
 ### Default Behavior
 
 In V1, the `seed` parameter defaults to `0` which sets the random state for each worker, so the results will remain consistent for each vLLM run even if `temperature > 0`.
 
-!!! note
-
-    It is impossible to un-specify a seed for V1 because different workers need to sample the same outputs
-    for workflows such as speculative decoding.
-
-    For more information, see:
-
-### Locality of random state
+It is impossible to un-specify a seed for V1 because different workers need to sample the same outputs
+for workflows such as speculative decoding. For more information, see:
 
-The random state in user code (i.e. the code that constructs [LLM][vllm.LLM] class) is updated by vLLM under the following conditions:
+!!! note
 
-- For V0: The seed is specified.
-- For V1: The workers are run in the same process as user code, i.e.: `VLLM_ENABLE_V1_MULTIPROCESSING=0`.
+    The random state in user code (i.e. the code that constructs the [LLM][vllm.LLM] class) is updated by vLLM
+    only if the workers are run in the same process as user code, i.e.: `VLLM_ENABLE_V1_MULTIPROCESSING=0`.
 
-By default, these conditions are not active so you can use vLLM without having to worry about
-accidentally making deterministic subsequent operations that rely on random state.
+    By default, `VLLM_ENABLE_V1_MULTIPROCESSING=1` so you can use vLLM without having to worry about
+    accidentally making deterministic subsequent operations that rely on random state.
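The reproducibility and seed behavior documented above can be illustrated with a minimal offline sketch. This is only an illustration: the model name is a placeholder and `seed=0` simply restates the V1 default.

```python
import os

from vllm import LLM, SamplingParams

# Offline option from the docs above: make scheduling deterministic.
os.environ["VLLM_ENABLE_V1_MULTIPROCESSING"] = "0"

# seed defaults to 0 in V1; passing it explicitly only documents the intent.
llm = LLM(model="facebook/opt-125m", seed=0)
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

# Repeated runs on the same hardware and vLLM version should produce the same text.
outputs = llm.generate(["Hello, my name is"], params)
print(outputs[0].outputs[0].text)
```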
diff --git a/docs/usage/v1_guide.md b/docs/usage/v1_guide.md
index 22f4e6761ea9..5f647aafd61d 100644
--- a/docs/usage/v1_guide.md
+++ b/docs/usage/v1_guide.md
@@ -4,9 +4,7 @@
 
     We have fully deprecated V0. Please read [RFC #18571](https://github.com/vllm-project/vllm/issues/18571) for more details.
 
-V1 is now enabled by default for all supported use cases, and we will gradually enable it for every use case we plan to support. Please share any feedback on [GitHub](https://github.com/vllm-project/vllm) or in the [vLLM Slack](https://inviter.co/vllm-slack).
-
-## Why vLLM V1?
+If you have a use case that works on the V0 engine but not on V1, please share it on [GitHub](https://github.com/vllm-project/vllm) or in the [vLLM Slack](https://inviter.co/vllm-slack).
 
 vLLM V0 successfully supported a wide range of models and hardware, but as new features were developed independently, the system grew increasingly complex. This complexity made it harder to integrate new capabilities and introduced technical debt, revealing the need for a more streamlined and unified design.
 
@@ -32,16 +30,44 @@ Upgrade to vLLM’s Core Architecture](https://blog.vllm.ai/2025/01/27/v1-alpha-release.html).
 
 This living user guide outlines a few known **important changes and limitations** introduced by vLLM V1. The team has been working actively to bring V1 as the default engine, therefore this guide will be updated constantly as more features get supported on vLLM V1.
 
-## Current Status
+## Differences from V0
+
+This section lists some differences in behavior between V0 and V1.
+
+### Chunked Prefill
+
+Chunked prefill is enabled by default whenever possible, unlike in V0 where it was conditionally enabled based on model characteristics.
+
+### CUDA Graphs
+
+CUDA graph capture takes up more memory in V1 than in V0.
+
+### Semantic Changes to Logprobs
+
+#### Logprobs Calculation
+
+By default, logprobs in V1 are now returned immediately once computed from the model’s raw output (i.e.
+before applying any logits post-processing such as temperature scaling or penalty
+adjustments). As a result, the returned logprobs do not reflect the final adjusted
+probabilities used during sampling.
+
+You can adjust this behavior by setting the `--logprobs-mode` flag.
+Four modes are supported: `raw_logprobs` (default), `processed_logprobs`, `raw_logits`, `processed_logits`.
+Raw means the values before applying any logits processors, such as bad-words filtering.
+Processed means the values after applying all processors, including temperature scaling and top_k/top_p.
+
+#### Prompt Logprobs with Prefix Caching
+
+While V1 supports returning prompt logprobs with prefix caching enabled, it no longer caches the logprobs.
+For a request requiring prompt logprobs, the engine will ignore the prefix cache and recompute the prefill of the full prompt to generate the logprobs.
+
+## Feature Support
 
-For each item, our progress towards V1 support falls into one of the following states:
+For each item, its support in vLLM V1 falls into one of the following states:
 
-- **πŸš€ Optimized**: Nearly fully optimized, with no further work currently planned.
-- **🟒 Functional**: Fully operational, with ongoing optimizations.
-- **🚧 WIP**: Under active development.
-- **🟑 Planned**: Scheduled for future implementation (some may have open PRs/RFCs).
-- **🟠 Delayed**: Temporarily dropped in V1 but planned to be re-introduced later.
-- **πŸ”΄ Deprecated**: Not planned for V1 unless there is strong demand.
+- **🟒 Functional**: Fully operational with optimizations comparable to or better than V0.
+- **🟑 In Progress**: Planned for vLLM V1, with open PRs/RFCs.
+- **πŸ”΄ Removed**: Dropped from vLLM V1; we will only consider re-introducing it if there is strong demand.
 
 !!! note
     vLLM V1’s unified scheduler treats both prompt and output tokens the same
@@ -57,13 +83,13 @@ based on assigned priority, with FCFS as a tie-breaker), configurable via the
 
 ### Hardware
 
-| Hardware | Status |
-|------------|-----------------------------------------------|
-| **NVIDIA** | πŸš€ |
-| **AMD** | 🟒 |
+| Hardware | Status |
+|------------------|-----------------------------------------------|
+| **NVIDIA** | 🟒 |
+| **AMD** | 🟒 |
 | **INTEL GPU** | 🟒 |
-| **TPU** | 🟒 |
-| **CPU** | 🟒 (x86\_64/aarch64) 🟑 (MacOS) |
+| **TPU** | 🟒 |
+| **CPU** | 🟒 |
 
 !!! note
@@ -78,23 +104,21 @@ based on assigned priority, with FCFS as a tie-breaker), configurable via the
 
 ### Models
 
-| Model Type | Status |
-|-----------------------------|------------------------------------------------------------------------------------|
-| **Decoder-only Models** | πŸš€ Optimized |
-| **Encoder-Decoder Models** | 🟒 Whisper only |
-| **Embedding Models** | 🟒 Functional |
-| **Mamba Models** | 🟒 (Mamba-2), 🟒 (Mamba-1) |
-| **Multimodal Models** | 🟒 Functional |
+| Model Type | Status |
+|-----------------------------|-------------------------------------------------------------------------|
+| **Decoder-only Models** | 🟒 |
+| **Encoder-Decoder Models** | 🟒 (Whisper), πŸ”΄ (Others) |
+| **Pooling Models** | 🟒 |
+| **Mamba Models** | 🟒 |
+| **Multimodal Models** | 🟒 |
 
 See below for the status of models that are not yet supported or have more features planned in V1.
 
-#### Embedding Models
+#### Pooling Models
 
-The initial basic support is now functional.
+Now fully supported, with prefix caching and chunked prefill newly available for last-pooling models.
 
-Later, we will consider using [hidden states processor](https://github.com/vllm-project/vllm/issues/12249),
-which is based on [global logits processor](https://github.com/vllm-project/vllm/pull/13360)
-to enable simultaneous generation and embedding using the same engine instance in V1.
+We are working on enabling prefix caching and chunked prefill for more categories of pooling models.
 
 #### Mamba Models
 
@@ -112,24 +136,25 @@ Please note that prefix caching is not yet supported for any of the above models
 
 Whisper is supported. Other models requiring cross-attention between separate
 encoder and decoder (e.g., `BartForConditionalGeneration`,
-`MllamaForConditionalGeneration`) are not supported.
+`MllamaForConditionalGeneration`) are no longer supported.
 
 ### Features
 
 | Feature | Status |
 |---------------------------------------------|-----------------------------------------------------------------------------------|
-| **Prefix Caching** | πŸš€ Optimized |
-| **Chunked Prefill** | πŸš€ Optimized |
-| **LoRA** | πŸš€ Optimized |
+| **Prefix Caching** | 🟒 Functional |
+| **Chunked Prefill** | 🟒 Functional |
+| **LoRA** | 🟒 Functional |
 | **Logprobs Calculation** | 🟒 Functional |
-| **FP8 KV Cache** | 🟒 Functional on Hopper devices ()|
-| **Spec Decode** | πŸš€ Optimized |
-| **Prompt Logprobs with Prefix Caching** | 🟑 Planned ([RFC #13414](https://github.com/vllm-project/vllm/issues/13414))|
+| **FP8 KV Cache** | 🟒 Functional |
+| **Spec Decode** | 🟒 Functional |
+| **Prompt Logprobs with Prefix Caching** | 🟒 Functional |
 | **Structured Output Alternative Backends** | 🟒 Functional |
-| **Request-level Structured Output Backend** | πŸ”΄ Deprecated |
-| **best_of** | πŸ”΄ Deprecated ([RFC #13361](https://github.com/vllm-project/vllm/issues/13361))|
-| **Per-Request Logits Processors** | πŸ”΄ Deprecated ([RFC #13360](https://github.com/vllm-project/vllm/pull/13360)) |
-| **GPU <> CPU KV Cache Swapping** | πŸ”΄ Deprecated |
+| **Concurrent Partial Prefills** | 🟑 [In Progress](https://github.com/vllm-project/vllm/issues/14003) |
+| **best_of** | πŸ”΄ [Removed](https://github.com/vllm-project/vllm/issues/13361) |
+| **Per-Request Logits Processors** | πŸ”΄ [Removed](https://github.com/vllm-project/vllm/pull/13360) |
+| **GPU <> CPU KV Cache Swapping** | πŸ”΄ Removed |
+| **Request-level Structured Output Backend** | πŸ”΄ Removed |
 
 !!! note
@@ -139,37 +164,16 @@ encoder and decoder (e.g., `BartForConditionalGeneration`,
     prefix caching, and speculative decoding without a strict separation between
     prefill and decode phases.
 
-#### Semantic Changes to Logprobs
-
-vLLM V1 supports logprobs and prompt logprobs. However, there are some important semantic
-differences compared to V0:
-
-##### Logprobs Calculation
-
-By default, logprobs in V1 are now returned immediately once computed from the model’s raw output (i.e.
-before applying any logits post-processing such as temperature scaling or penalty
-adjustments). As a result, the returned logprobs do not reflect the final adjusted
-probabilities used during sampling.
-
-You can adjust this behavior by setting the `--logprobs-mode` flag.
-Four modes are supported: `raw_logprobs` (default), `processed_logprobs`, `raw_logits`, `processed_logits`.
-Raw means the values before applying any logit processors, like bad words.
-Processed means the values after applying all processors, including temperature and top_k/top_p.
-
-##### Prompt Logprobs with Prefix Caching
-
-Logprobs are not cached. For a request requiring prompt logprobs, the engine will ignore the prefix cache and recompute the prefill of full prompt to generate the logprobs.
-
-#### Deprecated Features
+#### Removed Features
 
-As part of the major architectural rework in vLLM V1, several legacy features have been deprecated.
+As part of the major architectural rework in vLLM V1, several legacy features have been removed.
 
 ##### Sampling features
 
-- **best_of**: This feature has been deprecated due to limited usage. See details at [RFC #13361](https://github.com/vllm-project/vllm/issues/13361).
+- **best_of**: This feature has been removed due to limited usage. See details at [RFC #13361](https://github.com/vllm-project/vllm/issues/13361).
 - **Per-Request Logits Processors**: In V0, users could pass custom processing
   functions to adjust logits on a per-request basis. In vLLM V1, this
-  feature has been deprecated. Instead, we now support **global logits processors**
+  feature has been removed. Instead, we now support **global logits processors**
   which are set at startup time, see [RFC #17799](https://github.com/vllm-project/vllm/issues/17799).
 
 ##### KV Cache features
 
@@ -179,4 +183,4 @@ to handle request preemptions.
 
 ##### Structured Output features
 
-- **Request-level Structured Output Backend**: Deprecated, alternative backends (outlines, guidance) with fallbacks is supported now.
+- **Request-level Structured Output Backend**: Removed; alternative backends (outlines, guidance) with fallbacks are supported now.
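For the `--logprobs-mode` behavior documented in the `v1_guide.md` changes above, here is a hedged offline sketch. It assumes that the CLI flag is also exposed as a `logprobs_mode` keyword argument of the `LLM` class, as vLLM engine flags generally are; the model name and sampling values are placeholders.

```python
from vllm import LLM, SamplingParams

# Assumption: `--logprobs-mode` maps to the `logprobs_mode` engine argument offline.
llm = LLM(model="facebook/opt-125m", logprobs_mode="processed_logprobs")

# Ask for the top-5 logprobs of each generated token.
params = SamplingParams(temperature=0.7, top_k=50, max_tokens=8, logprobs=5)
outputs = llm.generate(["The capital of France is"], params)

for step in outputs[0].outputs[0].logprobs:
    # With `processed_logprobs`, these values reflect temperature and top-k/top-p.
    print(step)
```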
+os.environ["VLLM_BATCH_INVARIANT"] = "1" prompts = [ "Hello, my name is", diff --git a/tests/models/language/generation/test_common.py b/tests/models/language/generation/test_common.py index 0cdb7c9a603f..df6c2cab7814 100644 --- a/tests/models/language/generation/test_common.py +++ b/tests/models/language/generation/test_common.py @@ -10,13 +10,6 @@ from ...registry import HF_EXAMPLE_MODELS from ...utils import check_logprobs_close -# These have unsupported head_dim for FA. We do not -# have a clean way to fall back, so we fail with -# a clear msg when it happens. -# https://github.com/vllm-project/vllm/issues/14524 -# NOTE(woosuk): Skipping these tests until V1 supports them. -# REQUIRES_V0 = ["microsoft/phi-2", "stabilityai/stablelm-3b-4e1t"] - # This list contains the model that are using AITER kernel. # Skip model that are not using AITER tests. # When more AITER kernels are added, this list will not be