This repository was archived by the owner on Sep 4, 2025. It is now read-only.

Commit b1f3e18

[MISC] Keep chunked prefill enabled by default with long context when prefix caching is enabled (vllm-project#8342)
1 parent 04e7c4e commit b1f3e18

File tree

1 file changed: +0 −1 lines changed


vllm/engine/arg_utils.py

Lines changed: 0 additions & 1 deletion
@@ -878,7 +878,6 @@ def create_engine_config(self) -> EngineConfig:
         if (is_gpu and not use_sliding_window and not use_spec_decode
                 and not self.enable_lora
                 and not self.enable_prompt_adapter
-                and not self.enable_prefix_caching
                 and not has_seqlen_agnostic_layers):
             self.enable_chunked_prefill = True
             logger.warning(
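
The diff above drops `enable_prefix_caching` from the guard that turns on chunked prefill by default. As a minimal standalone sketch (the wrapper function and simplified `EngineArgs` dataclass here are hypothetical; only the flag names come from the diff), the post-commit logic looks like this:

```python
from dataclasses import dataclass


@dataclass
class EngineArgs:
    # Simplified stand-in for vLLM's engine arguments; field names
    # mirror those referenced in the diff.
    enable_lora: bool = False
    enable_prompt_adapter: bool = False
    enable_prefix_caching: bool = False
    enable_chunked_prefill: bool = False


def maybe_enable_chunked_prefill(args: EngineArgs,
                                 is_gpu: bool,
                                 use_sliding_window: bool,
                                 use_spec_decode: bool,
                                 has_seqlen_agnostic_layers: bool) -> None:
    # After this commit, enable_prefix_caching is no longer part of the
    # condition, so chunked prefill remains on by default even when
    # prefix caching is enabled.
    if (is_gpu and not use_sliding_window and not use_spec_decode
            and not args.enable_lora
            and not args.enable_prompt_adapter
            and not has_seqlen_agnostic_layers):
        args.enable_chunked_prefill = True


args = EngineArgs(enable_prefix_caching=True)
maybe_enable_chunked_prefill(args, is_gpu=True, use_sliding_window=False,
                             use_spec_decode=False,
                             has_seqlen_agnostic_layers=False)
print(args.enable_chunked_prefill)  # True: prefix caching no longer disables it
```

Before this commit, passing `enable_prefix_caching=True` would have left `enable_chunked_prefill` as `False`; the one-line deletion removes that coupling.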
