+ "details": "### Summary\nThe /v1/chat/completions and /tokenize endpoints allow a `chat_template_kwargs` request parameter that is used in the code before it is properly validated against the chat template. With the right `chat_template_kwargs` parameters, it is possible to block processing of the API server for long periods of time, delaying all other requests \n\n### Details\nIn serving_engine.py, the chat_template_kwargs are unpacked into kwargs passed to chat_utils.py `apply_hf_chat_template` with no validation on the keys or values in that chat_template_kwargs dict. This means they can be used to override optional parameters in the `apply_hf_chat_template` method, such as `tokenize`, changing its default from False to True.\n\nhttps://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/openai/serving_engine.py#L809-L814\n\nhttps://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/chat_utils.py#L1602-L1610\n\nBoth serving_chat.py and serving_tokenization.py call into this `_preprocess_chat` method of `serving_engine.py` and they both pass in `chat_template_kwargs`.\n\nSo, a `chat_template_kwargs` like `{\"tokenize\": True}` makes tokenization happen as part of applying the chat template, even though that is not expected. Tokenization is a blocking operation, and with sufficiently large input can block the API server's event loop, which blocks handling of all other requests until this tokenization is complete.\n\nThis optional `tokenize` parameter to `apply_hf_chat_template` does not appear to be used, so one option would be to just hard-code that to always be False instead of allowing it to be optionally overridden by callers. A better option may be to not pass `chat_template_kwargs` as unpacked kwargs but instead as a dict, and only unpack them after the logic in `apply_hf_chat_template` that resolves the kwargs against the chat template.\n\n### Impact\n\nAny authenticated user can cause a denial of service to a vLLM server with Chat Completion or Tokenize requests.\n\n### Fix\n\nhttps://github.com/vllm-project/vllm/pull/27205",
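And a minimal sketch of the dict-based mitigation described in the Details section, assuming hypothetical function shapes (this is not vLLM's actual implementation) and using jinja2's `meta.find_undeclared_variables` as a stand-in for vLLM's real template-resolution logic:

```python
# Hypothetical sketch of the dict-based mitigation; function and helper
# names are illustrative, not vLLM's actual code.
from typing import Any, Optional

import jinja2
from jinja2 import meta


def _template_variables(tokenizer: Any, chat_template: Optional[str]) -> set[str]:
    """Illustrative stand-in for logic that resolves which variables the
    Jinja chat template actually references."""
    source = chat_template or getattr(tokenizer, "chat_template", None) or ""
    ast = jinja2.Environment().parse(source)
    return meta.find_undeclared_variables(ast)


def apply_hf_chat_template(
    tokenizer: Any,
    conversation: list[dict[str, str]],
    chat_template: Optional[str] = None,
    # Received as a plain dict instead of being unpacked into **kwargs by
    # the caller, so it cannot shadow this function's own parameters.
    chat_template_kwargs: Optional[dict[str, Any]] = None,
) -> str:
    allowed = _template_variables(tokenizer, chat_template)

    # Unpack the user-supplied kwargs only after filtering them against the
    # template's own variables; internal parameters like `tokenize` can no
    # longer be overridden from the request.
    safe_kwargs = {
        k: v for k, v in (chat_template_kwargs or {}).items() if k in allowed
    }

    return tokenizer.apply_chat_template(
        conversation,
        chat_template=chat_template,
        tokenize=False,  # hard-coded: rendering here never tokenizes
        **safe_kwargs,
    )
```

The design point is that validation happens before unpacking: the request-supplied dict is kept intact until `apply_hf_chat_template` has decided which keys the chat template legitimately accepts.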