
Commit a73c8c2

1 parent 205dec0 commit a73c8c2

1 file changed: 73 additions & 0 deletions
{
  "schema_version": "1.4.0",
  "id": "GHSA-69j4-grxj-j64p",
  "modified": "2025-11-20T21:26:24Z",
  "published": "2025-11-20T21:26:24Z",
  "aliases": [
    "CVE-2025-62426"
  ],
  "summary": "vLLM vulnerable to DoS via large Chat Completion or Tokenization requests with specially crafted `chat_template_kwargs`",
  "details": "### Summary\nThe /v1/chat/completions and /tokenize endpoints allow a `chat_template_kwargs` request parameter that is used in the code before it is properly validated against the chat template. With the right `chat_template_kwargs` parameters, it is possible to block processing of the API server for long periods of time, delaying all other requests.\n\n### Details\nIn serving_engine.py, the chat_template_kwargs are unpacked into kwargs passed to chat_utils.py `apply_hf_chat_template` with no validation on the keys or values in that chat_template_kwargs dict. This means they can be used to override optional parameters of the `apply_hf_chat_template` method, such as `tokenize`, changing its default from False to True.\n\nhttps://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/openai/serving_engine.py#L809-L814\n\nhttps://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/chat_utils.py#L1602-L1610\n\nBoth serving_chat.py and serving_tokenization.py call into this `_preprocess_chat` method of `serving_engine.py`, and both pass in `chat_template_kwargs`.\n\nSo, a `chat_template_kwargs` like `{\"tokenize\": True}` makes tokenization happen as part of applying the chat template, even though that is not expected. Tokenization is a blocking operation, and with sufficiently large input it can block the API server's event loop, which blocks handling of all other requests until the tokenization is complete.\n\nThis optional `tokenize` parameter to `apply_hf_chat_template` does not appear to be used, so one option would be to hard-code it to always be False instead of allowing it to be optionally overridden by callers. A better option may be to not pass `chat_template_kwargs` as unpacked kwargs but instead as a dict, and to unpack it only after the logic in `apply_hf_chat_template` that resolves the kwargs against the chat template.\n\n### Impact\n\nAny authenticated user can cause a denial of service to a vLLM server with Chat Completion or Tokenize requests.\n\n### Fix\n\nhttps://github.com/vllm-project/vllm/pull/27205",
  "severity": [
    {
      "type": "CVSS_V3",
      "score": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H"
    }
  ],
  "affected": [
    {
      "package": {
        "ecosystem": "PyPI",
        "name": "vllm"
      },
      "ranges": [
        {
          "type": "ECOSYSTEM",
          "events": [
            {
              "introduced": "0.5.5"
            },
            {
              "fixed": "0.11.1"
            }
          ]
        }
      ]
    }
  ],
  "references": [
    {
      "type": "WEB",
      "url": "https://github.com/vllm-project/vllm/security/advisories/GHSA-69j4-grxj-j64p"
    },
    {
      "type": "WEB",
      "url": "https://github.com/vllm-project/vllm/pull/27205"
    },
    {
      "type": "WEB",
      "url": "https://github.com/vllm-project/vllm/commit/3ada34f9cb4d1af763fdfa3b481862a93eb6bd2b"
    },
    {
      "type": "PACKAGE",
      "url": "https://github.com/vllm-project/vllm"
    },
    {
      "type": "WEB",
      "url": "https://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/chat_utils.py#L1602-L1610"
    },
    {
      "type": "WEB",
      "url": "https://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/openai/serving_engine.py#L809-L814"
    }
  ],
  "database_specific": {
    "cwe_ids": [
      "CWE-770"
    ],
    "severity": "MODERATE",
    "github_reviewed": true,
    "github_reviewed_at": "2025-11-20T21:26:24Z",
    "nvd_published_at": null
  }
}
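The core hazard the advisory describes — unpacking a caller-controlled dict into `**kwargs` so that a request can override an internal keyword default — can be sketched in a few lines of standalone Python. This is an illustrative model only: `apply_template` and `preprocess_*` are hypothetical stand-ins, not vLLM's actual `apply_hf_chat_template` or `_preprocess_chat` signatures; only the `tokenize` default and the `chat_template_kwargs` name mirror the advisory.

```python
# Minimal sketch of the kwargs-override hazard, assuming a simplified
# template function; names other than `tokenize` and
# `chat_template_kwargs` are illustrative, not vLLM's real API.

def apply_template(messages, *, tokenize=False, **template_kwargs):
    """Render a chat template; `tokenize` is intended for internal use."""
    rendered = " ".join(m["content"] for m in messages)
    if tokenize:
        # Stand-in for the expensive, blocking tokenization step.
        return rendered.split()
    return rendered

# Vulnerable pattern: the request-supplied dict is unpacked with no
# validation, so `{"tokenize": True}` flips the internal default.
def preprocess_vulnerable(messages, chat_template_kwargs):
    return apply_template(messages, **(chat_template_kwargs or {}))

# Safer pattern, in the spirit of the advisory's suggested fixes:
# drop keys that collide with the callee's own parameters before
# unpacking, so callers cannot override them.
RESERVED_KEYS = {"tokenize"}

def preprocess_fixed(messages, chat_template_kwargs):
    safe = {k: v for k, v in (chat_template_kwargs or {}).items()
            if k not in RESERVED_KEYS}
    return apply_template(messages, **safe)

msgs = [{"role": "user", "content": "hello world"}]
print(preprocess_vulnerable(msgs, {"tokenize": True}))  # ['hello', 'world']
print(preprocess_fixed(msgs, {"tokenize": True}))       # hello world
```

In the real server the override matters because the tokenization path is synchronous: with a sufficiently large input it occupies the event loop and stalls every other request, which is why filtering (or never unpacking) caller-controlled kwargs closes the DoS.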
