1 parent 82d24f7 commit 0c0c201
docs/source/serving/openai_compatible_server.md
@@ -112,7 +112,13 @@ completion = client.chat.completions.create(

## Extra HTTP Headers

-Only `X-Request-Id` HTTP request header is supported for now.
+Only `X-Request-Id` HTTP request header is supported for now. It can be enabled
+with `--enable-request-id-headers`.
+
+> Note that enablement of the headers can impact performance significantly at high QPS
+> rates. We recommend implementing HTTP headers at the router level (e.g. via Istio),
+> rather than within the vLLM layer for this reason.
+> See https://github.com/vllm-project/vllm/pull/11529 for more details.

```python
completion = client.chat.completions.create(
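
For context, the snippet below sketches how a client might attach the `X-Request-Id` header once the server is started with `--enable-request-id-headers`. It is a minimal sketch, not the documented example from the diff: the base URL, API key, model name, and request-id value are illustrative assumptions, and `extra_headers` is the standard openai-python option for per-request headers.

```python
from openai import OpenAI

# Sketch only: assumes a vLLM OpenAI-compatible server launched with
# --enable-request-id-headers, e.g.
#   vllm serve NousResearch/Meta-Llama-3-8B-Instruct --enable-request-id-headers
client = OpenAI(
    base_url="http://localhost:8000/v1",  # illustrative local endpoint
    api_key="EMPTY",                      # placeholder; vLLM does not require a real key by default
)

completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",  # illustrative model name
    messages=[{"role": "user", "content": "Hello!"}],
    # extra_headers is the openai-python mechanism for sending custom HTTP headers
    # with a single request; the value here is an arbitrary example id.
    extra_headers={"X-Request-Id": "example-request-0001"},
)
print(completion.id)
```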