feat: add support for X-Request-Id header in responses and logs #269
Conversation
mayabar
left a comment
Thanks for the PR, please see some comments
Force-pushed from e0e36e2 to dc941e7.
@mayabar Thanks for the review! The issues you raised have been addressed.
Force-pushed from dc941e7 to 3823ef9.
mayabar
left a comment
/lgtm
@rudeigerc Please fix the lint problem
Force-pushed from 3823ef9 to 20cc725.
Signed-off-by: rudeigerc <[email protected]>
Force-pushed from 20cc725 to fbaa66a.
/lgtm
/approve
/lgtm
This PR adds support for the `X-Request-Id` header, enabling request tracing across the LLM inference simulation server and aligning with the extra HTTP headers feature of vLLM's OpenAI-compatible server. The update also provides the `--enable-request-id-headers` flag that vLLM offers.

Related vLLM PRs:

- add `request_id` middleware vllm-project/vllm#9594

Currently, only `X-Request-Id` is supported since `ctx.Request.Header.Peek` from fasthttp is case-sensitive.