[Bugfix] Concurrent requests to model are currently limited to 100 due to aiohttp default #767
Conversation
Code Review
This pull request correctly addresses an issue where the aiohttp.ClientSession was limited to 100 concurrent connections by default, contrary to the intended behavior of having unlimited connections. The approach of using aiohttp.TCPConnector with limit=0 is correct for this purpose. However, my review identified a potential resource leak. When an external connector is passed to ClientSession, it doesn't take ownership by default, meaning it won't be closed automatically. I've provided a specific comment with a code suggestion to fix this by setting connector_owner=True. With this change, the implementation will be robust.
Could you fix the pre-commit issue?
@zerofishnoodles Yes, sorry, should be fixed now.
zerofishnoodles left a comment:
LGTM
Currently vllm-router limits concurrent requests to 100 when using the AiohttpClientWrapper in aiohttp_client.py. In my test scenario the deployed router will not proxy more than 100 concurrent LLM requests to the model.

This happens because aiohttp.ClientSession() creates an aiohttp.TCPConnector whose default limit is 100 concurrent connections (see the aiohttp docs for TCPConnector). As the existing comment in the code says, the intent is to allow unlimited connections, but the implementation itself is missing.

I propose creating an aiohttp.TCPConnector with limit=0, which means unlimited, and passing it to the created aiohttp.ClientSession().

FIX #765
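A minimal sketch of the proposed change, assuming only that the wrapper ultimately constructs an aiohttp.ClientSession (the function name below is illustrative, not taken from the actual code):

```python
import aiohttp


async def create_unlimited_session() -> aiohttp.ClientSession:
    # aiohttp.ClientSession's implicit TCPConnector defaults to limit=100;
    # limit=0 means "no limit" per the aiohttp TCPConnector docs.
    connector = aiohttp.TCPConnector(limit=0)
    # connector_owner=True tells the session to close this connector when
    # the session itself is closed, so the connector is not leaked.
    return aiohttp.ClientSession(connector=connector, connector_owner=True)
```

With this, session.connector.limit is 0, and awaiting session.close() also closes the connector.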
Sign off your commits by using -s when doing git commit, and classify the PR with a prefix such as [Bugfix], [Feat], and [CI].

Detailed Checklist
Thank you for your contribution to production-stack! Before submitting the pull request, please ensure the PR meets the following criteria. This helps us maintain the code quality and improve the efficiency of the review process.
PR Title and Classification
Please try to classify PRs for easy understanding of the type of changes. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:
- [Bugfix] for bug fixes.
- [CI/Build] for build or continuous integration improvements.
- [Doc] for documentation fixes and improvements.
- [Feat] for new features in the cluster (e.g., autoscaling, disaggregated prefill, etc.).
- [Router] for changes to the vllm_router (e.g., routing algorithm, router observability, etc.).
- [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.
Code Quality
The PR needs to meet the following code quality standards:

- Use pre-commit to format your code. See README.md for installation.

DCO and Signed-off-by
When contributing changes to this project, you must agree to the DCO. Commits must include a Signed-off-by: header which certifies agreement with the terms of the DCO. Using -s with git commit will automatically add this header.

What to Expect for the Reviews
We aim to address all PRs in a timely manner. If no one reviews your PR within 5 days, please @-mention one of YuhanLiu11, Shaoting-Feng, or ApostaC.