Conversation
Checks are failing. Will not request review until checks are succeeding.
Force-pushed from 4019df3 to c43fcb7.
Fixes: #30513
Successful run: https://github.com/apache/beam/actions/runs/23814494318
Fixes Dataflow postcommit vllmTests failures caused by vLLM exiting during engine startup on NVIDIA T4 workers. The failure was a CUDA OOM during vLLM V1 engine initialization. The example now passes memory-aware vLLM server flags via the existing vllm_server_kwargs pattern.

Investigating, the Gradle task :sdks:python:test-suites:dataflow:py312:vllmTests failed and the Dataflow job logs showed:
Exception: Failed to start vLLM server, polling process exited with code 1.
Starting service with ['/opt/apache/beam-venv/beam-venv-worker-sdk-0-0/bin/python' '-m'
'vllm.entrypoints.openai.api_server' '--model' 'facebook/opt-125m' '--port' '…']
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 100.00 MiB.
GPU 0 has a total capacity of 14.58 GiB of which 33.56 MiB is free.
… 13.62 GiB is allocated by PyTorch …
vLLM then raised:
RuntimeError: CUDA out of memory occurred when warming up sampler with 256 dummy requests.
Please try lowering max_num_seqs or gpu_memory_utilization when initializing the engine.

With vLLM's default gpu_memory_utilization of 0.9, the engine budgets roughly 0.9 × 14.58 GiB ≈ 13.1 GiB of the T4's memory for weights and KV cache, leaving almost no headroom for the sampler warm-up's extra allocations.
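For concreteness, here is a minimal sketch of what the corrected invocation looks like, following the command quoted in the failing log. The values 64 and 0.75 are illustrative picks for a ~16 GiB GPU, not necessarily the PR's exact defaults:

```python
# Sketch of the memory-aware server command; --max-num-seqs and
# --gpu-memory-utilization are standard vLLM OpenAI server flags.
import sys

server_cmd = [
    sys.executable, '-m', 'vllm.entrypoints.openai.api_server',
    '--model', 'facebook/opt-125m',
    # Cap concurrent sequences; the default of 256 matches the 256 dummy
    # warm-up requests in the error above.
    '--max-num-seqs', '64',
    # Fraction of total GPU memory vLLM may claim (default 0.9).
    '--gpu-memory-utilization', '0.75',
]
```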
So this PR:

- Uses vllm_server_kwargs (the same pattern as other vLLM examples, e.g. vllm_gemma_batch.py) to pass --max-num-seqs and --gpu-memory-utilization with conservative defaults suited to ~16 GiB GPUs.
- Adds --vllm_max_num_seqs and --vllm_gpu_memory_utilization pipeline options so users with larger GPUs can override the defaults; a sketch of the wiring follows this list.
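For reviewers unfamiliar with the pattern, below is a minimal sketch of the wiring, not the PR's actual diff. It assumes Beam's VLLMCompletionsModelHandler from apache_beam.ml.inference.vllm_inference and that each vllm_server_kwargs entry is forwarded to the server as a --key value flag; the option defaults shown are illustrative:

```python
# Sketch: plumb the new pipeline options into vllm_server_kwargs.
# Assumption: the handler forwards each dict entry as '--<key> <value>' when
# it launches vllm.entrypoints.openai.api_server on the worker.
import argparse

import apache_beam as beam
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.vllm_inference import VLLMCompletionsModelHandler


def run(argv=None):
  parser = argparse.ArgumentParser()
  parser.add_argument('--model', default='facebook/opt-125m')
  # New knobs from this PR; the defaults here stand in for the PR's
  # conservative ~16 GiB GPU values, and larger GPUs can raise both.
  parser.add_argument('--vllm_max_num_seqs', default='64')
  parser.add_argument('--vllm_gpu_memory_utilization', default='0.75')
  known_args, pipeline_args = parser.parse_known_args(argv)

  handler = VLLMCompletionsModelHandler(
      model_name=known_args.model,
      vllm_server_kwargs={
          'max-num-seqs': known_args.vllm_max_num_seqs,
          'gpu-memory-utilization': known_args.vllm_gpu_memory_utilization,
      })

  with beam.Pipeline(argv=pipeline_args) as pipeline:
    _ = (
        pipeline
        | beam.Create(['San Francisco is a'])
        | RunInference(handler)
        | beam.Map(print))


if __name__ == '__main__':
  run()
```

Keeping the values as pipeline options rather than hard-coding them means the T4-safe defaults do not penalize users on larger GPUs, who can simply pass higher values on the command line.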
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
- Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
- Update CHANGES.md with noteworthy changes.

See the Contributor Guide for more tips on how to make the review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.