GPU CI #334
Conversation
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@ Coverage Diff @@
##           main     #334   +/-   ##
=====================================
  Coverage      ?   64.09%
=====================================
  Files         ?       79
  Lines         ?     7760
  Branches      ?        0
=====================================
  Hits          ?     4974
  Misses        ?     2786
  Partials      ?        0
=====================================
```
hey
```yaml
- name: Download and install vLLM and its dependencies
  # TODO: this honestly could not be hackier if I tried
  run: |
    python -m pip install -r .github/packaging/vllm_reqs.txt
```
this seems stable
OK, at least in the run https://github.com/meta-pytorch/forge/actions/runs/18421419301?pr=334 all unit tests but 3 pass. Two look like pretty normal failures; the only reasonably suspicious one is the linker-related error.
```yaml
  # TODO: this honestly could not be hackier if I tried
  run: |
    python -m pip install -r .github/packaging/vllm_reqs.txt
    python -m pip install vllm==0.10.1.dev0+g6d8d0a24c.d20251009.cu129 --no-cache-dir --index-url https://download.pytorch.org/whl/preview/forge
```
If we were to put these into folders by day and have a latest, then could we just do something like this?

```
python -m pip install vllm --no-cache-dir --index-url https://download.pytorch.org/whl/preview/forge/latest
```
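Or, spelled out as a full workflow step, a sketch (the dated folders and the rolling `latest` alias are hypothetical; nothing like this is published yet):

```yaml
# Hypothetical layout: .../whl/preview/forge/20251009/, .../20251010/, etc.,
# with .../forge/latest as a rolling alias for the newest dated folder.
- name: Download and install vLLM and its dependencies
  run: |
    python -m pip install vllm --no-cache-dir --index-url https://download.pytorch.org/whl/preview/forge/latest
```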
Ideally yes, but we need to check on the ability to add new subdirectories from devinfra's point of view. cc @atalman on the best approach here.
```yaml
  # TODO: this honestly could not be hackier if I tried
  run: |
    python -m pip install -r .github/packaging/vllm_reqs.txt
    python -m pip install vllm==0.10.1.dev0+g6d8d0a24c.d20251009.cu129 --no-cache-dir --index-url https://download.pytorch.org/whl/preview/forge
```
Could these dependencies go in the pyproject.toml?
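For instance, a sketch with a hypothetical `vllm` extra (installs would still need `--index-url` pointed at the preview index):

```toml
# Hypothetical optional-dependency group in pyproject.toml; pip would still
# need --index-url https://download.pytorch.org/whl/preview/forge to resolve it.
[project.optional-dependencies]
vllm = [
  "vllm==0.10.1.dev0+g6d8d0a24c.d20251009.cu129",
]
```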
Leaving cleanup as a follow-up; the most important thing here is to get this working, IMO. After that I will open some other smaller refactor PRs (e.g. @allenwang28 was talking about consolidating with the install script as well).
This PR is the culmination of a long and sad story, but we can now run forge tests on GPU runners.