build cuda 13.0.0 base image with 2.9.0 #3229
base: main
Conversation
Important: Review skipped. Auto incremental reviews are disabled on this repository. Please check the settings in the CodeRabbit UI.

📝 Walkthrough

This PR adds a new CUDA 13.0.0 matrix entry to the GitHub Actions workflow build matrices. The entry includes Python 3.11, PyTorch 2.9.0, and CUDA architecture support for compute capability 9.0+PTX, applied to both the base and UV-base build configurations.

Changes
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes

Possibly related PRs
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
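For orientation, here is a rough sketch of the kind of include-style build matrix this entry extends. The surrounding jobs/strategy structure is an assumption for illustration; the entry values are taken from the diff hunk and the check names further down this review, not from the full contents of .github/workflows/base.yml.

```yaml
# Hypothetical shape of the build matrix (structure assumed; values drawn
# from the diff hunk and the build-base check names in this review).
jobs:
  build-base:
    strategy:
      matrix:
        include:
          # existing CUDA 12.8.1 entry with the broad arch list
          - cuda: "128"
            cuda_version: 12.8.1
            python_version: "3.11"
            pytorch: 2.9.0
            torch_cuda_arch_list: "7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX"
            dockerfile: "Dockerfile-base"
          # new CUDA 13.0.0 entry added by this PR, restricted to 9.0+PTX
          - cuda: "130"
            cuda_version: 13.0.0
            cudnn_version: ""
            python_version: "3.11"
            pytorch: 2.9.0
            torch_cuda_arch_list: "9.0+PTX"
            dockerfile: "Dockerfile-base"
```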
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/base.yml (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (14)
- GitHub Check: build-base-uv (128, 12.8.1, 3.11, 2.8.0, 7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX, Dockerfile-uv-base)
- GitHub Check: build-base-uv (126, 12.6.3, 3.11, 2.7.1, 7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX, Dockerfile-uv-base)
- GitHub Check: build-base-uv (130, 13.0.0, 3.11, 2.9.0, 9.0+PTX, Dockerfile-uv-base)
- GitHub Check: build-base (128, 12.8.1, 3.11, 2.9.0, 7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX, Dockerfile-base)
- GitHub Check: build-base-uv (128, 12.8.1, 3.11, 2.9.0, 7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX, Dockerfile-uv-base)
- GitHub Check: build-base-uv (128, 12.8.1, 3.11, 2.7.1, 7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX, Dockerfile-uv-base)
- GitHub Check: build-base (126, 12.6.3, 3.11, 2.7.0, 7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX, Dockerfile-base)
- GitHub Check: build-base (128, 12.8.1, 3.11, 2.8.0, 7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX, Dockerfile-base)
- GitHub Check: PyTest from Source Dist (3.11, 2.8.0)
- GitHub Check: build-base (126, 12.6.3, 3.11, 2.7.1, 7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX, Dockerfile-base)
- GitHub Check: PyTest from Source Dist (3.11, 2.7.1)
- GitHub Check: build-base (130, 13.0.0, 3.11, 2.9.0, 9.0+PTX, Dockerfile-base)
- GitHub Check: PyTest (3.11, 2.7.1)
- GitHub Check: PyTest (3.11, 2.8.0)
- cuda: "130"
  cuda_version: 13.0.0
  cudnn_version: ""
  python_version: "3.11"
  pytorch: 2.9.0
  torch_cuda_arch_list: "9.0+PTX"
  dockerfile: "Dockerfile-base"
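As a sketch of how these matrix values are commonly consumed further down such a job, the step below forwards them to the Docker build as build args. The step name, action pin, and ARG names here are illustrative assumptions, not the actual contents of base.yml.

```yaml
# Hypothetical build step; everything below is an assumed example of how
# the matrix values above could be passed through to the Dockerfile.
- name: Build base image
  uses: docker/build-push-action@v6
  with:
    context: .
    file: ${{ matrix.dockerfile }}
    build-args: |
      CUDA=${{ matrix.cuda }}
      CUDA_VERSION=${{ matrix.cuda_version }}
      CUDNN_VERSION=${{ matrix.cudnn_version }}
      PYTHON_VERSION=${{ matrix.python_version }}
      PYTORCH_VERSION=${{ matrix.pytorch }}
      TORCH_CUDA_ARCH_LIST=${{ matrix.torch_cuda_arch_list }}
```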
🧩 Analysis chain
Verify torch_cuda_arch_list is intentional for CUDA 13.0.0.
The new CUDA 13.0.0 entry restricts torch_cuda_arch_list to "9.0+PTX", while all existing CUDA 12.x entries support a broader range: "7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX". This significantly limits hardware compatibility—users with older GPUs (compute capability 7.0–8.9) won't have optimized binaries.
Confirm whether this reduction in supported architectures is:
- An intentional limitation of CUDA 13.0.0 or PyTorch 2.9.0 with CUDA 13.0.0
- An oversight that should match the broader architecture support of earlier entries
Also applies to: 153-159
🌐 Web query:
CUDA 13.0.0 supported GPU compute capabilities PyTorch 2.9.0
💡 Result:
Short answer:
- CUDA 13.0 supports the GPU compute capabilities (compute_xx / sm_xx) listed in NVIDIA's nvcc docs: compute_75, 80, 86, 87, 88, 89, 90, 90a, 100, 100f, 100a, 103, 103f, 103a, 110, 110f, 110a, 120, 120f, 120a, 121, 121f, 121a (and corresponding sm_75, sm_80, sm_86, sm_87, sm_88, sm_89, sm_90, sm_90a, sm_100, sm_100f, sm_100a, sm_103, sm_103f, sm_103a, sm_110, sm_110f, sm_110a, sm_120, sm_120f, sm_120a, sm_121, sm_121f, sm_121a). [1][2]
- CUDA 13.0 supports NVIDIA architectures from Turing (SM 7.5) through Grace/Blackwell; Maxwell, Pascal, and Volta (older than Turing) are dropped for several libraries. [3]
- PyTorch 2.9.0 expands wheel support to include CUDA 13 (i.e., PyTorch 2.9 provides CUDA-13 build variants). Confirm the PyTorch package variant for your platform/installer before installing. [4]
Sources:
[1] NVIDIA nvcc GPU code generation / List of supported GPU codes (CUDA 13.0).
[2] CUDA Toolkit 13.0 release notes (architectures/support details).
[3] NVIDIA developer blog “What’s New in CUDA 13.0”.
[4] PyTorch 2.9 release blog (notes on CUDA 13 wheel support).
Correct torch_cuda_arch_list for CUDA 13.0.0 to match supported GPU architectures.
CUDA 13.0 supports GPU compute capabilities from Turing (SM 7.5) through Grace/Blackwell. The current entry restricts this to "9.0+PTX" only, which excludes the Turing (7.5), Ampere (8.0–8.7), and Ada Lovelace (8.9) GPUs that CUDA 13.0 officially supports.
Update line 66 (and 156) to:
torch_cuda_arch_list: "7.5 8.0 8.6 8.7 8.8 8.9 9.0+PTX"
Note: Unlike the CUDA 12.x entries, omit "7.0" (Volta) since Maxwell, Pascal, and Volta are dropped in CUDA 13.0.
Also applies to: 153-159
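Concretely, applying the suggestion to the hunk above would look roughly like the sketch below. The Dockerfile-uv-base entry is inferred from the build-base-uv check listed earlier and from the reviewer's note that the same change applies at lines 153-159; treat both as sketches rather than exact file contents.

```yaml
# base image entry with the suggested broader arch list
- cuda: "130"
  cuda_version: 13.0.0
  cudnn_version: ""
  python_version: "3.11"
  pytorch: 2.9.0
  torch_cuda_arch_list: "7.5 8.0 8.6 8.7 8.8 8.9 9.0+PTX"
  dockerfile: "Dockerfile-base"

# uv-base image entry (assumed to mirror the base entry)
- cuda: "130"
  cuda_version: 13.0.0
  cudnn_version: ""
  python_version: "3.11"
  pytorch: 2.9.0
  torch_cuda_arch_list: "7.5 8.0 8.6 8.7 8.8 8.9 9.0+PTX"
  dockerfile: "Dockerfile-uv-base"
```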
🤖 Prompt for AI Agents
In .github/workflows/base.yml around lines 63-69 (and also apply the same change
at lines 153-159), the torch_cuda_arch_list is incorrectly set to "9.0+PTX"
which excludes supported CUDA 13.0 architectures; update the
torch_cuda_arch_list value to "7.5 8.0 8.6 8.7 8.8 8.9 9.0+PTX" in both places
so the Turing, Ampere, and Ada families are included and Volta (7.0) is omitted per CUDA 13.0 support.
Codecov Report

✅ All modified and coverable lines are covered by tests.
Summary by CodeRabbit