
Conversation

@winglian (Collaborator) commented Oct 23, 2025

Summary by CodeRabbit

  • Chores
    • Added CUDA 13.0.0 support to build configurations with Python 3.11 and PyTorch 2.9.0 compatibility.

@coderabbitai coderabbitai bot (Contributor) commented Oct 23, 2025

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.
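
In the repository's .coderabbit.yaml, that setting would look roughly like this (a minimal sketch; only the key nesting implied by the message above is assumed):

```yaml
# Hypothetical .coderabbit.yaml fragment: disable the review-status message
reviews:
  review_status: false
```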

📝 Walkthrough

This PR adds a new CUDA 13.0.0 matrix entry to the GitHub Actions workflow build matrices. The entry includes Python 3.11, PyTorch 2.9.0, and CUDA architecture support for compute capability 9.0+PTX, applied to both the base and UV-base build configurations.

Changes

Cohort / File(s): GitHub Actions Workflow Matrix Configuration
  • .github/workflows/base.yml — Added a CUDA 13.0.0 matrix entry (cuda: "130", pytorch: 2.9.0, python_version: "3.11", torch_cuda_arch_list: "9.0+PTX") to the build-base and build-base-uv matrices.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed — Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title "build cuda 13.0.0 base image with 2.9.0" directly and accurately summarizes the main change: adding a CUDA 13.0.0 matrix entry with PyTorch 2.9.0 to the workflows that build base images. It is specific about the CUDA version, the artifact being built, and the PyTorch version, and is concise, clear, and free of vague or misleading language.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bb33fda and d7f6db4.

📒 Files selected for processing (1)
  • .github/workflows/base.yml (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (14)
  • GitHub Check: build-base-uv (128, 12.8.1, 3.11, 2.8.0, 7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX, Dockerfile-uv-base)
  • GitHub Check: build-base-uv (126, 12.6.3, 3.11, 2.7.1, 7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX, Dockerfile-uv-base)
  • GitHub Check: build-base-uv (130, 13.0.0, 3.11, 2.9.0, 9.0+PTX, Dockerfile-uv-base)
  • GitHub Check: build-base (128, 12.8.1, 3.11, 2.9.0, 7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX, Dockerfile-base)
  • GitHub Check: build-base-uv (128, 12.8.1, 3.11, 2.9.0, 7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX, Dockerfile-uv-base)
  • GitHub Check: build-base-uv (128, 12.8.1, 3.11, 2.7.1, 7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX, Dockerfile-uv-base)
  • GitHub Check: build-base (126, 12.6.3, 3.11, 2.7.0, 7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX, Dockerfile-base)
  • GitHub Check: build-base (128, 12.8.1, 3.11, 2.8.0, 7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX, Dockerfile-base)
  • GitHub Check: PyTest from Source Dist (3.11, 2.8.0)
  • GitHub Check: build-base (126, 12.6.3, 3.11, 2.7.1, 7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX, Dockerfile-base)
  • GitHub Check: PyTest from Source Dist (3.11, 2.7.1)
  • GitHub Check: build-base (130, 13.0.0, 3.11, 2.9.0, 9.0+PTX, Dockerfile-base)
  • GitHub Check: PyTest (3.11, 2.7.1)
  • GitHub Check: PyTest (3.11, 2.8.0)

Comment on lines +63 to +69
- cuda: "130"
  cuda_version: 13.0.0
  cudnn_version: ""
  python_version: "3.11"
  pytorch: 2.9.0
  torch_cuda_arch_list: "9.0+PTX"
  dockerfile: "Dockerfile-base"

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify torch_cuda_arch_list is intentional for CUDA 13.0.0.

The new CUDA 13.0.0 entry restricts torch_cuda_arch_list to "9.0+PTX", while all existing CUDA 12.x entries support a broader range: "7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX". This significantly limits hardware compatibility—users with older GPUs (compute capability 7.0–8.9) won't have optimized binaries.

Confirm whether this reduction in supported architectures is:

  • An intentional limitation of CUDA 13.0.0 or PyTorch 2.9.0 with CUDA 13.0.0
  • An oversight that should match the broader architecture support of earlier entries

Also applies to: 153-159


🌐 Web query:

CUDA 13.0.0 supported GPU compute capabilities PyTorch 2.9.0

💡 Result:

Short answer:

  • CUDA 13.0 supports GPU compute capabilities (compute_xx / sm_xx) listed in NVIDIA's nvcc docs: compute_75, 80, 86, 87, 88, 89, 90, 90a, 100, 100f, 100a, 103, 103f, 103a, 110, 110f, 110a, 120, 120f, 120a, 121, 121f, 121a (and corresponding sm_75, sm_80, sm_86, sm_87, sm_88, sm_89, sm_90, sm_90a, sm_100, sm_100f, sm_100a, sm_103, sm_103f, sm_103a, sm_110, sm_110f, sm_110a, sm_120, sm_120f, sm_120a, sm_121, sm_121f, sm_121a). [1][2]

  • CUDA 13.0 supports NVIDIA architectures from Turing (SM 7.5) through Grace/Blackwell; Maxwell/Pascal/Volta (older than Turing) are dropped for several libraries. [3]

  • PyTorch 2.9.0 expands wheel support to include CUDA 13 (i.e., PyTorch 2.9 provides CUDA‑13 build variants). Confirm PyTorch package variant for your platform/installer before installing. [4]

Sources:
[1] NVIDIA nvcc GPU code generation / List of supported GPU codes (CUDA 13.0).
[2] CUDA Toolkit 13.0 release notes (architectures/support details).
[3] NVIDIA developer blog “What’s New in CUDA 13.0”.
[4] PyTorch 2.9 release blog (notes on CUDA 13 wheel support).


Correct torch_cuda_arch_list for CUDA 13.0.0 to match supported GPU architectures.

CUDA 13.0 supports GPU compute capabilities from Turing (SM 7.5) through Grace/Blackwell. The current entry restricts this to "9.0+PTX" only, which excludes Turing (7.5) and Ampere (8.0–8.9) GPUs that CUDA 13.0 officially supports.

Update line 66 (and 156) to:

torch_cuda_arch_list: "7.5 8.0 8.6 8.7 8.8 8.9 9.0+PTX"

Note: Unlike the CUDA 12.x entries, omit "7.0" (Volta) since Maxwell/Pascal/Volta support is dropped in CUDA 13.0.

Also applies to: 153-159
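
Applied to the workflow, the suggested fix would make the CUDA 13.0.0 entry read roughly as follows (a sketch; the unchanged keys are reproduced from the hunk quoted earlier in this review):

```yaml
- cuda: "130"
  cuda_version: 13.0.0
  cudnn_version: ""
  python_version: "3.11"
  pytorch: 2.9.0
  torch_cuda_arch_list: "7.5 8.0 8.6 8.7 8.8 8.9 9.0+PTX"
  dockerfile: "Dockerfile-base"
```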

🤖 Prompt for AI Agents
In .github/workflows/base.yml around lines 63-69 (and also apply the same change
at lines 153-159), the torch_cuda_arch_list is incorrectly set to "9.0+PTX"
which excludes supported CUDA 13.0 architectures; update the
torch_cuda_arch_list value to "7.5 8.0 8.6 8.7 8.8 8.9 9.0+PTX" in both places
so Turing and Ampere families are included and Volta (7.0) is omitted per CUDA
13.0 support.
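
As a quick sanity check, the arch-list constraint can be expressed in a few lines of Python (a sketch; the 7.5 minimum and the parsing rules are assumptions based on the CUDA 13.0 notes above):

```python
# Sketch: validate a TORCH_CUDA_ARCH_LIST value against the compute
# capabilities CUDA 13.0 still supports (>= 7.5, i.e. Turing and newer).
# The threshold and the parsing rules here are assumptions for illustration.

CUDA13_MIN_CC = 7.5


def parse_arch_list(arch_list: str) -> list[float]:
    """Parse a space-separated TORCH_CUDA_ARCH_LIST into numeric capabilities."""
    archs = []
    for token in arch_list.split():
        # Strip the "+PTX" suffix used to request forward-compatible PTX.
        archs.append(float(token.removesuffix("+PTX")))
    return archs


def unsupported_archs(arch_list: str, min_cc: float = CUDA13_MIN_CC) -> list[float]:
    """Return entries below the minimum compute capability for CUDA 13.0."""
    return [cc for cc in parse_arch_list(arch_list) if cc < min_cc]


# The CUDA 12.x list includes 7.0 (Volta), which CUDA 13.0 drops:
print(unsupported_archs("7.0 7.5 8.0 8.6 8.7 8.9 9.0+PTX"))  # [7.0]
# The suggested CUDA 13.0 list passes:
print(unsupported_archs("7.5 8.0 8.6 8.7 8.8 8.9 9.0+PTX"))  # []
```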

@codecov codecov bot commented Oct 23, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.

📢 Thoughts on this report? Let us know!

