Closed
8 changes: 4 additions & 4 deletions .github/scripts/build-cuda.sh
@@ -12,19 +12,19 @@ elif [ "${build_arch}" = "aarch64" ]; then
build_capability="75;80;90"

# CUDA 12.8-12.9: Add sm100/sm120
-[[ "${cuda_version}" == 12.8.* || "${cuda_version}" == 12.9.* ]] && build_capability="75;80;90;100;120"
+[[ "${cuda_version}" == 12.8.* || "${cuda_version}" == 12.9.* ]] && build_capability="75;80;87;90;100;120"

# CUDA 13.0+: Add sm100/sm110/sm120
-[[ "${cuda_version}" == 13.*.* ]] && build_capability="75;80;90;100;110;120"
+[[ "${cuda_version}" == 13.*.* ]] && build_capability="75;80;87;90;100;103;110;120;121"
Member:

I would have expected building for sm80, sm100, and sm120 to cover this, since we don't use any features specific to sm87/sm103/sm121 yet. Is it not working today? Can you clarify the benefit of adding these targets? I assume it's just some performance optimizations?

Contributor Author:

I didn’t know that. That’s fine. Closing.

else
# By default, target Pascal through Hopper.
build_capability="60;70;75;80;86;89;90"

# CUDA 12.8+: Add sm100 and sm120; remove < sm70 to align with PyTorch 2.8+cu128 minimum
-[[ "${cuda_version}" == 12.8.* || "${cuda_version}" == 12.9.* ]] && build_capability="70;75;80;86;89;90;100;120"
+[[ "${cuda_version}" == 12.8.* || "${cuda_version}" == 12.9.* ]] && build_capability="70;75;80;87;86;89;90;100;120;121"
Member:

My understanding is that Orin (sm87) and Spark (GB10, sm121) are only available on aarch64 platforms, so we shouldn't need to do this for x86-64.


# CUDA 13.0+: Remove < sm75 to align with PyTorch 2.9+cu130 minimum
-[[ "${cuda_version}" == 13.*.* ]] && build_capability="75;80;86;89;90;100;120"
+[[ "${cuda_version}" == 13.*.* ]] && build_capability="75;80;87;90;100;103;110;120;121"
Member:

Same comment on sm87, sm110, sm121 being exclusive to aarch64.

I think we should keep sm86/sm89, and maybe consider adding sm103.

Suggested change:
-[[ "${cuda_version}" == 13.*.* ]] && build_capability="75;80;87;90;100;103;110;120;121"
+[[ "${cuda_version}" == 13.*.* ]] && build_capability="75;80;86;89;90;100;103;120"

Contributor Author:

Right

fi

[[ "${build_os}" = windows-* ]] && python3 -m pip install ninja
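For context, the version gating in this script relies on bash `[[ ... == pattern ]]` glob matching against `${cuda_version}`; the last matching branch wins because each assignment overwrites `build_capability`. A minimal standalone sketch of the aarch64 branch as changed in this PR (the `select_capabilities` helper is hypothetical, not part of the actual script):

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring the aarch64 branch of build-cuda.sh:
# choose CUDA target compute capabilities from the toolkit version string.
select_capabilities() {
    local cuda_version="$1"
    local build_capability="75;80;90"  # default targets

    # CUDA 12.8-12.9: add sm87 (Orin) and Blackwell (sm100/sm120)
    [[ "${cuda_version}" == 12.8.* || "${cuda_version}" == 12.9.* ]] \
        && build_capability="75;80;87;90;100;120"

    # CUDA 13.x: additionally add sm103, sm110, and sm121 (GB10/Spark)
    [[ "${cuda_version}" == 13.*.* ]] \
        && build_capability="75;80;87;90;100;103;110;120;121"

    echo "${build_capability}"
}

select_capabilities "12.9.1"   # -> 75;80;87;90;100;120
select_capabilities "13.0.0"   # -> 75;80;87;90;100;103;110;120;121
select_capabilities "12.4.1"   # -> 75;80;90 (default, no pattern matched)
```

Note that `12.8.*` matches any patch release of 12.8 (e.g. `12.8.1`) but not `12.80.x`, since `.` is literal in the glob and `*` only needs to cover the remainder of the string.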