Conversation

@jameslamb
Member

@jameslamb jameslamb commented Dec 1, 2025

Description

Contributes to rapidsai/build-planning#208

Reverts #19903 and #19917

Now that there are xgboost conda packages with CUDA 13 support (rapidsai/xgboost-feedstock#100), this switches the third-party-integration-tests-cudf-pandas job to CUDA 13.

This also switches to the more generic CI image tags (rapidsai/26.02-latest), so that the configuration doesn't need to be updated as RAPIDS adds support for new CUDA minor versions.
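To illustrate the difference between pinned and generic tags, here is a minimal sketch; the image name and the pinned tag below are assumptions for illustration, not values taken from this PR's diff:

```shell
# Illustrative only: the image name and the pinned tag are assumptions,
# not taken from this PR's diff.
#
# A version-pinned tag must be edited every time RAPIDS adds a new CUDA
# minor version:
PINNED_IMAGE="rapidsai/ci-conda:cuda12.9.1-ubuntu24.04-py3.12"
# A generic release tag floats to the newest CUDA that the 26.02 release
# line supports, so the workflow configuration never needs a CUDA bump:
GENERIC_IMAGE="rapidsai/ci-conda:26.02-latest"
echo "${GENERIC_IMAGE}"
```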

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.

@jameslamb jameslamb added the improvement (Improvement / enhancement to an existing function) and non-breaking (Non-breaking change) labels Dec 1, 2025
@copy-pr-bot

copy-pr-bot bot commented Dec 1, 2025

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@jameslamb
Member Author

/ok to test

@jameslamb
Member Author

I think this is working... the third-party-integration-tests-cudf-pandas job is passing: https://github.com/rapidsai/cudf/actions/runs/19827202930/job/56806558906?pr=20748

Other failures look like network or Anaconda infrastructure issues, which would hopefully be resolved with a re-run:

critical libmamba Multiple errors occurred:
Download error (28) Timeout was reached [https://conda.anaconda.org/conda-forge/noarch/repodata.json.zst]
Operation too slow. Less than 30 bytes/sec transferred the last 60 seconds
Subdir conda-forge/noarch not loaded!
If you run into this error repeatedly, your package cache may be corrupted.
Please try running mamba clean -a to remove this cache before retrying the operation.
If you still are having issues, please report the error on mamba-org/mamba's issue tracker:
https://github.com/mamba-org/mamba/issues/new?assignees=&labels=&projects=&template=bug.yml
[rapids-conda-retry] conda returned exit code: 1
[rapids-conda-retry] Exiting, reached max retries...

(conda-python-other-tests link)
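As an aside, the `[rapids-conda-retry]` lines above come from a retry wrapper around conda. The general pattern can be sketched as follows; this is a hedged illustration of retry-with-backoff, not the actual `rapids-conda-retry` implementation:

```shell
# Illustrative sketch of retry-with-backoff (NOT the real rapids-conda-retry).
# Runs the given command up to $max times, doubling the sleep between attempts.
retry() {
  max=3
  delay=1
  attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "retry: exiting, reached max retries..." >&2
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
}
```

A transient repodata timeout like the one above would succeed on a later attempt with a wrapper like this, while a persistent failure still exits non-zero so CI reports it.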

@jameslamb jameslamb changed the title from "WIP: use CUDA 13 for third-party integration tests" to "use CUDA 13 for third-party integration tests" Dec 1, 2025
@jameslamb jameslamb marked this pull request as ready for review December 1, 2025 16:11
@jameslamb jameslamb requested a review from a team as a code owner December 1, 2025 16:11
@jameslamb jameslamb requested a review from AyodeAwe December 1, 2025 16:11
@vyasr
Contributor

vyasr commented Dec 2, 2025

I've had #19948 open for a while on this, but it was stalled waiting on PyTorch. In the interim (last week) we merged #20729, which means we now accept installing CPU PyTorch and simply skip those tests. @Matt711, you pretty much said you're fine with this outcome, right? Like @bdice said, there's some risk of us just not noticing that happening, but I agree it's a net benefit overall.

Contributor

@bdice bdice left a comment


@jameslamb This should be good to merge once the temporary channel changes are removed. Thanks for testing that!

@jameslamb
Member Author

Ah, thanks @vyasr, sorry I missed that already-open PR.

I've removed the other testing bits from this; it should be ready now.

@jameslamb jameslamb requested a review from bdice December 2, 2025 16:00
@jameslamb
Member Author

/merge

@rapids-bot rapids-bot bot merged commit 2b1da04 into rapidsai:main Dec 2, 2025
139 checks passed
@jameslamb jameslamb deleted the cuda13-xgboost branch December 2, 2025 22:27
