How do we test fixes? #19

@TomAugspurger

Description

Suppose there's some upstream change that requires an update to the downstream libraries to fix (we'll pick dask-cudf for the example). How do we ensure that the fix actually fixes the issue (and doesn't introduce any new ones)?

The PR to rapidsai/cudf will be tested in an environment with a released version of Dask plus the PR's changes, which ensures we don't regress against the release but doesn't exercise dask main. So that's good, but it doesn't actually verify the fix.

I'd lightly suggest a workflow like:

  1. We identify an issue through nightly testing.
  2. We fix the issue in the downstream library.
  • The developer working on the fix can manually / temporarily install dask @ main to verify it works as they develop the fix.
  • The developer makes a PR to the downstream library with the fix. This triggers a wheel build / upload to http://downloads.rapids.ai/ci/ (behind the NVIDIA VPN).
  3. The developer triggers a workflow here, indicating which downstream PR's artifacts to test against.

That manual workflow run will use an environment with Dask main and the PR's commit.
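That environment setup amounts to two pip invocations. A minimal sketch (the wheel path is a placeholder, and the real workflow will have its own install logic):

```python
import sys


def manual_run_install_cmds(downstream_wheel: str) -> list[list[str]]:
    """Pip commands for the manual run's environment: Dask at main,
    plus the downstream PR's freshly built wheel (a path or URL)."""
    return [
        # Dask from the main branch rather than a release
        [sys.executable, "-m", "pip", "install",
         "git+https://github.com/dask/dask@main"],
        # the downstream library built from the PR's commit
        [sys.executable, "-m", "pip", "install", downstream_wheel],
    ]
```

Each inner list can be handed to `subprocess.check_call`; nothing runs on import.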

This will require some adjustments to the workflow here. cron.yaml includes a workflow_dispatch trigger so it can run manually. We'll need to:

  • add an input to control which version of each downstream library to install (one input per library?). This will default to nightly, the current behavior.
  • update our install logic to be able to download and install artifacts from downloads.rapids.ai. That's not a package index, and the artifacts are gzipped wheels.
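The second bullet could look roughly like this: a sketch that assumes the artifacts are plain `.whl.gz` files reachable at some URL (the exact layout on downloads.rapids.ai, and VPN access, are outside this snippet):

```python
import gzip
import shutil
import subprocess
import sys
import tempfile
import urllib.request
from pathlib import Path


def gunzip_wheel(gz_path: Path) -> Path:
    """Decompress foo.whl.gz into foo.whl alongside it; return the .whl path."""
    whl_path = gz_path.with_suffix("")  # strip the trailing .gz
    with gzip.open(gz_path, "rb") as src, open(whl_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return whl_path


def install_ci_artifact(url: str) -> None:
    """Download a gzipped wheel from CI, decompress it, and pip install it."""
    with tempfile.TemporaryDirectory() as tmp:
        gz_path = Path(tmp) / url.rsplit("/", 1)[-1]
        with urllib.request.urlopen(url) as resp, open(gz_path, "wb") as out:
            shutil.copyfileobj(resp, out)
        whl_path = gunzip_wheel(gz_path)
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", str(whl_path)]
        )
```

pip can't consume the `.whl.gz` directly, which is why the decompress step has to sit between the download and the install.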
