
GH-47128: [Python] Numba-CUDA interop with NVIDIA bindings #47150

Merged

kou merged 2 commits into apache:main from gmarkall:cuda-bindings-fix on Aug 15, 2025

Conversation

@gmarkall (Contributor) commented Jul 21, 2025

Rationale for this change

Testing with Numba-CUDA, which uses the NVIDIA CUDA Python bindings by default, identified that PyArrow's Numba interop is incompatible with Numba / Numba-CUDA when the NVIDIA bindings are in use. See issue #47128.

What changes are included in this PR?

The fix is to get device pointer values from their device_pointer_value property, which is consistent across the ctypes and NVIDIA bindings in Numba.
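As an illustrative sketch of the difference (using a hypothetical stand-in class, not the real Numba or cuda-bindings types), the two kinds of pointer objects behave like this:

```python
import ctypes

# Hypothetical stand-in for cuda.bindings.driver.CUdeviceptr: an
# int-convertible handle that has no `.value` attribute.
class FakeCUdeviceptr:
    def __init__(self, addr):
        self._addr = addr

    def __int__(self):
        return self._addr

ADDR = 0xDEADBEEF
ctypes_ptr = ctypes.c_void_p(ADDR)  # what the ctypes bindings return
nvidia_ptr = FakeCUdeviceptr(ADDR)  # stand-in for the NVIDIA bindings

# `.value` exists only on the ctypes variant:
assert ctypes_ptr.value == ADDR
assert not hasattr(nvidia_ptr, "value")

# A uniform accessor that normalizes both to a plain int -- the role
# that the `device_pointer_value` property plays in Numba:
def pointer_value(ptr):
    return ptr.value if isinstance(ptr, ctypes.c_void_p) else int(ptr)

assert pointer_value(ctypes_ptr) == pointer_value(nvidia_ptr) == ADDR
```

Reading the value through the one property that both bindings share avoids branching on the pointer type in Arrow itself.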

I also attempted to update the CI config to install Numba-CUDA. Some of the comments in docker-compose.yml seemed a bit out of sync with changes to it, so I also updated the comments that appeared relevant to reflect what I had to run locally. I may have got the CI changes wrong - happy to change these, as they're not really the critical part of this PR.

Fixes #47128.

Are these changes tested?

Yes, by the existing test_cuda_numba_interop.py and the CI changes in this PR.

Are there any user-facing changes?

No.

gmarkall added 2 commits July 21, 2025 12:37
Numba and Numba-CUDA return a different type from `Context.memalloc()`
depending on whether their built-in ctypes bindings or the NVIDIA CUDA
Python bindings are in use - either a `ctypes.c_void_p` or a
`cuda.bindings.driver.CUdeviceptr`. Whilst this inconsistency is
unfortunate, it's hard to change as existing code in the wild depends on
it.

The issue in Arrow is that the value of the pointer cannot be obtained
by `device_pointer.value` for a `CUdeviceptr`. Numba and Numba-CUDA do
provide another property, `device_pointer_value`, that returns the
device pointer as an `int` regardless of the kind of binding in use, so
we switch to it for consistency between the two kinds of bindings.

Fixes apache#47128.
@github-actions

Thanks for opening a pull request!

If this is not a minor PR, could you open an issue for this pull request on GitHub? https://github.com/apache/arrow/issues/new/choose

Opening GitHub issues ahead of time contributes to the Openness of the Apache Arrow project.

Then could you also rename the pull request title in the following format?

GH-${GITHUB_ISSUE_ID}: [${COMPONENT}] ${SUMMARY}

or

MINOR: [${COMPONENT}] ${SUMMARY}

See also:

@gmarkall changed the title from "Fix #47128, Numba-CUDA interop with NVIDIA bindings" to "GH-47128: [Python] Numba-CUDA interop with NVIDIA bindings" Jul 21, 2025
@github-actions

⚠️ GitHub issue #47128 has been automatically assigned in GitHub to PR creator.

@raulcd (Member) commented Jul 21, 2025

@github-actions crossbow submit -g cuda

@github-actions

Revision: 232c66e

Submitted crossbow builds: ursacomputing/crossbow @ actions-6712a32a6e

Task Status
test-cuda-cpp-ubuntu-22.04-cuda-11.7.1 GitHub Actions
test-cuda-python-ubuntu-22.04-cuda-11.7.1 GitHub Actions

@raulcd (Member) commented Jul 21, 2025

It seems the cuda runners haven't been available for the last several days. I am trying to investigate, cc @assignUser

@raulcd (Member) commented Jul 23, 2025

@github-actions crossbow submit -g cuda

@github-actions

Revision: 232c66e

Submitted crossbow builds: ursacomputing/crossbow @ actions-9953d44aa0

Task Status
test-cuda-cpp-ubuntu-22.04-cuda-11.7.1 GitHub Actions
test-cuda-python-ubuntu-22.04-cuda-11.7.1 GitHub Actions

@raulcd (Member) commented Jul 23, 2025

@assignUser the cuda runners still don't seem to be working 🤔

@assignUser (Member)

@raulcd ah sorry, I was testing with the nanoarrow cluster; it looks like there are some issues with the crossbow ones. Investigating now.

@assignUser (Member)

@raulcd the cuda nodes are back up!

@gmarkall (Contributor, Author)

A gentle ping on this one (perhaps for @pitrou ?)

@kou (Member) left a comment

+1

@kou kou merged commit ddd25d1 into apache:main Aug 15, 2025
35 checks passed
@kou kou removed the "awaiting review" label Aug 15, 2025
@conbench-apache-arrow

After merging your PR, Conbench analyzed the 4 benchmarking runs that have been run so far on merge-commit ddd25d1.

There weren't enough matching historic benchmark results to make a call on whether there were regressions.

The full Conbench report has more details.

@pitrou (Member) commented Aug 19, 2025

Sorry for not looking at this earlier, I was on holiday.

It's nice that the CI is green, but when looking at the logs it seems the tests are actually skipped:

=============================== warnings summary ===============================
arrow-dev/lib/python3.10/site-packages/pyarrow/tests/test_cuda_numba_interop.py:27
  /arrow-dev/lib/python3.10/site-packages/pyarrow/tests/test_cuda_numba_interop.py:27: PytestDeprecationWarning: 
  Module 'numba.cuda' was found, but when imported by pytest it raised:
      ImportError('CUDA bindings not found. Please pip install the cuda-bindings package. Alternatively, install numba-cuda[cuXY], where XY is the required CUDA version, to install the binding automatically. If no CUDA bindings are desired, set the env var NUMBA_CUDA_USE_NVIDIA_BINDING=0 to enable ctypes bindings.')
  In pytest 9.1 this warning will become an error by default.
  You can fix the underlying problem, or alternatively overwrite this behavior and silence this warning by passing exc_type=ImportError explicitly.
  See https://docs.pytest.org/en/stable/deprecations.html#pytest-importorskip-default-behavior-regarding-importerror
    nb_cuda = pytest.importorskip("numba.cuda")

Can you take a look at this @gmarkall ?
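The skip-vs-error distinction the warning describes can be emulated with plain `importlib` (a hedged sketch of the decision logic, not pytest's actual implementation): a module that is simply absent always skips, while an `ImportError` raised inside an existing module only skips when `exc_type=ImportError` is passed explicitly.

```python
import importlib

def importorskip_sketch(name, exc_type=ModuleNotFoundError):
    """Minimal emulation of pytest.importorskip's decision logic.

    An exception matching exc_type leads to a skip (None stands in for
    pytest's Skipped outcome here); any other exception propagates,
    i.e. the test errors out. The default of ModuleNotFoundError
    mirrors the stricter behavior planned for pytest 9.1; passing
    exc_type=ImportError also skips modules whose import itself fails,
    as in the numba.cuda case above.
    """
    try:
        return importlib.import_module(name)
    except exc_type:
        return None  # skip

assert importorskip_sketch("json") is not None       # present: imported
assert importorskip_sketch("no_such_mod_xyz") is None  # absent: skipped
```

Under this model, `pytest.importorskip("numba.cuda")` skips silently when the NVIDIA bindings are missing, which is why a green CI run does not by itself prove the tests executed.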

@gmarkall (Contributor, Author)

Can you take a look at this @gmarkall ?

Yes, I will take a look at this - thanks for spotting it and mentioning it!

@gmarkall (Contributor, Author)

@pitrou A potential fix in #47372

@kou (Member) commented Aug 19, 2025

Oh, sorry. I should have noticed it...


Successfully merging this pull request may close these issues.

[Python] test_cuda_numba_interop fails with numba-cuda

5 participants