forked from scipy/scipy
DEMO: welch() with CuPy or NumPy #70
Open
tylerjereddy wants to merge 23 commits into main from treddy_array_type_lib_support
Conversation
* early demo work making `welch()` compatible for multiple array lib passthrough
* adjust the `atol` in `test_roundtrip_scaling` to support swapping between the different FFT implementations of NumPy and SciPy, which have slightly different precision
* switch to a testing approach where the expected value is always on the host (a NumPy array), and the actual value is always moved back to the host before assertions are made, using `array_api_compat.to_device(x, "cpu")`
  * this is dependent on my PR 40 to the array-api-compat project and allows both of these test incantations to pass:

    ```
    ARR_TST_BACKEND=numpy python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py
    ARR_TST_BACKEND=cupy python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py
    ```
* a little cleanup after rebasing on latest `main` branch
* vendor our own version of `assert_allclose()` that additionally requires each argument array-like to be available in host memory; this should reduce duplication of `to_device()` in our testsuite as we expand array API support
  * confirmed that both of these incantations still pass:

    ```
    ARR_TST_BACKEND=numpy python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py
    ARR_TST_BACKEND=cupy python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py
    ```
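The vendored comparison helper described in that commit might look something like the sketch below; the `host()` conversion logic is a hypothetical stand-in for `array_api_compat.to_device(x, "cpu")`, not the PR's actual code:

```python
import numpy as np

def assert_allclose(actual, desired, **kwargs):
    """Vendored wrapper: force both array-likes into host memory, then
    delegate to NumPy, so tests don't repeat to_device(..., "cpu")."""
    def host(x):
        if hasattr(x, "get"):       # CuPy arrays: explicit DtoH copy
            x = x.get()
        elif hasattr(x, "cpu"):     # PyTorch tensors: move to CPU first
            x = x.cpu().numpy()
        return np.asarray(x)
    np.testing.assert_allclose(host(actual), host(desired), **kwargs)

# works unchanged for plain NumPy inputs
assert_allclose(np.ones(3), [1.0, 1.0, 1.0])
```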
* draft a small utility function that imports `xp` based on an environment variable, to avoid duplication in our testsuite as we expand array API support
  * verified that both incantations still pass:

    ```
    ARR_TST_BACKEND=numpy python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py
    ARR_TST_BACKEND=cupy python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py
    ```
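Such a utility could be sketched as follows, assuming the same `ARR_TST_BACKEND` values used in the incantations above (the name `get_xp` is made up for illustration):

```python
import os
import numpy

def get_xp():
    # Pick the array namespace from the ARR_TST_BACKEND env variable,
    # defaulting to NumPy when it is unset or unrecognized.
    backend = os.environ.get("ARR_TST_BACKEND", "numpy")
    if backend == "cupy":
        import cupy
        return cupy
    if backend in ("pytorch_cpu", "pytorch_gpu"):
        import torch
        return torch
    return numpy
```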
* adjust `test_real_onesided_even()` to enforce array type in == array type out, and fix two FFT usages in `_spectral_helper()` that were preventing this check from passing
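The strengthened check amounts to asserting array type in == array type out; with the NumPy backend it reduces to something like this (shapes and values here are illustrative only, not the PR's actual test):

```python
import numpy as np
from scipy.signal import welch

x = np.zeros(256)
f, Pxx = welch(x, nperseg=64)
# the array type coming out must match the array type going in
assert type(f) is type(x) and type(Pxx) is type(x)
```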
* move the majority of the `TestWelch` tests over to support swapping between NumPy and CuPy
  * accompanying code changes that I needed for the welch tests to pass
* remove some `welch()` tests that were duplicated accidentally
* bite the bullet and use CuPy `lstsq` implementation despite its absence from the array API standard, because the alternative I had in place was making a ton of HtoD and DtoH copies with CuPy
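Dispatching through the backend's own `linalg.lstsq` (which CuPy provides even though `lstsq` is absent from the array API standard) keeps the computation on-device; a hedged sketch, with the NumPy namespace standing in for `xp` and the helper name invented here:

```python
import numpy as np

def backend_lstsq(a, b, xp):
    # Hypothetical dispatch helper: call the backend's native lstsq so no
    # HtoD/DtoH copies are needed (NumPy and CuPy share this signature).
    return xp.linalg.lstsq(a, b, rcond=None)

# fit y = c0 + c1 * x through (1, 6), (2, 8), (3, 10)
a = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([6.0, 8.0, 10.0])
coef, *_ = backend_lstsq(a, b, np)   # exact fit: coef == [4.0, 2.0]
```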
* linter and other minor cleanups in the diff
* we now allow `as_strided()` when an array library provides it, as long as the `SCIPY_STRICT_ARR_API` env variable is not set
  * the testsuite results and benchmarking results are as expected with this change
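The gate described in that commit might look like the following sketch for the NumPy case; the `sliding_segments` helper is hypothetical, not the PR's code:

```python
import os
import numpy as np

def sliding_segments(x, nperseg, step):
    # Hypothetical helper: zero-copy segment view via as_strided() unless
    # SCIPY_STRICT_ARR_API forces the array-API-safe fallback.
    if not os.environ.get("SCIPY_STRICT_ARR_API"):
        from numpy.lib.stride_tricks import as_strided
        nseg = (x.shape[-1] - nperseg) // step + 1
        return as_strided(x, shape=(nseg, nperseg),
                          strides=(step * x.strides[-1], x.strides[-1]))
    # portable fallback: plain indexing plus a stacking copy
    starts = range(0, x.shape[-1] - nperseg + 1, step)
    return np.stack([x[s:s + nperseg] for s in starts])

segs = sliding_segments(np.arange(10.0), nperseg=4, step=2)  # shape (4, 4)
```

The strided path avoids copies entirely, which is why the benchmarking results mentioned above matter; the fallback trades memory for standard-only semantics.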
* tentative testing shims to allow flushing of `welch()` tests with torch tensors
* add temporary shims to work around array-api-compat issue 43 for PyTorch `result_type`
  * add (hopefully) temporary shim to coerce the `win` type returned by `_triage_segments` with `xp.asarray()`, which seems to help for PyTorch `result_type`
  * fix the `xp.zeros()` signature used in `test_unk_scaling()` because it was not array API compliant (and `torch` actually failed the test as a result, which is good)
  * this reduces the number of failures from 31 to 28 for the PyTorch CPU testing: `ARR_TST_BACKEND=pytorch_cpu python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py` (the `numpy` and `cupy` incantations still pass here)
* NumPy/CuPy tests still passing and PyTorch tests improve from 28 to 18 failures for: `ARR_TST_BACKEND=pytorch_cpu python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py`
* now down to 12 from 18 failures for: `ARR_TST_BACKEND=pytorch_cpu python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py`
  * CuPy/NumPy still passing tests for that incantation
* drop from 12 to 7 test failures for: `ARR_TST_BACKEND=pytorch_cpu python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py`
  * NumPy/CuPy still passing in that incantation
* dropped from 7 to 2 failures for: `ARR_TST_BACKEND=pytorch_cpu python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py`
  * NumPy/CuPy still passing for that incantation
* tests now pass for this incantation: `ARR_TST_BACKEND=pytorch_cpu python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py`
  * NumPy/CuPy tests also still pass for that incantation
  * fix needed was a shim around `size`, which returns a weird tuple subclass in `torch`
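The `size` shim mentioned in that last commit could be sketched as below; the name `get_size` is hypothetical. The array API expects `x.size` to be the element count as an int, whereas torch's `Tensor.size` is a method returning `torch.Size` (a tuple subclass), so the count is derived from the shape instead:

```python
import math
import numpy as np

def get_size(x):
    # Hypothetical shim: compute the element count from x.shape rather
    # than trusting x.size, which torch exposes as a method.
    return math.prod(x.shape)

get_size(np.zeros((3, 4)))   # 12 elements
```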
* all of these incantations are passing tests now:

  ```
  ARR_TST_BACKEND=cupy python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py
  ARR_TST_BACKEND=numpy python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py
  ARR_TST_BACKEND=pytorch_cpu python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py
  ARR_TST_BACKEND=pytorch_gpu python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py
  SCIPY_STRICT_ARR_API=1 ARR_TST_BACKEND=cupy python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py
  SCIPY_STRICT_ARR_API=1 ARR_TST_BACKEND=numpy python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py
  SCIPY_STRICT_ARR_API=1 ARR_TST_BACKEND=pytorch_cpu python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py
  SCIPY_STRICT_ARR_API=1 ARR_TST_BACKEND=pytorch_gpu python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py
  ```
This demo branch allows the `welch()` tests to pass when either CuPy or NumPy is used as the array backend, and monitoring of the tests with the Python bindings to NVML clearly shows the GPU in use with CuPy and not NumPy. I didn't find evidence of performance improvements, but perhaps the testing shims are of moderate interest given that they attempt to be swappable between devices.

```
ARR_TST_BACKEND=numpy python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py::TestWelch -- --count=300
ARR_TST_BACKEND=cupy python dev.py test -j 12 -t scipy/signal/tests/test_spectral.py::TestWelch -- --count=300
```
And respective pyNVML monitoring results for NumPy, CuPy: