During Array API adoption and from conversations with SciPy devs, I've seen attempts to dispatch to underlying libraries for compute. For example: scipy/scipy#20772. The motivation for this special casing is that the Array API standard does not contain all the required APIs. Logistically, I think the standard will always lag behind what libraries offer, and some APIs may never be standardized at all.
An alternative proposal is to use DLPack to do a zero-copy transfer to PyTorch, use the PyTorch API for compute, and then use DLPack again to transfer back to the original array container. Here are the pros and cons:
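To make the round-trip concrete, here is a minimal sketch. The function name and the choice of `log_softmax` as the "operation not in the standard" are my own illustration, not anything from the linked issue; it assumes NumPy ≥ 1.22 and a PyTorch version whose `torch.from_dlpack` accepts any object implementing `__dlpack__`:

```python
import numpy as np
import torch


def log_softmax_via_torch(x):
    """Hypothetical helper: compute with PyTorch, round-trip via DLPack."""
    # Zero-copy view of the NumPy array as a torch tensor.
    t = torch.from_dlpack(x)
    # Use the full PyTorch API, unrestricted by the Array API standard.
    out = torch.log_softmax(t, dim=-1)
    # Zero-copy transfer back to the original container (CPU memory is shared).
    return np.from_dlpack(out)
```

On CPU both transfers share memory, so the only cost relative to a native implementation is the PyTorch dependency itself.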
Pros
- We can use the full PyTorch API without the limitations of the Array API Standard.
- We get all the advantages of PyTorch, such as torch.compile or torch.export in the future.
- This is already possible with NumPy code and PyTorch: https://pytorch.org/blog/compiling-numpy-code/
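The blog post linked above describes compiling NumPy code through `torch.compile` (supported since PyTorch 2.1). A minimal sketch of that idea, with a made-up function for illustration:

```python
import numpy as np
import torch


# torch.compile can trace functions written against the NumPy API
# and execute them through PyTorch's compiled backend.
@torch.compile
def sum_of_squares(x):
    return np.sum(x * x, axis=-1)
```

The function body stays plain NumPy; the decorator is the only PyTorch-specific line.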
Cons
- PyTorch feels more corporate compared to NumPy
- The Array API standard covers JAX, Dask, and any future array libraries that adopt it. Going with PyTorch as the core compute layer would reduce that coverage.
- The PyTorch CPU wheel is ~183 MB, which is much bigger than NumPy's.
Currently, I am -0 on such a move.