
PyTorch as the core compute layer #18

@thomasjpfan

Description

During Array API adoption and from conversations with SciPy devs, I've seen attempts to dispatch to underlying libraries for compute. For example: scipy/scipy#20772. The motivation for this special casing is that the Array API standard does not contain all the required APIs. Logistically, I think the standard will always lag behind what libraries offer, and some APIs may never be standardized.

An alternative proposal is to use DLPack to do a zero-copy transfer to PyTorch, use the PyTorch API for compute, and then use DLPack again to transfer back to the original array container.
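A minimal sketch of the round trip, assuming CPU arrays and recent NumPy (>= 1.22) and PyTorch; `erfinv` is just a stand-in for an op outside the standard, and a real implementation would transfer back to whatever namespace the input came from rather than hardcoding NumPy:

```python
import numpy as np
import torch

def erfinv_via_torch(x_np):
    # Zero-copy view of the NumPy array as a torch tensor via DLPack.
    x = torch.from_dlpack(x_np)
    # The full PyTorch API is available here, not just what the standard covers.
    y = torch.special.erfinv(x)
    # Zero-copy transfer back into a NumPy array.
    return np.from_dlpack(y)

print(erfinv_via_torch(np.array([0.0, 0.5, -0.5])))
```

Here are the pros and cons: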

Pros

  1. We can use the full PyTorch API without the limitations of the Array API standard.
  2. We get all the advantages of PyTorch, such as torch.compile today and torch.export in the future (a minimal torch.compile sketch follows this list).
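As a rough illustration of pro 2 (an assumption-laden sketch, requiring PyTorch >= 2.0 on a platform where torch.compile is supported):

```python
import torch

@torch.compile  # JIT-compiles and fuses the whole pipeline
def pipeline(x):
    return torch.special.erfinv(x).mean()

x = torch.linspace(-0.9, 0.9, 1_000)
print(pipeline(x))
```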

Cons

  1. PyTorch feels more corporate than NumPy.
  2. The Array API standard covers JAX, Dask, and any future array library that adopts it; going with PyTorch as the core compute layer would reduce that coverage.
  3. The PyTorch CPU wheel is ~183 MB, which is much bigger than NumPy's.

Currently, I am -0 on such a move.
