Commit a648ffd

Split the docs into multiple pages

1 parent bdb6c9c commit a648ffd

File tree

6 files changed (+292, -279 lines)

docs/dev/implementation-notes.md

# Implementation Notes

As noted before, the goal of this library is to reuse the NumPy and CuPy array
objects, rather than wrapping or extending them. This means that the functions
need to accept and return `np.ndarray` for NumPy and `cp.ndarray` for CuPy.

Each namespace (`array_api_compat.numpy`, `array_api_compat.cupy`, and
`array_api_compat.torch`) is populated with the normal library namespace (like
`from numpy import *`). Then specific functions are replaced with wrapped
variants.

Since NumPy and CuPy are nearly identical in behavior, most wrapping logic can
be shared between them. Wrapped functions that have the same logic for NumPy
and CuPy live in `array_api_compat/common/`. These functions are defined like

```py
# In array_api_compat/common/_aliases.py

def acos(x, /, xp):
    return xp.arccos(x)
```

The `xp` argument refers to the original array namespace (either `numpy` or
`cupy`). Then in the specific `array_api_compat/numpy/` and
`array_api_compat/cupy/` namespaces, the `@get_xp` decorator is applied to
these functions. It automatically removes the `xp` argument from the function
signature and replaces it with the corresponding array library, like

```py
# In array_api_compat/numpy/_aliases.py

from ..common import _aliases

import numpy as np

acos = get_xp(np)(_aliases.acos)
```

This `acos` now has the signature `acos(x, /)` and calls `numpy.arccos`.

Similarly, for CuPy:

```py
# In array_api_compat/cupy/_aliases.py

from ..common import _aliases

import cupy as cp

acos = get_xp(cp)(_aliases.acos)
```

This allows the wrapping logic for both libraries to be written only once.

PyTorch uses a similar layout in `array_api_compat/torch/`, but it differs
enough from NumPy/CuPy that very few of the common wrappers are reused.

See https://numpy.org/doc/stable/reference/array_api.html for a full list of
changes from base NumPy (the differences for CuPy are nearly identical). A
corresponding document does not yet exist for PyTorch, but you can examine the
various comments in the
[implementation](https://github.com/data-apis/array-api-compat/blob/main/array_api_compat/torch/_aliases.py)
to see what functions and behaviors have been wrapped.
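The `get_xp` decorator itself is not shown above. A minimal sketch of how such
a decorator could work, assuming it simply closes over the namespace and
forwards it as the `xp` keyword (the real implementation also rewrites the
signature and docstring so that `xp` disappears from `help()`), might look
like this; the `_MathNamespace` class is a stand-in invented here so the
sketch runs without NumPy installed:

```python
import functools
import math

def get_xp(xp):
    """Sketch of a get_xp-style decorator: close over an array
    namespace and supply it as the xp argument, hiding xp from
    callers of the wrapped function."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, xp=xp, **kwargs)
        return wrapper
    return decorator

# The shared alias, written once for all backends:
def acos(x, /, xp):
    return xp.arccos(x)

class _MathNamespace:
    """Stand-in namespace so the sketch runs without NumPy."""
    arccos = staticmethod(math.acos)

acos_compat = get_xp(_MathNamespace)(acos)
print(acos_compat(1.0))  # 0.0
```

Callers of `acos_compat` never see the `xp` parameter; binding `get_xp(np)` or
`get_xp(cp)` instead of the stand-in namespace yields the per-library aliases.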

docs/dev/index.md

# Development Notes

```{toctree}
:titlesonly:
:hidden:

implementation-notes.md
releasing.md
```

docs/dev/releasing.md

# Releasing

To release, first note that CuPy must be tested manually (it isn't tested on
CI). Use the script

```
./test_cupy.sh
```

on a machine with a CUDA GPU.

Once you are ready to release, create a PR with a release branch, so that you
can verify that CI is passing. You must edit

```
array_api_compat/__init__.py
```

and update the version (the version is not computed from the tag because that
would break vendorability). You should also edit

```
CHANGELOG.md
```

with the changes for the release.

Then create a tag

```
git tag -a <version>
```

and push it to GitHub

```
git push origin <version>
```

Check that the `publish distributions` action works. Note that this action
will run even if the other CI fails, so you must make sure that CI is passing
*before* tagging.

This does mean you can ignore CI failures, but ideally you should fix any
failures or update the `*-xfails.txt` files before tagging, so that CI and the
CuPy tests pass. Otherwise it will be hard to tell what things are breaking in
the future. It's also a good idea to remove any xpasses from those files (but
be aware that some xfails are from flaky failures, so unless you know the
underlying issue has been fixed, an xpass test is probably still an xfail).

docs/differences.md

# Differences from the Array API Specification

There are some known differences between this library and the array API
specification:

## NumPy and CuPy

- The array methods `__array_namespace__`, `device` (for NumPy), `to_device`,
  and `mT` are not defined. This library reuses `np.ndarray` and
  `cp.ndarray`, and we don't want to monkeypatch or wrap them. The helper
  functions `device()` and `to_device()` are provided to work around these
  missing methods (see above). `x.mT` can be replaced with
  `xp.linalg.matrix_transpose(x)`, and `array_namespace(x)` should be used
  instead of `x.__array_namespace__`.

- Value-based casting for scalars will be in effect unless explicitly disabled
  with the environment variable `NPY_PROMOTION_STATE=weak` or
  `np._set_promotion_state('weak')` (requires NumPy 1.24 or newer; see
  [NEP 50](https://numpy.org/neps/nep-0050-scalar-promotion.html) and
  https://github.com/numpy/numpy/issues/22341).

- `asarray()` does not support `copy=False`.

- Functions which are not wrapped may not have the same type annotations
  as the spec.

- Functions which are not wrapped may not use positional-only arguments.
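The value-based casting mentioned above can be illustrated with a small
example. This is an illustration of the general behavior, not code from this
library; note that the result dtype in the large-scalar case differs between
NumPy versions:

```python
import numpy as np

x = np.asarray([1, 2], dtype=np.uint8)

# A small Python scalar fits in uint8, so the result keeps the
# array's dtype under both legacy (value-based) and NEP 50 ("weak")
# promotion rules.
print((x + 1).dtype)  # uint8

# With a scalar that does not fit in uint8 (e.g. x + 1000), legacy
# promotion inspects the scalar's *value* and upcasts the result,
# while NEP 50 weak promotion keeps the array dtype (NumPy 2 raises
# OverflowError for out-of-bounds Python integers). This is why the
# spec-mandated behavior needs NPY_PROMOTION_STATE=weak on older
# NumPy.
```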
The minimum supported NumPy version is 1.21. However, this older version of
NumPy has a few issues:

- `unique_*` will not compare nans as unequal.
- `finfo()` has no `smallest_normal`.
- No `from_dlpack` or `__dlpack__`.
- `argmax()` and `argmin()` do not have `keepdims`.
- `qr()` doesn't support matrix stacks.
- `asarray()` doesn't support `copy=True` (as noted above, `copy=False` is not
  supported even in the latest NumPy).
- Type promotion behavior will be value-based for 0-D arrays (and there is no
  `NPY_PROMOTION_STATE=weak` to disable this).

If any of these are an issue, it is recommended to bump your minimum NumPy
version.

## PyTorch

- Like NumPy/CuPy, we do not wrap the `torch.Tensor` object. It is missing the
  `__array_namespace__` and `to_device` methods, so the corresponding helper
  functions `array_namespace()` and `to_device()` in this library should be
  used instead (see above).

- The `x.size` attribute on `torch.Tensor` is a method that behaves
  differently from
  [`x.size`](https://data-apis.org/array-api/draft/API_specification/generated/array_api.array.size.html)
  in the spec. Use the `size(x)` helper function as a portable workaround (see
  above).

- PyTorch does not have unsigned integer types other than `uint8`, and no
  attempt is made to implement them here.

- PyTorch has type promotion semantics that differ from the array API
  specification for 0-D tensor objects. The array functions in this wrapper
  library do work around this, but the operators on the Tensor object do not,
  as no operators or methods on the Tensor object are modified. If this is a
  concern, use the functional form instead of the operator form, e.g.,
  `add(x, y)` instead of `x + y`.

- [`unique_all()`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.unique_all.html#array_api.unique_all)
  is not implemented, because `torch.unique` does not support returning the
  `indices` array. The other
  [`unique_*`](https://data-apis.org/array-api/latest/API_specification/set_functions.html)
  functions are implemented.

- Slices do not support negative steps.

- [`std()`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.std.html#array_api.std)
  and
  [`var()`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.var.html#array_api.var)
  do not support floating-point `correction`.

- The `stream` argument of the `to_device()` helper (see above) is not
  supported.

- As with NumPy, type annotations and positional-only arguments may not
  exactly match the spec for functions that are not wrapped at all.

The minimum supported PyTorch version is 1.13.

## JAX

Unlike the other libraries supported here, JAX array API support is contained
entirely in the JAX library itself. JAX's array API support is tracked at
https://github.com/google/jax/issues/18353.

## Dask

If you're using Dask with NumPy, many of the same limitations that apply to
NumPy will also apply to Dask. Besides those differences, other limitations
include missing sort functionality (no `sort` or `argsort`), and limited
support for the optional `linalg` and `fft` extensions.

In particular, the `fft` namespace is not compliant with the array API spec.
Any functions that you find under the `fft` namespace are the original,
unwrapped functions under
[`dask.array.fft`](https://docs.dask.org/en/latest/array-api.html#fast-fourier-transforms),
which may or may not be array API compliant. Use at your own risk!

For `linalg`, several methods are missing, for example:

- `cross`
- `det`
- `eigh`
- `eigvalsh`
- `matrix_power`
- `pinv`
- `slogdet`
- `matrix_norm`
- `matrix_rank`

Other methods may only be partially implemented or return incorrect results at
times.

The minimum supported Dask version is 2023.12.0.

docs/helper-functions.md

# Helper Functions

In addition to the wrapped library namespaces and functions in the array API
specification, there are several helper functions included here that aren't
part of the specification but which are useful for using the array API:

- `is_array_api_obj(x)`: Return `True` if `x` is an array API compatible array
  object.

- `is_numpy_array(x)`, `is_cupy_array(x)`, `is_torch_array(x)`,
  `is_dask_array(x)`, `is_jax_array(x)`: Return `True` if `x` is an array from
  the corresponding library. These functions do not import the underlying
  library if it has not already been imported, so they are cheap to use.

- `array_namespace(*xs)`: Get the corresponding array API namespace for the
  arrays `xs`. For example, if the arrays are NumPy arrays, the returned
  namespace will be `array_api_compat.numpy`. Note that this function will
  also work for namespaces that aren't supported by this compat library but
  which do support the array API (i.e., arrays that have the
  `__array_namespace__` attribute).
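The `__array_namespace__` protocol that makes this possible can be illustrated
with a pure-Python sketch. The `get_namespace` function and `ToyArray` class
here are hypothetical, invented for illustration; they are not part of this
library:

```python
def get_namespace(x):
    """Hypothetical fallback illustrating the protocol: any object
    exposing __array_namespace__ can report its own namespace."""
    if hasattr(x, "__array_namespace__"):
        return x.__array_namespace__()
    raise TypeError("not an array API compatible object")

class ToyArray:
    """Minimal stand-in 'array' that participates in the protocol."""
    def __array_namespace__(self, *, api_version=None):
        # A real array object would return a module here.
        return "toy_namespace_module"

print(get_namespace(ToyArray()))  # toy_namespace_module
```

The real `array_namespace()` additionally recognizes NumPy, CuPy, PyTorch,
Dask, and JAX arrays that lack this attribute and returns the corresponding
compat namespace for them.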
- `device(x)`: Equivalent to
  [`x.device`](https://data-apis.org/array-api/latest/API_specification/generated/signatures.array_object.array.device.html)
  in the array API specification. Included because `numpy.ndarray` does not
  include the `device` attribute and this library does not wrap or extend the
  array object. Note that for NumPy and Dask, `device(x)` is always `"cpu"`.

- `to_device(x, device, /, *, stream=None)`: Equivalent to
  [`x.to_device`](https://data-apis.org/array-api/latest/API_specification/generated/signatures.array_object.array.to_device.html).
  Included because neither NumPy's, CuPy's, Dask's, nor PyTorch's array
  objects include this method. For NumPy, this function effectively does
  nothing since the only supported device is the CPU, but for CuPy, this
  method supports CuPy CUDA
  [Device](https://docs.cupy.dev/en/stable/reference/generated/cupy.cuda.Device.html)
  and
  [Stream](https://docs.cupy.dev/en/stable/reference/generated/cupy.cuda.Stream.html)
  objects. For PyTorch, this is the same as
  [`x.to(device)`](https://pytorch.org/docs/stable/generated/torch.Tensor.to.html)
  (the `stream` argument is not supported in PyTorch).

- `size(x)`: Equivalent to
  [`x.size`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.array.size.html#array_api.array.size),
  i.e., the number of elements in the array. Included because PyTorch's
  `Tensor` defines `size` as a method which returns the shape, and this cannot
  be wrapped because this compat library doesn't wrap or extend the array
  objects.
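To illustrate why a separate `size()` helper is needed, a hypothetical
re-implementation (a sketch, not this library's actual code) could compute the
element count from `x.shape`, which works uniformly even when `.size` is a
method, as on `torch.Tensor`. The `ShapeOnly` class is a stand-in for
illustration:

```python
import math

def size(x):
    """Sketch: element count computed from x.shape, portable across
    libraries where .size is an attribute, a method, or missing."""
    if None in x.shape:  # lazy libraries may have unknown dimensions
        return None
    return math.prod(x.shape)

class ShapeOnly:
    """Stand-in object exposing only a shape, like any array would."""
    shape = (2, 3, 4)

print(size(ShapeOnly()))  # 24
```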
