
Releases: jcmgray/autoray

v0.8.10

06 Mar 22:35

  • tensorflow: support for cholesky with upper arg, batched svd, and scipy.linalg.solve_triangular
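Batched svd here means factoring a stack of matrices in one call, matching numpy's semantics. A minimal sketch with numpy as a stand-in backend (the new tensorflow wrapper is assumed to match this behavior when dispatched through autoray):

```python
import numpy as np

# A batch of 3 square matrices; svd factors each one independently.
x = np.random.default_rng(42).normal(size=(3, 4, 4))
u, s, vh = np.linalg.svd(x)

# Reconstruct each matrix from its own factors:
# (u @ diag(s) @ vh) batched is (u * s[..., None, :]) @ vh.
recon = (u * s[..., None, :]) @ vh
assert np.allclose(recon, x)
```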

Full Changelog: v0.8.9...v0.8.10

v0.8.9

06 Mar 02:06

  • add swapaxes function for tensorflow

v0.8.8

03 Mar 22:55

Enhancements

  • autoray.lazy: add support for moveaxis and swapaxes

Full Changelog: v0.8.7...v0.8.8

v0.8.7

03 Mar 08:33

Enhancements

  • Torch improvements:
    • Added torch implementation for trace with axes specified plus tests.
    • Allowed torch default_rng seed to be None.
    • Added axis kwarg support for torch count_nonzero.
  • Added decorator support for registration APIs:
    • register_function can now be used as a decorator.
    • register_custom_wrapper can now be used as a decorator.
  • Refactored internals to use the new decorator style for function registration.

v0.8.6

02 Mar 06:08

What's Changed

  • add stop_gradient by @ChenAo-Phys in #29
  • Add torch support for scipy.linalg.solve_triangular.
  • Add tensorflow support for random.default_rng.
  • Add allclose support for lazy arrays.
  • Add Python 3.14 support.
  • Fix and sort internal backend registration functions.

New Contributors

  • @ChenAo-Phys made their first contribution in #29

Full Changelog: v0.8.5...v0.8.6

v0.8.4

05 Dec 01:41

Enhancements

  • add autoray.is_scalar function
  • lazy.einsum: for numpy backend set optimize=True by default
  • lazy.linalg.norm: add kwargs support
  • torch.random.default_rng: add choice support
  • move ci to pixi

Full Changelog: v0.8.3...v0.8.4

v0.8.3

24 Nov 22:32

Enhancements

  • register array and asarray as creation routines, so that they pick up device by default from like
  • LazyArray.show: show more information about call signature by default
  • lazy creation routines, support single int as 1D shape specifier
  • remove custom torch.count_nonzero which torch now natively implements
  • bump min python version to 3.10
  • torch.take implementation: support vmapping over both scalar and sequence indices
  • get_namespace: support scipy as a custom submodule.
  • custom autograd.numpy.take to support grad (HIPS/autograd#743).
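For the single-int shape shorthand, the normalization involved amounts to treating shape=5 as shape=(5,). A hedged sketch (normalize_shape is a hypothetical helper for illustration, not autoray code):

```python
# Hypothetical helper showing the 1D shape-shorthand normalization.
def normalize_shape(shape):
    if isinstance(shape, int):
        return (shape,)   # 5 -> (5,)
    return tuple(shape)

assert normalize_shape(5) == (5,)
assert normalize_shape((2, 3)) == (2, 3)
assert normalize_shape([4]) == (4,)
```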

Full Changelog: v0.8.2...v0.8.3

v0.8.2

03 Nov 22:33

Enhancements

  • speed up creation of get_namespace
  • reduce namespace attribute retrieval overhead to essentially zero
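One way attribute retrieval overhead can be driven to essentially zero is to cache each resolved attribute on the instance, so later lookups are plain instance-dict hits that never reach __getattr__. A hypothetical sketch of that idea, not autoray's actual implementation:

```python
import numpy as np

# Hypothetical caching-namespace sketch: __getattr__ only fires on the
# first (missing) lookup; setattr then stores the result on the instance
# so every later access bypasses __getattr__ entirely.
class CachedNamespace:
    def __init__(self, module):
        self._module = module

    def __getattr__(self, name):
        fn = getattr(self._module, name)  # resolve once
        setattr(self, name, fn)           # cache on the instance
        return fn

xp = CachedNamespace(np)
assert xp.sum is np.sum
assert "sum" in xp.__dict__  # second access is a plain dict lookup
```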

Full Changelog: v0.8.1...v0.8.2

v0.8.1

28 Oct 00:20

Bug fixes

  • enable jax batched qr, including for flat and grad inputs

v0.8.0

20 Aug 18:09

Breaking changes

  • LazyArray.__iter__: now iterates over slices of array rather than the computational graph nodes
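The new iteration behavior mirrors eager numpy arrays, which yield slices along the first axis; a sketch of the matching numpy semantics (the lazy result is assumed to agree with this once computed):

```python
import numpy as np

# Iterating a (3, 2) array yields three length-2 rows, which is the
# behavior LazyArray.__iter__ now matches (previously it walked the
# computational graph nodes instead).
x = np.arange(6).reshape(3, 2)
slices = list(x)
assert len(slices) == 3
assert np.array_equal(slices[0], np.array([0, 1]))
```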

Enhancements

  • add xp = autoray.get_namespace(like) as an alternative api
  • LazyArray: add support for python array api via .__array_namespace__()
  • LazyArray: support broadcasted linear algebra
  • alias ("torch" "equal") to torch.eq
  • lazy: support caching of more kwargs
  • lazy: add take_along_axis
  • lazy: add equal
  • add "random.default_rng" implementation for jax and torch to support random pure functions (#27)

Bug fixes

  • python compiler supports calls with kwargs
  • jax: re-wrap fat QR to support gradient
  • pytensor: qr fix mode to 'economic'