Conversation

brendan-m-murphy (Owner) commented Jan 24, 2025

Description

Related Issue

  • Closes #
  • Related to #

Checklist

Type of change

  • New feature / enhancement
  • Bug fix
  • Documentation
  • Maintenance
  • Other (please specify):

📚 Documentation preview 📚: https://pytensor--3.org.readthedocs.build/en/3/

Armavica and others added 30 commits November 10, 2024 09:07

Also removes the special case for the old, unsupported numpy 1.16.

Many more tests pass after fixing this.
This gives us an easy way to support Numpy < 2.0, and allows the type underlying the bit-width types, like pytensor_complex128, to be correctly inferred from the numpy complex types they inherit from.

Updated the pytensor_complex struct to use the get/set real/imag aliases defined above.

Note: redefining the complex arithmetic here means that we
aren't treating NaNs and infinities as carefully as the C99
standard suggests (see Appendix G of the standard).

The code has been like this since it was added to Theano,
so we're keeping the existing behavior.
We need the bit width of the complex types so that we can choose the right get/set operators.
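A minimal sketch of what such aliases could look like (the macro names and the version guard here are schematic, not the PR's exact code). PyTensor emits its C support code as Python strings, and the bit width selects between the double (`npy_creal`) and float (`npy_crealf`) accessors that numpy 2.0 introduced:

```python
# Schematic support code: one get/set alias per bit width, so the
# pytensor_complex structs compile whether the underlying numpy complex
# type exposes .real/.imag members (numpy 1.x) or requires the
# npy_creal/npy_csetreal accessor functions (numpy 2.x).
complex_compat_code = """
#if defined(NPY_2_0_API_VERSION) && NPY_API_VERSION >= NPY_2_0_API_VERSION
#define PYTENSOR_GET_REAL_128(z)    npy_creal(z)
#define PYTENSOR_SET_REAL_128(z, v) npy_csetreal(&(z), (v))
#define PYTENSOR_GET_REAL_64(z)     npy_crealf(z)
#define PYTENSOR_SET_REAL_64(z, v)  npy_csetrealf(&(z), (v))
#else
#define PYTENSOR_GET_REAL_128(z)    ((z).real)
#define PYTENSOR_SET_REAL_128(z, v) ((z).real = (v))
#define PYTENSOR_GET_REAL_64(z)     ((z).real)
#define PYTENSOR_SET_REAL_64(z, v)  ((z).real = (v))
#endif
"""
```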
Some new changes to the C-API created new bugs.
MapIter was removed from the public numpy C-API in version 2.0, so we raise NotImplementedError to fall back to the Python code for AdvancedIncSubtensor1.

The Python version, defined in `AdvancedIncSubtensor1.perform`, calls `np.add.at`, which uses `MapIter` behind the scenes. There is active development on Numpy to improve the efficiency of `np.add.at`.
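A hedged sketch of the fallback mechanism (the method bodies are schematic, not the PR's exact code): raising NotImplementedError from `c_code` makes pytensor skip the C implementation and use `perform` instead.

```python
import numpy as np

# Schematic Op methods, assuming pytensor's COp interface.
def c_code(self, node, name, inputs, outputs, sub):
    if np.lib.NumpyVersion(np.__version__) >= "2.0.0":
        # MapIter left the public C API in numpy 2.0, so there is no
        # C implementation to generate; pytensor falls back to perform.
        raise NotImplementedError("MapIter was removed in numpy 2.0")
    ...  # numpy 1.x C implementation

def perform(self, node, inputs, output_storage):
    x, y, idx = inputs
    out = x.copy()
    np.add.at(out, idx, y)  # unbuffered add; uses MapIter internally
    output_storage[0][0] = out
```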
Now there is a new TypeError raised by pytensor: "Matrix(int32, ...) cannot store a value of dtype float64 w/o loss of precision."

Previously there was a TypeError from numpy for trying to pass 'int32' to the 'dtype' argument of `Generator.standard_normal`.
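The old numpy-side failure is easy to reproduce, since `Generator.standard_normal` only accepts float32/float64 for `dtype`:

```python
import numpy as np

rng = np.random.default_rng(0)
rng.standard_normal(dtype="float32")  # fine
rng.standard_normal(dtype="int32")    # TypeError: unsupported dtype
```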
...squash this

...cloning in tests/scan/test_basic.py.

Previously the function in this test was compiled with `mode=Mode(optimizer=None)`. This test was failing on numpy 2.0, and it passes if you remove this argument.

The test was failing because the second value returned from the inner function was changing at each step, while the test expects it to be the same at each step.

I don't know why `optimizer=None` causes this to fail.

deepcopy seems to be the recommended method for
copying a numpy Generator.

Here is some related discussion:
numpy/numpy#24086

I didn't see any official documentation about
a change in numpy that would make copy stop
working.
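For example:

```python
import copy

import numpy as np

rng = np.random.default_rng(42)
rng_copy = copy.deepcopy(rng)  # independent generator with identical state

# Both generators produce the same stream from the copied state.
assert rng.standard_normal() == rng_copy.standard_normal()
```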
np.MAXDIMS was removed from the public API and
no replacement is given in the migration docs.

In tensor/special.py, the use of np.MAXDIMS to
check for axis=None can be replaced by the
new constant NPY_RAVEL_AXIS.

To make this constant accessible when using Numpy <= 1.26,
I added a function to insert npy_2_compat.h into the support
code for the softmax ops.
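A minimal sketch of that approach (the helper and class names here are hypothetical, assuming pytensor's COp interface):

```python
from pytensor.link.c.op import COp  # assumption: COp lives here

def npy_2_compat_header() -> str:
    # Return a vendored copy of numpy's npy_2_compat.h, which defines
    # NPY_RAVEL_AXIS (among other numpy-2 symbols) when compiling
    # against numpy <= 1.26.
    with open("npy_2_compat.h") as f:
        return f.read()

class SoftmaxLikeOp(COp):  # hypothetical op
    def c_support_code(self, **kwargs):
        # Prepend the compat header so the op's C code can use
        # NPY_RAVEL_AXIS regardless of the numpy version it builds against.
        return npy_2_compat_header() + "\n" + super().c_support_code(**kwargs)
```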
Passing the value of "NPY_RAVEL_AXIS" through `c_axis` wasn't working, so I modified the `c_code` to set the value of `axis` to `NPY_RAVEL_AXIS` if `self.axis is None`.
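Schematically, the fix substitutes the symbol directly into the generated C source (the names below are illustrative, not the PR's exact code):

```python
def c_code(self, node, name, inputs, outputs, sub):
    # Emit NPY_RAVEL_AXIS as a literal token when axis is None, instead
    # of trying to pass its value through the c_axis parameter.
    axis = "NPY_RAVEL_AXIS" if self.axis is None else str(self.axis)
    return f"""
    int axis = {axis};  /* NPY_RAVEL_AXIS marks axis=None */
    /* ... the rest of the kernel uses axis ... */
    """
```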
In numpy 2.0, converting -1 to uint8 is out of bounds and raises an error, whereas previously it would wrap around to 255.
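For example:

```python
import numpy as np

np.uint8(-1)
# numpy 1.26: 255 (the value wraps around)
# numpy 2.0:  OverflowError (out-of-bounds Python integers are rejected)
```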
In the case where `axis` is not None, the "params" argument needs to be filled by `.format(locals())`, but the parentheses were in the wrong place for this to happen.

With "weak promotion" of python types in Numpy 2.0,
the statement `1.1 == np.asarray(1.1).astype('float32')` is True,
whereas in Numpy 1.26, it was false.

However, in numpy 1.26, `1.1 == np.asarray([1.1]).astype('float32')`
was true, so the scalar behavior and array behavior are the same
in Numpy 2.0, while they were different in numpy 1.26.

Essentially, in Numpy 2.0, if python floats are used in operations
with numpy floats or arrays, then the type of the numpy object will
be used (i.e. the python value will be treated as the type of the numpy
objects). This means that the "custom" autocaster will probably always
use the lowest precision amongst the `try_dtypes` list.
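A quick demonstration of the comparison above:

```python
import numpy as np

x = np.asarray(1.1).astype("float32")    # 0-d float32 array
print(1.1 == x)                          # numpy 2.0: True; numpy 1.26: False

y = np.asarray([1.1]).astype("float32")  # 1-d float32 array
print(1.1 == y)                          # True on both versions
```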
Numpy 2.0's new scalar promotion rules don't increase
the precision of scalars as readily, so there was an
overflow in the `test_inner_composite` test in `scalar/test_loop.py`.
This didn't happen in Numpy 1.26 because at some point the value
was converted to float64, whereas now it remains at the specified
precision.
I also reverted a change to the autocaster tests.

Now the old behavior is preserved.
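The canonical example of this promotion change, from the NEP 50 migration notes:

```python
import numpy as np

np.float32(1) + 3e100
# numpy 1.26: float64(3e100) -- value-based promotion widened the result
# numpy 2.0:  float32(inf) with a RuntimeWarning -- precision stays float32
```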
I was getting a NameError from the list
comprehensions saying that e.g. `pytensor_scalar`
was not defined. I'm not sure why, but this is another
(more verbose) way to do the same thing.
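One classic way to get exactly this NameError (whether it is the cause here is uncertain, matching the commit's own uncertainty) is a list comprehension inside a class body, since comprehensions get their own scope and cannot see class-level names:

```python
class Demo:
    pytensor_scalar = "pytensor_scalar"
    # bit_names = [pytensor_scalar + str(b) for b in (8, 16, 32)]
    # -> NameError: name 'pytensor_scalar' is not defined
    bit_names = []
    for b in (8, 16, 32):  # a plain loop runs in the class-body scope
        bit_names.append(pytensor_scalar + str(b))
```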