5 changes: 5 additions & 0 deletions .pre-commit-config.yaml
@@ -21,6 +21,11 @@ repos:
pytensor/tensor/variable\.py|
)$
- id: check-merge-conflict
- repo: https://github.com/sphinx-contrib/sphinx-lint
rev: v1.0.0
hooks:
- id: sphinx-lint
args: ["."]
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.6.3
hooks:
2 changes: 1 addition & 1 deletion doc/extending/creating_a_c_op.rst
@@ -152,7 +152,7 @@ This distance between consecutive elements of an array over a given dimension,
is called the stride of that dimension.


Accessing NumPy :class`ndarray`\s' data and properties
Accessing NumPy :class:`ndarray`'s data and properties
------------------------------------------------------

The following macros serve to access various attributes of NumPy :class:`ndarray`\s.
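
As a quick aside on the stride concept this hunk refers to, here is a minimal NumPy illustration (plain NumPy, independent of the C macros discussed in the file):

.. code:: python

    import numpy as np

    a = np.arange(12, dtype=np.float64).reshape(3, 4)
    # Each float64 element occupies 8 bytes: stepping along the last axis
    # skips 8 bytes, stepping along the first axis skips 4 * 8 = 32 bytes.
    print(a.strides)  # (32, 8)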
34 changes: 17 additions & 17 deletions doc/extending/creating_a_numba_jax_op.rst
@@ -4,7 +4,7 @@ Adding JAX, Numba and Pytorch support for `Op`\s
PyTensor is able to convert its graphs into JAX, Numba and Pytorch compiled functions. In order to do
this, each :class:`Op` in an PyTensor graph must have an equivalent JAX/Numba/Pytorch implementation function.

This tutorial will explain how JAX, Numba and Pytorch implementations are created for an :class:`Op`.
This tutorial will explain how JAX, Numba and Pytorch implementations are created for an :class:`Op`.

Step 1: Identify the PyTensor :class:`Op` you'd like to implement
------------------------------------------------------------------------
@@ -60,7 +60,7 @@ could also have any data type (e.g. floats, ints), so our implementation
must be able to handle all the possible data types.

It also tells us that there's only one return value, that it has a data type
determined by :meth:`x.type()` i.e., the data type of the original tensor.
determined by :meth:`x.type` i.e., the data type of the original tensor.
This implies that the result is necessarily a matrix.
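
As a small, generic sketch of how an output's type can be inspected (the particular `Op` here is only illustrative):

.. code:: python

    import pytensor.tensor as pt

    x = pt.matrix("x")          # dtype follows config.floatX unless given explicitly
    out = pt.cumsum(x, axis=0)  # the output keeps x's dtype and stays a matrix
    print(x.type, out.type)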

Some class may have a more complex behavior. For example, the :class:`CumOp`\ :class:`Op`
@@ -116,7 +116,7 @@ Here's an example for :class:`DimShuffle`:

.. tab-set::

.. tab-item:: JAX
.. tab-item:: JAX

.. code:: python

@@ -134,7 +134,7 @@ Here's an example for :class:`DimShuffle`:
res = jnp.copy(res)

return res

.. tab-item:: Numba

.. code:: python
@@ -465,7 +465,7 @@ Step 4: Write tests
.. tab-item:: JAX

Test that your registered `Op` is working correctly by adding tests to the
appropriate test suites in PyTensor (e.g. in ``tests.link.jax``).
appropriate test suites in PyTensor (e.g. in ``tests.link.jax``).
The tests should ensure that your implementation can
handle the appropriate types of inputs and produce outputs equivalent to `Op.perform`.
Check the existing tests for the general outline of these kinds of tests. In
@@ -478,7 +478,7 @@ Step 4: Write tests
Here's a small example of a test for :class:`CumOp` above:

.. code:: python

import numpy as np
import pytensor.tensor as pt
from pytensor.configdefaults import config
@@ -514,22 +514,22 @@ Step 4: Write tests
.. code:: python

import pytest

def test_jax_CumOp():
"""Test JAX conversion of the `CumOp` `Op`."""
a = pt.matrix("a")
a.tag.test_value = np.arange(9, dtype=config.floatX).reshape((3, 3))

with pytest.raises(NotImplementedError):
out = pt.cumprod(a, axis=1)
fgraph = FunctionGraph([a], [out])
compare_jax_and_py(fgraph, [get_test_value(i) for i in fgraph.inputs])


.. tab-item:: Numba

Test that your registered `Op` is working correctly by adding tests to the
appropriate test suites in PyTensor (e.g. in ``tests.link.numba``).
appropriate test suites in PyTensor (e.g. in ``tests.link.numba``).
The tests should ensure that your implementation can
handle the appropriate types of inputs and produce outputs equivalent to `Op.perform`.
Check the existing tests for the general outline of these kinds of tests. In
@@ -542,7 +542,7 @@ Step 4: Write tests
Here's a small example of a test for :class:`CumOp` above:

.. code:: python

from tests.link.numba.test_basic import compare_numba_and_py
from pytensor.graph import FunctionGraph
from pytensor.compile.sharedvalue import SharedVariable
@@ -561,11 +561,11 @@ Step 4: Write tests
if not isinstance(i, SharedVariable | Constant)
],
)



.. tab-item:: Pytorch

Test that your registered `Op` is working correctly by adding tests to the
appropriate test suites in PyTensor (``tests.link.pytorch``). The tests should ensure that your implementation can
handle the appropriate types of inputs and produce outputs equivalent to `Op.perform`.
@@ -579,7 +579,7 @@ Step 4: Write tests
Here's a small example of a test for :class:`CumOp` above:

.. code:: python

import numpy as np
import pytest
import pytensor.tensor as pt
@@ -592,7 +592,7 @@ Step 4: Write tests
["float64", "int64"],
)
@pytest.mark.parametrize(
"axis",
"axis",
[None, 1, (0,)],
)
def test_pytorch_CumOp(axis, dtype):
@@ -650,4 +650,4 @@ as reported in issue `#654 <https://github.com/pymc-devs/pytensor/issues/654>`_.
All jitted functions now must have constant shape, which means a graph like the
one of :class:`Eye` can never be translated to JAX, since it's fundamentally a
function with dynamic shapes. In other words, only PyTensor graphs with static shapes
can be translated to JAX at the moment.
can be translated to JAX at the moment.
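
For reference, a minimal sketch of the kind of graph this paragraph rules out (assuming the JAX backend is selected with ``mode="JAX"``):

.. code:: python

    import pytensor
    import pytensor.tensor as pt

    n = pt.scalar("n", dtype="int64")
    out = pt.eye(n)  # the output shape depends on a runtime value

    # Expected to fail under JAX: jitted functions need static output shapes.
    fn = pytensor.function([n], out, mode="JAX")
    fn(3)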
2 changes: 1 addition & 1 deletion doc/extending/type.rst
@@ -333,7 +333,7 @@ returns either a new transferred variable (which can be the same as
the input if no transfer is necessary) or returns None if the transfer
can't be done.

Then register that function by calling :func:`register_transfer()`
Then register that function by calling :func:`register_transfer`
with it as argument.

An example
2 changes: 1 addition & 1 deletion doc/library/compile/io.rst
@@ -36,7 +36,7 @@ The ``inputs`` argument to ``pytensor.function`` is a list, containing the ``Var
``self.<name>``. The default value is ``None``.

``value``: literal or ``Container``. The initial/default value for this
input. If update is`` None``, this input acts just like
input. If update is ``None``, this input acts just like
an argument with a default value in Python. If update is not ``None``,
changes to this
value will "stick around", whether due to an update or a user's
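
A small sketch of the "stick around" behaviour described here (assuming ``In`` is importable from ``pytensor.compile.io``, the module this page documents):

.. code:: python

    import pytensor
    import pytensor.tensor as pt
    from pytensor.compile.io import In

    x = pt.scalar("x")
    s = pt.scalar("s")

    # `s` gets an initial value and an update; its stored value persists across calls.
    acc = pytensor.function([x, In(s, value=0.0, update=s + x)], s + x)
    print(acc(2.0))  # 2.0 -- the stored value of `s` becomes 2.0
    print(acc(3.0))  # 5.0 -- the updated value "stuck around"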
2 changes: 1 addition & 1 deletion doc/library/config.rst
@@ -226,7 +226,7 @@ import ``pytensor`` and print the config variable, as in:
in the future.

The ``'numpy+floatX'`` setting attempts to mimic NumPy casting rules,
although it prefers to use ``float32` `numbers instead of ``float64`` when
although it prefers to use ``float32`` numbers instead of ``float64`` when
``config.floatX`` is set to ``'float32'`` and the associated data is not
explicitly typed as ``float64`` (e.g. regular Python floats). Note that
``'numpy+floatX'`` is not currently behaving exactly as planned (it is a
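
Roughly what this means in practice (a sketch; ``cast_policy`` is assumed to be the config variable this section documents, and the exact dtypes depend on your settings):

.. code:: python

    import numpy as np
    import pytensor
    import pytensor.tensor as pt

    print(pytensor.config.floatX)       # e.g. 'float32'
    print(pytensor.config.cast_policy)  # 'custom' (default) or 'numpy+floatX'

    # With floatX='float32' and cast_policy='numpy+floatX', an untyped Python
    # float literal should stay float32, while an explicit float64 stays float64.
    print(pt.as_tensor_variable(1.5).dtype)
    print(pt.as_tensor_variable(np.float64(1.5)).dtype)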
68 changes: 34 additions & 34 deletions doc/library/tensor/basic.rst
@@ -908,8 +908,8 @@ Reductions
:Parameter: *x* - symbolic Tensor (or compatible)
:Parameter: *axis* - axis or axes along which to compute the maximum
:Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
:Returns: maximum of *x* along *axis*

axis can be:
@@ -922,8 +922,8 @@ Reductions
:Parameter: *x* - symbolic Tensor (or compatible)
:Parameter: *axis* - axis along which to compute the index of the maximum
:Parameter: *keepdims* - (boolean) If this is set to True, the axis which is reduced is
left in the result as a dimension with size one. With this option, the result
will broadcast correctly against the original tensor.
left in the result as a dimension with size one. With this option, the result
will broadcast correctly against the original tensor.
:Returns: the index of the maximum value along a given axis

if ``axis == None``, `argmax` over the flattened tensor (like NumPy)
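
Since every reduction on this page takes ``keepdims``, a one-line sketch of why it matters:

.. code:: python

    import pytensor.tensor as pt

    x = pt.matrix("x")
    # keepdims=True leaves the reduced axis in place with size 1, so the result
    # broadcasts against the original tensor without any manual reshaping.
    centered = x - pt.max(x, axis=1, keepdims=True)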
@@ -933,8 +933,8 @@ Reductions
:Parameter: *x* - symbolic Tensor (or compatible)
:Parameter: *axis* - axis along which to compute the maximum and its index
:Parameter: *keepdims* - (boolean) If this is set to True, the axis which is reduced is
left in the result as a dimension with size one. With this option, the result
will broadcast correctly against the original tensor.
left in the result as a dimension with size one. With this option, the result
will broadcast correctly against the original tensor.
:Returns: the maximum value along a given axis and its index.

if ``axis == None``, `max_and_argmax` over the flattened tensor (like NumPy)
@@ -944,8 +944,8 @@ Reductions
:Parameter: *x* - symbolic Tensor (or compatible)
:Parameter: *axis* - axis or axes along which to compute the minimum
:Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
:Returns: minimum of *x* along *axis*

`axis` can be:
@@ -958,8 +958,8 @@ Reductions
:Parameter: *x* - symbolic Tensor (or compatible)
:Parameter: *axis* - axis along which to compute the index of the minimum
:Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
:Returns: the index of the minimum value along a given axis

if ``axis == None``, `argmin` over the flattened tensor (like NumPy)
@@ -980,8 +980,8 @@ Reductions
This default dtype does _not_ depend on the value of "acc_dtype".

:Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.

:Parameter: *acc_dtype* - The dtype of the internal accumulator.
If None (default), we use the dtype in the list below,
@@ -1015,8 +1015,8 @@ Reductions
This default dtype does _not_ depend on the value of "acc_dtype".

:Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.

:Parameter: *acc_dtype* - The dtype of the internal accumulator.
If None (default), we use the dtype in the list below,
@@ -1031,16 +1031,16 @@ Reductions
as we need to handle 3 different cases: without zeros in the
input reduced group, with 1 zero or with more zeros.

This could slow you down, but more importantly, we currently
don't support the second derivative of the 3 cases. So you
cannot take the second derivative of the default prod().
This could slow you down, but more importantly, we currently
don't support the second derivative of the 3 cases. So you
cannot take the second derivative of the default prod().

To remove the handling of the special cases of 0 and so get
some small speed up and allow second derivative set
``no_zeros_in_inputs`` to ``True``. It defaults to ``False``.
To remove the handling of the special cases of 0 and so get
some small speed up and allow second derivative set
``no_zeros_in_inputs`` to ``True``. It defaults to ``False``.

**It is the user responsibility to make sure there are no zeros
in the inputs. If there are, the grad will be wrong.**
**It is the user responsibility to make sure there are no zeros
in the inputs. If there are, the grad will be wrong.**

:Returns: product of every term in *x* along *axis*
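
A short sketch of the trade-off described above (the parameter name is taken from this section):

.. code:: python

    import pytensor.tensor as pt

    x = pt.matrix("x")
    p_safe = pt.prod(x, axis=0)                           # handles zeros, no second derivative
    p_fast = pt.prod(x, axis=0, no_zeros_in_inputs=True)  # allows grad-of-grad, but the
                                                          # gradient is wrong if x has zeros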

@@ -1058,13 +1058,13 @@ Reductions
done in float64 (acc_dtype would be float64 by default),
but that result will be casted back in float32.
:Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
:Parameter: *acc_dtype* - The dtype of the internal accumulator of the
inner summation. This will not necessarily be the dtype of the
output (in particular if it is a discrete (int/uint) dtype, the
output will be in a float type). If None, then we use the same
rules as :func:`sum()`.
rules as :func:`sum`.
:Returns: mean value of *x* along *axis*

`axis` can be:
@@ -1077,8 +1077,8 @@ Reductions
:Parameter: *x* - symbolic Tensor (or compatible)
:Parameter: *axis* - axis or axes along which to compute the variance
:Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
:Returns: variance of *x* along *axis*

`axis` can be:
@@ -1091,8 +1091,8 @@ Reductions
:Parameter: *x* - symbolic Tensor (or compatible)
:Parameter: *axis* - axis or axes along which to compute the standard deviation
:Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
:Returns: variance of *x* along *axis*

`axis` can be:
@@ -1105,8 +1105,8 @@ Reductions
:Parameter: *x* - symbolic Tensor (or compatible)
:Parameter: *axis* - axis or axes along which to apply 'bitwise and'
:Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
:Returns: bitwise and of *x* along *axis*

`axis` can be:
@@ -1119,8 +1119,8 @@ Reductions
:Parameter: *x* - symbolic Tensor (or compatible)
:Parameter: *axis* - axis or axes along which to apply bitwise or
:Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
:Returns: bitwise or of *x* along *axis*

`axis` can be:
@@ -1745,7 +1745,7 @@ Linear Algebra
when indexed, so that each returned argument has the same shape.
The dimensions and number of the output arrays are equal to the
number of indexing dimensions. If the step length is not a complex
number, then the stop is not inclusive.
number, then the stop is not inclusive.

Example:

2 changes: 1 addition & 1 deletion doc/library/tensor/conv.rst
@@ -8,4 +8,4 @@
.. moduleauthor:: LISA, PyMC Developers, PyTensor Developers

.. automodule:: pytensor.tensor.conv
:members:
:members:
6 changes: 3 additions & 3 deletions doc/optimizations.rst
@@ -262,8 +262,8 @@ Optimization o4 o3 o2
local_remove_all_assert
This is an unsafe optimization.
For the fastest possible PyTensor, this optimization can be enabled by
setting ``optimizer_including=local_remove_all_assert`` which will
remove all assertions in the graph for checking user inputs are valid.
setting ``optimizer_including=local_remove_all_assert`` which will
remove all assertions in the graph for checking user inputs are valid.
Use this optimization if you are sure everything is valid in your graph.

See :ref:`unsafe_rewrites`
See :ref:`unsafe_rewrites`
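
One way to enable this rewrite when compiling (a sketch; the flag can also be set through ``PYTENSOR_FLAGS`` with the ``optimizer_including`` setting quoted above):

.. code:: python

    import pytensor
    import pytensor.tensor as pt
    from pytensor.compile.mode import get_default_mode

    x = pt.vector("x")
    y = pt.log(x)  # imagine Assert nodes guarding x somewhere in a real graph

    # Include the unsafe rewrite; every Assert check is stripped from the graph.
    f = pytensor.function([x], y, mode=get_default_mode().including("local_remove_all_assert"))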