Changes from all commits
43 commits
17748b7
Remove accidental print statements
ricardoV94 Feb 5, 2025
4fa9bb8
PyTorch inline constants in dispatch to avoid graph breaks (#1118)
ricardoV94 Feb 10, 2025
da4960b
Remove unnecessary type ignore in new version of mypy
ricardoV94 Feb 10, 2025
ffdde1c
Implement gradient for vector repetitions
ricardoV94 Feb 5, 2025
60c2d92
Deprecate Chi2SF ScalarOp
ricardoV94 Feb 12, 2025
0b07727
Remove unused ScalarOp.st_impl
ricardoV94 Jan 23, 2025
0b94be0
Reduce overhead of Scalar python implementation
ricardoV94 Jan 23, 2025
7411a08
More direct access to special functions
ricardoV94 Jan 23, 2025
1ed3611
Bump pypa/gh-action-pypi-publish from 1.12.2 to 1.12.4
dependabot[bot] Jan 27, 2025
2823dfc
Faster python implementation of MvNormal
ricardoV94 Feb 12, 2025
2aecb95
Allow decomposition methods in MvNormal
ricardoV94 Feb 12, 2025
298bb13
Remove global RTOl and ATOL in test file
ricardoV94 Feb 14, 2025
49cf9d2
Cleanup Rop tests and fix Max Rop implementation
ricardoV94 Feb 14, 2025
4aea87c
Fix bug when taking the L_op of a Scan with mit-mot and disconnected …
ricardoV94 Feb 14, 2025
84c7802
Handle Scan gradients of non shaped disconnected inputs
ricardoV94 Feb 14, 2025
b5a64c7
Compute pushforward via double application of pullback
ricardoV94 Feb 13, 2025
fe8804f
Cache sub-type of DimShuffle
ricardoV94 Jan 20, 2025
947b940
Make reshape ndim keyword only
ricardoV94 Feb 10, 2025
141307f
Fix bug in local_useless_reshape
ricardoV94 Feb 10, 2025
02545ed
Specify reshape shape length if unknown
ricardoV94 Feb 10, 2025
dbf5f38
Refactor reshape + dimshuffle rewrites
ricardoV94 Feb 10, 2025
65b96c1
Canonicalize squeeze out of reshape and specialize back
ricardoV94 Feb 10, 2025
8e5e8a4
Only do reshapes in `tensordot` when needed
ricardoV94 Feb 11, 2025
bbe663d
Implement numba dispatch for all `linalg.solve` modes
jessegrabowski Feb 11, 2025
361280c
Update numpy deprecated imports
ricardoV94 Apr 5, 2024
e6c26b2
Changes for numpy 2.0 deprecations
Armavica Apr 3, 2024
910b27c
Updated lazylinker C code
brendan-m-murphy Feb 5, 2025
92d96ff
Changes for deprecations in numpy 2.0 C-API
ricardoV94 Apr 5, 2024
b20f401
Update type hint for c_code_cache_version
brendan-m-murphy Feb 13, 2025
69713de
Make complex scalars work with numpy 2.0
brendan-m-murphy Jul 29, 2024
9416df2
Use Python implementation for AdvancedInSubtensor1
brendan-m-murphy Jul 29, 2024
f4f58a4
Changed copy to deepcopy for rng
brendan-m-murphy Aug 6, 2024
2944552
Change rng.__getstate__ to rng.bit_generator.state
brendan-m-murphy Jan 29, 2025
0aa10c0
Replace use of `np.MAXDIMS`
brendan-m-murphy Aug 7, 2024
b349a9a
Fixed failed test due to uint8 overflow
brendan-m-murphy Aug 27, 2024
9e919c7
Changes due to new numpy scalar promotion rules
brendan-m-murphy Aug 30, 2024
bce3613
Fix for NameError in test
brendan-m-murphy Nov 10, 2024
45c3a01
Updated doctests
brendan-m-murphy Jan 24, 2025
2bfe6dd
Preserve numpy < 2.0 Unique inverse output shape
brendan-m-murphy Jan 30, 2025
cd75f95
Fix test for neg on unsigned
brendan-m-murphy Jan 30, 2025
93dd7c8
Split up TestMinMax::test_uint
brendan-m-murphy Feb 4, 2025
720568c
Unpinned numpy
brendan-m-murphy Aug 27, 2024
b633bca
Added numpy 1.26.* to CI
brendan-m-murphy Feb 4, 2025
2 changes: 1 addition & 1 deletion .github/workflows/pypi.yml
@@ -189,5 +189,5 @@ jobs:
name: universal_wheel
path: dist

- uses: pypa/gh-action-pypi-publish@v1.12.2
- uses: pypa/gh-action-pypi-publish@v1.12.4
# Implicitly attests that the packages were uploaded in the context of this workflow.
31 changes: 28 additions & 3 deletions .github/workflows/test.yml
@@ -65,7 +65,7 @@ jobs:
- uses: pre-commit/[email protected]

test:
name: "${{ matrix.os }} test py${{ matrix.python-version }} : fast-compile ${{ matrix.fast-compile }} : float32 ${{ matrix.float32 }} : ${{ matrix.part }}"
name: "${{ matrix.os }} test py${{ matrix.python-version }} numpy${{ matrix.numpy-version }} : fast-compile ${{ matrix.fast-compile }} : float32 ${{ matrix.float32 }} : ${{ matrix.part }}"
needs:
- changes
- style
@@ -76,6 +76,7 @@
matrix:
os: ["ubuntu-latest"]
python-version: ["3.10", "3.12"]
numpy-version: ["~=1.26.0", ">=2.0"]
fast-compile: [0, 1]
float32: [0, 1]
install-numba: [0]
@@ -105,45 +106,68 @@ jobs:
float32: 1
- part: "--doctest-modules pytensor --ignore=pytensor/misc/check_duplicate_key.py --ignore=pytensor/link"
fast-compile: 1
- numpy-version: "~=1.26.0"
fast-compile: 1
- numpy-version: "~=1.26.0"
float32: 1
- numpy-version: "~=1.26.0"
python-version: "3.12"
- numpy-version: "~=1.26.0"
part: "--doctest-modules pytensor --ignore=pytensor/misc/check_duplicate_key.py --ignore=pytensor/link"
include:
- install-numba: 1
os: "ubuntu-latest"
python-version: "3.10"
numpy-version: "~=2.1.0"
fast-compile: 0
float32: 0
part: "tests/link/numba"
- install-numba: 1
os: "ubuntu-latest"
python-version: "3.12"
numpy-version: "~=2.1.0"
fast-compile: 0
float32: 0
part: "tests/link/numba"
- install-jax: 1
os: "ubuntu-latest"
python-version: "3.10"
numpy-version: ">=2.0"
fast-compile: 0
float32: 0
part: "tests/link/jax"
- install-jax: 1
os: "ubuntu-latest"
python-version: "3.12"
numpy-version: ">=2.0"
fast-compile: 0
float32: 0
part: "tests/link/jax"
- install-torch: 1
os: "ubuntu-latest"
python-version: "3.10"
numpy-version: ">=2.0"
fast-compile: 0
float32: 0
part: "tests/link/pytorch"
- os: macos-15
python-version: "3.12"
numpy-version: ">=2.0"
fast-compile: 0
float32: 0
install-numba: 0
install-jax: 0
install-torch: 0
part: "tests/tensor/test_blas.py tests/tensor/test_elemwise.py tests/tensor/test_math_scipy.py"
- os: "ubuntu-latest"
python-version: "3.10"
numpy-version: "~=1.26.0"
fast-compile: 0
float32: 0
install-numba: 0
install-jax: 0
install-torch: 0
part: "tests/tensor/test_math.py"

steps:
- uses: actions/checkout@v4
@@ -174,9 +198,9 @@ jobs:
run: |

if [[ $OS == "macos-15" ]]; then
micromamba install --yes -q "python~=${PYTHON_VERSION}=*_cpython" numpy scipy pip graphviz cython pytest coverage pytest-cov pytest-benchmark pytest-mock libblas=*=*accelerate;
micromamba install --yes -q "python~=${PYTHON_VERSION}=*_cpython" "numpy${NUMPY_VERSION}" scipy pip graphviz cython pytest coverage pytest-cov pytest-benchmark pytest-mock libblas=*=*accelerate;
else
micromamba install --yes -q "python~=${PYTHON_VERSION}=*_cpython" mkl numpy scipy pip mkl-service graphviz cython pytest coverage pytest-cov pytest-benchmark pytest-mock;
micromamba install --yes -q "python~=${PYTHON_VERSION}=*_cpython" mkl "numpy${NUMPY_VERSION}" scipy pip mkl-service graphviz cython pytest coverage pytest-cov pytest-benchmark pytest-mock;
fi
if [[ $INSTALL_NUMBA == "1" ]]; then micromamba install --yes -q -c conda-forge "python~=${PYTHON_VERSION}=*_cpython" "numba>=0.57"; fi
if [[ $INSTALL_JAX == "1" ]]; then micromamba install --yes -q -c conda-forge "python~=${PYTHON_VERSION}=*_cpython" jax jaxlib numpyro && pip install tensorflow-probability; fi
@@ -193,6 +217,7 @@
fi
env:
PYTHON_VERSION: ${{ matrix.python-version }}
NUMPY_VERSION: ${{ matrix.numpy-version }}
INSTALL_NUMBA: ${{ matrix.install-numba }}
INSTALL_JAX: ${{ matrix.install-jax }}
INSTALL_TORCH: ${{ matrix.install-torch}}
1 change: 0 additions & 1 deletion doc/extending/op.rst
@@ -506,4 +506,3 @@ These are the function required to work with :func:`pytensor.gradient.grad`.
the outputs) back to their corresponding shapes and return them as the
output of the :meth:`Op.R_op` method.

:ref:`List of op with r op support <R_op_list>`.
76 changes: 0 additions & 76 deletions doc/library/gradient.rst

This file was deleted.

2 changes: 0 additions & 2 deletions doc/library/tensor/basic.rst
@@ -1791,5 +1791,3 @@ Gradient / Differentiation
:members: grad
:noindex:

See the :ref:`gradient <libdoc_gradient>` page for complete documentation
of the gradient module.
21 changes: 16 additions & 5 deletions doc/tutorial/gradients.rst
@@ -86,9 +86,7 @@ of symbolic differentiation).
``i`` of the output list is the gradient of the first argument of
`pt.grad` with respect to the ``i``-th element of the list given as second argument.
The first argument of `pt.grad` has to be a scalar (a tensor
of size 1). For more information on the semantics of the arguments of
`pt.grad` and details about the implementation, see
:ref:`this<libdoc_gradient>` section of the library.
of size 1).

Additional information on the inner workings of differentiation may also be
found in the more advanced tutorial :ref:`Extending PyTensor<extending>`.
@@ -204,7 +202,21 @@ you need to do something similar to this:
>>> f([[1, 1], [1, 1]], [[2, 2], [2, 2]], [0,1])
array([ 2., 2.])

:ref:`List <R_op_list>` of Op that implement Rop.
By default, the R-operator is implemented as a double application of the L-operator
(see `reference <https://j-towns.github.io/2017/06/12/A-new-trick.html>`_).
In most cases this should be as performant as a specialized implementation of the R-operator.
However, PyTensor may sometimes fail to prune dead branches or to fuse common expressions within composite operators,
such as Scan and OpFromGraph, which a direct implementation of the R-operator could avoid more easily.

When this is a concern, it is possible to force `Rop` to use the specialized `Op.R_op` methods by passing
`use_op_rop_implementation=True`. Note that this will fail if the graph contains `Op`s that don't implement this method.


>>> JV = pytensor.gradient.Rop(y, W, V, use_op_rop_implementation=True)
>>> f = pytensor.function([W, V, x], JV)
>>> f([[1, 1], [1, 1]], [[2, 2], [2, 2]], [0,1])
array([ 2., 2.])
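
To make the double-application trick concrete, here is a minimal sketch (an editorial illustration, not part of the diff) of how a Jacobian-vector product can be assembled from two `Lop` calls; the graph and variable names are arbitrary:

    import pytensor
    import pytensor.tensor as pt
    from pytensor.gradient import Lop

    x = pt.vector("x")
    y = pt.tanh(x)         # y = f(x); elementwise, so shapes stay simple
    v = pt.vector("v")     # tangent vector to push forward

    # Auxiliary cotangent with the type of y. Its value never matters: it only
    # carries the linear dependence that the second Lop differentiates through.
    u = y.type("u")

    g = Lop(y, x, u)       # g = J(x)^T u  (vector-Jacobian product / pullback)
    jv = Lop(g, u, v)      # (J^T)^T v = J v  (the pushforward / R-operator)

    f = pytensor.function([x, v, u], jv, on_unused_input="ignore")
    print(f([0.0, 1.0], [1.0, 1.0], [0.0, 0.0]))  # same J v as Rop(y, x, v)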


L-operator
----------
@@ -234,7 +246,6 @@ array([[ 0., 0.],
as the input parameter, while the result of the R-operator has a shape similar
to that of the output.

:ref:`List of op with r op support <R_op_list>`.

Hessian times a Vector
======================
2 changes: 1 addition & 1 deletion environment-osx-arm64.yml
@@ -9,7 +9,7 @@ channels:
dependencies:
- python=>3.10
- compilers
- numpy>=1.17.0,<2
- numpy>=1.17.0
- scipy>=1,<2
- filelock>=3.15
- etuples
2 changes: 1 addition & 1 deletion environment.yml
@@ -9,7 +9,7 @@ channels:
dependencies:
- python>=3.10
- compilers
- numpy>=1.17.0,<2
- numpy>=1.17.0
- scipy>=1,<2
- filelock>=3.15
- etuples
13 changes: 10 additions & 3 deletions pyproject.toml
@@ -48,7 +48,7 @@ keywords = [
dependencies = [
"setuptools>=59.0.0",
"scipy>=1,<2",
"numpy>=1.17.0,<2",
"numpy>=1.17.0",
"filelock>=3.15",
"etuples",
"logical-unification",
@@ -129,7 +129,7 @@ exclude = ["doc/", "pytensor/_version.py"]
docstring-code-format = true

[tool.ruff.lint]
select = ["B905", "C", "E", "F", "I", "UP", "W", "RUF", "PERF", "PTH", "ISC"]
select = ["B905", "C", "E", "F", "I", "UP", "W", "RUF", "PERF", "PTH", "ISC", "T20", "NPY201"]
ignore = ["C408", "C901", "E501", "E741", "RUF012", "PERF203", "ISC001"]
unfixable = [
# zip-strict: the auto-fix adds `strict=False` but we might want `strict=True` instead
@@ -144,7 +144,12 @@ lines-after-imports = 2
# TODO: Get rid of these:
"**/__init__.py" = ["F401", "E402", "F403"]
"pytensor/tensor/linalg.py" = ["F403"]
"pytensor/link/c/cmodule.py" = ["PTH"]
"pytensor/link/c/cmodule.py" = ["PTH", "T201"]
"pytensor/misc/elemwise_time_test.py" = ["T201"]
"pytensor/misc/elemwise_openmp_speedup.py" = ["T201"]
"pytensor/misc/check_duplicate_key.py" = ["T201"]
"pytensor/misc/check_blas.py" = ["T201"]
"pytensor/bin/pytensor_cache.py" = ["T201"]
# For the tests we skip because `pytest.importorskip` is used:
"tests/link/jax/test_scalar.py" = ["E402"]
"tests/link/jax/test_tensor_basic.py" = ["E402"]
@@ -158,6 +163,8 @@ lines-after-imports = 2
"tests/sparse/test_sp2.py" = ["E402"]
"tests/sparse/test_utils.py" = ["E402"]
"tests/sparse/sandbox/test_sp.py" = ["E402", "F401"]
"tests/compile/test_monitormode.py" = ["T201"]
"scripts/run_mypy.py" = ["T201"]


[tool.mypy]
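
For context on the lint rules added above: T20 (flake8-print) reports bare `print` calls and NPY201 reports names removed in NumPy 2.0. A small illustrative sketch (not code from the PR; the real opt-outs are the per-file ignores above and the `# noqa: T201` markers added in `pytensor/breakpoint.py` below):

    def debug_dump(values):
        # T201: a bare print is reported unless the file is in per-file-ignores
        # or the line carries an explicit opt-out, as below.
        print(values)  # noqa: T201

    # NPY201 flags NumPy members removed in 2.0, for example:
    #   ndim_limit = np.MAXDIMS   # rejected; the commits above replace this usage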
16 changes: 8 additions & 8 deletions pytensor/breakpoint.py
@@ -108,14 +108,14 @@ def perform(self, node, inputs, output_storage):
f"'{self.name}' could not be casted to NumPy arrays"
)

print("\n")
print("-------------------------------------------------")
print(f"Conditional breakpoint '{self.name}' activated\n")
print("The monitored variables are stored, in order,")
print("in the list variable 'monitored' as NumPy arrays.\n")
print("Their contents can be altered and, when execution")
print("resumes, the updated values will be used.")
print("-------------------------------------------------")
print("\n") # noqa: T201
print("-------------------------------------------------") # noqa: T201
print(f"Conditional breakpoint '{self.name}' activated\n") # noqa: T201
print("The monitored variables are stored, in order,") # noqa: T201
print("in the list variable 'monitored' as NumPy arrays.\n") # noqa: T201
print("Their contents can be altered and, when execution") # noqa: T201
print("resumes, the updated values will be used.") # noqa: T201
print("-------------------------------------------------") # noqa: T201

try:
import pudb
13 changes: 12 additions & 1 deletion pytensor/compile/builders.py
@@ -340,6 +340,12 @@ def __init__(
``None``, this will be used as the connection_pattern for this
:class:`Op`.

.. warning::

`rop_overrides` is ignored when `pytensor.gradient.Rop` is called with
`use_op_rop_implementation=False` (the default). In that case the L-operator
is applied twice to obtain a mathematically equivalent R-operator.

strict: bool, default False
If true, it raises when any variables needed to compute the inner graph
are not provided as explicit inputs. This can only happen for graphs with
@@ -641,7 +647,12 @@ def _build_and_cache_rop_op(self):
return rop_overrides

eval_points = [inp_t() for inp_t in self.input_types]
fn_rop = partial(Rop, wrt=inner_inputs, eval_points=eval_points)
fn_rop = partial(
Rop,
wrt=inner_inputs,
eval_points=eval_points,
use_op_rop_implementation=True,
)

callable_args = (inner_inputs, eval_points)
if rop_overrides is None:
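
A short sketch of how the two code paths differ from the caller's side (an editorial illustration assuming `OpFromGraph` and `Rop` behave as described in the diff above; the exponential graph is arbitrary):

    import pytensor.tensor as pt
    from pytensor.compile.builders import OpFromGraph
    from pytensor.gradient import Rop

    x = pt.vector("x")
    op = OpFromGraph([x], [pt.exp(x)])   # no rop_overrides supplied here

    z = pt.vector("z")
    y = op(z)
    v = pt.vector("v")

    # Default path: the R-operator is derived from two L-operator applications,
    # so any rop_overrides on the OpFromGraph would be ignored.
    jv_default = Rop(y, wrt=z, eval_points=v)

    # Forced path: use the Op-level R_op implementations (including overrides);
    # this fails if some Op in the graph does not implement R_op.
    jv_direct = Rop(y, wrt=z, eval_points=v, use_op_rop_implementation=True)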