
Commit 32d60d5

doc: adding info about decorator in new operator doc
1 parent 3e48cbf commit 32d60d5


docs/source/adding.rst

Lines changed: 31 additions & 14 deletions
@@ -4,9 +4,9 @@ Implementing new operators
 ==========================
 Users are welcome to create new operators and add them to the PyLops library.
 
-In this tutorial, we will go through the key steps in the definition of an operator, using the
-:py:class:`pylops.Diagonal` as an example. This is a very simple operator that applies a diagonal matrix to the model
-in forward mode and to the data in adjoint mode.
+In this tutorial, we will go through the key steps in the definition of an operator, using a simplified version of the
+:py:class:`pylops.Diagonal` operator as an example. This is a very simple operator that applies a diagonal matrix
+to the model in forward mode and to the data in adjoint mode.
 
 
 Creating the operator
@@ -45,14 +45,17 @@ Initialization (``__init__``)
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 We then need to create the ``__init__`` where the input parameters are passed and saved as members of our class.
-While the input parameters change from operator to operator, it is always required to create three members, the first
-called ``shape`` with a tuple containing the dimensions of the operator in the data and model space, the second
-called ``dtype`` with the data type object (:obj:`np.dtype`) of the model and data, and the third
-called ``explicit`` with a boolean (``True`` or ``False``) identifying if the operator can be inverted by a direct
-solver or requires an iterative solver. This member is ``True`` if the operator has also a member ``A`` that contains
-the matrix to be inverted like for example in the :py:class:`pylops.MatrixMult` operator, and it will be ``False`` otherwise.
-In this case we have another member called ``d`` which is equal to the input vector containing the diagonal elements
-of the matrix we want to multiply to the model and data.
+While the input parameters change from operator to operator, it is always required to create three members:
+
+- ``dtype``: the data type object (of type :obj:`str` or :obj:`np.dtype`) of the model and data;
+- ``shape``: a tuple containing the dimensions of the operator in the data and model space;
+- ``explicit``: a boolean (``True`` or ``False``) identifying whether the operator can be inverted by a direct solver or
+  requires an iterative solver. This member is ``True`` if the operator also has a member ``A`` that contains
+  the matrix to be inverted, as for example in the :py:class:`pylops.MatrixMult` operator, and it will be
+  ``False`` otherwise.
+
+In this specific case, we have another member called ``d`` which is equal to the input vector containing the diagonal
+elements of the matrix we want to multiply with the model and data.
 
 .. code-block:: python
 
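The body of this code block is elided from the diff; only its last two lines appear as context in the next hunk. As an
illustration for readers of this commit page (a sketch consistent with the members listed above, not a verbatim copy of
the file), an old-style ``__init__`` assigning these members could look like:

.. code-block:: python

    import numpy as np
    from pylops import LinearOperator

    class Diagonal(LinearOperator):
        def __init__(self, d, dtype="float64"):
            self.d = d                                 # diagonal elements
            self.shape = (len(self.d), len(self.d))    # operator size in data and model space
            self.dtype = np.dtype(dtype)               # dtype of model and data
            self.explicit = False                      # no dense matrix ``A`` to invert directly
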
@@ -62,7 +65,7 @@ of the matrix we want to multiply to the model and data.
         self.dtype = np.dtype(dtype)
         self.explicit = False
 
-Alternatively, since version 2.0.0, the recommended way of initializing operators derived from the base
+Alternatively, since version ``v2.0.0``, the recommended way of initializing operators derived from the base
 :py:class:`pylops.LinearOperator` class is to invoke ``super`` to assign the required attributes:
 
 .. code-block:: python
@@ -72,8 +75,14 @@ Alternatively, since version 2.0.0, the recommended way of initializing operator
         super().__init__(dtype=np.dtype(dtype), shape=(len(self.d), len(self.d)))
 
 In this case, there is no need to declare ``explicit`` as it already defaults to ``False``.
-Since version 2.0.0, every :py:class:`pylops.LinearOperator` class is imbued with ``dims``,
-``dimsd``, ``clinear`` and ``explicit``, in addition to the required ``dtype`` and ``shape``.
+
+Moreover, since version ``v2.0.0``, every :py:class:`pylops.LinearOperator` class is imbued with ``dims``,
+``dimsd``, and ``clinear`` in addition to the required ``dtype``, ``shape``, and ``explicit``. Note that
+``dims`` and ``dimsd`` can be defined instead of ``shape``, which will then be automatically assigned within the
+``super`` method: the main difference between ``dims``/``dimsd`` and ``shape`` is that the former variables can be
+used to define the n-dimensional nature of the input of an operator, whilst the latter variable refers to their overall
+shape when the input is flattened.
+
 See the docs of :py:class:`pylops.LinearOperator` for more information about what these
 attributes mean.
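
To make this concrete, here is a small sketch of an initializer that passes ``dims`` and ``dimsd`` instead of
``shape``. It assumes the base class accepts these keyword arguments directly, as the text above suggests, and it is an
illustration rather than part of this commit:

.. code-block:: python

    import numpy as np
    from pylops import LinearOperator

    class Diagonal(LinearOperator):
        def __init__(self, d, dims=None, dtype="float64"):
            d = np.asarray(d)
            dims = d.shape if dims is None else dims
            # keep the n-dimensional layout of the diagonal so that the decorated
            # ``_matvec``/``_rmatvec`` sketched further below can multiply it elementwise
            self.d = d.reshape(dims)
            # ``shape`` is derived automatically as (prod(dims), prod(dims))
            super().__init__(dtype=np.dtype(dtype), dims=dims, dimsd=dims)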

@@ -91,6 +100,10 @@ We will finally need to ``return`` the result of this operation:
     def _matvec(self, x):
         return self.d * x
 
+Note that since version ``v2.0.0``, this method can be decorated with the ``@reshaped`` decorator. As discussed in
+more detail in the decorator documentation, by adding this decorator the input ``x`` is initially reshaped into
+an nd-array of shape ``dims``, fed to the actual code in ``_matvec``, and then flattened.
+
 Adjoint mode (``_rmatvec``)
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
 Finally we need to implement the *adjoint mode* in the method ``_rmatvec``. In other words, we will need to write
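
As an illustration, and continuing the sketch started above (assuming ``reshaped`` can be imported from
``pylops.utils.decorators``, which this diff does not show), the decorated forward could read:

.. code-block:: python

    from pylops.utils.decorators import reshaped

    # added to the sketched Diagonal class above
    @reshaped
    def _matvec(self, x):
        # x arrives already reshaped to an nd-array of shape ``dims``;
        # the decorator flattens the result again before returning it
        return self.d * x
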
@@ -106,6 +119,10 @@ different from operator to operator):
 
 And that's it, we have implemented our first linear operator!
 
+Similar to ``_matvec``, since version ``v2.0.0``, this method can also be decorated with the ``@reshaped`` decorator.
+When doing so, the input ``x`` is initially reshaped into
+an nd-array of shape ``dimsd``, fed to the actual code in ``_rmatvec``, and then flattened.
+
 Testing the operator
 --------------------
 Being able to write an operator is not yet a guarantee of the fact that the operator is correct, or in other words
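
To close the loop on the sketch built up above (again an illustration, with ``pylops.utils.dottest`` assumed to provide
the dot-test discussed in this section), the adjoint can be decorated in the same way and the resulting operator
checked numerically:

.. code-block:: python

    import numpy as np
    from pylops.utils import dottest
    from pylops.utils.decorators import reshaped

    # added to the sketched Diagonal class above
    @reshaped
    def _rmatvec(self, x):
        # x arrives reshaped to an nd-array of shape ``dimsd``; for a real-valued
        # diagonal the adjoint is the same elementwise product (a complex-valued
        # diagonal would additionally require conjugation)
        return self.d * x

    # example check: a 2 x 3 diagonal operator should pass the dot-test
    # Dop = Diagonal(np.arange(1., 7.), dims=(2, 3))
    # dottest(Dop, Dop.shape[0], Dop.shape[1])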
