
Commit ac62e3e: Fix underline lengths
Parent: 3b6617e


doc/extending/creating_a_numba_jax_op.rst

Lines changed: 4 additions & 4 deletions
@@ -1,13 +1,13 @@
 Adding JAX, Numba and Pytorch support for `Op`\s
-=======================================
+================================================
 
 PyTensor is able to convert its graphs into JAX, Numba and Pytorch compiled functions. In order to do
 this, each :class:`Op` in a PyTensor graph must have an equivalent JAX/Numba/Pytorch implementation function.
 
 This tutorial will explain how JAX, Numba and Pytorch implementations are created for an :class:`Op`.
 
 Step 1: Identify the PyTensor :class:`Op` you'd like to implement
------------------------------------------------------------------------
+-----------------------------------------------------------------
 
 Find the source for the PyTensor :class:`Op` you'd like to be supported and
 identify the function signature and return values. These can be determined by
@@ -98,7 +98,7 @@ how the inputs and outputs are used to compute the outputs for an :class:`Op`
 in Python. This method is effectively what needs to be implemented.
 
 Step 2: Find the relevant method in JAX/Numba/Pytorch (or something close)
----------------------------------------------------------
+--------------------------------------------------------------------------
 
 With a precise idea of what the PyTensor :class:`Op` does, we need to figure out how
 to implement it in JAX, Numba or Pytorch. In the best case scenario, there is a similarly named
@@ -269,7 +269,7 @@ and :func:`torch.cumprod`
     z[0] = np.cumprod(x, axis=self.axis)
 
 Step 3: Register the function with the respective dispatcher
---------------------------------------------------------------
+------------------------------------------------------------
 
 With the PyTensor `Op` replicated, we'll need to register the
 function with the backend's `Linker`. This is done through the use of
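The hunks above only fix reStructuredText underline lengths, but the surrounding tutorial text describes a dispatch-based flow: an :class:`Op`'s `perform` method defines the semantics (here, `np.cumprod`), and Step 3 registers a backend implementation with a dispatcher. A minimal sketch of that pattern, using Python's standard `functools.singledispatch` as a stand-in (the `CumOp` and `numba_funcify` names here are illustrative, not PyTensor's actual classes or registration API):

```python
from functools import singledispatch

import numpy as np


class CumOp:
    """Toy stand-in for a PyTensor-style Op computing a cumulative product."""

    def __init__(self, axis=None):
        self.axis = axis

    def perform(self, x):
        # Mirrors the diff context line: z[0] = np.cumprod(x, axis=self.axis)
        return np.cumprod(x, axis=self.axis)


@singledispatch
def numba_funcify(op, **kwargs):
    """Hypothetical dispatcher: map an Op instance to a backend function."""
    raise NotImplementedError(f"No backend implementation for {type(op).__name__}")


@numba_funcify.register(CumOp)
def _(op, **kwargs):
    # Close over the Op's parameters and return a plain function the
    # backend's linker could compile.
    axis = op.axis

    def cumop(x):
        return np.cumprod(x, axis=axis)

    return cumop


op = CumOp(axis=0)
fn = numba_funcify(op)
x = np.array([[1, 2], [3, 4]])
# The dispatched function must agree with the Op's reference perform method.
assert np.array_equal(fn(x), op.perform(x))
```

The design point the tutorial makes is that the dispatcher keys on the Op's type, so each backend (JAX, Numba, Pytorch) can supply its own implementation without modifying the Op itself.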
