examples/howto/LKJ.ipynb (+2 -2)
@@ -222,7 +222,7 @@
 "id": "59FtijDir2Pe"
 },
 "source": [
-"We use [expand_packed_triangular](../api/math.rst) to transform this vector into the lower triangular matrix $\\mathbf{L}$, which appears in the Cholesky decomposition $\\Sigma = \\mathbf{L} \\mathbf{L}^{\\top}$."
+"We use {func}`expand_packed_triangular <pymc.expand_packed_triangular>` to transform this vector into the lower triangular matrix $\\mathbf{L}$, which appears in the Cholesky decomposition $\\Sigma = \\mathbf{L} \\mathbf{L}^{\\top}$."
examples/howto/LKJ.myst.md (+1 -1)
@@ -130,7 +130,7 @@ packed_L.eval()
 
 +++ {"id": "59FtijDir2Pe"}
 
-We use [expand_packed_triangular](../api/math.rst) to transform this vector into the lower triangular matrix $\mathbf{L}$, which appears in the Cholesky decomposition $\Sigma = \mathbf{L} \mathbf{L}^{\top}$.
+We use {func}`expand_packed_triangular <pymc.expand_packed_triangular>` to transform this vector into the lower triangular matrix $\mathbf{L}$, which appears in the Cholesky decomposition $\Sigma = \mathbf{L} \mathbf{L}^{\top}$.
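For readers skimming the diff, a minimal standalone sketch of what the new role points at, `pymc.expand_packed_triangular` (illustrative, not taken from the notebook):

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

# Pack the n*(n+1)/2 = 6 entries of a 3x3 lower-triangular matrix into a
# flat vector, then expand it back into L, where Sigma = L @ L.T.
n = 3
packed = pt.as_tensor_variable(np.arange(1.0, 7.0))  # [1, 2, 3, 4, 5, 6]

L = pm.expand_packed_triangular(n, packed, lower=True)
print(L.eval())
# The lower triangle is filled row by row:
# [[1. 0. 0.]
#  [2. 3. 0.]
#  [4. 5. 6.]]
```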
examples/howto/blackbox_external_likelihood_numpy.myst.md (+12 -6)
@@ -42,6 +42,8 @@ print(f"Running on PyMC v{pm.__version__}")
 az.style.use("arviz-darkgrid")
 ```
 
++++ {"jp-MarkdownHeadingCollapsed": true}
+
 ## Introduction
 PyMC is a great tool for doing Bayesian inference and parameter estimation. It has a load of {doc}`in-built probability distributions <pymc:api/distributions>` that you can use to set up priors and likelihood functions for your particular model. You can even create your own {ref}`custom distributions <custom_distribution>`.
@@ -108,9 +110,9 @@ ValueError: setting an array element with a sequence.
 
 This is because `m` and `c` are PyTensor tensor-type objects.
 
-So, what we actually need to do is create a [PyTensor Op](http://deeplearning.net/software/pytensor/extending/extending_pytensor.html). This will be a new class that wraps our log-likelihood function (or just our model function, if that is all that is required) into something that can take in PyTensor tensor objects, but internally can cast them as floating point values that can be passed to our log-likelihood function. We will do this below, initially without defining a [grad() method](http://deeplearning.net/software/pytensor/extending/op.html#grad) for the Op.
+So, what we actually need to do is create a {ref}`PyTensor Op <pytensor:creating_an_op>`. This will be a new class that wraps our log-likelihood function (or just our model function, if that is all that is required) into something that can take in PyTensor tensor objects, but internally can cast them as floating point values that can be passed to our log-likelihood function. We will do this below, initially without defining a {func}`grad` for the Op.
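To make the prose concrete, here is a minimal sketch of such an Op under a current PyTensor (names are illustrative; the notebook's real class wraps its own log-likelihood):

```python
import numpy as np
import pytensor.tensor as pt
from pytensor.graph import Apply, Op

def my_loglike(theta, data):
    # Stand-in "black box": a plain-Python/numpy log-likelihood.
    return -0.5 * np.sum((data - theta[0]) ** 2)

class LogLike(Op):
    def make_node(self, theta, data):
        # Convert whatever PyMC hands us into PyTensor variables.
        theta = pt.as_tensor_variable(theta)
        data = pt.as_tensor_variable(data)
        # One scalar output: the log-likelihood value.
        return Apply(self, [theta, data], [pt.dscalar()])

    def perform(self, node, inputs, outputs):
        # Inside perform() the inputs are plain numpy arrays, so the
        # black-box function can be called directly.
        theta, data = inputs
        outputs[0][0] = np.array(my_loglike(theta, data))
```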
 
-What if we wanted to use NUTS or HMC? If we knew the analytical derivatives of the model/likelihood function then we could add a {ref}`grad() method <pytensor:creating_an_op>` to the Op using that analytical form.
+What if we wanted to use NUTS or HMC? If we knew the analytical derivatives of the model/likelihood function then we could add a {func}`grad() method <pytensor:creating_an_op>` to the Op using that analytical form.
 
 But, what if we don't know the analytical form? If our model/likelihood is purely Python and made up of standard maths operators and Numpy functions, then the [autograd](https://github.com/HIPS/autograd) module could potentially be used to find gradients (also, see [here](https://github.com/ActiveState/code/blob/master/recipes/Python/580610_Auto_differentiation/recipe-580610.py) for a nice Python example of automatic differentiation). But, if our model/likelihood truly is a "black box" then we can just use the good old-fashioned [finite difference](https://en.wikipedia.org/wiki/Finite_difference) to find the gradients - this can be slow, especially if there are a large number of variables, or the model takes a long time to evaluate. Below, a function to find gradients has been defined that uses the finite difference (the central difference) - it uses an iterative method with successively smaller interval sizes to check that the gradient converges. But, you could do something far simpler and just use, for example, the SciPy {func}`~scipy.optimize.approx_fprime` function.
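And a minimal sketch of the "far simpler" SciPy route mentioned at the end, reusing the illustrative `my_loglike` from the Op sketch above (this is not the notebook's iterative central-difference code):

```python
import numpy as np
from scipy.optimize import approx_fprime

def my_loglike(theta, data):
    return -0.5 * np.sum((data - theta[0]) ** 2)

data = np.random.randn(100)
theta0 = np.array([0.5])

# approx_fprime perturbs each parameter by eps and returns the
# finite-difference estimate of d(loglike)/d(theta).
grad = approx_fprime(theta0, my_loglike, np.sqrt(np.finfo(float).eps), data)
print(grad)  # close to the analytic value, sum(data) - 100 * theta0[0]
```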
 
 Now, finally, just to check things actually worked as we might expect, let's do the same thing purely using PyMC distributions (because in this simple example we can!)
 
-We can now check that the gradient Op works as expected. First, just create and call the `LogLikeGrad` class, which should return the gradient directly (note that we have to create a [PyTensor function](http://deeplearning.net/software/pytensor/library/compile/function.html) to convert the output of the Op to an array). Secondly, we call the gradient from `LogLikeWithGrad` by using the [PyTensor tensor gradient](http://deeplearning.net/software/pytensor/library/gradient.html#pytensor.gradient.grad) function. Finally, we will check the gradient returned by the PyMC model for a Normal distribution, which should be the same as the log-likelihood function we defined. In all cases we evaluate the gradients at the true values of the model function (the straight line) that was created.
+We can now check that the gradient Op works as expected. First, just create and call the `LogLikeGrad` class, which should return the gradient directly (note that we have to create a {ref}`PyTensor function <pytensor:creating_an_op>` to convert the output of the Op to an array). Secondly, we call the gradient from `LogLikeWithGrad` by using the {func}`grad` function. Finally, we will check the gradient returned by the PyMC model for a Normal distribution, which should be the same as the log-likelihood function we defined. In all cases we evaluate the gradients at the true values of the model function (the straight line) that was created.
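On a toy graph, the compile-then-differentiate steps described here look like this (illustrative only; the notebook does the same with its `LogLikeWithGrad` Op):

```python
import numpy as np
import pytensor
import pytensor.tensor as pt

theta = pt.vector("theta")
cost = -0.5 * pt.sum(theta**2)  # stand-in for the wrapped log-likelihood

grad_cost = pytensor.grad(cost, theta)           # symbolic gradient
grad_fn = pytensor.function([theta], grad_cost)  # compiled; returns arrays

print(grad_fn(np.array([1.0, 2.0])))  # expected: [-1. -2.]
```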
 
 ```{code-cell} ipython3
 ip = pymodel.initial_point()
@@ -421,15 +427,15 @@ print(f'Gradient of model using a PyMC "Normal" distribution:\n {grad_vals_py
 
 We could also do some profiling to compare performance between implementations. The {ref}`profiling` notebook shows how to do it.
 
-+++
++++ {"jp-MarkdownHeadingCollapsed": true}
 
 ## Authors
 
 * Adapted from [Jørgen Midtbø](https://github.com/jorgenem/)'s [example](https://discourse.pymc.io/t/connecting-pymc-to-external-code-help-with-understanding-pytensor-custom-ops/670) by Matt Pitkin both as a [blogpost](http://mattpitkin.github.io/samplers-demo/pages/pymc-blackbox-likelihood/) and as an example notebook to this gallery in August, 2018 ([pymc#3169](https://github.com/pymc-devs/pymc/pull/3169) and [pymc#3177](https://github.com/pymc-devs/pymc/pull/3177))
 * Updated by [Oriol Abril](https://github.com/OriolAbril) in December 2021 to drop the Cython dependency from the original notebook and use numpy instead ([pymc-examples#28](https://github.com/pymc-devs/pymc-examples/pull/28))
 * Re-executed by Oriol Abril with pymc 5.0.0 ([pymc-examples#496](https://github.com/pymc-devs/pymc-examples/pull/496))
examples/howto/howto_debugging.ipynb (+3 -3)
@@ -21,7 +21,7 @@
 "## Introduction\n",
 "There are various levels on which to debug a model. One of the simplest is to just print out the values that different variables are taking on.\n",
 "\n",
-"Because `PyMC` uses `PyTensor` expressions to build the model, and not functions, there is no way to place a `print` statement into a likelihood function. Instead, you can use the `pytensor.printing.Print` class to print intermediate values."
+"Because `PyMC` uses `PyTensor` expressions to build the model, and not functions, there is no way to place a `print` statement into a likelihood function. Instead, you can use the {class}`pytensor.printing.Print` class to print intermediate values."
 ]
 },
 {
@@ -405,7 +405,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Raw output is a bit messy and requires some cleanup and formatting to convert to `numpy.ndarray`. In the example below regex is used to clean up the output, and then it is evaluated with `eval` to give a list of floats. Code below also works with higher-dimensional outputs (in case you want to experiment with different models)."
+"Raw output is a bit messy and requires some cleanup and formatting to convert to {ref}`numpy.ndarray`. In the example below regex is used to clean up the output, and then it is evaluated with `eval` to give a list of floats. Code below also works with higher-dimensional outputs (in case you want to experiment with different models)."
examples/howto/howto_debugging.myst.md (+2 -2)
@@ -24,7 +24,7 @@ kernelspec:
 ## Introduction
 There are various levels on which to debug a model. One of the simplest is to just print out the values that different variables are taking on.
 
-Because `PyMC` uses `PyTensor` expressions to build the model, and not functions, there is no way to place a `print` statement into a likelihood function. Instead, you can use the `pytensor.printing.Print` class to print intermediate values.
+Because `PyMC` uses `PyTensor` expressions to build the model, and not functions, there is no way to place a `print` statement into a likelihood function. Instead, you can use the {class}`pytensor.printing.Print` class to print intermediate values.
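As a quick illustration of the class being cross-referenced, a minimal sketch (not the notebook's model):

```python
import numpy as np
import pymc as pm
from pytensor.printing import Print

# Wrap an intermediate variable in Print; its value is printed every time
# the underlying graph is evaluated (e.g., on each logp call while sampling).
with pm.Model():
    mu = pm.Normal("mu", mu=0, sigma=1)
    mu_printed = Print("mu")(mu)
    pm.Normal("obs", mu=mu_printed, sigma=1, observed=np.random.randn(10))
```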
-Raw output is a bit messy and requires some cleanup and formatting to convert to `numpy.ndarray`. In the example below regex is used to clean up the output, and then it is evaluated with `eval` to give a list of floats. Code below also works with higher-dimensional outputs (in case you want to experiment with different models).
+Raw output is a bit messy and requires some cleanup and formatting to convert to {ref}`numpy.ndarray`. In the example below regex is used to clean up the output, and then it is evaluated with `eval` to give a list of floats. Code below also works with higher-dimensional outputs (in case you want to experiment with different models).
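A sketch of that cleanup idea on a made-up capture (a simpler variant: regex extraction of the floats rather than `eval` on the cleaned string):

```python
import re
import numpy as np

# Illustrative raw Print output; the notebook captures the real text from stdout.
raw = "mu __str__ = [[-0.3  1.2]\n [ 0.7 -1.1]]"

# Pull out every float, then rebuild the array; the row count follows from
# the bracket structure (one '[' per row plus the outer one).
values = [float(v) for v in re.findall(r"-?\d+\.\d*(?:e-?\d+)?", raw)]
n_rows = raw.count("[") - 1
arr = np.asarray(values).reshape(n_rows, -1)
print(arr)
# [[-0.3  1.2]
#  [ 0.7 -1.1]]
```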
examples/howto/model_builder.ipynb (+12 -4)
@@ -28,7 +28,7 @@
 "source": [
 "Many users face difficulty in deploying their PyMC models to production because deploying/saving/loading a user-created model is not well standardized. One of the reasons behind this is there is no direct way to save or load a model in PyMC like scikit-learn or TensorFlow. The new `ModelBuilder` class is aimed to improve this workflow by providing a scikit-learn inspired API to wrap your PyMC models.\n",
 "\n",
-"The new `ModelBuilder` class allows users to use methods to `fit()`, `predict()`, `save()`, `load()`. Users can create any model they want, inherit the `ModelBuilder` class, and use predefined methods."
+"The new {class}`ModelBuilder <pymc_experimental.model_builder.ModelBuilder>` class allows users to use methods to `fit()`, `predict()`, `save()`, `load()`. Users can create any model they want, inherit the {class}`ModelBuilder <pymc_experimental.model_builder.ModelBuilder>` class, and use predefined methods."
 ]
 },
 {
@@ -44,7 +44,15 @@
 "execution_count": 1,
 "id": "48e35045",
 "metadata": {},
-"outputs": [],
+"outputs": [
+ {
+  "name": "stderr",
+  "output_type": "stream",
+  "text": [
+   "WARNING (pytensor.tensor.blas): Using NumPy C-API based implementation for BLAS functions.\n"
 
"How would we deploy this model? Save the fitted model, load it on an instance, and predict? Not so simple.\n",
 "\n",
-"`ModelBuilder` is built for this purpose. It is currently part of the `pymc-experimental` package which we can pip install with `pip install pymc-experimental`. As the name implies, this feature is still experimental and subject to change."
+"`ModelBuilder` is built for this purpose. It is currently part of the {ref}`pymc-experimental` package which we can pip install with `pip install pymc-experimental`. As the name implies, this feature is still experimental and subject to change."
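For orientation, a rough sketch of the workflow this enables. Everything here is hypothetical scaffolding: `LinearModel` stands in for a user-defined subclass, only `fit`/`predict`/`save`/`load` are named by the text, and the real signatures live in the notebook.

```python
import numpy as np

x = np.linspace(0, 1, 100)
y = 3 * x + 2 + np.random.normal(0, 0.2, size=100)

model = LinearModel()                          # hypothetical ModelBuilder subclass
idata = model.fit(x, y)                        # sample and store the posterior
model.save("linear_model.nc")                  # persist the fitted model
model2 = LinearModel.load("linear_model.nc")   # restore it elsewhere
y_pred = model2.predict(x)                     # predict with the loaded model
```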
examples/howto/model_builder.myst.md (+3 -3)
@@ -5,7 +5,7 @@ jupytext:
     format_name: myst
     format_version: 0.13
 kernelspec:
-  display_name: Python 3
+  display_name: Python 3 (ipykernel)
   language: python
   name: python3
 ---
@@ -25,7 +25,7 @@ kernelspec:
 
 Many users face difficulty in deploying their PyMC models to production because deploying/saving/loading a user-created model is not well standardized. One of the reasons behind this is there is no direct way to save or load a model in PyMC like scikit-learn or TensorFlow. The new `ModelBuilder` class is aimed to improve this workflow by providing a scikit-learn inspired API to wrap your PyMC models.
 
-The new `ModelBuilder` class allows users to use methods to `fit()`, `predict()`, `save()`, `load()`. Users can create any model they want, inherit the `ModelBuilder` class, and use predefined methods.
+The new {class}`ModelBuilder <pymc_experimental.model_builder.ModelBuilder>` class allows users to use methods to `fit()`, `predict()`, `save()`, `load()`. Users can create any model they want, inherit the {class}`ModelBuilder <pymc_experimental.model_builder.ModelBuilder>` class, and use predefined methods.
 
 +++
@@ -79,7 +79,7 @@ with pm.Model() as model:
 
 How would we deploy this model? Save the fitted model, load it on an instance, and predict? Not so simple.
 
-`ModelBuilder` is built for this purpose. It is currently part of the `pymc-experimental` package which we can pip install with `pip install pymc-experimental`. As the name implies, this feature is still experimental and subject to change.
+`ModelBuilder` is built for this purpose. It is currently part of the {ref}`pymc-experimental` package which we can pip install with `pip install pymc-experimental`. As the name implies, this feature is still experimental and subject to change.