
Commit 6f2eb44

Fix broken references (pymc-devs#614)

* Fixed references in black-box likelihood notebook
* Updated profiling.ipynb to v5.10 and fixed broken references
* Deleted sphinxext.egg-info
* Fixed references in LKJ.ipynb
* Fixed references in Model_Builder.ipynb
* Fixed references in spline.ipynb
* Fixed references in wrapping_jax_function.ipynb
* Fixed references in howto_debugging.ipynb
* Fixed references in updating_priors.ipynb

1 parent a211f23

18 files changed, +1170 −338 lines

examples/howto/LKJ.ipynb

Lines changed: 2 additions & 2 deletions
@@ -222,7 +222,7 @@
     "id": "59FtijDir2Pe"
    },
    "source": [
-    "We use [expand_packed_triangular](../api/math.rst) to transform this vector into the lower triangular matrix $\\mathbf{L}$, which appears in the Cholesky decomposition $\\Sigma = \\mathbf{L} \\mathbf{L}^{\\top}$."
+    "We use {func}`expand_packed_triangular <pymc.expand_packed_triangular>` to transform this vector into the lower triangular matrix $\\mathbf{L}$, which appears in the Cholesky decomposition $\\Sigma = \\mathbf{L} \\mathbf{L}^{\\top}$."
    ]
   },
   {
@@ -919,5 +919,5 @@
   }
  },
  "nbformat": 4,
- "nbformat_minor": 1
+ "nbformat_minor": 4
 }

examples/howto/LKJ.myst.md

Lines changed: 1 addition & 1 deletion
@@ -130,7 +130,7 @@ packed_L.eval()
 
 +++ {"id": "59FtijDir2Pe"}
 
-We use [expand_packed_triangular](../api/math.rst) to transform this vector into the lower triangular matrix $\mathbf{L}$, which appears in the Cholesky decomposition $\Sigma = \mathbf{L} \mathbf{L}^{\top}$.
+We use {func}`expand_packed_triangular <pymc.expand_packed_triangular>` to transform this vector into the lower triangular matrix $\mathbf{L}$, which appears in the Cholesky decomposition $\Sigma = \mathbf{L} \mathbf{L}^{\top}$.
 
 ```{code-cell} ipython3
 ---
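For readers scanning this commit, a minimal sketch of the helper the corrected reference points to; the dimension `n` and the packed values are illustrative, not taken from the notebook:

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

n = 3  # dimension of the target matrix (illustrative)
# A packed vector stores the n*(n+1)/2 nonzero entries of a lower-triangular matrix.
packed = pt.as_tensor_variable(np.arange(1.0, 7.0))
# expand_packed_triangular rebuilds the full lower-triangular matrix L,
# the factor in the Cholesky decomposition Sigma = L @ L.T.
L = pm.expand_packed_triangular(n, packed, lower=True)
print(L.eval())  # a 3x3 matrix with zeros above the diagonal
```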

examples/howto/Missing_Data_Imputation.ipynb

Lines changed: 865 additions & 7 deletions
Large diffs are not rendered by default.

examples/howto/Missing_Data_Imputation.myst.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ jupytext:
     format_name: myst
     format_version: 0.13
 kernelspec:
-  display_name: Python 3
+  display_name: Python 3 (ipykernel)
   language: python
   name: python3
 ---

examples/howto/blackbox_external_likelihood_numpy.ipynb

Lines changed: 65 additions & 137 deletions
Large diffs are not rendered by default.

examples/howto/blackbox_external_likelihood_numpy.myst.md

Lines changed: 12 additions & 6 deletions
@@ -42,6 +42,8 @@ print(f"Running on PyMC v{pm.__version__}")
 az.style.use("arviz-darkgrid")
 ```
 
++++ {"jp-MarkdownHeadingCollapsed": true}
+
 ## Introduction
 PyMC is a great tool for doing Bayesian inference and parameter estimation. It has a load of {doc}`in-built probability distributions <pymc:api/distributions>` that you can use to set up priors and likelihood functions for your particular model. You can even create your own {ref}`custom distributions <custom_distribution>`.
 
@@ -108,9 +110,9 @@ ValueError: setting an array element with a sequence.
 
 This is because `m` and `c` are PyTensor tensor-type objects.
 
-So, what we actually need to do is create a [PyTensor Op](http://deeplearning.net/software/pytensor/extending/extending_pytensor.html). This will be a new class that wraps our log-likelihood function (or just our model function, if that is all that is required) into something that can take in PyTensor tensor objects, but internally can cast them as floating point values that can be passed to our log-likelihood function. We will do this below, initially without defining a [grad() method](http://deeplearning.net/software/pytensor/extending/op.html#grad) for the Op.
+So, what we actually need to do is create a {ref}`PyTensor Op <pytensor:creating_an_op>`. This will be a new class that wraps our log-likelihood function (or just our model function, if that is all that is required) into something that can take in PyTensor tensor objects, but internally can cast them as floating point values that can be passed to our log-likelihood function. We will do this below, initially without defining a {func}`grad` for the Op.
 
-+++
++++ {"jp-MarkdownHeadingCollapsed": true}
 
 ## PyTensor Op without grad
 
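The hunks above edit prose around the notebook's custom Op; for orientation, a hedged sketch of the pattern being described (the `my_loglike` function and its Gaussian form are placeholders, not the notebook's exact code):

```python
import numpy as np
import pytensor.tensor as pt
from pytensor.graph import Apply, Op


def my_loglike(theta, x, data, sigma):
    # Placeholder "black box": Gaussian log-likelihood of a straight line.
    model = theta[0] * x + theta[1]
    return -0.5 * np.sum(((data - model) / sigma) ** 2)


class LogLike(Op):
    """Wrap a plain-Python log-likelihood so it accepts PyTensor tensors."""

    def __init__(self, x, data, sigma):
        self.x, self.data, self.sigma = x, data, sigma

    def make_node(self, theta):
        theta = pt.as_tensor_variable(theta)
        # One vector input (the parameters), one scalar output (the log-likelihood).
        return Apply(self, [theta], [pt.dscalar()])

    def perform(self, node, inputs, outputs):
        (theta,) = inputs  # here the inputs arrive as ordinary numpy arrays
        outputs[0][0] = np.array(my_loglike(theta, self.x, self.data, self.sigma))
```

Inside a model, such an Op is typically attached to the joint log-probability via something like `pm.Potential("likelihood", loglike_op(theta))`.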
@@ -201,9 +203,11 @@ with pm.Model():
 az.plot_trace(idata_mh, lines=[("m", {}, mtrue), ("c", {}, ctrue)]);
 ```
 
++++ {"jp-MarkdownHeadingCollapsed": true}
+
 ## PyTensor Op with grad
 
-What if we wanted to use NUTS or HMC? If we knew the analytical derivatives of the model/likelihood function then we could add a {ref}`grad() method <pytensor:creating_an_op>` to the Op using that analytical form.
+What if we wanted to use NUTS or HMC? If we knew the analytical derivatives of the model/likelihood function then we could add a {func}`grad() method <pytensor:creating_an_op>` to the Op using that analytical form.
 
 But, what if we don't know the analytical form. If our model/likelihood is purely Python and made up of standard maths operators and Numpy functions, then the [autograd](https://github.com/HIPS/autograd) module could potentially be used to find gradients (also, see [here](https://github.com/ActiveState/code/blob/master/recipes/Python/580610_Auto_differentiation/recipe-580610.py) for a nice Python example of automatic differentiation). But, if our model/likelihood truly is a "black box" then we can just use the good-old-fashioned [finite difference](https://en.wikipedia.org/wiki/Finite_difference) to find the gradients - this can be slow, especially if there are a large number of variables, or the model takes a long time to evaluate. Below, a function to find gradients has been defined that uses the finite difference (the central difference) - it uses an iterative method with successively smaller interval sizes to check that the gradient converges. But, you could do something far simpler and just use, for example, the SciPy {func}`~scipy.optimize.approx_fprime` function.
 
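The paragraph above mentions scipy.optimize.approx_fprime as the simpler alternative; a self-contained sketch (the likelihood and "true" parameters are the illustrative straight-line example, not the notebook's code):

```python
import numpy as np
from scipy.optimize import approx_fprime

x = np.linspace(0.0, 9.0, 10)
data = 0.4 * x + 3.0  # noise-free straight line (illustrative)


def loglike(theta):
    # Black-box scalar function: Gaussian log-likelihood with sigma = 1.
    model = theta[0] * x + theta[1]
    return -0.5 * np.sum((data - model) ** 2)


# Forward-difference gradient estimate at the true parameters; it should be
# approximately zero because the log-likelihood is maximized there.
grad = approx_fprime(np.array([0.4, 3.0]), loglike, np.sqrt(np.finfo(float).eps))
print(grad)
```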
@@ -352,6 +356,8 @@ with pm.Model() as opmodel:
 _ = az.plot_trace(idata_grad, lines=[("m", {}, mtrue), ("c", {}, ctrue)])
 ```
 
++++ {"jp-MarkdownHeadingCollapsed": true}
+
 ## Comparison to equivalent PyMC distributions
 Now, finally, just to check things actually worked as we might expect, let's do the same thing purely using PyMC distributions (because in this simple example we can!)
 
@@ -406,7 +412,7 @@ pair_kwargs["marginal_kwargs"]["color"] = "C2"
 az.plot_pair(idata, **pair_kwargs, ax=ax);
 ```
 
-We can now check that the gradient Op works as expected. First, just create and call the `LogLikeGrad` class, which should return the gradient directly (note that we have to create a [PyTensor function](http://deeplearning.net/software/pytensor/library/compile/function.html) to convert the output of the Op to an array). Secondly, we call the gradient from `LogLikeWithGrad` by using the [PyTensor tensor gradient](http://deeplearning.net/software/pytensor/library/gradient.html#pytensor.gradient.grad) function. Finally, we will check the gradient returned by the PyMC model for a Normal distribution, which should be the same as the log-likelihood function we defined. In all cases we evaluate the gradients at the true values of the model function (the straight line) that was created.
+We can now check that the gradient Op works as expected. First, just create and call the `LogLikeGrad` class, which should return the gradient directly (note that we have to create a {ref}`PyTensor function <pytensor:creating_an_op>` to convert the output of the Op to an array). Secondly, we call the gradient from `LogLikeWithGrad` by using the {func}`grad` function. Finally, we will check the gradient returned by the PyMC model for a Normal distribution, which should be the same as the log-likelihood function we defined. In all cases we evaluate the gradients at the true values of the model function (the straight line) that was created.
 
 ```{code-cell} ipython3
 ip = pymodel.initial_point()
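The paragraph edited above refers to compiling a PyTensor function and calling the tensor gradient function; a minimal self-contained illustration of that pattern (the quadratic expression is a stand-in for the notebook's Op):

```python
import numpy as np
import pytensor
import pytensor.tensor as pt

theta = pt.dvector("theta")
# Stand-in differentiable scalar; with a grad()-enabled Op the same two calls apply.
out = -0.5 * pt.sum(theta**2)
g = pt.grad(out, theta)                  # symbolic gradient
f_grad = pytensor.function([theta], g)   # compiled function returning an array
print(f_grad(np.array([0.4, 3.0])))      # -> [-0.4 -3. ]
```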
@@ -421,15 +427,15 @@ print(f'Gradient of model using a PyMC "Normal" distribution:\n {grad_vals_py
 
 We could also do some profiling to compare performance between implementations. The {ref}`profiling` notebook shows how to do it.
 
-+++
++++ {"jp-MarkdownHeadingCollapsed": true}
 
 ## Authors
 
 * Adapted from [Jørgen Midtbø](https://github.com/jorgenem/)'s [example](https://discourse.pymc.io/t/connecting-pymc-to-external-code-help-with-understanding-pytensor-custom-ops/670) by Matt Pitkin both as a [blogpost](http://mattpitkin.github.io/samplers-demo/pages/pymc-blackbox-likelihood/) and as an example notebook to this gallery in August, 2018 ([pymc#3169](https://github.com/pymc-devs/pymc/pull/3169) and [pymc#3177](https://github.com/pymc-devs/pymc/pull/3177))
 * Updated by [Oriol Abril](https://github.com/OriolAbril) on December 2021 to drop the Cython dependency from the original notebook and use numpy instead ([pymc-examples#28](https://github.com/pymc-devs/pymc-examples/pull/28))
 * Re-executed by Oriol Abril with pymc 5.0.0 ([pymc-examples#496](https://github.com/pymc-devs/pymc-examples/pull/496))
 
-+++
++++ {"jp-MarkdownHeadingCollapsed": true}
 
 ## Watermark
 
examples/howto/howto_debugging.ipynb

Lines changed: 3 additions & 3 deletions
@@ -21,7 +21,7 @@
    "## Introduction\n",
    "There are various levels on which to debug a model. One of the simplest is to just print out the values that different variables are taking on.\n",
    "\n",
-   "Because `PyMC` uses `PyTensor` expressions to build the model, and not functions, there is no way to place a `print` statement into a likelihood function. Instead, you can use the `pytensor.printing.Print` class to print intermediate values."
+   "Because `PyMC` uses `PyTensor` expressions to build the model, and not functions, there is no way to place a `print` statement into a likelihood function. Instead, you can use the {class}`pytensor.printing.Print` class to print intermediate values."
   ]
  },
  {
@@ -405,7 +405,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "Raw output is a bit messy and requires some cleanup and formatting to convert to `numpy.ndarray`. In the example below regex is used to clean up the output, and then it is evaluated with `eval` to give a list of floats. Code below also works with higher-dimensional outputs (in case you want to experiment with different models)."
+   "Raw output is a bit messy and requires some cleanup and formatting to convert to {ref}`numpy.ndarray`. In the example below regex is used to clean up the output, and then it is evaluated with `eval` to give a list of floats. Code below also works with higher-dimensional outputs (in case you want to experiment with different models)."
   ]
  },
  {
@@ -564,7 +564,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.10.5"
+  "version": "3.11.6"
  }
 },
 "nbformat": 4,

examples/howto/howto_debugging.myst.md

Lines changed: 2 additions & 2 deletions
@@ -24,7 +24,7 @@ kernelspec:
 ## Introduction
 There are various levels on which to debug a model. One of the simplest is to just print out the values that different variables are taking on.
 
-Because `PyMC` uses `PyTensor` expressions to build the model, and not functions, there is no way to place a `print` statement into a likelihood function. Instead, you can use the `pytensor.printing.Print` class to print intermediate values.
+Because `PyMC` uses `PyTensor` expressions to build the model, and not functions, there is no way to place a `print` statement into a likelihood function. Instead, you can use the {class}`pytensor.printing.Print` class to print intermediate values.
 
 ```{code-cell} ipython3
 import arviz as az
@@ -150,7 +150,7 @@ sys.stdout = old_stdout  # setting sys.stdout back
 output
 ```
 
-Raw output is a bit messy and requires some cleanup and formatting to convert to `numpy.ndarray`. In the example below regex is used to clean up the output, and then it is evaluated with `eval` to give a list of floats. Code below also works with higher-dimensional outputs (in case you want to experiment with different models).
+Raw output is a bit messy and requires some cleanup and formatting to convert to {ref}`numpy.ndarray`. In the example below regex is used to clean up the output, and then it is evaluated with `eval` to give a list of floats. Code below also works with higher-dimensional outputs (in case you want to experiment with different models).
 
 ```{code-cell} ipython3
 import re
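As a standalone reminder of the class the corrected references point to, a minimal use of pytensor.printing.Print (variable names are illustrative):

```python
import pytensor
import pytensor.tensor as pt
from pytensor.printing import Print

x = pt.dvector("x")
# Print is an identity Op with a side effect: it prints its input on evaluation.
x_printed = Print("x value")(x)
y = (x_printed**2).sum()

f = pytensor.function([x], y)
f([1.0, 2.0, 3.0])  # prints something like: x value __str__ = [1. 2. 3.]
```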

examples/howto/model_builder.ipynb

Lines changed: 12 additions & 4 deletions
@@ -28,7 +28,7 @@
   "source": [
    "Many users face difficulty in deploying their PyMC models to production because deploying/saving/loading a user-created model is not well standardized. One of the reasons behind this is there is no direct way to save or load a model in PyMC like scikit-learn or TensorFlow. The new `ModelBuilder` class is aimed to improve this workflow by providing a scikit-learn inspired API to wrap your PyMC models.\n",
    "\n",
-   "The new `ModelBuilder` class allows users to use methods to `fit()`, `predict()`, `save()`, `load()`. Users can create any model they want, inherit the `ModelBuilder` class, and use predefined methods."
+   "The new {class}`ModelBuilder <pymc_experimental.model_builder.ModelBuilder>` class allows users to use methods to `fit()`, `predict()`, `save()`, `load()`. Users can create any model they want, inherit the {class}`ModelBuilder <pymc_experimental.model_builder.ModelBuilder>` class, and use predefined methods."
   ]
  },
  {
@@ -44,7 +44,15 @@
   "execution_count": 1,
   "id": "48e35045",
   "metadata": {},
-  "outputs": [],
+  "outputs": [
+   {
+    "name": "stderr",
+    "output_type": "stream",
+    "text": [
+     "WARNING (pytensor.tensor.blas): Using NumPy C-API based implementation for BLAS functions.\n"
+    ]
+   }
+  ],
   "source": [
    "from typing import Dict, List, Optional, Tuple, Union\n",
    "\n",
@@ -225,7 +233,7 @@
   "source": [
    "How would we deploy this model? Save the fitted model, load it on an instance, and predict? Not so simple.\n",
    "\n",
-   "`ModelBuilder` is built for this purpose. It is currently part of the `pymc-experimental` package which we can pip install with `pip install pymc-experimental`. As the name implies, this feature is still experimental and subject to change."
+   "`ModelBuilder` is built for this purpose. It is currently part of the {ref}`pymc-experimental` package which we can pip install with `pip install pymc-experimental`. As the name implies, this feature is still experimental and subject to change."
   ]
  },
  {
@@ -959,7 +967,7 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Python 3",
+   "display_name": "Python 3 (ipykernel)",
    "language": "python",
    "name": "python3"
   },

examples/howto/model_builder.myst.md

Lines changed: 3 additions & 3 deletions
@@ -5,7 +5,7 @@ jupytext:
     format_name: myst
     format_version: 0.13
 kernelspec:
-  display_name: Python 3
+  display_name: Python 3 (ipykernel)
   language: python
   name: python3
 ---
@@ -25,7 +25,7 @@ kernelspec:
 
 Many users face difficulty in deploying their PyMC models to production because deploying/saving/loading a user-created model is not well standardized. One of the reasons behind this is there is no direct way to save or load a model in PyMC like scikit-learn or TensorFlow. The new `ModelBuilder` class is aimed to improve this workflow by providing a scikit-learn inspired API to wrap your PyMC models.
 
-The new `ModelBuilder` class allows users to use methods to `fit()`, `predict()`, `save()`, `load()`. Users can create any model they want, inherit the `ModelBuilder` class, and use predefined methods.
+The new {class}`ModelBuilder <pymc_experimental.model_builder.ModelBuilder>` class allows users to use methods to `fit()`, `predict()`, `save()`, `load()`. Users can create any model they want, inherit the {class}`ModelBuilder <pymc_experimental.model_builder.ModelBuilder>` class, and use predefined methods.
 
 +++
 
@@ -79,7 +79,7 @@ with pm.Model() as model:
 
 How would we deploy this model? Save the fitted model, load it on an instance, and predict? Not so simple.
 
-`ModelBuilder` is built for this purpose. It is currently part of the `pymc-experimental` package which we can pip install with `pip install pymc-experimental`. As the name implies, this feature is still experimental and subject to change.
+`ModelBuilder` is built for this purpose. It is currently part of the {ref}`pymc-experimental` package which we can pip install with `pip install pymc-experimental`. As the name implies, this feature is still experimental and subject to change.
 
 +++
 
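To round off the ModelBuilder hunks, a hedged sketch of the fit/predict/save/load workflow the edited text describes; `LinearModel` stands for the ModelBuilder subclass the notebook defines, and exact constructor and method signatures vary between pymc-experimental releases:

```python
import numpy as np
import pandas as pd

# Assumes LinearModel subclasses pymc_experimental.model_builder.ModelBuilder,
# as defined earlier in the notebook this commit edits.
x = np.linspace(0.0, 1.0, 100)
y = 0.3 * x + 0.5 + np.random.normal(0.0, 0.1, len(x))

model = LinearModel()
idata = model.fit(X=pd.DataFrame({"input": x}), y=pd.Series(y))  # sample posterior

model.save("linear_model_v1.nc")                   # persist fitted model + idata
model_2 = LinearModel.load("linear_model_v1.nc")   # restore on another instance
pred = model_2.predict(pd.DataFrame({"input": x})) # posterior-predictive estimates
```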