|
19 | 19 | "source": [
|
20 | 20 | "## `.prior`\n",
|
21 | 21 | "\n",
|
22 |    | - "Concretely, what this means is that with some data set of finite size, the `prior` method places a multivariate normal prior distribution on the vector of function values, $\\mathbf{f}$,\n",
   | 22 | + "With some data set of finite size, the `prior` method places a multivariate normal prior distribution on the vector of function values, $\\mathbf{f}$,\n",
23 | 23 | "\n",
|
24 | 24 | "$$\n",
|
25 | 25 | "\\mathbf{f} \\sim \\text{MvNormal}(\\mathbf{m}_{x},\\, \\mathbf{K}_{xx}) \\,,\n",
|
|
42 | 42 | " gp = pm.gp.Latent(cov_func=cov_func)\n",
|
43 | 43 | " \n",
|
44 | 44 | " # Place a GP prior over the function f.\n",
|
45 |    | - "    f = gp.prior(\"f\", n_points=10, X=X)\n",
   | 45 | + "    f = gp.prior(\"f\", X=X)\n",
46 | 46 | "```\n",
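   |    | + "\n",
   |    | + "The snippet above omits setup for brevity. A minimal self-contained sketch (the `ExpQuad` covariance, its lengthscale, and the ten-point input grid are illustrative assumptions, not prescribed here) could be,\n",
   |    | + "\n",
   |    | + "```python\n",
   |    | + "import numpy as np\n",
   |    | + "import pymc3 as pm\n",
   |    | + "\n",
   |    | + "# ten 1-D inputs; the gp module expects X with shape (n, 1)\n",
   |    | + "X = np.linspace(0, 2, 10)[:, None]\n",
   |    | + "\n",
   |    | + "with pm.Model() as latent_gp_model:\n",
   |    | + "    # illustrative choice of covariance function\n",
   |    | + "    cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)\n",
   |    | + "    gp = pm.gp.Latent(cov_func=cov_func)\n",
   |    | + "\n",
   |    | + "    # f ~ MvNormal(m_x, K_xx), as in the equation above\n",
   |    | + "    f = gp.prior(\"f\", X=X)\n",
   |    | + "```\n",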
|
47 | 47 | "\n",
|
48 |    | - "By default, PyMC3 reparameterizes the prior on `f` by rotating it with the Cholesky factor. This helps to reduce covariances in the posterior of the transformed GP, `v`. The reparameterized model is,\n",
   | 48 | + "By default, PyMC3 reparameterizes the prior on `f` under the hood by rotating it with the Cholesky factor of its covariance matrix. This helps to reduce covariances in the posterior of the transformed random variable, `v`. The reparameterized model is,\n",
49 | 49 | "\n",
|
50 | 50 | "$$\n",
|
51 | 51 | "\\begin{aligned}\n",
|
|
64 | 64 | "source": [
|
65 | 65 | "## `.conditional`\n",
|
66 | 66 | "\n",
|
67 |    | - "The conditional method implements the \"predictive\" distribution for function values that were not part of the original data set. This distribution is,\n",
   | 67 | + "The conditional method implements the predictive distribution for function values that were not part of the original data set. This distribution is,\n",
68 | 68 | "\n",
|
69 | 69 | "$$\n",
|
70 | 70 | "\\mathbf{f}_* \\mid \\mathbf{f} \\sim \\text{MvNormal} \\left(\n",
|
71 | 71 | " \\mathbf{m}_* + \\mathbf{K}_{*x}\\mathbf{K}_{xx}^{-1} \\mathbf{f} ,\\,\n",
|
72 | 72 | " \\mathbf{K}_{**} - \\mathbf{K}_{*x}\\mathbf{K}_{xx}^{-1}\\mathbf{K}_{x*} \\right)\n",
|
73 | 73 | "$$\n",
|
74 | 74 | "\n",
|
75 |    | - "Using the same `gp` object we defined above, this is specified as,\n",
   | 75 | + "Using the same `gp` object we defined above, we can construct a random variable with this\n",
   | 76 | + "distribution by,\n",
76 | 77 | "\n",
|
77 | 78 | "```python\n",
|
78 | 79 | "# vector of new X points we want to predict the function at\n",
|
79 | 80 | "X_star = np.linspace(0, 2, 100)[:, None]\n",
|
80 | 81 | "\n",
|
81 | 82 | "with latent_gp_model:\n",
|
82 | 83 | "    f_star = gp.conditional(\"f_star\", X_star)\n",
|
83 |    | - "```\n",
84 |    | - "\n",
85 |    | - "If `gp` is part of a sum of GP objects, it can be conditioned on different components of that sum using the optional keyword argument `given`,\n",
86 |    | - "\n",
87 |    | - "```python\n",
88 |    | - "    f_star_diff = gp.conditional(\"f_star_diff\", n_points=100, X_star, \n",
89 |    | - "                                 gp=a_different_gp)\n",
90 | 84 | "```"
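   |    | + "\n",
   |    | + "As a usage sketch (assuming a `trace` was already drawn with `pm.sample()` inside `latent_gp_model`), samples of `f_star` can then be generated with,\n",
   |    | + "\n",
   |    | + "```python\n",
   |    | + "with latent_gp_model:\n",
   |    | + "    # evaluate the conditional at X_star for each posterior draw\n",
   |    | + "    pred_samples = pm.sample_posterior_predictive(trace, var_names=[\"f_star\"])\n",
   |    | + "```\n",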
|
91 | 85 | ]
|
92 | 86 | },
|
|