@@ -77,7 +77,7 @@ three predictor variables::
    ls = [2, 5]  # the lengthscales
    cov_func = pm.gp.cov.ExpQuad(input_dim=3, ls=ls, active_dims=[1, 2])

- Here the :code:`ls` parameter is two dimensional, allowing the second
+ Here the :code:`ls`, or lengthscale, parameter is two dimensional, allowing the second
and third dimension to have a different lengthscale. The reason we have to
specify :code:`input_dim`, the total number of columns of :code:`X`, and
:code:`active_dims`, which of those columns or dimensions the covariance
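
As a quick illustration of the covariance above, one could evaluate it on a
small input array (a minimal sketch; the random :code:`X` here is an assumed
placeholder, not data from the original example)::

    import numpy as np
    import pymc3 as pm

    X = np.random.randn(100, 3)  # 100 points with three predictor columns

    ls = [2, 5]  # lengthscales for columns 1 and 2
    cov_func = pm.gp.cov.ExpQuad(input_dim=3, ls=ls, active_dims=[1, 2])

    # Calling the covariance object on inputs returns a Theano tensor;
    # .eval() materializes the 100 x 100 covariance matrix.
    K = cov_func(X).eval()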
@@ -152,16 +152,7 @@ variable representing the function we are placing the prior over.
The second argument is the inputs to the function that the prior is over,
:code:`X`. The inputs are usually known and present in the data, but they can
also be PyMC3 random variables. If the inputs are a Theano tensor or a
- PyMC3 random variable, the :code:`n_points` argument is required.
-
- .. note::
-
-     The :code:`n_points` argument is sometimes required due to how Theano and
-     PyMC3 handle the shape information of distributions. It needs to be specified
-     when :code:`X` or :code:`Xnew` is a Theano tensor or a random variable.
-     For :code:`prior` or :code:`marginal_likelihood`, it is the number of rows
-     of the inputs, :code:`X`. For :code:`conditional`, it is the number of
-     rows in the new inputs, :code:`X_new`.
+ PyMC3 random variable, the :code:`shape` argument needs to be given.

Usually at this point, inference is performed on the model. The
:code:`conditional` method creates the conditional, or predictive,
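
As a sketch of this pattern (the data and lengthscale below are placeholders,
and it is assumed here that :code:`shape` is accepted as a keyword argument by
:code:`prior`)::

    import numpy as np
    import theano
    import pymc3 as pm

    X = theano.shared(np.linspace(0, 10, 50)[:, None])  # a Theano tensor

    with pm.Model() as model:
        cov_func = pm.gp.cov.ExpQuad(1, ls=2.0)
        gp = pm.gp.Latent(cov_func=cov_func)

        # Because X is a Theano tensor, the number of rows (50) cannot
        # be inferred automatically, so the shape is given explicitly.
        f = gp.prior("f", X=X, shape=50)

        # The predictive distribution at new inputs; X_new is a plain
        # array here, so no shape is needed.
        X_new = np.linspace(10, 12, 20)[:, None]
        f_star = gp.conditional("f_star", X_new)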
@@ -214,8 +205,8 @@ with the contribution due to :math:`f_1 + f_2` factored out, we get
    K_1^{**} - K_1^{*^T}(K_1 + K_2)^{-1}K_1^* \right) \,.

- So one can break down GP models into individual components to examine how each
- contribute to the data. For more information, check out `David Duvenaud's PhD
+ These equations show how to break down GP models into individual components to see how each
+ contributes to the data. For more information, check out `David Duvenaud's PhD
thesis <https://www.cs.toronto.edu/~duvenaud/thesis.pdf>`_.

The GP objects in PyMC3 keep track of these marginals automatically. The
@@ -225,8 +216,8 @@ other implementations. The first block fits the GP prior. We denote
:math:`f_1 + f_2` as just :math:`f` for brevity::

    with pm.Model() as model:
-       gp1 = pm.gp.Latent(mean_func1, cov_func1)
-       gp2 = pm.gp.Latent(mean_func2, cov_func2)
+       gp1 = pm.gp.Marginal(mean_func1, cov_func1)
+       gp2 = pm.gp.Marginal(mean_func2, cov_func2)

        # gp represents f1 + f2.
        gp = gp1 + gp2
@@ -236,33 +227,33 @@ other implementations. The first block fits the GP prior. We denote
        trace = pm.sample(1000)

- The second block produces conditional distributions, including :math:`f_2^*
- \mid f_1 + f_2`. Notice that extra arguments are required for conditionals of
- :math:`f1` and :math:`f2`, but not :math:`f`. This is because those arguments
- are cached when calling :code:`.marginal_likelihood`.
-
To construct the conditional distribution of :code:`gp1` or :code:`gp2`, we
also need to include the additional arguments, :code:`X`, :code:`y`, and
:code:`noise`::

    with model:
        # conditional distributions of f1 and f2
-       f1_star = gp1.conditional("f1_star", X_star, X=X, y=y, noise=noise)
-       f2_star = gp2.conditional("f2_star", X_star, X=X, y=y, noise=noise)
+       f1_star = gp1.conditional("f1_star", X_star,
+                                 given={"X": X, "y": y, "noise": noise, "gp": gp})
+       f2_star = gp2.conditional("f2_star", X_star,
+                                 given={"X": X, "y": y, "noise": noise, "gp": gp})

-       # conditional of f1 + f2, additional kwargs not required
+       # conditional of f1 + f2, `given` not required
        f_star = gp.conditional("f_star", X_star)

+ This second block produces the conditional distributions. Notice that extra
+ arguments are required for conditionals of :math:`f_1` and :math:`f_2`, but not
+ :math:`f`. This is because those arguments are cached when
+ :code:`.marginal_likelihood` was called on :code:`gp`.
+
.. note::
-     The additional arguments :code:`X`, :code:`y`, and :code:`noise` must be
-     provided as **keyword arguments**!
-
-     The :code:`gp` object keeps track of the inputs it used when :code:`marginal_likelihood`
-     or :code:`prior` was set. Since the marginal likelihoood method of :code:`gp1` or
-     :code:`gp2` weren't called, their conditionals need to be provided with the required
-     inputs. In the same fashion as the prior, :code:`f_star`, :code:`f1_star` and
-     :code:`f2_star` are random variables that can now be used like any other random
-     variable in PyMC3.
+     When constructing conditionals, the additional arguments :code:`X`, :code:`y`,
+     :code:`noise` and :code:`gp` must be provided as a dict called :code:`given`!
+
+     Since the marginal likelihood methods of :code:`gp1` and :code:`gp2` weren't called,
+     their conditionals need to be provided with the required inputs. In the same
+     fashion as the prior, :code:`f_star`, :code:`f1_star` and :code:`f2_star` are random
+     variables that can now be used like any other random variable in PyMC3.

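Putting the pieces from this section together, an end-to-end sketch of the
additive :code:`Marginal` workflow might look like the following (the toy
data, kernels, and hyperparameter values are placeholders)::

    import numpy as np
    import pymc3 as pm

    # Toy data: 100 noisy observations of a one-dimensional function.
    X = np.linspace(0, 10, 100)[:, None]
    y = np.sin(X).flatten() + 0.1 * np.random.randn(100)
    X_star = np.linspace(10, 12, 20)[:, None]  # prediction inputs
    noise = 0.1

    with pm.Model() as model:
        gp1 = pm.gp.Marginal(cov_func=pm.gp.cov.ExpQuad(1, ls=2.0))
        gp2 = pm.gp.Marginal(cov_func=pm.gp.cov.Matern52(1, ls=0.5))
        gp = gp1 + gp2

        # marginal_likelihood caches X, y and noise on gp, but not on
        # gp1 or gp2.
        y_ = gp.marginal_likelihood("y", X=X, y=y, noise=noise)

    with model:
        # gp1's conditional needs `given`; gp's does not.
        f1_star = gp1.conditional("f1_star", X_star,
                                  given={"X": X, "y": y, "noise": noise, "gp": gp})
        f_star = gp.conditional("f_star", X_star)
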
Check the notebooks for detailed demonstrations of the usage of GP functionality
in PyMC3.