y = f_true + sigma_true * rng.standard_normal(X.shape[0])
```

Like the `gp.Latent` implementation, the `gp.LatentKron` implementation specifies a Kronecker structured GP regardless of context. **It can be used with any likelihood function, or can be used to model a variance or some other unobserved process**. The syntax follows that of `gp.Latent` exactly.
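
To make this concrete, here is a minimal, hypothetical sketch of a `LatentKron` prior over a two-dimensional grid, paired with a non-Gaussian likelihood. Everything in it -- grid sizes, lengthscales, and the count data -- is made up for illustration:

```python
import numpy as np
import pymc as pm

# Hypothetical 2-D grid: the full input space is the cartesian
# product of the two 1-D grids below (20 * 30 = 600 points)
X1 = np.linspace(0, 1, 20)[:, None]
X2 = np.linspace(0, 2, 30)[:, None]

with pm.Model() as sketch:
    # One covariance function per input dimension
    cov_x1 = pm.gp.cov.ExpQuad(1, ls=0.1)
    cov_x2 = pm.gp.cov.ExpQuad(1, ls=0.3)

    # Same syntax as gp.Latent, except cov_funcs is a list
    gp = pm.gp.LatentKron(cov_funcs=[cov_x1, cov_x2])
    f = gp.prior("f", Xs=[X1, X2])

    # f is an explicit random variable, so any likelihood can sit on top,
    # e.g. a Poisson for (made-up) count data
    y_counts = np.random.default_rng(0).poisson(3, size=20 * 30)
    pm.Poisson("y", mu=pm.math.exp(f), observed=y_counts)
```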
### Model

To compare with `MarginalLikelihood`, we use the same example as before, where the noise is normal but the GP itself is not marginalized out. Instead, it is sampled directly using NUTS. It is very important to note that `gp.LatentKron` does not require a Gaussian likelihood like `gp.MarginalKron` does; rather, any likelihood is admissible.

Here though, we'll need to be more informative with our priors, at least those on the GP hyperparameters. This is a general rule when using GPs: **use priors as informative as you can**, because sampling lengthscales and amplitudes is a challenging business, so you want to make the sampler's work as easy as possible.

Thankfully, here we have a lot of information about our amplitude and lengthscales -- we're the ones who created them ;) So we could just fix them, but we'll show how you could encode that prior knowledge in your own models with, e.g., Truncated Normal distributions:

```{code-cell} ipython3
with pm.Model() as model:
    # Set priors on the hyperparameters of the covariance.
    # The TruncatedNormal bounds and moments below are illustrative --
    # encode whatever prior knowledge you actually have.
    ls1 = pm.TruncatedNormal("ls1", lower=0.5, upper=3, mu=2, sigma=2)
    ls2 = pm.TruncatedNormal("ls2", lower=0.5, upper=3, mu=2, sigma=2)
    eta = pm.TruncatedNormal("eta", lower=0.5, upper=3, mu=2, sigma=2)
    # ... the covariance functions, LatentKron GP, likelihood, and
    # sampling follow the same pattern as the MarginalKron example
```
Let's first check that the sampler didn't run into any divergences:

```{code-cell} ipython3
idata.sample_stats.diverging.sum().data
```
### Posterior convergence
+++

The posterior distributions of the unknown lengthscale parameters, covariance scaling `eta`, and white noise `sigma` are shown below. The vertical lines mark the true values that were used to generate the original data set:
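
A plot like this can be produced with ArviZ along the following lines -- a sketch that assumes the `*_true` reference values were defined in the data-generation step and that the variable names match the model above:

```python
import arviz as az

# Posterior marginals, with the true generating values as reference lines
az.plot_posterior(
    idata,
    var_names=["ls1", "ls2", "eta", "sigma"],
    ref_val=[ls1_true, ls2_true, eta_true, sigma_true],  # assumed names
);
```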
We can see how challenging sampling can be in these situations. Here, all went well because we were careful with our choice of priors -- especially important in this simulated case, where the parameters don't have a real interpretation.

Below we show the original data set as colored circles, and the mean of the conditional samples as colored squares. The results closely follow those given by the `gp.MarginalKron` implementation.
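
With a `LatentKron` GP, those conditional samples would be drawn roughly as follows -- a sketch assuming the GP object is named `gp` and `Xnew` holds the prediction grid, as in the `MarginalKron` example:

```python
with model:
    # Distribution of the GP at new input points, conditioned on the data
    fnew = gp.conditional("fnew", Xnew=Xnew)
    # Sample fnew from the posterior
    ppc = pm.sample_posterior_predictive(idata, var_names=["fnew"])
```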
* Updated by [Raul-ing Average](https://github.com/CloudChaoszero), March 2021
* Updated by [Christopher Krapu](https://github.com/ckrapu), July 2021
* Updated to PyMC 4.x by [Danh Phan](https://github.com/danhphan), November 2022
* Updated with some new plots and priors by [Alex Andorra](https://github.com/AlexAndorra), April 2024