examples/gaussian_processes/HSGP-Basic.myst.md: 134 additions & 6 deletions
@@ -233,7 +233,7 @@ At the end of this section, we'll give the rules of thumb given in [Ruitort-Mayo
+++

-Speaking non-technically, the HSGP approximates the GP prior as a linear combination of sinusoids. The coefficients of the linear combination are IID normal random variables whose standard deviation depends on GP hyperparameters (which are an amplitude and lengthscale for the Matern family).
+Speaking non-technically, the HSGP approximates the GP prior as a linear combination of sinusoids. The coefficients of the linear combination are IID normal random variables whose standard deviation depends on GP hyperparameters (which are an amplitude and lengthscale for the Matern family). Users who are interested in further introductory details should see [this](https://juanitorduz.github.io/hsgp_intro/) fantastic blog post by Juan Orduz.

To see this, we'll make a few plots of the $m=3$ and $m=5$ basis vectors and pay careful attention to how they behave at the boundaries of the domain. Note that we have to center the `x` data first, and then choose `L` in relation to the centered data. It's worth mentioning here that the basis vectors we're plotting do not depend on either the choice of the covariance kernel or on any unknown parameters the covariance function has.
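To make the construction concrete, here is a minimal NumPy sketch of the approximation on centered 1-D data. It is an illustration under assumed values (the `m`, `L`, and `ExpQuad` lengthscale below are ours, not the notebook's), but it shows how the basis is independent of the kernel while the coefficient scales are not:

```python
import numpy as np

# Sketch of the HSGP construction on centered 1-D data.
# The sinusoid basis depends only on m and L, not on the kernel; the kernel
# (here ExpQuad, as an assumed example) enters only through the power
# spectral density that scales the IID normal coefficients.
x = np.linspace(-1, 1, 200)  # centered inputs
m, L = 10, 1.2               # number of basis vectors and boundary (assumed)

j = np.arange(1, m + 1)
sqrt_eigvals = j * np.pi / (2 * L)                              # sqrt(lambda_j)
phi = np.sqrt(1 / L) * np.sin(sqrt_eigvals * (x[:, None] + L))  # (n, m) basis

ls = 0.3  # assumed lengthscale
psd = np.sqrt(2 * np.pi) * ls * np.exp(-0.5 * (ls * sqrt_eigvals) ** 2)

# IID normal coefficients scaled by sqrt(PSD) give one approximate prior draw.
rng = np.random.default_rng(0)
f = phi @ (rng.standard_normal(m) * np.sqrt(psd))
```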
@@ -309,15 +309,15 @@ In practice, you'll need to infer the lengthscale from the data, so the HSGP nee
[Riutort-Mayol et al.](https://arxiv.org/abs/2004.11408) give some handy heuristics for the range of lengthscales that are accurately reproduced for given values of $m$ and $c$. Below, we provide a function that uses their heuristics to recommend minimum $m$ and $c$ values. Note that **these recommendations are based on a one-dimensional GP**.

-For example, if you're using the `Matern52` covariance and your data ranges from $x=-5$ to $x=95$, and the bulk of your lengthscale prior is between $\ell=1$ and $\ell=50$, then the smallest recommended values are $m=543$ and $c=3.7$, as you can see below:
+For example, if you're using the `Matern52` covariance and your data ranges from $x=-5$ to $x=95$, and the bulk of your lengthscale prior is between $\ell=1$ and $\ell=50$, then the smallest recommended values are $m=543$ and $c=4.1$, as you can see below:
print("Recommended smallest number of basis vectors for Matern 5/2 (m):", m52_m)
+print("Recommended smallest scaling factor for Matern 5/2 (c):", np.round(m52_c, 1))
```
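As a rough illustration of where numbers like these come from, the following sketch implements the Matern 5/2 rule of thumb. The constants 4.1 and 2.65 are from Riutort-Mayol et al.; the helper name and rounding choices are illustrative assumptions and may differ from the notebook's actual function:

```python
import numpy as np

def matern52_hsgp_rule_of_thumb(x_min, x_max, ell_min, ell_max):
    """Sketch of the Matern 5/2 heuristic from Riutort-Mayol et al. (2020).

    Constants follow the paper; rounding choices here are illustrative.
    """
    S = (x_max - x_min) / 2          # half-range of the centered data
    c = max(4.1 * ell_max / S, 1.2)  # boundary factor set by the largest lengthscale
    m = int(np.floor(2.65 * c * S / ell_min))  # basis size set by the smallest one
    return m, c

m52_m, m52_c = matern52_hsgp_rule_of_thumb(-5, 95, 1, 50)
print(m52_m, np.round(m52_c, 1))  # -> 543 4.1 for the example above
```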
### The HSGP approximate Gram matrix
@@ -431,6 +431,134 @@ Be aware that it's also possible to encounter scenarios where a low fidelity HSG
+++

+## Avoiding underflow issues
+
+As noted above, the diagonal matrix $\Delta$ used in the calculation of the approximate Gram matrix contains information on the power spectral density, $\mathcal{S}$, of a given kernel. Thus, for the Gram matrix to be defined, we need $\mathcal{S} > 0$. Consequently, when picking the HSGP hyperparameters $m$ and $L$, it is important to check that $\mathcal{S} > 0$ for the suggested $m$ and $L$ values. The code in the next few cells compares the suitability of the suggested hyperparameters $m$ and $L$ for `matern52` with those for `ExpQuad`, for data spanning $x=-5$ to $x=95$ and a lengthscale prior between $\ell=1$ and $\ell=50$. As we shall see, the suggested hyperparameters for `ExpQuad` are not suitable for $\ell=50$.
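Since $\Delta = \operatorname{diag}\bigl(\mathcal{S}(\sqrt{\lambda_1}), \ldots, \mathcal{S}(\sqrt{\lambda_m})\bigr)$ with $\sqrt{\lambda_j} = j\pi / (2L)$, checking $\mathcal{S} > 0$ amounts to evaluating the power spectral density at those frequencies. Below is a minimal sketch of such a check, not the notebook's exact cells: the 1-D spectral density formulas are the standard ones, and the `ExpQuad` values $m=280$ and $c=3.2$ are what the same heuristics suggest under our assumptions.

```python
import numpy as np

# Sketch: evaluate each kernel's 1-D power spectral density at the HSGP
# frequencies sqrt(lambda_j) = j*pi/(2L), with L = c*S, and count underflows.
S_half = 50.0  # half-range of x in [-5, 95] after centering
ell = 50.0     # the problematic long lengthscale

def hsgp_freqs(m, c, S=S_half):
    return np.arange(1, m + 1) * np.pi / (2 * c * S)

def psd_expquad(omega, ell):   # S(w) for the squared exponential kernel
    return np.sqrt(2 * np.pi) * ell * np.exp(-0.5 * (ell * omega) ** 2)

def psd_matern52(omega, ell):  # S(w) for the Matern 5/2 kernel
    return (16 / 3) * 5**2.5 / ell**5 * (5 / ell**2 + omega**2) ** -3

psd_eq = psd_expquad(hsgp_freqs(m=280, c=3.2), ell)   # assumed ExpQuad m, c
psd_m52 = psd_matern52(hsgp_freqs(m=543, c=4.1), ell)
print(f"ExpQuad:  {(psd_eq == 0).sum()} of {psd_eq.size} frequencies underflow")
print(f"Matern52: {(psd_m52 == 0).sum()} of {psd_m52.size} frequencies underflow")
```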
+We see that not all values of $\mathcal{S}$ stay positive for the squared exponential kernel when $\ell=50$; past a certain frequency they underflow to zero.
+
+To see why, the covariance functions of the kernels considered are plotted below, along with their power spectral densities in log space. The covariance plot shows that for a given $\ell$, the tails of `matern52` are heavier than those of `ExpQuad`, while a higher $\ell$ for a given kernel type gives rise to higher covariance at larger distances. The power spectral density varies inversely with the flatness of the covariance function: essentially, the flatter the covariance function, the narrower the bandwidth and the lower the power spectral density at higher values of $\omega$. As a result, we see that for `ExpQuad` with $\ell = 50$, $\mathcal{S}\left(\omega\right)$ rapidly decreases towards $0$ before the domain of $\omega$ is exhausted, and hence we reach values at which it underflows to $0$.
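A rough sketch of plots along these lines is given below; the axis ranges and styling are guesses, and the kernel and spectral density formulas are the standard 1-D expressions:

```python
import matplotlib.pyplot as plt
import numpy as np

# Sketch of the comparison plots described above (ranges/styles are guesses).
k_eq = lambda r, l: np.exp(-0.5 * (r / l) ** 2)
k_m52 = lambda r, l: (1 + np.sqrt(5) * r / l + 5 * r**2 / (3 * l**2)) * np.exp(-np.sqrt(5) * r / l)
s_eq = lambda w, l: np.sqrt(2 * np.pi) * l * np.exp(-0.5 * (l * w) ** 2)
s_m52 = lambda w, l: (16 / 3) * 5**2.5 / l**5 * (5 / l**2 + w**2) ** -3

r, w = np.linspace(0, 150, 400), np.linspace(1e-3, 3, 400)
fig, (ax_k, ax_s) = plt.subplots(1, 2, figsize=(10, 4))
with np.errstate(divide="ignore"):  # log(0) = -inf where ExpQuad underflows
    for ell in (1.0, 50.0):
        ax_k.plot(r, k_eq(r, ell), label=f"ExpQuad, ls={ell:g}")
        ax_k.plot(r, k_m52(r, ell), "--", label=f"Matern52, ls={ell:g}")
        ax_s.plot(w, np.log(s_eq(w, ell)), label=f"ExpQuad, ls={ell:g}")
        ax_s.plot(w, np.log(s_m52(w, ell)), "--", label=f"Matern52, ls={ell:g}")
ax_k.set(xlabel="r", ylabel="k(r)", title="Covariance functions")
ax_s.set(xlabel="omega", ylabel="log S(omega)", title="Log power spectral densities")
ax_s.legend(fontsize=8)
plt.show()
```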
+These underflow issues can arise when using a broad prior on $\ell$: you need a large $m$ to cover small lengthscales, but the resulting high frequencies can cause underflow in $\mathcal{S}$ when $\ell$ is large. As the graphs above suggest, one can **consider a different kernel with heavier tails, such as `matern52` or `matern32`**.
+
+Alternatively, if you are certain you need a specific kernel, **you can use the linear form of HSGPs (see below) with a boolean mask**. In doing so, the sinusoids with vanishingly small coefficients in the linear combination are effectively screened out. E.g.:
+
+```python
+# create a mask that screens out frequencies with underflowing power spectral densities.
+mask = sqrt_psd > 0
+# now apply the mask over the m dimension & calculate the HSGP function.
+f = pm.Deterministic("f", phi[:, mask] @ (basis_coeffs[mask] * sqrt_psd[mask]))
+# set up your observation model
+...
+```
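For context, a hedged, self-contained sketch of how the mask might slot into a full linearized model follows. The data, priors, and `m`, `c` values are placeholders, and note that `prior_linearized`'s input conventions (pre-centered `Xs` vs. raw `X`) differ across PyMC versions:

```python
import numpy as np
import pymc as pm

# Placeholder data and HSGP settings (assumed, not the notebook's).
x = np.linspace(-5, 95, 200)
y = np.sin(x / 10.0) + np.random.default_rng(1).normal(0, 0.3, size=200)
m, c = 543, 4.1

with pm.Model():
    ell = pm.InverseGamma("ell", alpha=3, beta=30)  # assumed lengthscale prior
    eta = pm.HalfNormal("eta", sigma=1)             # assumed amplitude prior
    cov_func = eta**2 * pm.gp.cov.Matern52(1, ls=ell)
    gp = pm.gp.HSGP(m=[m], c=c, cov_func=cov_func)

    # Basis and sqrt power spectral density from the linearized form.
    # Older PyMC versions expect pre-centered inputs (Xs = x - x.mean()).
    phi, sqrt_psd = gp.prior_linearized(x[:, None])
    basis_coeffs = pm.Normal("basis_coeffs", size=m)

    # Screen out frequencies whose sqrt PSD has underflowed to zero.
    mask = sqrt_psd > 0
    f = pm.Deterministic("f", phi[:, mask] @ (basis_coeffs[mask] * sqrt_psd[mask]))

    pm.Normal("y_obs", mu=f, sigma=0.3, observed=y)
```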
+
++++

## Example 2: Working with HSGPs as a parametric, linear model
+++
@@ -684,7 +812,7 @@ Sampling diagnostics all look good, and we can see that the underlying GP was in
* Created by [Bill Engels](https://github.com/bwengals) and [Alexandre Andorra](https://github.com/AlexAndorra) in 2024 ([pymc-examples#647](https://github.com/pymc-devs/pymc-examples/pull/647))
* Use `pz.maxent` instead of `pm.find_constrained_prior`, and add random seed. [Osvaldo Martin](https://aloctavodia.github.io/). August 2024
-* Use `pm.gp.util.stabilize` in `simulate_1d`. Use `pz.maxent` rather than `pm.find_constrained_prior` in linearized HSGP model. [Alexander Armstrong](https://github.com/Armatron44), July 2025.
+* Use `pm.gp.util.stabilize` in `simulate_1d`. Use `pz.maxent` rather than `pm.find_constrained_prior` in linearized HSGP model. Added comparison between `matern52` and `ExpQuad` power spectral densities. [Alexander Armstrong](https://github.com/Armatron44), July-August 2025.