Replies: 9 comments 1 reply
-
There currently is no pre-made likelihood for group-level estimation of noise as in the example, but that shouldn't be too hard to set up. In BoTorch we have a model that allows you to estimate a full heteroskedastic out-of-sample noise model: https://github.com/pytorch/botorch/blob/master/botorch/models/gp_regression.py#L280
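For concreteness, a rough sketch of how that model might be set up, assuming the class at the linked line is `HeteroskedasticSingleTaskGP` (the exact API may have shifted across BoTorch versions):

```python
import torch
from botorch.models import HeteroskedasticSingleTaskGP
from botorch.fit import fit_gpytorch_model
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(20, 1)
train_Yvar = (0.2 * train_X) ** 2 + 1e-4  # observed, input-dependent noise variances
train_Y = torch.sin(6 * train_X) + train_Yvar.sqrt() * torch.randn(20, 1)

# Alongside the GP on the outcomes, an internal noise GP is fit to the
# observed variances, so noise can later be predicted out of sample.
model = HeteroskedasticSingleTaskGP(train_X, train_Y, train_Yvar)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_model(mll)
```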
-
@Balandat Thanks. I don't quite understand what "estimate a full heteroskedastic out-of-sample noise model" means. Does it mean that I don't have to give any test noise to a trained model at test time, just like the GPflow case 2? However, in the FixedNoiseGaussianLikelihood example, I find that the test noise has to be given. Maybe you could add a more detailed example in GPyTorch of how to use this kind of model, i.e. an example for the model in BoTorch.
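For reference, the pattern being described, sketched with made-up data (the model definition is standard exact-GP boilerplate): with FixedNoiseGaussianLikelihood the per-point noise is fixed, so a noise vector must also be supplied for the test inputs.

```python
import torch
import gpytorch

train_x = torch.linspace(0, 1, 50)
train_noise = 0.01 + 0.1 * train_x  # known per-point noise variances
train_y = torch.sin(6 * train_x) + train_noise.sqrt() * torch.randn(50)

likelihood = gpytorch.likelihoods.FixedNoiseGaussianLikelihood(
    noise=train_noise, learn_additional_noise=False
)

class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

model = ExactGPModel(train_x, train_y, likelihood)
# ... training loop omitted ...
model.eval()
likelihood.eval()

# A test-noise vector has to be passed explicitly at prediction time:
test_x = torch.linspace(0, 1, 20)
test_noise = 0.01 + 0.1 * test_x
with torch.no_grad():
    observed_pred = likelihood(model(test_x), noise=test_noise)
```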
-
Yes, it means that at test time you don't need to provide test noise; the noise only needs to be given at training time.
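Continuing the earlier BoTorch sketch (same hypothetical `model`), test-time prediction would then look like:

```python
test_X = torch.rand(5, 1)

# Noise-free latent posterior:
f_post = model.posterior(test_X)

# Posterior including observation noise -- note that no test noise is
# passed in; the fitted noise model predicts it at the new points:
y_post = model.posterior(test_X, observation_noise=True)
mean, var = y_post.mean, y_post.variance
```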
-
Hi, 3 years later, but I would like to leave this GPflow example as a reference: https://gpflow.github.io/GPflow/2.9.0/notebooks/advanced/heteroskedastic.html It basically uses two latent GPs to model the mean and the variance, and it introduces a heteroskedastic likelihood. It has the advantage of not requiring any noise estimates during training, as the heterogeneous noise is also inferred during training (with SVI). I am unsure whether this approach is similar to BoTorch's model linked above. I am trying to find an easy way to reproduce that GPflow notebook in GPyTorch, but no luck so far reproducing the notebook's likelihood.
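The core of that notebook condenses to roughly the following (paraphrased from the linked page; details may drift between GPflow versions):

```python
import numpy as np
import gpflow
import tensorflow_probability as tfp

# Likelihood: y ~ Normal(loc=f1(x), scale=exp(f2(x)))
likelihood = gpflow.likelihoods.HeteroskedasticTFPConditional(
    distribution_class=tfp.distributions.Normal,
    scale_transform=tfp.bijectors.Exp(),
)

# One independent kernel per latent GP (mean and log noise scale)
kernel = gpflow.kernels.SeparateIndependent(
    [gpflow.kernels.SquaredExponential(), gpflow.kernels.SquaredExponential()]
)

Z = np.linspace(0, 1, 20)[:, None]
inducing_variable = gpflow.inducing_variables.SeparateIndependentInducingVariables(
    [gpflow.inducing_variables.InducingPoints(Z),
     gpflow.inducing_variables.InducingPoints(Z.copy())]
)

model = gpflow.models.SVGP(
    kernel=kernel,
    likelihood=likelihood,
    inducing_variable=inducing_variable,
    num_latent_gps=likelihood.latent_dim,  # = 2
)
# The notebook then maximizes model.elbo((X, Y)) with natural gradients + Adam.
```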
-
Haven't dug into the details here, but it does seem that, while it's similar in spirit to the BoTorch model above, the two approaches differ in the details. cc @Ryan-Rhys
-
Thank you @Balandat for pointing it out. Indeed, it uses two separate latent GPs (defined by two independent kernels for their covariances) that get trained jointly using SVI. I believe that GPyTorch currently lacks some of the classes needed to reproduce a similar model out of the box. That's all I could decipher, as I am not used to GPflow (nor TF, for that matter).
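One possible way to approximate that construction in GPyTorch, sketched here with illustrative (not pre-made) classes: a two-task variational GP for the mean and log noise scale, plus a custom likelihood. This relies on the base Likelihood class falling back to a Monte Carlo estimate of the expected log likelihood, and has not been verified against the notebook:

```python
import torch
import gpytorch

class TwoLatentGP(gpytorch.models.ApproximateGP):
    """Two independent latent GPs exposed as a 2-task variational GP."""

    def __init__(self, inducing_points):  # shape: (2, num_inducing, d)
        batch = torch.Size([2])
        var_dist = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(-2), batch_shape=batch
        )
        strategy = gpytorch.variational.IndependentMultitaskVariationalStrategy(
            gpytorch.variational.VariationalStrategy(
                self, inducing_points, var_dist, learn_inducing_locations=True
            ),
            num_tasks=2,
        )
        super().__init__(strategy)
        self.mean_module = gpytorch.means.ConstantMean(batch_shape=batch)
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(batch_shape=batch), batch_shape=batch
        )

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

class HeteroskedasticGaussianLikelihood(gpytorch.likelihoods.Likelihood):
    """Task 0 is the mean, task 1 the log noise scale (as in the notebook)."""

    def forward(self, function_samples, **kwargs):
        return torch.distributions.Normal(
            loc=function_samples[..., 0],
            scale=function_samples[..., 1].exp(),
        )

model = TwoLatentGP(torch.rand(2, 20, 1))
likelihood = HeteroskedasticGaussianLikelihood()
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=100)
# Training: loss = -mll(model(train_x), train_y), optimized with Adam.
```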
-
-
Neat! Thanks for sharing!
-
@Balandat you're welcome! Another way to deal with heteroskedastic noise could be to use the predictive log likelihood objective function, as shown in https://docs.gpytorch.ai/en/stable/examples/04_Variational_and_Approximate_GPs/Approximate_GP_Objective_Functions.html. I guess that this issue can be closed now.
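That objective is a drop-in replacement for the ELBO; a minimal sketch, assuming `model`, `likelihood`, and training data as in the linked tutorial:

```python
import gpytorch

mll = gpytorch.mlls.PredictiveLogLikelihood(
    likelihood, model, num_data=train_y.size(0)
)
# Used exactly like VariationalELBO in the training loop:
# loss = -mll(model(train_x), train_y)
```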
-
Is there any model for varying output noise (heteroskedastic noise), like this example in GPflow? https://gpflow.readthedocs.io/en/develop/notebooks/advanced/varying_noise.html