Replying to myself. Is computing the confidence region as simple as sampling the posterior,

```python
pred_samples = dist_train.sample(torch.Size((256,))).exp()
pred_samples = pred_samples / pred_samples.sum(-2, keepdim=True)
```

and then taking the mean and quantiles of these softmax samples in the same way?

```python
probabilities = pred_samples.mean(axis=0)
lower_bound = pred_samples.quantile(0.025, axis=0)
upper_bound = pred_samples.quantile(1 - 0.025, axis=0)
```
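To make the idea above self-contained, here is a minimal runnable sketch of the Monte Carlo approach. The `dist_train` posterior from the thread is replaced by random stand-in latent samples, and the shapes (`256` samples, `3` classes, `50` test points) are illustrative assumptions, not values from the thread:

```python
import torch

# Hypothetical stand-in for dist_train.sample(torch.Size((256,))): draws from
# the latent GP posterior, with shape (num_samples, num_classes, num_points).
num_samples, num_classes, num_points = 256, 3, 50
latent_samples = torch.randn(num_samples, num_classes, num_points)

# Exponentiate and normalize over the class dimension (dim=-2), turning each
# latent sample into a valid probability vector at every test point.
pred_samples = latent_samples.exp()
pred_samples = pred_samples / pred_samples.sum(-2, keepdim=True)

# Monte Carlo estimates: mean class probabilities and a 95% credible band,
# taken pointwise across the sample dimension.
probabilities = pred_samples.mean(dim=0)           # (num_classes, num_points)
lower_bound = pred_samples.quantile(0.025, dim=0)  # (num_classes, num_points)
upper_bound = pred_samples.quantile(0.975, dim=0)  # (num_classes, num_points)

# Sanity check: each sample's class probabilities sum to 1 at every point.
assert torch.allclose(pred_samples.sum(-2), torch.ones(num_samples, num_points))
```

Because the quantiles are taken per class and per test point, `lower_bound` and `upper_bound` trace out exactly the kind of shaded band shown in Fig. 2 Right of the paper.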
This is related to #461, but I think I need more clarification.

In the simple GP regression model, the posterior predictive distribution, predictive mean, and confidence bounds are computed as

For the Dirichlet GP model, the likelihood is omitted, computing instead

I think that is how you go from Fig. 2 Left (`pred_means`) to Fig. 2 Right (`probabilities`) in the referenced paper, Dirichlet-based Gaussian Processes for Large-Scale Calibrated Classification, isn't it? However, I'm not clear on how to compute the uncertainty for the probabilities (to create the shaded areas in Fig. 2 Right), and I'd be grateful for any insights / directions.
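For contrast with the regression case mentioned above, here is a minimal sketch of the usual Gaussian confidence-bound pattern. The values are illustrative placeholders, not taken from the thread; in practice the mean and standard deviation would come from the model's posterior predictive at the test points:

```python
import torch

# Illustrative stand-ins for the posterior predictive mean and stddev at
# three test points (in GPyTorch these would come from the predictive
# distribution returned by evaluating the model on test inputs).
pred_mean = torch.tensor([0.0, 1.0, -0.5])
pred_stddev = torch.tensor([0.1, 0.2, 0.15])

# For a Gaussian predictive distribution, the ~95% confidence region is
# simply mean +/- 2 standard deviations -- no sampling needed.
lower = pred_mean - 2.0 * pred_stddev
upper = pred_mean + 2.0 * pred_stddev
```

This closed form is what makes the regression case easy; the Dirichlet GP's probabilities are a nonlinear (exp-and-normalize) transform of the latent Gaussian, which is why the sampling-based quantile approach is needed there instead.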