
expected_loglikelihood design #73

@st--

Description


Open question from #42:

expected_loglik takes a Vector y, a Vector q_f, and a lik function that maps from a scalar f to a UnivariateDistribution; expected_loglik then does the broadcasting internally. Is that what we want?
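For concreteness, a minimal sketch of that current design, assuming Gauss-Hermite quadrature (expected_loglik_sketch and its arguments are illustrative, not the package API):

using Distributions
using FastGaussQuadrature: gausshermite

# `lik` maps a scalar f to a UnivariateDistribution; the loop over data points
# (the "broadcasting") happens inside, so the Gauss-Hermite nodes and weights
# are computed only once for the whole dataset.
function expected_loglik_sketch(n_points, lik, q_f::AbstractVector{<:Normal}, y::AbstractVector)
    xs, ws = gausshermite(n_points)
    # For each marginal q_f[i] = Normal(μ, σ), substituting f = √2σx + μ gives
    # E_{f∼q_f[i]}[log p(yᵢ|f)] ≈ (1/√π) Σⱼ wⱼ log p(yᵢ | √2σxⱼ + μ).
    return sum(
        sum(w * logpdf(lik(sqrt(2) * std(q) * x + mean(q)), yi) for (x, w) in zip(xs, ws)) / sqrt(π)
        for (q, yi) in zip(q_f, y)
    )
end

For example, expected_loglik_sketch(20, f -> Poisson(exp(f)), Normal.(ms, ss), y) would evaluate the expectation for a log-link Poisson likelihood.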

  • Would it be better (cleaner) to have expected_loglik handle scalars only? (This might cost some performance, e.g. from recomputing the Gauss-Hermite points and weights for each data point.)
  • Should we instead expect lik to take the full Vector fs and return e.g. a Product() of UnivariateDistributions? (This might make the expectation code more complicated; see the sketch after this list.)
  • How would we want to handle the heteroskedastic case, where we do want to include the correlations between the two outputs at each data point, but assume independence between different data points? (It is not clear how we would handle that on the AbstractGPs side, given that multi-output is all "output as input".)
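A minimal sketch of that second option, using a Bernoulli-logistic likelihood purely for illustration (lik_vec is a hypothetical name, not the package API):

using Distributions
using LogExpFunctions: logistic

# lik receives the full vector of latent values and returns a joint
# distribution over y; independence across data points is expressed by
# product_distribution.
lik_vec(fs::AbstractVector{<:Real}) = product_distribution(Bernoulli.(logistic.(fs)))

The expectation code would then have to work with a single joint logpdf(lik_vec(fs), y) rather than broadcasting scalar logpdfs over data points, which is where the extra complication would come in.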

We could extend it so that a FiniteGP (which is an AbstractMvNormal in AbstractGPs.jl) can be passed directly, as follows:

function expected_loglikelihood(
    quad, lik::AbstractFactorizingLikelihood, q_f::AbstractMvNormal, y::AbstractVector
)
    # Reduce to the factorized case: keep only the marginal mean and standard
    # deviation at each input (μ, σ avoid shadowing Statistics.mean/std).
    μ, σ = mean_and_std(q_f)
    return expected_loglikelihood(quad, lik, Normal.(μ, σ), y)
end
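This reduces to the factorized case by keeping only the marginal means and standard deviations of q_f; any cross-covariance in q_f is discarded, which is exactly what a factorizing likelihood assumes. A hypothetical call site (the quadrature and likelihood names are illustrative, not the package API):

# q(x) is a FiniteGP, i.e. an AbstractMvNormal over the latents at inputs x
expected_loglikelihood(gauss_hermite_quadrature, poisson_lik, q(x), y)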

We would still have to think about how to handle likelihoods that depend on multiple function evaluations per data point (e.g. a heteroskedastic likelihood with a vector-valued GP).
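As an illustration of the difficulty (hypothetical names, not an existing interface): a heteroskedastic Gaussian likelihood consumes two latent values per data point, so lik would map a length-2 vector to a UnivariateDistribution, and q_f would have to carry a 2×2 covariance at each input while staying independent across inputs:

using Distributions

# Hypothetical heteroskedastic likelihood: f[1] models the mean and f[2] the
# log standard deviation of the observation noise at one data point.
het_lik(f::AbstractVector{<:Real}) = Normal(f[1], exp(f[2]))

Per data point, q_f would then be a 2-dimensional MvNormal whose covariance couples the two latent processes; in AbstractGPs' "output as input" representation, those two latents live at different (input, output-index) pairs, which is what makes this awkward to express.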
