# logp(x | y, params) using laplace approx evaluated at x0
# This step is also expensive (but not as much as minimize). It could be made more efficient by recycling the
# hessian from the minimizer step, but that requires a bespoke algorithm described in Rasmussen & Williams,
# since the general optimisation scheme maximises logp(x | y, params) rather than logp(y | x, params), and the
# hessian that comes out of methods like L-BFGS-B is therefore not the hessian of logp(y | x, params).
hess = pytensor.gradient.hessian(log_likelihood, x)
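# --- Illustrative sketch (not part of this file) ---------------------------------
# A minimal, self-contained example of the scheme the comments above describe, under
# assumed names: a toy Poisson likelihood, a prior precision Q, and scipy.optimize for
# the mode-finding step. The point it illustrates: x* is found by maximising
# logp(x | y, params), so the curvature L-BFGS-B tracks belongs to the posterior, and
# the hessian of logp(y | x, params) has to be recomputed separately.
import numpy as np
import pytensor
import pytensor.tensor as pt
from scipy import optimize

rng = np.random.default_rng(0)
n = 5
y_obs = rng.poisson(lam=3.0, size=n).astype("float64")  # toy observations (assumption)
Q = np.eye(n)                                            # assumed prior precision, x ~ N(0, Q^{-1})

x = pt.dvector("x")
log_likelihood = pt.sum(x * y_obs - pt.exp(x))                  # Poisson logp(y | x) up to a constant
log_posterior = log_likelihood - 0.5 * pt.dot(x, pt.dot(Q, x))  # logp(x | y, params) up to a constant

neg_logpost = pytensor.function([x], -log_posterior)
neg_logpost_grad = pytensor.function([x], -pytensor.gradient.grad(log_posterior, x))
res = optimize.minimize(
    lambda v: float(neg_logpost(v)), np.zeros(n),
    jac=lambda v: np.asarray(neg_logpost_grad(v)), method="L-BFGS-B",
)
x_star = res.x

# Hessian of the likelihood term only, evaluated at x*: this is the f''(x*) that
# enters the Laplace precision Q - f''(x*), and it is not what L-BFGS-B approximates.
hess_fn = pytensor.function([x], pytensor.gradient.hessian(log_likelihood, x))
f_second = hess_fn(x_star)
# ----------------------------------------------------------------------------------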
- # Evaluate logp of Laplace approx N(x*, Q - f"(x*)) at some point x
+ # Evaluate logp of Laplace approx of logp(x | y, params) at some point x
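# --- Illustrative sketch (not part of this file) ---------------------------------
# A hedged example of the evaluation the comment above refers to: the log-density of
# the Laplace approximation to p(x | y, params), taken as a Gaussian with mean x* and
# precision Q - f''(x*) (reading the earlier N(x*, Q - f"(x*)) comment as a precision
# parameterisation). x_star, Q and f_second are the assumed names from the sketch
# above, not this PR's actual variables.
import numpy as np

def laplace_logp(x_val, x_star, Q, f_second):
    """Log-density of N(mean=x_star, precision=Q - f_second) evaluated at x_val."""
    tau = Q - f_second                          # precision of the Laplace approximation
    delta = x_val - x_star
    _, logdet_tau = np.linalg.slogdet(tau)      # tau should be positive definite here
    d = x_star.shape[0]
    return 0.5 * (logdet_tau - d * np.log(2.0 * np.pi) - delta @ tau @ delta)

# e.g. at the mode itself the quadratic term vanishes:
# laplace_logp(x_star, x_star, Q, f_second)
# ----------------------------------------------------------------------------------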