# logp(x | y, params) using laplace approx evaluated at x0
-hess = pytensor.gradient.hessian(
-    log_likelihood, marginalized_vv
-)  # TODO check how stan makes this quicker
+# This step is also expensive (but not as much as minimize). It could be made more
+# efficient by recycling the hessian from the minimizer step; however, that requires a
+# bespoke algorithm described in Rasmussen & Williams, since the general optimisation
+# scheme maximises logp(x | y, params) rather than logp(y | x, params), and thus the
+# hessian that comes out of methods like L-BFGS-B is in fact not the hessian of
+# logp(y | x, params).
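For context, here is a minimal, self-contained sketch of the Laplace step these comments describe, on a toy conjugate model: take the hessian of the log joint with respect to the marginalized variable at the conditional mode `x0`, use it to evaluate the Gaussian approximation of logp(x | y, params) at `x0`, and combine that with the log joint to get the approximate marginal. The names (`log_joint`, `x0`) and the toy model are illustrative assumptions, not the PR's actual code or API.

```python
import numpy as np
import pytensor
import pytensor.tensor as pt
from pytensor.tensor.nlinalg import det

# Toy conjugate model so the Laplace step can be checked by hand:
#   x_i ~ N(mu, sigma),  y_i ~ N(x_i, 1)
# All names here are illustrative, not the PR's actual variables.
x = pt.vector("x")
mu, sigma = 1.0, 2.0
y = np.array([0.5, 1.5, 2.5])

log_prior = pt.sum(-0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi))
log_likelihood = pt.sum(-0.5 * (y - x) ** 2 - 0.5 * np.log(2 * np.pi))
log_joint = log_prior + log_likelihood  # logp(y, x | params)

# Hessian of the log joint w.r.t. the marginalized variable, evaluated at x0 below.
# This is the expensive step the comments discuss; it is a separate computation from
# the approximate curvature an optimizer like L-BFGS-B builds for its own objective.
hess = pytensor.gradient.hessian(log_joint, x)

# Laplace approximation around the mode x0:
#   logp(x0 | y, params) ≈ -(d/2) log(2π) + (1/2) log det(-H)   (Gaussian evaluated at its mean)
#   logp(y | params)     ≈ logp(y, x0 | params) - logp(x0 | y, params)
d = x.shape[0]
log_q_x0 = -0.5 * d * np.log(2 * np.pi) + 0.5 * pt.log(det(-hess))
laplace_marginal = log_joint - log_q_x0

fn = pytensor.function([x], laplace_marginal)

# For this conjugate toy model the conditional mode is available in closed form;
# in general it would come from a minimize/optimize step.
x0 = (mu / sigma**2 + y) / (1 / sigma**2 + 1)
print(fn(x0))  # equals sum_i logN(y_i; mu, sqrt(sigma**2 + 1)) exactly for this Gaussian model
```

Because the hessian here is taken of the log joint in `x` at `x0`, it has to be recomputed after the optimization; it is not the approximate inverse hessian the optimizer accumulates internally, which is the point the new comments make.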