docs/src/api.md (6 additions, 0 deletions)
@@ -114,6 +114,12 @@ See the [docs of AdvancedVI.jl](https://turinglang.org/AdvancedVI.jl/stable/) fo
|`q_locationscale`|[`Turing.Variational.q_locationscale`](@ref)| Find a numerically non-degenerate initialization for a location-scale variational family |
|`q_meanfield_gaussian`|[`Turing.Variational.q_meanfield_gaussian`](@ref)| Find a numerically non-degenerate initialization for a mean-field Gaussian family |
|`q_fullrank_gaussian`|[`Turing.Variational.q_fullrank_gaussian`](@ref)| Find a numerically non-degenerate initialization for a full-rank Gaussian family |
+|`KLMinRepGradDescent`|[`Turing.Variational.KLMinRepGradDescent`](@ref)| KL divergence minimization via stochastic gradient descent with the reparameterization gradient |
+|`KLMinRepGradProxDescent`|[`Turing.Variational.KLMinRepGradProxDescent`](@ref)| KL divergence minimization via stochastic proximal gradient descent with the reparameterization gradient over location-scale variational families |
+|`KLMinScoreGradDescent`|[`Turing.Variational.KLMinScoreGradDescent`](@ref)| KL divergence minimization via stochastic gradient descent with the score gradient |
+|`KLMinWassFwdBwd`|[`Turing.Variational.KLMinWassFwdBwd`](@ref)| KL divergence minimization via Wasserstein proximal gradient descent |
+|`KLMinNaturalGradDescent`|[`Turing.Variational.KLMinNaturalGradDescent`](@ref)| KL divergence minimization via natural gradient descent |
+|`KLMinSqrtNaturalGradDescent`|[`Turing.Variational.KLMinSqrtNaturalGradDescent`](@ref)| KL divergence minimization via natural gradient descent in the square-root parameterization |
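
The added rows document KL minimization algorithms that pair with the initialization helpers in the rows above them. Below is a minimal sketch of how the two might be combined; the `vi` call signature, the `algorithm` keyword, and the `KLMinRepGradDescent` constructor argument are assumptions for illustration only and are not confirmed by this diff (see the linked AdvancedVI.jl docs for the authoritative interface).

```julia
# Sketch only: the `vi` signature, `algorithm` keyword, and
# `KLMinRepGradDescent(adtype)` constructor below are assumptions.
using Turing
using Turing.Variational: q_meanfield_gaussian, KLMinRepGradDescent
using ADTypes: AutoForwardDiff  # assumed AD backend argument

@model function demo(x)
    μ ~ Normal(0, 1)
    x ~ Normal(μ, 1)
end

model = demo(1.5)

# Numerically non-degenerate initialization for a mean-field Gaussian family.
q0 = q_meanfield_gaussian(model)

# Minimize the KL divergence via stochastic gradient descent with the
# reparameterization gradient (assumed keyword and return value).
q = vi(model, q0, 1_000; algorithm=KLMinRepGradDescent(AutoForwardDiff()))
```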