
Conversation


@AoifeHughes AoifeHughes self-assigned this Aug 4, 2025

github-actions bot commented Aug 4, 2025

Preview the changes: https://turinglang.org/docs/pr-previews/629
Please avoid using the search feature and navigation bar in PR previews!

@AoifeHughes AoifeHughes requested a review from mhauru August 7, 2025 07:43
@mhauru (Member) left a comment


I can comment on the clarity of the explanations, but not on some of the content, most importantly the Summary section, because I know nothing about these samplers. For example, I have no idea about the hyperparameter recommendations. @yebai, who would be a good reviewer for that?

# Define a simple Gaussian model
@model function gaussian_model(x)
μ ~ Normal(0, 10)
σ ~ truncated(Normal(0, 5), 0, Inf)

Suggested change:
- σ ~ truncated(Normal(0, 5), 0, Inf)
+ σ ~ truncated(Normal(0, 5); lower=0)

The Inf version causes trouble with AD; see JuliaStats/Distributions.jl#1910. We are trying to guide users towards the lower and upper keyword arguments.


```{julia}
#| output: false
setprogress!(false)
```

This needs to be moved up, or replaced with progress=false in the sample call. Currently the above cell still produces loads of lines of progress output that don't render nicely: https://turinglang.org/docs/pr-previews/629/usage/stochastic-gradient-samplers/
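For reference, the per-call alternative might look roughly like this (a sketch, assuming the model and SGLD sampler from the tutorial; the step size and sample count are illustrative placeholders, not the tutorial's actual values):

```julia
using Turing

# Suppress the progress meter for this call only, rather than globally via setprogress!(false).
# The step size and sample count below are placeholders.
chain_sgld = sample(model, SGLD(; stepsize=PolynomialStepsize(0.01)), 1000; progress=false)
```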

```{julia}
plot(chain_sgld)
```

The results on https://turinglang.org/docs/pr-previews/629/usage/stochastic-gradient-samplers/ don't look convincing to me; it looks like sampling hasn't converged. Can we increase the sample counts without it taking too long? Or it could be a problem with some hyperparameters, I wouldn't know.

```{julia}
plot(chain_sghmc)
```

Same thing for these results.

```{julia}
summarystats(chain_hmc)
```

Compare the trace plots:

Could we comment on the conclusions from this? What do we learn from the comparison? Also, the first trace plot looks weird.


### When to Use Stochastic Gradient Samplers

- **Large datasets**: When full gradient computation is prohibitively expensive

Isn't this in contradiction with the statement below that, with Turing, full gradients are computed anyway and noise is added?

```{julia}
Pkg.instantiate();
```

Turing.jl provides stochastic gradient-based MCMC samplers that are designed for large-scale datasets where computing full gradients is computationally expensive. The two main stochastic gradient samplers are **Stochastic Gradient Langevin Dynamics (SGLD)** and **Stochastic Gradient Hamiltonian Monte Carlo (SGHMC)**.

The first sentence seems to be immediately undermined by the next paragraph, which says that you can't actually use them for this purpose. It might be better to lead with what they are currently useful for and then comment on possible future uses if we ever get to implementing these better, rather than the other way around.

@@ -0,0 +1,219 @@
---

This is a general comment, not related to the line it's attached to: the navigation bar on the left needs a new link to this page; I think there's currently no way to navigate to it without knowing the URL.

```{julia}
model = gaussian_model(data)
```

SGLD requires very small step sizes to ensure stability. We use a `PolynomialStepsize` that decreases over time:

Do we have other options for stepsize in Turing, other than PolynomialStepsize?
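For context, a minimal sketch of how the decaying schedule plugs into SGLD, as I understand it; the constants are the ones suggested later in this thread, not tuned recommendations:

```julia
using Turing

# PolynomialStepsize(a, b, γ) yields a step size of a * (b + t)^(-γ) at iteration t.
# 0.01 and 100 are the values suggested later in this thread; γ keeps its default.
stepsize = PolynomialStepsize(0.01, 100)
sgld = SGLD(; stepsize=stepsize)
```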


## Automatic Differentiation Backends

Both samplers support different AD backends:

This could link to the AD page in our docs for more information.
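For illustration, selecting a backend might look like this (a sketch; it assumes SGLD and SGHMC take the same adtype keyword as Turing's other gradient-based samplers, and the SGHMC hyperparameters are placeholders):

```julia
using Turing

# Pick the AD backend per sampler via the adtype keyword (assumed to match HMC/NUTS);
# the corresponding AD package must be loaded for sampling to actually work.
sgld = SGLD(; stepsize=PolynomialStepsize(0.01), adtype=AutoForwardDiff())
sghmc = SGHMC(; learning_rate=0.01, momentum_decay=0.1, adtype=AutoReverseDiff())
```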

@AoifeHughes AoifeHughes requested a review from mhauru August 18, 2025 09:18
penelopeysm and others added 11 commits August 19, 2025 16:27
* Updated theme-colors to match main site for consistency

* fixed search results color in dark mode

* fix copy button css in dark mode

* search bar background update

* removed current default footer and added custom one

* Add custom footer and update styles to match TuringLang/turinglang.github.io#119

* Update styles to match original site

* cleanup code

* Added SCSS styles to match main site

* Add all icons in navbar + match few tweaks with main PR

* Enable Open Graph and Twitter Cards for SEO

* fix corrupted png

* remove old styles

---------

Co-authored-by: Penelope Yong <[email protected]>
* Fix external sampler docs

* Remove MCHMC as a dep

* update

* Explain docs in more detail

* Bump to 0.39.9
@yebai yebai self-requested a review August 20, 2025 10:29
@AoifeHughes (Contributor, Author)

https://turinglang.org/docs/pr-previews/629/usage/stochastic-gradient-samplers/ - renders okay at least. Looking into the convergence things atm

@AoifeHughes (Contributor, Author)

This is at the limit of my knowledge of these samplers; there are some visual things in the final figure I don't understand, and I'm not sure why it's not converging properly. Happy to make changes if someone can direct what is needed.

@mhauru (Member) commented Sep 15, 2025

I don't know why HMC is having such trouble with this quite simple model, the numerical integration errors just blow up quite often. Seems that you can fix that with a decent initial value for the chain though. Try adding the keyword argument initial_params=[0.0, 1.0] to the sample call for HMC, that seems to help.

We should set the same initial_params for the two other samplers as well. Fixing the starting point of the MCMC chain also makes the comparison between the different samplers a bit fairer, so it's not a bad thing in general. However, it doesn't solve all the convergence issues, at least for SGLD (I didn't look at the other one). Those seem to be fixable, though, by tuning the parameters of PolynomialStepsize. I don't really understand PolynomialStepsize, but at least for SGLD, PolynomialStepsize(0.01, 100) seems to work decently. Try adding that in, and if necessary tune the parameters for the other sampler as well.
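A rough sketch of what adding those keyword arguments could look like, assuming the model from the tutorial (the HMC settings and sample counts are placeholders):

```julia
using Turing

# Start every sampler from the same point; [0.0, 1.0] is presumably μ = 0.0, σ = 1.0
# given the order of the parameters in the model.
chain_hmc  = sample(model, HMC(0.05, 10), 1000; initial_params=[0.0, 1.0])
chain_sgld = sample(model, SGLD(; stepsize=PolynomialStepsize(0.01, 100)), 1000;
                    initial_params=[0.0, 1.0])
```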

@yebai (Member) commented Sep 16, 2025

Thanks @AoifeHughes and @mhauru.

Quick comments:

  • The example models in this tutorial do not use stochastic gradients: they use noiseless gradients for SGLD, SGHMC and standard HMC. This leads to incorrect SGLD and SGHMC results in specific cases, since their validity depends on suitable gradient noise (intuitively, without gradient noise the momentum never gets refreshed).
  • SGLD and SGHMC are examples of HMC-family algorithms, so it is sufficient to mention them in the list of available MCMC algorithms (e.g., here).
  • Turing / DynamicPPL doesn't support minibatching yet.

In conclusion, it might be better if we close this PR and revisit in the future.

@mhauru (Member) commented Sep 16, 2025

I wish this had come up earlier so @AoifeHughes could have avoided putting in effort trying to make the examples in this PR work.

I would still advocate for merging a version of this, even if it only explains that we have implementations of SGLD and SGHMC in principle included in the codebase, but that they don't actually do anything useful at the moment because we can't compute stochastic gradients / don't have minibatching. Hence this is only useful for research needs until it gets improved in the future. Some poor user might otherwise go through the same exercise that Aoife and I have been through here, of slowly understanding that there is little point to our current SGLD/SGHMC implementations, and waste time.

@yebai (Member) commented Sep 16, 2025

Sorry for not being able to provide feedback earlier.

@AoifeHughes could have asked questions in my office hour with her. However, hopefully, this is still a useful learning journey, as it involves many features of Turing.jl.

EDIT: I could have misremembered a conversation with Aoife, in which she asked me a question about this issue.

@penelopeysm (Member)

Maybe we could put some of these conclusions in the Turing docstrings?

github-actions bot added a commit that referenced this pull request Sep 18, 2025