9 changes: 8 additions & 1 deletion faq/index.qmd
@@ -41,6 +41,7 @@
sample(m2, NUTS(), 100) # This doesn't work!
The key insight is that `filldist` creates a single distribution (not N independent distributions), which is why you cannot condition on individual elements. The distinction is not just about what appears on the LHS of `~`, but whether you're dealing with separate distributions (`.~` with univariate) or a single distribution over multiple values (`~` with multivariate or `filldist`).
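This distinction can be seen in a minimal sketch (the model names here are illustrative):

```julia
using Turing

# N independent univariate distributions: each x[i] is a separate
# random variable, so individual elements can be conditioned on.
@model function independent(N)
    x = Vector{Float64}(undef, N)
    x .~ Normal(0, 1)
end

# A single multivariate distribution over N values: x is one random
# variable, so its elements cannot be conditioned on separately.
@model function joint(N)
    x ~ filldist(Normal(0, 1), N)
end
```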

To understand more about how Turing determines whether a variable is treated as random or observed, see:

- [Core Functionality]({{< meta core-functionality >}}) - basic explanation of the `~` notation and conditioning


@@ -50,6 +51,7 @@
Yes, but with important caveats! There are two types of parallelism to consider:

### 1. Parallel Sampling (Multiple Chains)
Turing.jl fully supports sampling multiple chains in parallel:

- **Multithreaded sampling**: Use `MCMCThreads()` to run one chain per thread
- **Distributed sampling**: Use `MCMCDistributed()` for distributed computing
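For example, a sketch of multithreaded sampling of four chains (the model is illustrative; Julia must be started with multiple threads, e.g. `julia --threads=4`):

```julia
using Turing

@model function demo()
    μ ~ Normal(0, 1)
    x ~ Normal(μ, 1)
end

model = demo() | (x = 1.5,)  # condition on an observation

# Run 4 chains of 1000 iterations each, one chain per thread.
chains = sample(model, NUTS(), MCMCThreads(), 1000, 4)
```

Swapping `MCMCThreads()` for `MCMCDistributed()` (with worker processes added via `Distributed.addprocs`) gives the distributed equivalent.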

@@ -69,6 +71,7 @@
end
```

**Important limitations:**

- **Observe statements**: Generally safe to use in threaded loops
- **Assume statements** (sampling statements): Often crash unpredictably or produce incorrect results
- **AD backend compatibility**: Many AD backends don't support threading. Check the [multithreaded column in ADTests](https://turinglang.org/ADTests/) for compatibility
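A sketch of the generally-safe pattern, threading only over observe statements (the model and data are illustrative):

```julia
using Turing

@model function threaded_obs(y)
    μ ~ Normal(0, 1)
    σ ~ truncated(Normal(0, 1); lower=0)
    # Observe statements in a threaded loop are generally safe.
    # Putting assume (sampling) statements here instead would not be.
    Threads.@threads for i in eachindex(y)
        y[i] ~ Normal(μ, σ)
    end
end
```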
@@ -78,12 +81,14 @@
For safe parallelism within models, consider vectorized operations instead of explicit threading.
## How do I check the type stability of my Turing model?

Type stability is crucial for performance. Check out:

- [Performance Tips]({{< meta usage-performance-tips >}}) - includes specific advice on type stability
- Use `DynamicPPL.DebugUtils.model_warntype` to check type stability of your model
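For example (the model definition is illustrative):

```julia
using Turing, DynamicPPL

@model function demo(x)
    μ ~ Normal(0, 1)
    x ~ Normal(μ, 1)
end

# Prints `@code_warntype`-style output for the model's evaluation;
# abstractly-typed (`Any`) entries indicate type instability.
DynamicPPL.DebugUtils.model_warntype(demo(1.0))
```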

## How do I debug my Turing model?

For debugging both statistical and syntactical issues:

- [Troubleshooting Guide]({{< meta usage-troubleshooting >}}) - common errors and their solutions
- For more advanced debugging, DynamicPPL provides [the `DynamicPPL.DebugUtils` module](https://turinglang.org/DynamicPPL.jl/stable/api/#Debugging-Utilities) for inspecting model internals

@@ -125,16 +130,18 @@
end
## Which automatic differentiation backend should I use?

The choice of AD backend can significantly impact performance. See:

- [Automatic Differentiation Guide]({{< meta usage-automatic-differentiation >}}) - comprehensive comparison of ForwardDiff, Mooncake, ReverseDiff, and other backends
- [Performance Tips]({{< meta usage-performance-tips >}}#choose-your-ad-backend) - quick guide on choosing backends
- [AD Backend Benchmarks](https://turinglang.org/ADTests/) - performance comparisons across various models
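Switching backends is typically a one-line change via the sampler's `adtype` keyword; a sketch, assuming `model` is already defined and that the chosen backends suit it:

```julia
using Turing
using ADTypes: AutoForwardDiff, AutoReverseDiff

# Forward-mode AD tends to suit models with few parameters...
chain = sample(model, NUTS(; adtype=AutoForwardDiff()), 1000)

# ...while reverse-mode backends often scale better to many parameters.
chain = sample(model, NUTS(; adtype=AutoReverseDiff(; compile=true)), 1000)
```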

## I changed one line of my model and now it's so much slower; why?

Small changes can have big performance impacts. Common culprits include:

- Type instability introduced by the change
- Switching from vectorized to scalar operations (or vice versa)
- Inadvertently causing AD backend incompatibilities
- Breaking assumptions that allowed compiler optimizations
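As an example of the first culprit, a sketch of how a seemingly small change to a container's element type introduces type instability:

```julia
using Turing

# Type-unstable: the abstract eltype forces boxed elements.
@model function slow(N)
    x = Vector{Real}(undef, N)
    for i in 1:N
        x[i] ~ Normal(0, 1)
    end
end

# Type-stable: the eltype is a concrete type parameter, which also
# lets AD backends substitute their own number types.
@model function fast(N, ::Type{T}=Float64) where {T}
    x = Vector{T}(undef, N)
    for i in 1:N
        x[i] ~ Normal(0, 1)
    end
end
```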

See our [Performance Tips]({{< meta usage-performance-tips >}}) and [Troubleshooting Guide]({{< meta usage-troubleshooting >}}) for debugging performance regressions.