The key insight is that `filldist` creates a single distribution (not N independent distributions), which is why you cannot condition on individual elements. The distinction is not just about what appears on the LHS of `~`, but whether you're dealing with separate distributions (`.~` with univariate) or a single distribution over multiple values (`~` with multivariate or `filldist`).
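
A minimal sketch of the difference, assuming Turing.jl is loaded (the model name `demo` is illustrative, not from this page):

```julia
using Turing

@model function demo(N)
    # `.~` with a univariate distribution: N separate random variables,
    # so individual elements can be conditioned on.
    x = Vector{Float64}(undef, N)
    x .~ Normal()

    # `~` with `filldist`: a single distribution over all N values,
    # so the elements cannot be conditioned on individually.
    y ~ filldist(Normal(), N)
end
```
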
To understand more about how Turing determines whether a variable is treated as random or observed, see:
- [Core Functionality]({{< meta core-functionality >}}) - basic explanation of the `~` notation and conditioning

Yes, but with important caveats! There are two types of parallelism to consider:

### 1. Parallel Sampling (Multiple Chains)
Turing.jl fully supports sampling multiple chains in parallel:
- **Multithreaded sampling**: Use `MCMCThreads()` to run one chain per thread
- **Distributed sampling**: Use `MCMCDistributed()` for distributed computing

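
As an illustrative sketch (the model, sampler settings, and chain counts here are hypothetical, not prescribed by Turing):

```julia
using Turing

@model function coinflip(y)
    p ~ Beta(1, 1)
    y .~ Bernoulli(p)
end

model = coinflip(rand(Bool, 100))

# Multithreaded: one chain per thread; start Julia with e.g. `julia --threads 4`.
chains = sample(model, NUTS(), MCMCThreads(), 1000, 4)

# Distributed alternative (requires worker processes added via `addprocs`
# and Turing loaded `@everywhere`):
# chains = sample(model, NUTS(), MCMCDistributed(), 1000, 4)
```
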
**Important limitations:**
- **Observe statements**: Generally safe to use in threaded loops
- **Assume statements** (sampling statements): Often crash unpredictably or produce incorrect results
- **AD backend compatibility**: Many AD backends don't support threading. Check the [multithreaded column in ADTests](https://turinglang.org/ADTests/) for compatibility

For safe parallelism within models, consider vectorized operations instead of explicit threading.
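
For instance, a loop of scalar observe statements can often be replaced by a single multivariate statement (a sketch with a hypothetical regression model; `I` comes from LinearAlgebra):

```julia
using Turing
using LinearAlgebra: I

@model function scalar_obs(x, y)
    β ~ Normal(0, 1)
    σ ~ truncated(Normal(0, 1); lower=0)
    # Scalar observes in an explicit loop: do NOT wrap this in Threads.@threads.
    for i in eachindex(y)
        y[i] ~ Normal(β * x[i], σ)
    end
end

@model function vectorized_obs(x, y)
    β ~ Normal(0, 1)
    σ ~ truncated(Normal(0, 1); lower=0)
    # One vectorized observe over the whole data vector.
    y ~ MvNormal(β .* x, σ^2 * I)
end
```
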
## How do I check the type stability of my Turing model?
Type stability is crucial for performance. Check out:
- [Performance Tips]({{< meta usage-performance-tips >}}) - includes specific advice on type stability
- Use `DynamicPPL.DebugUtils.model_warntype` to check type stability of your model
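
A sketch of how this might look (assuming `model_warntype` accepts a model instance; check the DynamicPPL documentation for the exact signature):

```julia
using Turing

@model function demo()
    x ~ Normal()
    y ~ Normal(x, 1)
end

# Prints `@code_warntype`-style output for the model's evaluation;
# red `Any`/`Union` annotations indicate type instability.
Turing.DynamicPPL.DebugUtils.model_warntype(demo())
```
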
## How do I debug my Turing model?
For debugging both statistical and syntactical issues:
- [Troubleshooting Guide]({{< meta usage-troubleshooting >}}) - common errors and their solutions
- For more advanced debugging, DynamicPPL provides [the `DynamicPPL.DebugUtils` module](https://turinglang.org/DynamicPPL.jl/stable/api/#Debugging-Utilities) for inspecting model internals
## Which automatic differentiation backend should I use?
The choice of AD backend can significantly impact performance. See:
- [Automatic Differentiation Guide]({{< meta usage-automatic-differentiation >}}) - comprehensive comparison of ForwardDiff, Mooncake, ReverseDiff, and other backends
- [Performance Tips]({{< meta usage-performance-tips >}}#choose-your-ad-backend) - quick guide on choosing backends
- [AD Backend Benchmarks](https://turinglang.org/ADTests/) - performance comparisons across various models
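
As a hedged sketch, the backend is typically selected per sampler via an `adtype` keyword (the `AutoForwardDiff`/`AutoReverseDiff` types come from ADTypes.jl and are re-exported by Turing; the model here is illustrative):

```julia
using Turing

@model function demo(y)
    μ ~ Normal()
    y .~ Normal(μ, 1)
end

model = demo(randn(10))

# Forward-mode AD:
chain_fd = sample(model, NUTS(; adtype=AutoForwardDiff()), 1000)

# Reverse-mode AD with a compiled tape:
chain_rd = sample(model, NUTS(; adtype=AutoReverseDiff(; compile=true)), 1000)
```
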
## I changed one line of my model and now it's so much slower; why?
Small changes can have big performance impacts. Common culprits include:
- Type instability introduced by the change
- Switching from vectorized to scalar operations (or vice versa)
- Inadvertently causing AD backend incompatibilities
- Breaking assumptions that allowed compiler optimizations

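
As one hedged illustration of the first culprit (both models are hypothetical): creating an intermediate container with an abstract element type is an easy one-line regression.

```julia
using Turing

# Slow: `m = []` is a Vector{Any}, so every use of `m` boxes values
# and defeats compiler optimizations.
@model function slow_version(y)
    m = []
    for i in eachindex(y)
        push!(m, 0.1 * i)
    end
    σ ~ truncated(Normal(0, 1); lower=0)
    for i in eachindex(y)
        y[i] ~ Normal(m[i], σ)
    end
end

# Faster: pass the element type through (a common Turing pattern), keeping
# the container concrete and compatible with AD dual/tracer types.
@model function fast_version(y, ::Type{T}=Float64) where {T}
    m = Vector{T}(undef, length(y))
    for i in eachindex(y)
        m[i] = 0.1 * i
    end
    σ ~ truncated(Normal(0, 1); lower=0)
    for i in eachindex(y)
        y[i] ~ Normal(m[i], σ)
    end
end
```
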
See our [Performance Tips]({{< meta usage-performance-tips >}}) and [Troubleshooting Guide]({{< meta usage-troubleshooting >}}) for debugging performance regressions.