
Commit 6842066

modified the debugging and context to make documentation more complete so it's a good outcome
1 parent 4b8b614 commit 6842066

File tree: 2 files changed (+13 −6 lines)


docs/src/commonrandom.jl

Lines changed: 4 additions & 5 deletions
@@ -15,11 +15,10 @@

 # CompetingClocks implements common random numbers by recording the state of the random number generator every time a clock is enabled. There are other ways to do this, but this one works with the [CombinedNextReaction](@ref) and [FirstToFire](@ref) samplers. The workflow you would use looks notionally like:

-# 1. Create a sampler.
-# 2. Wrap it in a [CommonRandom](@ref).
-# 3. Run a lot of simulations in order to explore and record all possible clock states. Run `reset!(recorder)` after each simulation.
-# 4. For every parameter set to try, run it the same way, using `reset!` after each run.
-# 5. Compare outcomes.
+# 1. Create a sampler with the keyword argument `common_random=true`.
+# 2. Run a lot of simulations in order to explore and record all possible clock states. Run `reset!(recorder)` after each simulation.
+# 3. For every parameter set to try, run it the same way, using `reset!` after each run.
+# 4. Compare outcomes.

 # Because the `CommonRandom` stores the state of the random number generator at each step, it works best with random number generators that have small state, such as Xoshiro or a linear congruential generator (LCG).
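The record-and-replay idea behind this workflow can be sketched in plain Python. This is a hypothetical `CRNRecorder`, not the CompetingClocks API: each clock stores the generator state at the moment it is first enabled, so a rerun with a different rate consumes the same underlying uniform draw.

```python
import random

# Hedged sketch of common random numbers; `CRNRecorder` is hypothetical.
# Each clock records the RNG state when first enabled, so a rerun with
# different parameters reuses the same uniform draws.
class CRNRecorder:
    def __init__(self, seed=42):
        self.rng = random.Random(seed)
        self.saved_states = {}  # clock id -> RNG state when first enabled

    def draw(self, clock_id, rate):
        # Restore the recorded state for this clock, or record it now.
        if clock_id in self.saved_states:
            self.rng.setstate(self.saved_states[clock_id])
        else:
            self.saved_states[clock_id] = self.rng.getstate()
        return self.rng.expovariate(rate)

rec = CRNRecorder()
t1 = rec.draw("infect", rate=1.0)  # first run, rate 1.0
t2 = rec.draw("infect", rate=2.0)  # rerun, rate 2.0, same uniform draw
# expovariate(rate) is -log(1 - U) / rate, so with the same U, t1 == 2 * t2.
```

Because both runs see identical uniform draws, the difference between outcomes reflects only the parameter change, which is exactly the variance reduction common random numbers are for.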

docs/src/importance_skills.md

Lines changed: 9 additions & 1 deletion
@@ -14,7 +14,15 @@ When you apply importance sampling in simulation, the workflow feels like this:

 The main problem is that too large a bias on distributions can lead to mathematical underflow in calculation of the weights. Intuitively, a stochastic simulation can have a lot of individual sampled events, and each event's probability multiplies to get the probability of a path of samples in a trajectory. If those samples are repeatedly biased, they can cause numbers that are too small to represent.

 ```math
-w = \frac{L(\lambda_{\mbox{target}})}{L(\lambda_{\mbox{proposal}})} = \left(\frac{\lambda_{\mbox{target}}}{\lambda_{\mbox{proposal}}}\right)^N e^{-(\lambda_{\mbox{target}} - \lambda_{\mbox{proposal}})T}
+w = \frac{L(\lambda_{\mbox{target}})}{L(\lambda_{\mbox{proposal}})}
+```
+
+```math
+= \left(\frac{\lambda_{\mbox{target}}}{\lambda_{\mbox{proposal}}}\right)^N
+```
+
+```math
+e^{-(\lambda_{\mbox{target}} - \lambda_{\mbox{proposal}})T}
 ```
 What you'll see in practice is that the initial simulation, under $p$, works fine, that a small change in a distribution's parameters still works fine, and then the importance-weighted estimates fall off a cliff and show values like $10^{-73}$.
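The underflow this hunk describes, and the standard log-space remedy, can be illustrated with made-up rates and event counts (none of these numbers come from the package):

```python
import math

# Hedged illustration with invented rates and counts, not CompetingClocks
# code: per-event log-ratios are summed instead of multiplying raw
# probabilities, and weights are normalized with log-sum-exp.
lam_target, lam_proposal = 1.0, 2.0

def log_weight(n_events, horizon):
    # log of (lam_t / lam_p)^N * exp(-(lam_t - lam_p) * T)
    return (n_events * math.log(lam_target / lam_proposal)
            - (lam_target - lam_proposal) * horizon)

# Direct multiplication hits zero long before the math says it should:
direct = ((lam_target / lam_proposal) ** 1100
          * math.exp(-(lam_target - lam_proposal) * 50.0))
assert direct == 0.0  # 0.5**1100 underflows to exactly 0.0

# The same weight survives in log space (about -712.5, finite and usable):
lw = log_weight(1100, 50.0)

# Normalizing several trajectory weights via log-sum-exp stays exact
# relative to each other even though each raw weight would underflow:
log_ws = [log_weight(n, 50.0) for n in (1100, 1101, 1103)]
m = max(log_ws)
norm = [math.exp(x - m) for x in log_ws]
total = sum(norm)
weights = [x / total for x in norm]  # valid probabilities, no underflow
```

Keeping everything in log space until the final normalization is the usual way to avoid the $10^{-73}$-style cliff the paragraph above warns about.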
