Commit 5400a0e
Commit message: Trivia
1 parent 93ca617

File tree

2 files changed: +3 additions, −2 deletions

docs/src/assets/goodpolicy.png (28.3 KB)

docs/src/man/particlesmc.md (3 additions, 2 deletions)
````diff
@@ -69,13 +69,14 @@ This collection of moves, together with their associated probabilities, can be r
 ## Choosing the right parameters
 
 ```@raw html
+
 <h1 align="center">
-<img src="https://raw.githubusercontent.com/TheDisorderedOrganization/ParticlesMC/main/docs/src/assets/goodpolicy.pdf" width="500"/>
+<img src="https://raw.githubusercontent.com/TheDisorderedOrganization/ParticlesMC/main/docs/src/assets/goodpolicy.png" width="500"/>
 </h1>
 
+
 <p align="center"> Markov chain state space exploration for three different displacement policies. The system is a 1D particle in a potential well $U(x)$, with the target distribution being the Boltzmann distribution. The proposed move is the following: the action is the displacement of the particle and the policy is the distribution of the displacement magnitude and direction. Here, the policy is an uniform law $\sim \mathcal{U}(-u,u)$ with $u>0$. Each dot represents a state in the Markov chain, constructed from bottom to top. The horizontal position of a dot is the position of the particle. <b>Center:</b> When $u$ is too small, all moves are accepted but the state space is explored inefficiently due to small step sizes. <b>Right:</b> When $u$ is too large, nearly all moves are rejected, again leading to poor exploration. <b>Left:</b> An optimal choice of $u$ balances acceptance rate and step size, achieving efficient state space sampling. This illustrates the importance of tuning the proposal distribution.</p>
 ```
-
 A MCMC simulation consists of two phases: a *burn-in* phase and an *equilibrium* phase[^3]. During burn-in, initial samples are discarded because the chain has not yet reached its stationary distribution. Starting from an arbitrary initial state, early samples may overweight
 low-probability regions of the state space. The burn-in period must be long enough to allow the chain to converge to the target distribution before collecting samples for analysis.
 
````
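The tuning trade-off described in the figure caption can be reproduced with a short, self-contained sketch: Metropolis sampling of a 1D particle in a harmonic well with a uniform displacement policy $\mathcal{U}(-u,u)$, comparing acceptance rates for a too-small, a well-tuned, and a too-large $u$. ParticlesMC itself is a Julia package; the snippet below is plain Python, and every name in it (`metropolis_chain`, the choice $U(x)=x^2/2$, the three $u$ values) is an illustrative assumption, not part of the package.

```python
import math
import random

def metropolis_chain(u, n_steps=20000, beta=1.0, seed=0):
    """Metropolis sampling of a 1D particle in a harmonic well U(x) = x^2 / 2.

    The displacement policy is uniform, dx ~ U(-u, u), as in the figure.
    Returns the chain and the fraction of accepted moves.
    (Illustrative sketch only, not the ParticlesMC API.)
    """
    rng = random.Random(seed)
    x = 0.0
    accepted = 0
    chain = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-u, u)
        # Metropolis criterion for the Boltzmann target ~ exp(-beta * U(x))
        dU = 0.5 * (x_new**2 - x**2)
        if dU <= 0.0 or rng.random() < math.exp(-beta * dU):
            x = x_new
            accepted += 1
        chain.append(x)
    return chain, accepted / n_steps

# Too small, well tuned, too large: a high acceptance rate alone
# does not mean efficient exploration.
for u in (0.05, 1.0, 50.0):
    _, rate = metropolis_chain(u)
    print(f"u = {u:5.2f}: acceptance rate = {rate:.2f}")
```

The tiny step ($u = 0.05$) accepts nearly everything while barely moving, and the huge step ($u = 50$) rejects nearly everything, matching the caption's two failure modes.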

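The burn-in phase mentioned in the changed text can be illustrated the same way: start the chain far from the high-probability region and compare the sample mean with and without discarding the early samples. Again a hedged plain-Python sketch; `run_chain` and all parameters here are hypothetical choices for illustration.

```python
import math
import random

def run_chain(n_steps, x0=50.0, u=1.0, beta=1.0, seed=1):
    """Metropolis chain for U(x) = x^2 / 2, deliberately started far from
    the high-probability region (x0 = 50) so the burn-in transient is visible."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-u, u)
        dU = 0.5 * (x_new**2 - x**2)
        if dU <= 0.0 or rng.random() < math.exp(-beta * dU):
            x = x_new
        samples.append(x)
    return samples

samples = run_chain(5000)
burn_in = 500  # discard samples taken before the chain has equilibrated
kept = samples[burn_in:]
mean_all = sum(samples) / len(samples)  # biased by the initial transient
mean_eq = sum(kept) / len(kept)         # close to the true mean, 0
print(f"mean, burn-in included:  {mean_all:+.3f}")
print(f"mean, burn-in discarded: {mean_eq:+.3f}")
```

Because the chain starts at $x_0 = 50$ while the Boltzmann target is centered at 0, the estimate that includes the transient overweights the low-probability region it passed through, exactly the effect the burn-in period is meant to remove.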