<p align="center"> Markov chain state space exploration for three different displacement policies. The system is a 1D particle in a potential well $U(x)$, and the target distribution is the Boltzmann distribution. A proposed move displaces the particle, and the policy is the distribution of the displacement's magnitude and direction; here, the policy is a uniform law $\sim \mathcal{U}(-u,u)$ with $u>0$. Each dot represents a state in the Markov chain, constructed from bottom to top; the horizontal position of a dot is the position of the particle. <b>Center:</b> When $u$ is too small, all moves are accepted but the state space is explored inefficiently because of the small step sizes. <b>Right:</b> When $u$ is too large, nearly all moves are rejected, again leading to poor exploration. <b>Left:</b> An optimal choice of $u$ balances acceptance rate and step size, achieving efficient state space sampling. This illustrates the importance of tuning the proposal distribution.</p>
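The behavior described in the caption can be sketched with a minimal Metropolis sampler. This is an illustrative sketch, not the repository's implementation: the harmonic potential $U(x) = x^2/2$, the inverse temperature `beta`, and the function name `metropolis_chain` are assumptions chosen for the example. The proposal draws a displacement uniformly from $[-u, u]$, and because this proposal is symmetric the standard Metropolis acceptance rule applies.

```python
import math
import random


def metropolis_chain(u, n_steps=10000, beta=1.0, x0=0.0, seed=0):
    """Metropolis sampling of the Boltzmann distribution exp(-beta * U(x)).

    The potential U(x) = x^2 / 2 is an assumed example; the proposal is a
    uniform displacement ~ U(-u, u), as in the figure. Returns the chain of
    states and the empirical acceptance rate.
    """
    rng = random.Random(seed)
    U = lambda x: 0.5 * x * x  # assumed harmonic well, for illustration
    x = x0
    chain = [x]
    accepted = 0
    for _ in range(n_steps):
        dx = rng.uniform(-u, u)  # proposed displacement, magnitude set by u
        x_new = x + dx
        dU = U(x_new) - U(x)
        # Metropolis rule for a symmetric proposal: always accept downhill
        # moves, accept uphill moves with probability exp(-beta * dU).
        if dU <= 0 or rng.random() < math.exp(-beta * dU):
            x = x_new
            accepted += 1
        chain.append(x)
    return chain, accepted / n_steps


# Too-small u: nearly every move is accepted, but steps are tiny.
# Too-large u: nearly every move is rejected.
_, acc_small = metropolis_chain(u=0.05)
_, acc_large = metropolis_chain(u=50.0)
```

Running this reproduces the trade-off the figure shows: `acc_small` is close to 1 while `acc_large` is close to 0, and neither regime explores the well efficiently.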