<h1>PINT for Exascale computing</h1>
<!--more-->
<p>Even though <a href="http://science.energy.gov/%7E/media/ascr/pdf/research/am/docs/EMWGreport.pdf">the report</a> already appeared in 2014 and is strictly speaking no longer news, it
is still exciting to see PINT being mentioned prominently as one of the promising avenues on the path to exascale.</p>
<span id="DongarraEtAl2014">J. Dongarra et al., “Applied Mathematics Research for Exascale Computing,” Lawrence Livermore National Laboratory, LLNL-TR-651000, 2014 [Online]. Available at: <a href="http://science.energy.gov/%7E/media/ascr/pdf/research/am/docs/EMWGreport.pdf" target="_blank">http://science.energy.gov/%7E/media/ascr/pdf/research/am/docs/EMWGreport.pdf</a></span>