the approach to exascale compute architectures, in particular, GPU
support for astrophysical simulation codes.

# Project history

The Microphysics project started in 2013 as a way to centralize the
reaction networks and equations of state used by Castro and MAESTRO
[...] Microphysics, which was an attempt to co-develop microphysics routines
for the Castro and the Flash [@flash] simulation codes. As interest
in GPUs grew (with early support added to Microphysics in 2015),
Castro moved from a mix of C++ and Fortran to pure C++ to take
advantage of GPU-offloading afforded by the AMReX library, and C++
ports of all physics routines and solvers were added to Microphysics.
At this point, the project was formally named the AMReX-Astrophysics
Microphysics library. Today, the library is completely written in C++
and relies heavily on the AMReX data structures to take advantage of
GPUs. The GPU-enabled reaction network integrators led to the Quokka
code adopting Microphysics for their simulations.

# Design

Microphysics provides several different types of physics: equations of
state, reaction networks and screening methods, nuclear statistical
equilibrium solvers and tabulations, thermal conductivities, and
opacities, as well as the tools needed to work with them, most notably
the suite of stiff ODE integrators for the networks.

There are two ways to use Microphysics: in a standalone fashion (via
the unit tests) for simple investigations or as part of an
(AMReX-based) application code. In both cases, the core
(compile-time) requirement is to select a network---this defines the
composition that is then used by most of the other physics routines.

Microphysics uses header-only implementations of all functionality as
much as possible to allow for easier compiler inlining, which is
especially important in GPU kernels. Generally, the physics routines
and solvers are written to work on a single zone from a simulation
code, and in AMReX, a C++ lambda-capturing approach is used to loop
over zones (and offload to GPUs if desired). We also leverage C++17
`if constexpr` templating to compile out unnecessary computations for
performance. For example, our equations of state can compute many
thermodynamic quantities and derivatives, but for some operations we
only need a few of these. All of the equations of state are templated
on the `struct` that holds the thermodynamic state. If we pass the
general `eos_t` type into the EOS, then everything is calculated, but
if we pass the smaller `eos_re_t` type into the same interface, then
only a few energy terms are computed (those that are needed when
finding temperature from specific internal energy).

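
To make the templating strategy concrete, here is a minimal sketch of
the idea using a toy gamma-law relation; the struct layouts, the `eos`
function body, and the numerical values are illustrative placeholders
rather than the actual Microphysics types (only the names `eos_t` and
`eos_re_t` are borrowed from the discussion above).

```cpp
#include <cmath>
#include <type_traits>

// Simplified analogues of the EOS state structs discussed above: the full
// struct carries many thermodynamic quantities, while the smaller one only
// holds what is needed to recover T from the internal energy.
struct eos_re_t {
    double rho{}, T{}, e{};   // density, temperature, specific internal energy
    double dedT{};            // de/dT, needed when Newton-iterating on T
};

struct eos_t : eos_re_t {
    double p{}, h{}, s{};     // pressure, enthalpy, entropy
    double dpdT{}, dpdr{};    // extra derivatives
};

// A toy gamma-law EOS, templated on the state struct.  The `if constexpr`
// branch is compiled out entirely when only the energy terms are requested.
template <typename S>
void eos(S& state) {
    constexpr double gamma = 5.0 / 3.0;
    constexpr double cv = 1.5e8;   // placeholder specific heat

    state.e = cv * state.T;
    state.dedT = cv;

    if constexpr (std::is_same_v<S, eos_t>) {
        state.p = (gamma - 1.0) * state.rho * state.e;
        state.dpdT = (gamma - 1.0) * state.rho * cv;
        state.dpdr = (gamma - 1.0) * state.e;
        state.h = state.e + state.p / state.rho;
        state.s = cv * std::log(state.T / std::pow(state.rho, gamma - 1.0));
    }
}
```

Passing an `eos_re_t` through the same call site therefore costs
nothing for the quantities that are never used.
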
Several classic Fortran libraries have been converted to header-only
C++ implementations, including the VODE integrator [@vode], the hybrid
Powell method of MINPACK [@powell], and the Runge-Kutta Chebyshev
(RKC) integrator. [...]
We also make use of the C++ autodiff library [@autodiff] to compute
thermodynamic derivatives required in the Jacobians of our reaction
networks.

Another key design feature is the separation of the reaction network
from the integrator. This allows us to easily experiment with
different integration methods (such as the RKC integrator) and also
support different modes of coupling reactions to a simulation code,
including operator splitting and spectral deferred corrections (SDC)
(see, e.g., @castro_simple_sdc). The latter is especially important
for explosive astrophysical flows. Tight integration with pynucastro
[@pynucastro; @pynucastro2] allows for the generation of custom
reaction networks for a science problem.

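
Schematically (this is the generic formulation, not a
Microphysics-specific algorithm), Strang splitting advances the
reactions and the advective update in separate stages over a timestep
$\Delta t$:

$$\mathcal{U}^{n+1} = \mathcal{R}_{\Delta t/2}\,\mathcal{A}_{\Delta t}\,\mathcal{R}_{\Delta t/2}\,\mathcal{U}^{n},$$

where $\mathcal{A}$ is the advective (hydrodynamics) update and
$\mathcal{R}$ is the reaction update over the indicated interval. SDC
methods instead iteratively couple the advective and reactive source
terms within the step so that each sees the other's contribution.
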

Finally, most of the physics is chosen at compile-time. This allows
Microphysics to provide the number of species as a `constexpr` value
(which many application codes need), and greatly reduces the
compilation time (due to the templating used throughout the library).

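
As a small illustration of what the compile-time network selection buys
an application code, the fragment below sizes a per-zone state array
with a `constexpr` species count; the names and the value 13 are
hypothetical, not taken from the Microphysics headers.

```cpp
#include <array>

// Example only: the species count for a hypothetical 13-isotope network.
constexpr int NumSpec = 13;

// An application-side state for one zone can then be sized at compile time,
// with no dynamic allocation needed on the GPU.
struct zone_state {
    double rho;                      // density
    double T;                        // temperature
    std::array<double, NumSpec> xn;  // mass fractions
};

static_assert(NumSpec > 0, "a network must define at least one species");
```
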

# Capabilities

## Reaction networks

A reaction network defines the composition (including the atomic
weight and number) and the reactions that link the nuclei together.
Even if reactions are not being modeled, a `general_null` network can
be used to simply define the composition.

In multidimensional simulations, there is a desire to make the
reaction network as small as possible (due to the memory and per-zone
computational costs) while still being able to represent the
nucleosynthesis reasonably accurately. As a result, approximations
to rates are common and a wide variety of networks are used depending
on the burning state being modeled.

We have ported many of the classic "aprox" networks used in the
astrophysics community (for example "aprox21" described in
@wallacewoosley:1981) to C++. Many of these originated from the
implementations of @cococubed. Our implementation relies heavily on
C++ templates, allowing us to simply define the properties of the
reactions; the compiler then builds the right-hand side and Jacobian
of the system at compile-time. This reduces the maintenance costs of
the networks and also eliminates some common indexing bugs.

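
The following sketch conveys the general idea (it is not the actual
Microphysics network machinery): reactions are described by
compile-time data, and a generic right-hand-side routine is assembled
from that table, so adding or removing a rate does not require
hand-written index bookkeeping.

```cpp
#include <array>
#include <cmath>
#include <cstddef>

// A toy 3-species "network" with reactions described entirely by
// compile-time data.  None of these names come from Microphysics itself.
constexpr int NumSpec = 3;

struct Reaction {
    int reactant;     // index of the species consumed
    int product;      // index of the species produced
    int n_reactant;   // how many reactant nuclei are consumed per reaction
};

constexpr std::array<Reaction, 2> reactions{{
    {0, 1, 3},   // 3 A -> B
    {1, 2, 1},   //   B -> C
}};

// The right-hand side dY/dt is built generically from `reactions`; because
// the table is constexpr, the compiler can fully unroll this loop.
inline void rhs(const std::array<double, NumSpec>& Y,
                const std::array<double, reactions.size()>& rate,
                std::array<double, NumSpec>& dYdt)
{
    dYdt.fill(0.0);
    for (std::size_t r = 0; r < reactions.size(); ++r) {
        const Reaction& rxn = reactions[r];
        const double flux = rate[r] * std::pow(Y[rxn.reactant], rxn.n_reactant);
        dYdt[rxn.reactant] -= rxn.n_reactant * flux;
        dYdt[rxn.product]  += flux;
    }
}
```

A Jacobian can be assembled from the same compile-time description,
which is where the autodiff library mentioned above is also useful.
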

We also integrate with the pynucastro nuclear astrophysics library
[@pynucastro; @pynucastro2], allowing us to generate a custom network
in a few lines of python simply by specifying the nuclei we want. This
makes use of the reaction rates from @ReacLib and others, and allows us
to keep up to date with changes in rates and build more complex networks
than the traditional aprox nets.

### Screening

Nuclear reaction rates are screened by the electrons in the plasma
(which reduce the Coulomb barrier for the positively charged nuclei to
fuse). Microphysics provides several different screening
implementations: the widely used `screen5` method based on
@graboske:1973, @jancovici:1977, @alastuey:1978, and @itoh:1979; the
methods of @chugunov:2007 and @chugunov:2009; and the method of
@Chabrier_1998.

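
Schematically (this is the generic screening formalism rather than a
Microphysics-specific expression), each implementation supplies an
enhancement factor that multiplies the unscreened rate,

$$\langle \sigma v \rangle_{\mathrm{screened}} = e^{H_{12}}\,\langle \sigma v \rangle_{\mathrm{bare}},$$

where $H_{12}$ depends on the plasma conditions (density, temperature,
and composition) and the charges of the two fusing nuclei; the
implementations differ in how $H_{12}$ is evaluated across the weak to
strong screening regimes.
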

### Nuclear statistical equilibrium

At high temperatures ($T > 4\times 10^9~\mathrm{K}$), forward and
reverse reactions can come into equilibrium (nuclear statistical
equilibrium, NSE). Integrating the reaction network directly in this
regime can be difficult, since the large but oppositely signed rates
may not cancel exactly. In this case, instead of integrating the
network, we can impose the equilibrium state. Microphysics has two
different approaches to NSE: a self-consistent solve for the NSE state
using the nuclei in the present reaction network (similar to
@Kushnir_2020) and an interpolation from a tabulated NSE state that
was generated with $\mathcal{O}(100)$ nuclei (see @Zingale_2024).

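
For context, the standard statement of NSE (not specific to either
Microphysics solver) is that every nucleus with $Z_i$ protons and
$N_i$ neutrons satisfies

$$\mu_i = Z_i\,\mu_p + N_i\,\mu_n,$$

subject to the constraints that the mass fractions sum to one and that
the electron fraction $Y_e$ matches the composition; a self-consistent
solver of this kind finds the $(\mu_p, \mu_n)$ that satisfy these
constraints for the nuclei in the network.
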

### Thermal neutrinos

There are a number of thermal mechanisms for producing neutrinos,
including plasma, photo, pair, recombination, and Bremsstrahlung
neutrinos. These act as an energy loss term to the reaction network
and are implemented following @itoh:1996.

## Equations of state

The equations of hydrodynamics are closed via an equation of state
that relates internal energy, pressure, and density (along with
composition). For systems with reactions or thermal diffusion, it
also provides temperature. Traditionally, equations of state are
implemented in terms of density and temperature, so a Newton-Raphson
method is used to invert the EOS given energy and density (or some
other thermodynamic quantities). A wide range of thermodynamic
quantities are needed by simulation codes, including pressure,
internal energy, enthalpy, entropy, and their derivatives with
respect to density, temperature, and composition. The various EOS
`struct` types carry this thermodynamic state.

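
A minimal sketch of that inversion is shown below, assuming generic
callables that return $e(\rho, T)$ and $\partial e/\partial T$; the
function name, parameters, and tolerance are placeholders, not the
Microphysics interface.

```cpp
#include <cmath>

// Solve e(rho, T) = e_target for T by Newton-Raphson, given callbacks that
// evaluate the energy and its temperature derivative at fixed density.
template <typename EosE, typename EosDedT>
double invert_eos_for_T(double rho, double e_target, double T_guess,
                        EosE&& eos_e, EosDedT&& eos_dedT)
{
    double T = T_guess;
    for (int iter = 0; iter < 50; ++iter) {
        const double f    = eos_e(rho, T) - e_target;
        const double dfdT = eos_dedT(rho, T);
        const double dT   = -f / dfdT;
        T += dT;
        if (std::abs(dT) < 1.0e-8 * std::abs(T)) {   // placeholder tolerance
            break;
        }
    }
    return T;
}
```

With the toy gamma-law relation $e = c_v T$ this converges in a single
iteration; real stellar EOSs require several.
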

A variety of EOSs are implemented to allow for application to a range
of problems. These include a simple gamma-law EOS, the stellar EOS of
@timmes:2000, and an equation of state applicable to primordial
chemistry.

## Transport coefficients

For thermal diffusion or radiation transport, conductivities and
opacities are needed. We provide a C++ port of the stellar
conductivities from @timmes:2000b. These are appropriate for
modeling thermonuclear flames in supernovae and X-ray bursts.

# GPU Strategy

Microphysics is designed such that all computation takes place on
GPUs. When used with an application code, this permits the simulation
state data to be allocated directly in GPU memory and left there for
the entire simulation. For the ODE integration, the integrator
(e.g., VODE) is run on the GPU directly. Since each zone in a
simulation usually will have a different thermodynamic state, this can
lead to thread divergence issues, since some zones will have an easier
burn than others. To help mitigate this issue, we can cap the number
of integration steps and either retry an integration on a zone-by-zone
basis with different tolerances or Jacobian approximations, or pass the
failure back to the application code to deal with. This strategy
has been successful for many large scale simulations [@Zingale_2025].

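
The sketch below shows the lambda-capture pattern an AMReX-based
application might use to launch a per-zone call on the GPU;
`do_burn_zone` and `burn_level` are hypothetical placeholders, while
`MFIter`, `Array4`, and `ParallelFor` are standard AMReX constructs.

```cpp
#include <AMReX.H>
#include <AMReX_MultiFab.H>

// Placeholder for a single-zone physics call (e.g., integrating the reaction
// network for zone (i,j,k)); not an actual Microphysics function.
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE
void do_burn_zone (amrex::Array4<amrex::Real> const& state,
                   int i, int j, int k, amrex::Real dt)
{
    // in a real code this would fill a single-zone state from state(i,j,k,...),
    // call the integrator, and copy the result back
    state(i, j, k, 0) += 0.0 * dt;
}

// Loop over every zone of a MultiFab, offloading to the GPU when enabled.
void burn_level (amrex::MultiFab& S, amrex::Real dt)
{
    for (amrex::MFIter mfi(S); mfi.isValid(); ++mfi) {
        const amrex::Box& bx = mfi.tilebox();
        auto const& s = S.array(mfi);

        // each (i,j,k) zone becomes an independent GPU thread, which is why
        // zones with very different burns can cause thread divergence
        amrex::ParallelFor(bx,
        [=] AMREX_GPU_DEVICE (int i, int j, int k) noexcept
        {
            do_burn_zone(s, i, j, k, dt);
        });
    }
}
```
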

# Unit tests / examples

Microphysics can be used as a standalone tool through the tests
in `Microphysics/unit_test/`. There are two types of tests here:

**comprehensive tests**: these test performance by setting up a cube
of data (with density, temperature, and composition each varying
along a different dimension) and performing an operation on the
entire cube (calling the EOS, integrating a network, ...). A separate
test is provided for each major physics module.

**one-zone tests**: these simply call one of the physics modules with
a single thermodynamic state. These can be used to explore the
physics that is implemented, and also serve to demonstrate the
interfaces used in Microphysics.

# Research Impact Statement

Microphysics has been used for simulations of convective Urca
[@Boyd_2025] and X-ray bursts [@Guichandut_2024] with MAESTROeX; and
for simulations of novae [@Smith2025], X-ray bursts [@Harpole_2021],
thermonuclear supernovae [@Zingale_2024_dd], and convection in massive
stars [@Zingale_2024] with Castro. This Microphysics library has also
enabled recent work in astrophysical machine learning to train deep
neural networks modeling nuclear reactions in [@nn_astro_2022] and
[@dnn_astro_2025].