> :warning: This is only compatible with the latest version of Dedalus

## Dedalus with pySDC

See [demo.py](./scratch.py) for a first demo script using pySDC to apply SDC to the Advection equation.
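
Below is a minimal sketch of the Dedalus side of such a demo; the grid size, advection speed and variable names are illustrative only and not taken from the linked script, which also contains the pySDC part:

```python
import numpy as np
import dedalus.public as d3

# 1D periodic advection problem (sizes, speed and names are illustrative)
xcoord = d3.Coordinate('x')
dist = d3.Distributor(xcoord, dtype=np.float64)
xbasis = d3.RealFourier(xcoord, size=64, bounds=(0, 2 * np.pi))
u = dist.Field(name='u', bases=xbasis)

c = 1.0                                     # constant advection speed
dx = lambda f: d3.Differentiate(f, xcoord)

problem = d3.IVP([u], namespace=locals())
problem.add_equation("dt(u) + c*dx(u) = 0")  # linear advection, kept on the implicit side

# initial condition on the local grid
x = dist.local_grid(xbasis)
u['g'] = np.sin(x)
```
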
See another example with the [Burgers equation](./burger.py).
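
A Burgers demo typically also illustrates the IMEX splitting of the equation; here is a hedged sketch of such a Dedalus problem definition (viscosity value, grid size and names are illustrative, not taken from `burger.py`):

```python
import numpy as np
import dedalus.public as d3

# same kind of 1D periodic setup as above (illustrative parameters)
xcoord = d3.Coordinate('x')
dist = d3.Distributor(xcoord, dtype=np.float64)
xbasis = d3.RealFourier(xcoord, size=256, bounds=(0, 2 * np.pi))
u = dist.Field(name='u', bases=xbasis)

nu = 1e-2                                   # viscosity (illustrative value)
dx = lambda f: d3.Differentiate(f, xcoord)

# IMEX-friendly split: linear diffusion stays implicit on the left-hand side,
# the nonlinear advection term is treated explicitly on the right-hand side
problem = d3.IVP([u], namespace=locals())
problem.add_equation("dt(u) - nu*dx(dx(u)) = -u*dx(u)")
```
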


## SDC time-integrator for Dedalus

This playground also provides a standalone SDC solver that can be used directly;
see the [demo file for the Burgers equation](./burger_ref.py).
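
The SDC scheme is selected like any other Dedalus timestepper. The following is only a hedged sketch: the import path of `SpectralDeferredCorrectionIMEX` and the parameter values are assumptions, and `problem` is reused from the sketches above, so check the linked demo for the exact setup:

```python
# Assumed import path; the class name is the one referred to in the next section.
from pySDC.playgrounds.dedalus.sdc import SpectralDeferredCorrectionIMEX

# assuming `problem` is a d3.IVP built as in the earlier sketches, the SDC scheme
# is passed to Dedalus like any other timestepper class
solver = problem.build_solver(SpectralDeferredCorrectionIMEX)
solver.stop_sim_time = 1.0   # illustrative value
timestep = 1e-3              # illustrative value
```

The demo's main time loop then advances the solution step by step: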
```python
while solver.proceed:
    solver.step(timestep)
```

A full example script for the 2D Rayleigh-Benard Convection problem can be found [here](./rayleighBenardSDC.py).

## Time-parallel SDC time-integrator for Dedalus

Two MPI implementations of SDC time-integrators for Dedalus are provided in `pySDC.playgrounds.dedalus.sdc`. They allow computations to run on $M$ parallel processes, $M$ being the number of quadrature nodes:

1. `SDCIMEX_MPI`: each rank computes the solution on one quadrature node; the initial tendencies and $MX_0$ are computed by rank 0 only and broadcast to all other time ranks.
2. `SDCIMEX_MPI2`: each rank computes the solution on one quadrature node, and the final solution from time rank $M-1$ is broadcast to all time ranks at the end of the time step, providing the initial solution for the next one.

While both implementations give the same results as the `SpectralDeferredCorrectionIMEX` class (up to machine precision), `SDCIMEX_MPI2` minimizes the amount of communication between time ranks and may slightly improve performance on some clusters.

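To make the difference concrete, here is a small self-contained toy, not the playground's implementation, that only illustrates the two `mpi4py` broadcast patterns described above:

```python
# Toy illustration of the two communication patterns described above;
# this is NOT the playground's code, only the MPI idiom it relies on.
import numpy as np
from mpi4py import MPI

tComm = MPI.COMM_WORLD                      # stand-in for the time communicator
rank, size = tComm.Get_rank(), tComm.Get_size()

# SDCIMEX_MPI-like: rank 0 computes shared initial data and broadcasts it
u0 = np.linspace(0.0, 1.0, 8) if rank == 0 else np.empty(8)
tComm.Bcast(u0, root=0)

# each time rank then updates "its" quadrature node (dummy update here)
uNode = u0 + 0.1 * (rank + 1)

# SDCIMEX_MPI2-like: the last time rank owns the end-of-step solution and
# broadcasts it as the initial solution of the next time step
tComm.Bcast(uNode, root=size - 1)
```
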
> ⚠️ Only diagonal SDC preconditioners (e.g. `MIN-SR-FLEX`, as used in the template below) can be used with these MPI implementations.

**Example of use:**

Here is a template for a Dedalus run script:

```python
import numpy as np
import dedalus.public as d3

from pySDC.playgrounds.dedalus.sdc import SDCIMEX_MPI

SDCIMEX_MPI.setParameters(
    nNodes=4, implSweep="MIN-SR-FLEX", explSweep="PIC", initSweep="COPY"
)
gComm, sComm, tComm = SDCIMEX_MPI.initSpaceTimeComms()

coords = ...  # Dedalus coordinates, e.g. d3.CartesianCoordinates('x', 'y', 'z')
distr = d3.Distributor(coords, dtype=np.float64, comm=sComm)
problem = d3.IVP(...)  # rest of the problem definition ...

# Solver
solver = problem.build_solver(SDCIMEX_MPI)
# rest of the solver settings ...
```

The `gComm`, `sComm` and `tComm` variables are `mpi4py.MPI.Intracomm` objects for the _global_, _space_ and _time_ communicators, respectively.
They provide methods such as `Get_rank()` or `Barrier()` that can be used within the script.

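For instance, continuing the template above, one could report the space-time layout from the global root process only (illustrative snippet):

```python
# illustrative only: print the space-time decomposition from the global root rank
if gComm.Get_rank() == 0:
    print(f"space ranks: {sComm.Get_size()}, time ranks: {tComm.Get_size()}")
gComm.Barrier()
```
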
To run the script, simply do:

```bash
mpirun -n $NPROCS python script.py
```

Here `NPROCS` must be a multiple of the number of quadrature nodes used with SDC (4 in the template above).
Note that the same template can be used with the `SDCIMEX_MPI2` class.