
Commit bb319d9

add mpi4py to the docs
1 parent d538ee2 commit bb319d9

File tree

1 file changed: +59 -0 lines changed


docs/source/tutorial/tutorial.parallelism.rst

Lines changed: 59 additions & 0 deletions
@@ -53,3 +53,62 @@ On Windows by default `adaptive.Runner` uses a `distributed.Client`.
    runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)
    runner.live_info()
    runner.live_plot(update_interval=0.1)

`mpi4py.futures.MPIPoolExecutor`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This makes sense if you want to run a ``Learner`` on a cluster non-interactively using a job script.

For example, create the following file, called ``run_learner.py``:

.. code:: python

    import adaptive
    from mpi4py.futures import MPIPoolExecutor

    # `f` (the function to learn) and `fname` (the file to save to) are
    # assumed to be defined earlier in this script
    learner = adaptive.Learner1D(f, bounds=(-1, 1))

    # load the data
    learner.load(fname)

    # run until `goal` is reached with an `MPIPoolExecutor`
    runner = adaptive.Runner(
        learner,
        executor=MPIPoolExecutor(),
        shutdown_executor=True,
        goal=lambda l: l.loss() < 0.01,
    )

    # periodically save the data (in case the job dies)
    runner.start_periodic_saving(dict(fname=fname), interval=600)

    # block until runner goal reached
    runner.ioloop.run_until_complete(runner.task)

On your laptop/desktop you can run this script like:

.. code:: bash

    export MPI4PY_MAX_WORKERS=15
    mpiexec -n 1 python run_learner.py

Or you can pass ``max_workers=15`` programmatically when creating the executor instance.
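
A minimal sketch of that variant (reusing the ``learner`` and goal from ``run_learner.py`` above; the names here are illustrative):

.. code:: python

    # create the executor with an explicit number of workers instead of
    # relying on the MPI4PY_MAX_WORKERS environment variable
    executor = MPIPoolExecutor(max_workers=15)

    runner = adaptive.Runner(
        learner,
        executor=executor,
        shutdown_executor=True,
        goal=lambda l: l.loss() < 0.01,
    )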

Inside a job script for a job-queuing system, use:

.. code:: bash

    export MPI4PY_MAX_WORKERS=15
    mpiexec -n 16 python -m mpi4py.futures run_learner.py

How you call MPI might depend on your specific queuing system; with SLURM, for example, it's:

.. code:: bash

    #!/bin/bash
    #SBATCH --job-name adaptive-example
    #SBATCH --ntasks 100

    export MPI4PY_MAX_WORKERS=$SLURM_NTASKS
    srun -n $SLURM_NTASKS --mpi=pmi2 ~/miniconda3/envs/py37_min/bin/python -m mpi4py.futures run_learner.py
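
You would then submit this job script with SLURM's ``sbatch`` command; assuming it is saved as ``run_learner.sbatch`` (a file name chosen here for illustration):

.. code:: bash

    sbatch run_learner.sbatch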
