
Commit 788dc77

Updates
1 parent 49dc092 commit 788dc77


_episodes/11-dask-configuration.md

Lines changed: 238 additions & 36 deletions
@@ -1,27 +1,34 @@
---
title: "Configuring Dask"
teaching: 20 (+ optional 10)
exercises: 40 (+ optional 20)
compatibility: ESMValCore v2.10.0

questions:
- "What is the Dask configuration file and how should I use it?"
- "What are Dask workers?"
- "What is the Dask scheduler?"

objectives:
- "Understand the contents of the dask.yml file"
- "Prepare a personalized dask.yml file"

keypoints:
- "The ``~/.esmvaltool/dask.yml`` file tells ESMValCore how to configure Dask."
- "``cluster`` can be used to start a new Dask cluster for each run."
- "``client`` can be used to connect to an already running Dask cluster."
- "The Dask default scheduler can be configured by editing the files in ``~/.config/dask``."
- "The Dask Dashboard can be used to see if the Dask workers have sufficient memory available."

---

## The Dask configuration file

When processing larger amounts of data, and especially when the tool crashes
while running a recipe because there is not enough memory available, it is
usually beneficial to change the default
[Dask configuration](https://docs.esmvaltool.org/projects/ESMValCore/en/latest/quickstart/configure.html#dask-configuration).

The preprocessor functions in ESMValCore use the
[Iris](https://scitools-iris.readthedocs.io) library, which in turn uses Dask
Arrays to be able to process datasets that are larger than the available memory.
@@ -31,37 +38,51 @@ but if you are interested there is a
[guide to "Lazy Data"](https://scitools-iris.readthedocs.io/en/stable/userguide/real_and_lazy_data.html)
available. Lazy data is the term the Iris library uses for Dask Arrays.

### Workers

The most important concept to understand when using Dask Arrays is the concept
of a Dask "worker". With Dask, computations are run in parallel by little
programs called "workers". These could be running on the
same machine that you are running ESMValTool on, or they could be on one or
more other computers. Dask workers typically require 2 to 4 gigabytes (GiB) of
memory (RAM) each. In order to avoid running out of memory, it is important
to use only as many workers as your computer(s) have memory for. ESMValCore
(or Dask) provides configuration files where you can configure the number of
workers.

Note that only array computations are run using Dask, so the total runtime may
not decrease as much as you might expect when you increase the number of Dask
workers.

### Scheduler

In order to distribute the computations over the workers, Dask makes use of a
"scheduler". There are two different schedulers available. The default
scheduler can be a good choice for smaller computations that can run
on a single computer, while the scheduler provided by the Dask Distributed
package is more suitable for larger computations.

> ## On using ``max_parallel_tasks``
>
> In the config-user.yml file, there is a setting called ``max_parallel_tasks``.
> Any variable or diagnostic script in the recipe is considered a 'task' in this
> context, and when setting this to a value larger than 1, these tasks will be
> processed in parallel on the computer running the ``esmvaltool`` command; a
> minimal example of this setting is shown below the callout.
>
> With the Dask Distributed scheduler, all the tasks running in parallel
> can use the same workers, but with the default scheduler each task will
> start its own workers. If a recipe does not run with ``max_parallel_tasks`` set
> to a value larger than 1, try reducing the value or setting it to 1. This is
> especially relevant for recipes that process high-resolution data or many
> datasets per variable.
>
{: .callout}
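
As an illustration, the relevant line in ``config-user.yml`` could look like
this (a minimal sketch showing only this one setting; the value is just an
example):
```yaml
# Process the tasks in a recipe one at a time, so that they can all share
# the same Dask workers.
max_parallel_tasks: 1
```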

## Starting a Dask distributed cluster

The workers and the scheduler together are called a Dask "cluster".
Let's start the tutorial by configuring ESMValCore so it runs its
computations on a cluster with just one worker.

We use a text editor called ``nano`` to edit the configuration file:

@@ -86,13 +107,13 @@ cluster:
```yaml
cluster:
  type: distributed.LocalCluster
  n_workers: 1
  threads_per_worker: 2
  memory_limit: 4GiB
```

This tells ESMValCore to start a new cluster of one worker, which can use 4
gigabytes (GiB) of memory and run computations using 2 threads. For a more
extensive description of the available arguments and their values, see
[``distributed.LocalCluster``](https://distributed.dask.org/en/stable/api.html#distributed.LocalCluster).

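Any other settings under ``cluster`` are passed on as keyword arguments to the
class selected with ``type``, so additional ``distributed.LocalCluster``
arguments can be added here as well. For example, a hypothetical variant (not
part of this tutorial's configuration) with two workers and an explicitly
pinned Dashboard port could look like this:
```yaml
cluster:
  type: distributed.LocalCluster
  n_workers: 2                  # hypothetical: two local workers
  threads_per_worker: 2
  memory_limit: 4GiB
  dashboard_address: ":8787"    # hypothetical: fix the Dashboard port explicitly
```
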
To see this configuration in action, we will run a version
of [recipe_easy_ipcc.yml](https://docs.esmvaltool.org/en/latest/recipes/recipe_examples.html) with just two datasets. This recipe takes a few minutes to run, once you have the data available. Download
the recipe [here](../files/recipe_easy_ipcc_short.yml) and run it
with the command:

@@ -112,54 +133,235 @@ Open the Dashboard link in a browser to see the Dask Dashboard website.
When the recipe has finished running, the Dashboard website will stop working.
The top left panel shows the memory use of each of the workers, the panel on the
right shows one row for each thread that is doing work, and the panel at the
bottom shows the progress of all the work that the scheduler has currently been
asked to do.

> ## Explore what happens if workers do not have enough memory
>
> Reduce the amount of memory that the workers are allowed to use to 2 GiB and
> run the recipe again. Watch what happens.
>
>> ## Solution
>>
>> We use the `memory_limit` entry in the `~/.esmvaltool/dask.yml` file to set the
>> amount of memory allowed to 2 GiB:
>>```yaml
>> cluster:
>>   type: distributed.LocalCluster
>>   n_workers: 1
>>   threads_per_worker: 2
>>   memory_limit: 2GiB
>>```
>> Note that the bars representing the memory use turn
>> orange as the worker reaches the maximum amount of memory it is
>> allowed to use and it starts 'spilling' (writing data temporarily) to disk.
>> The red blocks in the top right panel represent time spent reading/writing
>> to disk. While 2 GiB per worker may be enough in other cases, it is apparently
>> not enough for this recipe.
>>
> {: .solution}
{: .challenge}


> ## Tune the configuration to your own computer
>
> Look at how much memory you have available on your machine (e.g. by running
> the command ``grep MemTotal /proc/meminfo`` on Linux), set the
> ``memory_limit`` back to 4 GiB per worker and increase the number of Dask
> workers so that they use the total amount available minus a few gigabytes for
> your other work. Run the recipe again and notice that it completes faster.
>
>> ## Solution
>>
>> For example, if your computer has 16 GiB of memory and you do not have too
>> many other programs running, it can use 12 GiB of memory for Dask workers,
>> so you can start 3 workers with 4 GiB of memory each.
>>
>> Use the `n_workers` entry in the `~/.esmvaltool/dask.yml` file to set the
>> number of workers to 3:
>>```yaml
>> cluster:
>>   type: distributed.LocalCluster
>>   n_workers: 3
>>   threads_per_worker: 2
>>   memory_limit: 4GiB
>>```
>> and run the recipe again with the command ``esmvaltool run recipe_easy_ipcc_short.yml``.
>> The time it took to run the recipe is printed to the screen.
>>
> {: .solution}
{: .challenge}

## Using an existing Dask Distributed cluster

In some cases, it can be useful to start the Dask Distributed cluster before
running the ``esmvaltool`` command. For example, you may want to keep the
Dashboard available for further investigation after the recipe has finished
running, or you may be working from a Jupyter notebook environment; see
[dask-labextension](https://github.com/dask/dask-labextension) and
[dask_jobqueue interactive use](https://jobqueue.dask.org/en/latest/interactive.html)
for more information.

To use a cluster that was started in some other way, the following configuration
can be used in ``~/.esmvaltool/dask.yml``:

```yaml
client:
  address: "tcp://127.0.0.1:33041"
```
where the address depends on the Dask cluster. Code to start a
[``distributed.LocalCluster``](https://distributed.dask.org/en/stable/api.html#distributed.LocalCluster) that automatically scales between 0 and 2 workers, depending on demand, could look like this:

```python
from time import sleep

from distributed import LocalCluster

if __name__ == '__main__':  # Remove this line when running from a Jupyter notebook
    # Start a local cluster that scales between 0 and 2 workers, depending on demand.
    cluster = LocalCluster(
        threads_per_worker=2,
        memory_limit='4GiB',
    )
    cluster.adapt(minimum=0, maximum=2)

    # Print connection information
    print(f"Connect to the Dask Dashboard by opening {cluster.dashboard_link} in a browser.")
    print("Add the following text to ~/.esmvaltool/dask.yml to connect to the cluster:")
    print("client:")
    print(f'  address: "{cluster.scheduler_address}"')

    # When running this as a Python script, the next two lines keep the cluster
    # running for an hour.
    hour = 3600  # seconds
    sleep(1 * hour)

    # Stop the cluster when you are done with it.
    cluster.close()
```

> ## Start a cluster and use it
>
> Copy the Python code above into a file called ``start_dask_cluster.py`` (or
> into a Jupyter notebook if you prefer) and start the cluster using the command
> ``python start_dask_cluster.py``. Edit the ``~/.esmvaltool/dask.yml`` file so
> ESMValCore can connect to the cluster. Run the recipe again and notice that the
> Dashboard remains available after the recipe completes.
>
>> ## Solution
>>
>> If the script printed
>> ```
>> Connect to the Dask Dashboard by opening http://127.0.0.1:8787/status in a browser.
>> Add the following text to ~/.esmvaltool/dask.yml to connect to the cluster:
>> client:
>>   address: "tcp://127.0.0.1:34827"
>> ```
>> to the screen, edit the file ``~/.esmvaltool/dask.yml`` so it contains the
>> lines
>> ```yaml
>> client:
>>   address: "tcp://127.0.0.1:34827"
>> ```
>> open the link "http://127.0.0.1:8787/status" in your browser and
>> run the recipe again with the command ``esmvaltool run recipe_easy_ipcc_short.yml``.
> {: .solution}
{: .challenge}

When running from a Jupyter notebook, don't forget to `close()` the cluster
when you are done with it, especially when you are running on an HPC facility
(see below), to avoid wasting compute hours that you are not using.

## Using the Dask default scheduler

It is recommended to use the Distributed scheduler explained above when
processing larger amounts of data, but in many cases the default scheduler
is good enough. Note that it does not provide a Dashboard, so it is less
instructive; that is why we did not use it earlier in this tutorial.

To use the default scheduler, comment out all the contents of
``~/.esmvaltool/dask.yml`` and create a file in ``~/.config/dask`` (the
filename does not matter, e.g. ``~/.config/dask/default.yml``) with the
contents:
```yaml
scheduler: threads
num_workers: 4
```
to set the number of workers to 4. The ``scheduler`` can also be set to
``synchronous``. In that case it will use a single thread, which may be useful
for debugging.

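For example, a hypothetical ``~/.config/dask/default.yml`` for debugging could
look like this:
```yaml
# Hypothetical debugging configuration: run all computations in a single thread.
scheduler: synchronous
```
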
> ## Use the default scheduler
>
> Follow the instructions above to use the default scheduler and run the recipe
> again. To keep track of the amount of memory used by the process, you can
> start the ``top`` command in another terminal. The amount of memory is shown
> in the ``RES`` column.
>
>> ## Solution
>>
>> The recipe runs a bit faster with this configuration and you may have seen
>> a memory use of around 5 GB.
>>
> {: .solution}
{: .challenge}

## Optional: Using dask_jobqueue to run a Dask Cluster on an HPC system

The [``dask_jobqueue``](https://jobqueue.dask.org) package provides functionality
to start Dask Distributed clusters on High Performance Computing (HPC) or
High Throughput Computing (HTC) systems. This section is optional and only
useful if you have access to such a system.

An example configuration for the
[Levante HPC system](https://docs.dkrz.de/doc/levante/index.html)
could look like this:

```yaml
cluster:
  type: dask_jobqueue.SLURMCluster        # Levante uses SLURM as a job scheduler
  queue: compute                          # SLURM partition name
  account: bk1088                         # SLURM account name
  cores: 128                              # number of CPU cores per SLURM job
  memory: 240GiB                          # amount of memory per SLURM job
  processes: 64                           # number of Dask workers per SLURM job
  interface: ib0                          # use the InfiniBand network interface for communication
  local_directory: "/scratch/username/dask-tmp"  # directory for spilling to disk
  n_workers: 64                           # total number of workers to start
```

In this example we use the popular SLURM scheduler, but other schedulers are
also supported; see [this list](https://jobqueue.dask.org/en/latest/api.html).

In the above example, ESMValCore will start 64 Dask workers
(with 128 / 64 = 2 threads each), and for that it will need to launch a single SLURM
batch job on the ``compute`` partition. If you set ``n_workers`` to e.g.
256, it would launch 4 SLURM batch jobs, each starting 64 workers, for a
total of 4 x 64 = 256 workers. In the above configuration, each worker is
allowed to use 240 GiB per job / 64 workers per job = ~4 GiB per worker.

It is important to read the documentation about your HPC system and answer
questions such as
- Which batch scheduler does my HPC system use?
- How many CPU cores are available per node (a computer in an HPC system)?
- How much memory is available for use per node?
- What is the fastest network interface (InfiniBand is much faster than Ethernet)?
- What path should I use for storing temporary files on the nodes (try to avoid slower network storage if possible)?
- Which computing queue has the best availability?
- Can I use part of a node or do I need to use the full node?
  - If you are always charged for using the full node, asking for only part of a node is wasteful of computational resources.
  - If you can ask for part of a node, make sure the amount of memory you request matches the number of CPU cores if possible, or you will be charged for a larger fraction of the node (see the example configuration below this list).

in order to find the optimal configuration for your situation.

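As an illustration of the partial-node case, a hypothetical configuration that
requests about a quarter of a 128-core, 240 GiB node could look like this (the
queue, account, and scratch path are placeholders, not recommendations for any
particular system):
```yaml
cluster:
  type: dask_jobqueue.SLURMCluster
  queue: shared                     # placeholder: a partition that allows sharing nodes
  account: your_project             # placeholder: your own SLURM account
  cores: 32                         # a quarter of the 128 CPU cores of a node
  memory: 60GiB                     # roughly a quarter of the node memory
  processes: 16                     # 16 workers per SLURM job, 2 threads each
  interface: ib0
  local_directory: "/scratch/username/dask-tmp"
  n_workers: 16
```
Here the requested memory matches the requested fraction of the CPU cores, so
each worker still gets the recommended 2 to 4 GiB.
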
> ## Tune the configuration to your HPC system
>
> Answer the questions above and create an ``~/.esmvaltool/dask.yml`` file that
> matches your situation. To benefit from using an HPC system, you will probably
> need to run a larger recipe than the example we have used so far. You could
> try the full version of that recipe (``esmvaltool run examples/recipe_easy_ipcc.yml``)
> or use your own recipe. To understand performance, you may want
> to experiment with different configurations.
>
>> ## Solution
>>
>> The best configuration depends on the HPC system that you are using.
>> Discuss your answer with the instructor and the class if possible. If you are
>> taking this course by yourself, you can have a look at the [Dask configuration examples in the ESMValCore documentation](https://docs.esmvaltool.org/projects/ESMValCore/en/latest/quickstart/configure.html#dask-distributed-configuration).
>>
> {: .solution}
{: .challenge}
