To see this configuration in action, we will run a version of
[recipe_easy_ipcc.yml](https://docs.esmvaltool.org/en/latest/recipes/recipe_examples.html)
with just two datasets.
This recipe takes a few minutes to run, once you have the data available.
Download the recipe [here](../files/recipe_easy_ipcc_short.yml) and run it
with the command:

~~~bash
esmvaltool run recipe_easy_ipcc_short.yml
~~~
>> threads_per_worker: 2
>> memory_limit: 4GiB
>>```
>> and run the recipe again with the command
>> ``esmvaltool run recipe_easy_ipcc_short.yml``.
>> The time it took to run the recipe is printed to the screen.
>>
> {: .solution}
{: .challenge}
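If you want to compare configurations more systematically, you can also time the command yourself. Below is a minimal sketch using only the Python standard library; the helper name ``time_command`` is illustrative and not part of ESMValTool:

```python
import shlex
import subprocess
import time


def time_command(command: str) -> float:
    """Run a shell command and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(shlex.split(command), check=True)
    return time.perf_counter() - start


# Hypothetical usage, assuming ESMValTool is installed:
# seconds = time_command("esmvaltool run recipe_easy_ipcc_short.yml")
# print(f"Recipe finished in {seconds:.0f} s")
```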
    memory_limit='4GiB',
)
cluster.adapt(minimum=0, maximum=2)
# Print connection information
print(f"Connect to the Dask Dashboard by opening {cluster.dashboard_link} in a browser.")
print("Add the following text to ~/.esmvaltool/dask.yml to connect to the cluster:")
print("client:")
print(f'  address: "{cluster.scheduler_address}"')
# When running this as a Python script, the next two lines keep the cluster
# running for an hour.
hour = 3600  # seconds
sleep(1 * hour)
# Stop the cluster when you are done with it.
cluster.close()
```
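For reference, the ``~/.esmvaltool/dask.yml`` file assembled from the output of this script has the form below; the address shown is a placeholder, so use the one the script actually prints:

```yml
client:
  address: "tcp://127.0.0.1:8786"  # placeholder; copy the address printed by the script
```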
In this example we use the popular SLURM scheduler, but other schedulers are
also supported; see [this list](https://jobqueue.dask.org/en/latest/api.html).

In the above example, ESMValCore will start 64 Dask workers
(with 128 / 64 = 2 threads each) and for that it will need to launch a single
SLURM batch job on the ``compute`` partition. If you were to set ``n_workers``
to e.g. 256, it would launch 4 SLURM batch jobs which would each start 64
workers for a total of 4 x 64 = 256 workers. In the above configuration, each
worker is allowed to use 240 GiB per job / 64 workers per job = ~4 GiB per
worker.
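The worker arithmetic above can be sketched as follows; the values are taken from the example configuration, and the helper ``plan`` is purely illustrative:

```python
# Values from the example SLURM configuration above.
cores_per_job = 128        # CPU cores per SLURM batch job
workers_per_job = 64       # Dask workers started by each batch job
memory_per_job_gib = 240   # memory per SLURM batch job (GiB)


def plan(n_workers):
    """Return (threads per worker, number of SLURM jobs, GiB per worker)."""
    threads = cores_per_job // workers_per_job
    n_jobs = -(-n_workers // workers_per_job)  # ceiling division
    mem = memory_per_job_gib / workers_per_job
    return threads, n_jobs, mem


print(plan(64))   # (2, 1, 3.75): one batch job, ~4 GiB per worker
print(plan(256))  # (2, 4, 3.75): four batch jobs of 64 workers each
```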
It is important to read the documentation about your HPC system and answer questions such as
in order to find the optimal configuration for your situation.

> Answer the questions above and create an ``~/.esmvaltool/dask.yml`` file that
> matches your situation. To benefit from using an HPC system, you will probably
> need to run a larger recipe than the example we have used so far. You could
> try the full version of that recipe
> (``esmvaltool run examples/recipe_easy_ipcc.yml``) or use your own recipe.
> To understand how the different settings affect performance, you may want to
> experiment with different configurations.
>
>> ## Solution
>>
>> The best configuration depends on the HPC system that you are using.
>> Discuss your answer with the instructor and the class if possible.
>> If you are taking this course by yourself, you can have a look at the
>> [Dask configuration examples in the ESMValCore documentation](https://docs.esmvaltool.org/projects/ESMValCore/en/latest/quickstart/configure.html#dask-distributed-configuration).