
Commit 0806937

Improve formatting

1 parent 3a5beda commit 0806937

File tree

1 file changed: +4 −4 lines changed


_episodes/11-dask-configuration.md

Lines changed: 4 additions & 4 deletions
```diff
@@ -43,8 +43,8 @@ available. Lazy data is the term the Iris library uses for Dask Arrays.
 
 ### Workers
 The most important concept to understand when using Dask Arrays is the concept
-of a Dask "worker". With Dask, computations are run in parallel by little
-programs that are called "workers". These could be on running on the
+of a Dask *worker*. With Dask, computations are run in parallel by little
+Python programs that are called *workers*. These could be on running on the
 same machine that you are running ESMValTool on, or they could be on one or
 more other computers. Dask workers typically require 2 to 4 gigabytes (GiB) of
 memory (RAM) each. In order to avoid running out of memory, it is important
```
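The 2 to 4 GiB per worker mentioned in this hunk maps directly onto the arguments used when starting a local Dask cluster. A minimal sketch (the worker count, thread count, and memory cap below are illustrative values, not taken from the episode) could look like:

```python
from dask.distributed import LocalCluster

# Illustrative values only: two workers, each capped at 4 GiB of memory,
# so the whole cluster stays within roughly 8 GiB of RAM.
cluster = LocalCluster(
    n_workers=2,
    threads_per_worker=2,
    memory_limit="4GiB",
)
print(cluster.dashboard_link)  # dashboard URL for watching worker memory use
```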
```diff
@@ -59,7 +59,7 @@ workers.
 ### Scheduler
 
 In order to distribute the computations over the workers, Dask makes use of a
-"scheduler". There are two different schedulers available. The default
+*scheduler*. There are two different schedulers available. The default
 scheduler can be good choice for smaller computations that can run
 on a single computer, while the scheduler provided by the Dask Distributed
 package is more suitable for larger computations.
```
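As a rough illustration of the two scheduler options this hunk describes (the array shape and chunk sizes below are arbitrary, chosen only for the sketch), the default threaded scheduler and the Dask Distributed scheduler can be selected like this:

```python
import dask
import dask.array as da
from dask.distributed import Client

x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))

# Default scheduler: threads inside the current Python process,
# a reasonable choice for smaller computations on a single machine.
with dask.config.set(scheduler="threads"):
    print(x.mean().compute())

# Distributed scheduler: creating a Client starts a local Dask cluster
# and registers it, so subsequent compute() calls use its workers.
client = Client()
print(x.mean().compute())
client.close()
```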
```diff
@@ -219,7 +219,7 @@ client:
 where the address depends on the Dask cluster. Code to start a
 [``distributed.LocalCluster``](https://distributed.dask.org/
 en/stable/api.html#distributed.LocalCluster)
-that automatically scales between 0 and 2 workers, depending on demand, could
+that automatically scales between 0 and 2 workers depending on demand, could
 look like this:
 
 ```python
```
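The Python block the episode refers to is cut off by this hunk, so it is not shown above. A sketch of what such adaptive-scaling code could look like (the thread and memory settings are assumptions, not the episode's actual snippet) is:

```python
from dask.distributed import LocalCluster

# Start a local cluster and let it scale between 0 and 2 workers on demand.
# The thread and memory settings here are illustrative.
cluster = LocalCluster(threads_per_worker=2, memory_limit="4GiB")
cluster.adapt(minimum=0, maximum=2)

# Address a Dask client can connect to.
print(cluster.scheduler_address)
```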
