@@ -43,8 +43,8 @@ available. Lazy data is the term the Iris library uses for Dask Arrays.
 
 ### Workers
 The most important concept to understand when using Dask Arrays is the concept
-of a Dask "worker". With Dask, computations are run in parallel by little
-programs that are called "workers". These could be on running on the
+of a Dask *worker*. With Dask, computations are run in parallel by little
+Python programs that are called *workers*. These could be running on the
 same machine that you are running ESMValTool on, or they could be on one or
 more other computers. Dask workers typically require 2 to 4 gigabytes (GiB) of
 memory (RAM) each. In order to avoid running out of memory, it is important
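As a minimal sketch of the memory guidance above (the worker count, thread count, and memory limit are illustrative assumptions, not values from the patch), a Dask Distributed cluster can cap the memory available to each worker like this:

```python
from dask.distributed import Client, LocalCluster

# Illustrative settings: four workers, each capped at 4 GiB of memory,
# in line with the 2 to 4 GiB per worker guidance above.
cluster = LocalCluster(n_workers=4, threads_per_worker=2, memory_limit="4GiB")
client = Client(cluster)
print(client.dashboard_link)  # dashboard for monitoring worker memory use
```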
@@ -59,7 +59,7 @@ workers.
 ### Scheduler
 
 In order to distribute the computations over the workers, Dask makes use of a
-"scheduler". There are two different schedulers available. The default
+*scheduler*. There are two different schedulers available. The default
 scheduler can be a good choice for smaller computations that can run
 on a single computer, while the scheduler provided by the Dask Distributed
 package is more suitable for larger computations.
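A minimal sketch of the difference between the two schedulers (the array and chunk sizes are illustrative assumptions): with no client connected, `compute()` uses the default scheduler inside the current process, while connecting a `distributed.Client` routes the same computation to the workers of a Distributed cluster.

```python
import dask.array as da
from dask.distributed import Client

x = da.random.random((20_000, 20_000), chunks=(2_000, 2_000))

# Default scheduler: runs in threads within this process.
print(x.mean().compute())

# Distributed scheduler: the same computation now runs on cluster workers.
client = Client()  # starts a local Distributed cluster by default
print(x.mean().compute())
client.close()
```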
@@ -219,7 +219,7 @@ client:
 where the address depends on the Dask cluster. Code to start a
 [``distributed.LocalCluster``](https://distributed.dask.org/
 en/stable/api.html#distributed.LocalCluster)
-that automatically scales between 0 and 2 workers, depending on demand, could
+that automatically scales between 0 and 2 workers depending on demand, could
 look like this:
 
 ```python
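# A plausible sketch (assumed, not shown by this diff): start a
# LocalCluster and let it scale adaptively between 0 and 2 workers.
from distributed import LocalCluster

cluster = LocalCluster()
cluster.adapt(minimum=0, maximum=2)  # add/remove workers based on demand
print(cluster.scheduler_address)  # address to configure as the client
```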