
Commit a69518f
Fix hyphenation for high/low{-| }level (#232)
1 parent: eb24d53


3 files changed: 5 additions, 5 deletions


00_overview.ipynb

Lines changed: 2 additions & 2 deletions
@@ -35,11 +35,11 @@
 "\n",
 "We can think of Dask at a high and a low level\n",
 "\n",
-"* **High level collections:** Dask provides high-level Array, Bag, and DataFrame\n",
+"* **High-level collections:** Dask provides high-level Array, Bag, and DataFrame\n",
 "  collections that mimic NumPy, lists, and Pandas but can operate in parallel on\n",
 "  datasets that don't fit into memory. Dask's high-level collections are\n",
 "  alternatives to NumPy and Pandas for large datasets.\n",
-"* **Low Level schedulers:** Dask provides dynamic task schedulers that\n",
+"* **Low-level schedulers:** Dask provides dynamic task schedulers that\n",
 "  execute task graphs in parallel. These execution engines power the\n",
 "  high-level collections mentioned above but can also power custom,\n",
 "  user-defined workloads. These schedulers are low-latency (around 1ms) and\n",

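The hunk above describes Dask's low-level scheduler model: dynamic schedulers that execute task graphs. As a rough illustration (pure Python, no Dask required; `get` here is a toy sequential stand-in for Dask's parallel schedulers, not Dask's own implementation), a task graph can be written as a dict mapping keys to literals or `(callable, *args)` tuples:

```python
from operator import add, mul

def get(graph, key):
    """Toy sequential scheduler: evaluate `key` in a dask-style task graph.
    A task is either a literal value or a tuple (callable, *args), where
    string args that name other keys are resolved recursively."""
    task = graph[key]
    if isinstance(task, tuple) and callable(task[0]):
        func, *args = task
        resolved = [get(graph, a) if isinstance(a, str) and a in graph else a
                    for a in args]
        return func(*resolved)
    return task  # a literal value

# x = 1; y = x + 10; z = y * 2
graph = {"x": 1, "y": (add, "x", 10), "z": (mul, "y", 2)}
print(get(graph, "z"))  # -> 22
```

Dask's real schedulers execute graphs of this shape in parallel (threads, processes, or a distributed cluster) rather than via this depth-first walk, which is what makes them usable for custom, user-defined workloads.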
02_bag.ipynb

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@
 "\n",
 "These core data structures are optimized for general-purpose storage and processing. Adding streaming computation with iterators/generator expressions or libraries like `itertools` or [`toolz`](https://toolz.readthedocs.io/en/latest/) let us process large volumes in a small space. If we combine this with parallel processing then we can churn through a fair amount of data.\n",
 "\n",
-"Dask.bag is a high level Dask collection to automate common workloads of this form. In a nutshell\n",
+"Dask.bag is a high-level Dask collection to automate common workloads of this form. In a nutshell\n",
 "\n",
 "    dask.bag = map, filter, toolz + parallel execution\n",
 " \n",

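The "nutshell" line in the hunk above can be illustrated without Dask installed: semantically, a bag pipeline is ordinary map/filter over a sequence, which Dask then runs lazily and in parallel. A minimal single-core sketch (the sample data and lambdas are made up for illustration):

```python
data = range(10)

# Eager, single-core equivalent of a dask.bag pipeline such as:
#   import dask.bag as db
#   db.from_sequence(data).map(lambda x: x * 2).filter(lambda x: x > 6).compute()
result = [x for x in map(lambda x: x * 2, data) if x > 6]
print(result)  # -> [8, 10, 12, 14, 16, 18]
```

The Dask version partitions `data` across workers and defers execution until `.compute()`, but produces the same result.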
README.md

Lines changed: 2 additions & 2 deletions
@@ -10,11 +10,11 @@ Dask provides multi-core execution on larger-than-memory datasets.
 
 We can think of dask at a high and a low level
 
-* **High level collections:** Dask provides high-level Array, Bag, and DataFrame
+* **High-level collections:** Dask provides high-level Array, Bag, and DataFrame
   collections that mimic NumPy, lists, and Pandas but can operate in parallel on
   datasets that don't fit into main memory. Dask's high-level collections are
   alternatives to NumPy and Pandas for large datasets.
-* **Low Level schedulers:** Dask provides dynamic task schedulers that
+* **Low-level schedulers:** Dask provides dynamic task schedulers that
   execute task graphs in parallel. These execution engines power the
   high-level collections mentioned above but can also power custom,
   user-defined workloads. These schedulers are low-latency (around 1ms) and
