Commit 24678d7

frame

committed · 1 parent 71a73e4 · commit 24678d7

File tree: 1 file changed, +48 -54 lines changed


04_dataframe.ipynb

Lines changed: 48 additions & 54 deletions
@@ -12,7 +12,7 @@
     "\n",
     "# Dask DataFrames\n",
     "\n",
-    "We finished Chapter 02 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similiar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.\n",
+    "We finished Chapter 1 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similiar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.\n",
     "\n",
     "In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `\"data/nycflights/*.csv\"` and build parallel computations on all of our data at once.\n",
     "\n",
@@ -31,13 +31,16 @@
     "\n",
     "**Related Documentation**\n",
     "\n",
-    "* [Dask DataFrame documentation](http://dask.pydata.org/en/latest/dataframe.html)\n",
-    "* [Pandas documentation](http://pandas.pydata.org/)\n",
+    "* [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)\n",
+    "* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)\n",
+    "* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)\n",
+    "* [DataFrame examples](https://examples.dask.org/dataframe.html)\n",
+    "* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)\n",
     "\n",
     "**Main Take-aways**\n",
     "\n",
-    "1. Dask.dataframe should be familiar to Pandas users\n",
-    "2. The partitioning of dataframes is important for efficient queries"
+    "1. Dask DataFrame should be familiar to Pandas users\n",
+    "2. The partitioning of dataframes is important for efficient execution"
    ]
   },
   {
@@ -55,7 +58,7 @@
    "source": [
     "from dask.distributed import Client\n",
     "\n",
-    "client = Client()"
+    "client = Client(n_workers=4)"
    ]
   },
   {
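The hunk above changes `Client()` to `Client(n_workers=4)`. A sketch of the difference, assuming a local machine where four worker processes fit:

```python
from dask.distributed import Client

# With no arguments, Client() sizes a local cluster from the machine's
# cores; n_workers=4 pins it to four worker processes so the tutorial
# behaves the same everywhere.
client = Client(n_workers=4)
print(client.dashboard_link)  # diagnostic dashboard; address varies per run
```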
@@ -76,23 +79,15 @@
     "\n",
     "import os\n",
     "import dask\n",
-    "filename = os.path.join('data', 'accounts.*.csv')"
+    "filename = os.path.join('data', 'accounts.*.csv')\n",
+    "filename"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "This works just like `pandas.read_csv`, except on multiple csv files at once."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-    "filename"
+    "Filename includes a glob pattern `*`, so all files in the path matching that pattern will be read into the same Dask DataFrame."
    ]
   },
   {
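The rewritten markdown cell explains the glob pattern. A hedged illustration of what `accounts.*.csv` matches, assuming the tutorial's data-prep step wrote files named like `accounts.0.csv`:

```python
import os
from glob import glob

filename = os.path.join('data', 'accounts.*.csv')
# glob expands the same pattern that dd.read_csv receives.
print(sorted(glob(filename)))
```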
@@ -103,7 +98,6 @@
    "source": [
     "import dask.dataframe as dd\n",
     "df = dd.read_csv(filename)\n",
-    "# load and count number of rows\n",
     "df.head()"
    ]
   },
@@ -113,6 +107,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
+    "# load and count number of rows\n",
     "len(df)"
    ]
   },
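These two hunks move the `# load and count number of rows` comment from the `df.head()` cell to the `len(df)` cell, which is where the loading actually happens. A sketch of why, assuming `df` is built from `filename` as above:

```python
import dask.dataframe as dd

df = dd.read_csv(filename)  # lazy: only samples the start of the first file
df.head()                   # cheap: reads just the first partition

# len() has to count rows in every partition, so it reads all the files.
n_rows = len(df)
```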
@@ -150,7 +145,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Notice that the respresentation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and types."
+    "Notice that the respresentation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and dtypes."
    ]
   },
   {
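To inspect the metadata this corrected sentence describes without triggering a computation, one can look at the inferred schema — a sketch:

```python
# The repr holds column names and dtypes but no values; Dask inferred
# the schema from the start of the first file only.
print(df.dtypes)

# If that sample misleads inference (a common CSV pitfall), dtypes can
# be supplied explicitly, e.g. (hypothetical column/dtype pairing):
# df = dd.read_csv(filename, dtype={"Cancelled": "bool"})
```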
@@ -317,7 +312,11 @@
   {
    "cell_type": "code",
    "execution_count": null,
-    "metadata": {},
+    "metadata": {
+     "jupyter": {
+      "source_hidden": true
+     }
+    },
    "outputs": [],
    "source": [
     "len(df)"
@@ -344,7 +343,11 @@
   {
    "cell_type": "code",
    "execution_count": null,
-    "metadata": {},
+    "metadata": {
+     "jupyter": {
+      "source_hidden": true
+     }
+    },
    "outputs": [],
    "source": [
     "len(df[~df.Cancelled])"
@@ -371,7 +374,11 @@
   {
    "cell_type": "code",
    "execution_count": null,
-    "metadata": {},
+    "metadata": {
+     "jupyter": {
+      "source_hidden": true
+     }
+    },
    "outputs": [],
    "source": [
     "df[~df.Cancelled].groupby('Origin').Origin.count().compute()"
@@ -398,7 +405,11 @@
   {
    "cell_type": "code",
    "execution_count": null,
-    "metadata": {},
+    "metadata": {
+     "jupyter": {
+      "source_hidden": true
+     }
+    },
    "outputs": [],
    "source": [
     "df.groupby(\"Origin\").DepDelay.mean().compute()"
@@ -423,7 +434,11 @@
   {
    "cell_type": "code",
    "execution_count": null,
-    "metadata": {},
+    "metadata": {
+     "jupyter": {
+      "source_hidden": true
+     }
+    },
    "outputs": [],
    "source": [
     "df.groupby(\"DayOfWeek\").DepDelay.mean().compute()"
@@ -518,7 +533,7 @@
    "source": [
     "Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory.\n",
     "\n",
-    "During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory (the difference is caused by using `object` dtype for strings). This dataset is small enough that you would normally use Pandas.\n",
+    "During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory. This dataset is small enough that you would normally use Pandas.\n",
     "\n",
     "We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded \n",
     "\n",
@@ -763,35 +778,14 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### What definitely works?"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-    "* Trivially parallelizable operations (fast):\n",
-    "  * Elementwise operations: ``df.x + df.y``\n",
-    "  * Row-wise selections: ``df[df.x > 0]``\n",
-    "  * Loc: ``df.loc[4.0:10.5]``\n",
-    "  * Common aggregations: ``df.x.max()``\n",
-    "  * Is in: ``df[df.x.isin([1, 2, 3])]``\n",
-    "  * Datetime/string accessors: ``df.timestamp.month``\n",
-    "* Cleverly parallelizable operations (also fast):\n",
-    "  * groupby-aggregate (with common aggregations): ``df.groupby(df.x).y.max()``\n",
-    "  * value_counts: ``df.x.value_counts``\n",
-    "  * Drop duplicates: ``df.x.drop_duplicates()``\n",
-    "  * Join on index: ``dd.merge(df1, df2, left_index=True, right_index=True)``\n",
-    "* Operations requiring a shuffle (slow-ish, unless on index)\n",
-    "  * Set index: ``df.set_index(df.x)``\n",
-    "  * groupby-apply (with anything): ``df.groupby(df.x).apply(myfunc)``\n",
-    "  * Join not on the index: ``pd.merge(df1, df2, on='name')``\n",
-    "* Ingest operations\n",
-    "  * Files: ``dd.read_csv, dd.read_parquet, dd.read_json, dd.read_orc``, etc.\n",
-    "  * Pandas: ``dd.from_pandas``\n",
-    "  * Anything supporting numpy slicing: ``dd.from_array``\n",
-    "  * From any set of functions creating sub dataframes via ``dd.from_delayed``.\n",
-    "  * Dask.bag: ``mybag.to_dataframe(columns=[...])``"
+    "## Learn More\n",
+    "\n",
+    "\n",
+    "* [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)\n",
+    "* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)\n",
+    "* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)\n",
+    "* [DataFrame examples](https://examples.dask.org/dataframe.html)\n",
+    "* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)"
    ]
   },
   {
