src/posts/flox-smart/index.md (+23 −25)
@@ -21,23 +21,24 @@ See our [previous blog post](https://xarray.dev/blog/flox) for more.
Two key realizations influenced the development of flox:
1. Array workloads frequently group by a relatively small in-memory array. Quite frequently those arrays have patterns to their values, e.g. `"time.month"` is exactly periodic, `"time.dayofyear"` is approximately periodic (depending on calendar), and `"time.year"` is commonly a monotonically increasing array.
2. Chunk sizes (or "partition sizes") for arrays can be quite small along the core-dimension of an operation. This is an important difference between arrays and dataframes!

These two properties are particularly relevant for "climatology" calculations (e.g. `groupby("time.month").mean()`) — a common Xarray workload.
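To make that concrete, here is a minimal sketch of such a climatology (the tutorial dataset and chunk size are purely illustrative, and assume `xarray`, `dask`, and `pooch` are installed):

```python
import xarray as xr

# Illustrative only: any dataset with a "time" coordinate works the same way.
ds = xr.tutorial.open_dataset("air_temperature").chunk({"time": 100})

# "time.month" is exactly periodic (values 1-12), so this is a climatology:
monthly_climatology = ds.groupby("time.month").mean()

# "time.year" is monotonically increasing, so this is an annual reduction:
annual_mean = ds.groupby("time.year").mean()
```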
## Tree reductions can be catastrophically bad
For a catastrophic example, consider `ds.groupby("time.year").mean()`, or the equivalent `ds.resample(time="Y").mean()`, for a 100-year-long dataset of monthly averages with a chunk size of **1** (or **4**) along the time dimension.
This is a fairly common format for climate model output.
The small chunk size along time is offset by much larger chunk sizes along the other dimensions — commonly horizontal space (`x, y` or `latitude, longitude`).
A naive tree reduction would accumulate all averaged values into a single output chunk of size 100.
Depending on the chunking of the input dataset, this may overload the worker memory and fail catastrophically.
More importantly, there is a lot of wasteful communication — computing on the last year of data is completely independent of computing on the first year of the data, and there is no reason the two values need to reside in the same output chunk.
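A minimal sketch of the problematic layout described above (variable names and array sizes are made up for illustration):

```python
import dask.array as da
import pandas as pd
import xarray as xr

# 100 years of monthly data, chunk size 1 along time, one big chunk in space.
time = pd.date_range("1900-01-01", periods=100 * 12, freq="MS")
data = da.random.random((len(time), 90, 180), chunks=(1, 90, 180))
ds = xr.Dataset(
    {"temperature": (("time", "lat", "lon"), data)},
    coords={"time": time},
)

# With a naive tree reduction, all 100 yearly means would funnel into a
# single output chunk, concentrating memory and communication on one worker.
annual = ds.groupby("time.year").mean()
```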
## Avoiding catastrophe
Thus `flox` quickly grew two new modes of computing the groupby reduction.
First, `method="blockwise"`, which applies the grouped reduction in a blockwise fashion.
This is great for `resample(time="Y").mean()` where we group by `"time.year"`, which is a monotonically increasing array.
With an appropriate (and usually quite cheap) rechunking, the problem is embarrassingly parallel.
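As a sketch, the strategy can also be requested explicitly through `flox.xarray.xarray_reduce` (flox's automatic heuristics usually make this unnecessary; `ds` is the illustrative dataset from above):

```python
import flox.xarray

# Group by "time.year" and reduce block-by-block; flox can rechunk along time
# so that no year straddles two chunks before applying the reduction.
annual = flox.xarray.xarray_reduce(
    ds,
    "time.year",
    func="mean",
    method="blockwise",
)
```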
@@ -93,7 +94,7 @@ After a fun exploration involving such fun ideas as [locality-sensitive hashing]
I use set _containment_, or a "normalized intersection", to determine the similarity of the sets of chunks occupied by two different groups (`Q` and `X`).
```
C = |Q ∩ X| / |Q| ≤ 1    (∩ is set intersection)
```
Unlike Jaccard similarity, _containment_ [isn't skewed](http://ekzhu.com/datasketch/lshensemble.html) when one of the sets is much larger than the other.
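A toy illustration of the idea (plain Python sets, not flox internals):

```python
def containment(Q: set, X: set) -> float:
    """C = |Q ∩ X| / |Q|; always between 0 and 1."""
    return len(Q & X) / len(Q)

# Chunk indices touched by three groups of a monthly time series with chunk size 3:
jan = {0, 4, 8}   # January lands in the same chunks as February...
feb = {0, 4, 8}
jul = {2, 6, 10}  # ...but never in the same chunks as July.

containment(jan, feb)  # 1.0 -> good candidates for one cohort
containment(jan, jul)  # 0.0 -> fully independent
```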
@@ -109,29 +110,25 @@ The steps are as follows:
1. Use `"blockwise"` when every group is contained in exactly one block.
1. Use `"cohorts"` when every chunk only has a single group, but that group might extend across multiple chunks.
   - On the left is a monthly grouping for a monthly time series with chunk size 4. There are 3 non-overlapping cohorts, so `method="cohorts"` is perfect.
   - On the right is a resampling problem of a daily time series with chunk size 10 to 5-daily frequency. Two 5-day periods are exactly contained in one chunk, so `method="blockwise"` is perfect.
1. At this point, we want to merge groups into cohorts when they occupy _approximately_ the same chunks. For each group `i` we can quickly compute containment against all other groups `j` as `C = S.T @ S / number_chunks_per_group` (see the sketch after this list).
1. To choose between `"map-reduce"` and `"cohorts"`, we need a summary measure of the degree to which the labels overlap with each other. We use _sparsity_ --- the number of non-zero elements in `C` divided by the number of elements in `C`, `C.nnz/C.size`. When sparsity is relatively high, we use `"map-reduce"`; otherwise we use `"cohorts"`.
For more detail [see the docs](https://flox.readthedocs.io/en/latest/implementation.html#heuristics).
Here is C for a range of chunk sizes from 1 to 12, for computing `groupby("time.month")` of a monthly mean dataset, [the title on each image is (chunk size, sparsity)].
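Here is a rough, self-contained sketch of how such a containment matrix and its sparsity could be computed. It mirrors the description above rather than flox's actual implementation; `S` is assumed to be the chunk-by-group occurrence matrix:

```python
import numpy as np

n_months, chunk_size = 120, 4                 # 10 years of monthly data, chunks of 4
labels = np.arange(n_months) % 12             # group label (month) of each element
chunk_of = np.arange(n_months) // chunk_size  # chunk index of each element

S = np.zeros((chunk_of.max() + 1, 12), dtype=int)
S[chunk_of, labels] = 1                       # S[chunk, group] = 1 if group occurs in chunk

chunks_per_group = S.sum(axis=0)              # |Q| for every group
C = (S.T @ S) / chunks_per_group[:, None]     # C[i, j] = |Q_i ∩ Q_j| / |Q_i|

sparsity = np.count_nonzero(C) / C.size       # ≈ 0.33 here: three clean cohorts
# Low sparsity -> groups cluster into cohorts -> "cohorts"; high sparsity -> "map-reduce".
```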
…will rechunk so that a year of data is in a single chunk.
Even so, it would be nice to automatically rechunk to minimize the number of cohorts detected, or to a perfectly blockwise application.
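For now, a user can do this rechunking by hand (a hedged sketch; `ds` is the monthly dataset from earlier, and 12 is the number of elements per year):

```python
# Put each year (12 monthly values) into exactly one chunk, so that the
# annual groupby reduction becomes embarrassingly parallel / blockwise.
rechunked = ds.chunk({"time": 12})
annual = rechunked.groupby("time.year").mean()
```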
A key limitation is that we have lost _context_.
The string `"time.month"` tells me that I am grouping a perfectly periodic array with period 12; similarly
the _string_ `"time.dayofyear"` tells me that I am grouping by a (quasi-)periodic array with period 365, and that group `366` may occur occasionally (depending on calendar).
This context is hard to infer from integer group labels `[1, 2, 3, 4, 5, ..., 1, 2, 3, 4, 5]`.
_[Get in touch](https://github.com/xarray-contrib/flox/issues) if you have ideas for how to do this inference!_
One way to preserve context may be to use Xarray's new Grouper objects, and let them report ["preferred chunks"](https://github.com/pydata/xarray/blob/main/design_notes/grouper_objects.md#the-preferred_chunks-method-) for a particular grouping.
This would allow a downstream system like `flox` or `dask-expr` to take this into account later (or even earlier!) in the pipeline.