More stable algorithm for variance, standard deviation #456
base: main
Conversation
```python
def __init__(self, arrays):
    self.arrays = arrays  # something else needed here to be more careful about types (not sure what)
    # Do we want to coerce arrays into a tuple and make sure it's immutable? Do we want it to be immutable?
```
this is fine as-is
```python
return MULTIARRAY_HANDLED_FUNCTIONS[func](*args, **kwargs)
```

```python
# Shape is needed, seems likely that the other two might be
# Making some strong assumptions here that all the arrays are the same shape, and I don't really like this
```
yeah this data structure isn't useful in general, and is only working around some limitations in the design where we need to pass in multiple intermediates to the combine function. So there will be some ugliness. You have good instincts.
flox/aggregate_flox.py
```python
sum_squared_deviations = sum(
    group_idx,
    (array - array_means[..., group_idx]) ** 2,
```
👏 👏🏾
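For context, here is a self-contained numpy sketch of the two-pass grouped variance this hunk implements. Only `group_idx` follows the diff above; the `bincount`-based helpers and the example data are illustrative assumptions, not flox's code.

```python
import numpy as np

def group_var(group_idx, array, ddof=0):
    # First pass: per-group counts and sums give the per-group means.
    counts = np.bincount(group_idx)
    sums = np.bincount(group_idx, weights=array)
    means = sums / counts
    # Second pass: sum of squared deviations from each group's own mean.
    sq_dev = np.bincount(group_idx, weights=(array - means[group_idx]) ** 2)
    return sq_dev / (counts - ddof)

group_idx = np.array([0, 0, 1, 1, 1])
array = np.array([1.0, 2.0, 10.0, 12.0, 14.0])
print(group_var(group_idx, array))  # [0.25, 2.6666...]
```

Because deviations are taken from each group's own mean, there is no catastrophic cancellation when the mean is large relative to the spread.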
```diff
@@ -235,7 +235,7 @@ def gen_array_by(size, func):
 @pytest.mark.parametrize("size", [(1, 12), (12,), (12, 9)])
 @pytest.mark.parametrize("nby", [1, 2, 3])
 @pytest.mark.parametrize("add_nan_by", [True, False])
-@pytest.mark.parametrize("func", ALL_FUNCS)
+@pytest.mark.parametrize("func", ["nanvar"])
```
we will revert before merging, but this is the test we need to make work first. It runs a number of complex cases.
```diff
@@ -343,12 +343,106 @@ def _mean_finalize(sum_, count):
     )


+def var_chunk(group_idx, array, *, engine: str, axis=-1, size=None, fill_value=None, dtype=None):
```
I moved this here so that we can generalize to "all" engines. It has some ugliness (notice that it now takes the `engine` kwarg).
```python
array_sums = generic_aggregate(
    group_idx,
    array,
    func="nansum",
```
This will need to be "sum" for "var".
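For orientation, a hedged sketch of the overall shape of `var_chunk` implied by this diff and the notes above. `generic_aggregate` and `MultiArray` are the PR's internals; the exact keyword handling and the `"nanlen"` count step are assumptions, not the PR's code.

```python
def var_chunk(group_idx, array, *, engine: str, axis=-1, size=None, fill_value=None, dtype=None):
    # Per-group sum, used to form the per-group mean.
    array_sums = generic_aggregate(
        group_idx, array, func="nansum",
        engine=engine, axis=axis, size=size, fill_value=0,
    )
    # Per-group count of non-NaN values.
    counts = generic_aggregate(
        group_idx, array, func="nanlen",
        engine=engine, axis=axis, size=size, fill_value=0,
    )
    means = array_sums / counts
    # Second pass within the chunk: squared deviations from the group mean.
    sum_squared_deviations = generic_aggregate(
        group_idx, (array - means[..., group_idx]) ** 2, func="nansum",
        engine=engine, axis=axis, size=size, fill_value=0,
    )
    # Bundle the three intermediates so they travel together to combine.
    return MultiArray((array_sums, sum_squared_deviations, counts))
```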
My first thought is to pass some kind of "are NaNs okay" boolean variable through to var_chunk and var_combine. Is this what xarray's `skipna` does? Or I think I've seen it done as a string, "propagate" or "ignore"? And then to call var_chunk and var_combine as a partial.
Yes, the way I do this in flox is to create `var_chunk = partial(_var_chunk, skipna=False)` and `_nanvar_chunk = partial(_var_chunk, skipna=True)`. You can stick this in the Aggregation constructor, I think.
```diff
@@ -1251,7 +1252,8 @@ def chunk_reduce(
     # optimize that out.
     previous_reduction: T_Func = ""
     for reduction, fv, kw, dt in zip(funcs, fill_values, kwargss, dtypes):
-        if empty:
+        # UGLY! but this is because the `var` breaks our design assumptions
+        if empty and reduction is not var_chunk:
```
This code path is an "optimization" for chunks that don't contain any valid groups, so `group_idx` is all `-1`.

We will need to override `full` in MultiArray. Look up what the `like` kwarg does here: it dispatches to the appropriate array type.

The next issue will be that `fill_value` is a scalar like `np.nan`, but that doesn't work for all our intermediates (e.g. the "count").

- My first thought is that MultiArray will need to track a default fill_value per array. For `var`, this can be initialized to `(None, None, 0)`. If `None`, we use the `fill_value` passed in; else the default.
- The other way would be to hardcode some behaviour in `_initialize_aggregation` so that `agg.fill_value["intermediate"] = ((fill_value, fill_value, 0),)`, and then MultiArray can receive that tuple and do the "right thing".

The other place this will matter is in `reindex_numpy`, which is executed at the combine step. I suspect the second tuple approach is the best.

This bit is hairy, and ill-defined. Let me know if you want me to work through it.
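To make the discussion concrete, here is a minimal sketch of the second ("tuple") approach, assuming the `np.full` override receives one scalar per intermediate. The class is a stand-in mirroring the MultiArray snippets earlier in this PR, and the `like` handling is an assumption about how NumPy forwards the dispatch argument.

```python
import numpy as np

MULTIARRAY_HANDLED_FUNCTIONS = {}

class MultiArray:
    # Minimal stand-in for the MultiArray in this PR.
    def __init__(self, arrays):
        self.arrays = tuple(arrays)

    def __array_function__(self, func, types, args, kwargs):
        if func not in MULTIARRAY_HANDLED_FUNCTIONS:
            return NotImplemented
        return MULTIARRAY_HANDLED_FUNCTIONS[func](*args, **kwargs)

def implements(numpy_function):
    def decorator(func):
        MULTIARRAY_HANDLED_FUNCTIONS[numpy_function] = func
        return func
    return decorator

@implements(np.full)
def full(shape, fill_value, **kwargs):
    kwargs.pop("like", None)  # dispatch-only argument; drop it if forwarded
    # "Tuple approach": one fill value per intermediate, e.g.
    # (np.nan, np.nan, 0) for (sum, sum-of-squared-deviations, count).
    return MultiArray(np.full(shape, fv, **kwargs) for fv in fill_value)

# np.full(..., like=some_multiarray) now dispatches here and returns a
# MultiArray whose "count" intermediate is filled with 0, not NaN.
```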
I'm partway through implementing something to work here.

- How do I trigger this code pathway without brute-force overwriting `if empty:` with `if True:`?
- When `np.full` is called, `like` is a np array, not a MultiArray, because it's (I think) the chunk data, bypassing `var_chunk` (could also be an artefact of the `if True:` override above?). In a pinch, I guess I could add an `elif` that catches the `empty and reduction is var_chunk` case and coerce that into a MultiArray, but it's also ugly, so I'm hoping you might have better ideas.
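One hypothetical way to hit that path without the `if True:` hack, assuming NaN labels factorize to `-1` as described above (the chunking and data here are illustrative, not from the PR):

```python
import numpy as np
import dask.array as da
import flox

# The second chunk's labels are all NaN, so its group_idx is all -1 and
# chunk_reduce should take the `empty` branch for that chunk only.
array = da.from_array(np.arange(8, dtype=float), chunks=4)
by = np.array([0, 0, 1, 1, np.nan, np.nan, np.nan, np.nan])
result, groups = flox.groupby_reduce(array, by, func="nanvar", engine="flox")
```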
Thinking some more, I may have misinterpreted what fill_value is used for. When is it needed for intermediates?
This is great progress! Now we reach some much harder parts. I pushed a commit to show where I think the "chunk" function should go and left a few comments. I think the next steps should be to …
Do you think it likely that MultiArray would ever be used for anything else? I'm tempted to rename it VarChunkArray or somesuch, so the line between "expected behaviour" and "something's not right here" can be more clearly defined. Not sure it really changes the code's behaviour right now, but it would allow some more checks and more defensive code.

By "add a new test to test_core.py with your reproducer (though modified to work with pure numpy arrays)", do you mean add the failing code from my original issue to the end of test_core.py, like:

```python
@requires_dask
@pytest.mark.parametrize("func", ("nanvar",))  # Expect to expand this to other functions once written
@pytest.mark.parametrize("engine", ("flox",))  # Expect to expand this to other engines once written
# May also want labels parametrized in here?
def test_std_var_precision(func, engine, etc):
    # Generate a dataset with small variance and big mean
    # Check that func with engine gives you the same answer as numpy
```

with internals mostly modelled on a trimmed-down version of …
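For concreteness, a hypothetical fleshing-out of that skeleton; the data, the loose tolerance, and the expected-value construction are placeholders, not agreed-upon values:

```python
import numpy as np
import pytest
from flox.core import groupby_reduce

@pytest.mark.parametrize("func", ("nanvar",))
@pytest.mark.parametrize("engine", ("flox",))
def test_std_var_precision(func, engine):
    # Small variance riding on a large mean: the hard case for a naive
    # one-pass sum-of-squares formula.
    labels = np.array([0, 0, 0, 1, 1, 1])
    values = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]) * 1e-3 + 1e8
    actual, _ = groupby_reduce(values, labels, func=func, engine=engine)
    expected = np.array([np.nanvar(values[labels == g]) for g in (0, 1)])
    # Placeholder tolerance; tightening it is part of the work below.
    np.testing.assert_allclose(actual, expected, rtol=1e-3)
```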
Possibly, but only within flox.
We can liberally make use of comments.
Yes, a simple-ish one would be fine. Line 111 in 8cfd999; hopefully your changes will let us delete that line.
I pushed a commit. Your changes are looking good! I constructed an expected result from numpy and it matches! I'm not sure what the expectation should be for …

Lastly, I noticed that you basically have a "property" test here (which is quite cool): this is a "metamorphic relation".

To get the existing test suite to start passing, you'll have to add support for the …
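A sketch of that metamorphic relation, which is presumably what the precision test exploits; the data and tolerance are illustrative:

```python
import numpy as np

# Variance is invariant under a constant shift: var(x + c) == var(x).
# A numerically stable grouped variance should preserve this approximately
# even when c is huge relative to the spread of x.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1e-3, size=1000)
for offset in (0.0, 1e3, 1e6, 1e9):
    np.testing.assert_allclose(np.var(x + offset), np.var(x), rtol=1e-4)
```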
I can't tell much looking at this method specifically; it's just one entry in a table, and I'm not sure how it'd generalise. It seems like the variant of the derivation I used is more or less neglected in the speed/precision evaluation later in the paper, though the naming of the various algorithms is a little tricky to follow and I might have missed it. I think, as a ballpark estimate from the one line in a table, we'd expect to gain another 4-6 decimal points on the …
I'd still expect it to fail for sufficiently big offsets, just to do better than the old algorithm. Does this present a problem?
Oops, I can fix that. I think the finalize function expects a `ddof` kwarg, but I probably didn't pass it through.
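Presumably the fix looks something like the following; this is a guess at the finalize step, with the intermediates named as in the discussion above:

```python
def var_finalize(sum_, sum_squared_deviations, count, ddof=0):
    # ddof must be threaded through from the user call,
    # e.g. ddof=1 for the unbiased estimator.
    return sum_squared_deviations / (count - ddof)
```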
Absolutely not! This is a massive improvement for common cases. For now we can skip the failing comparison for large offsets. As long as it's close to numpy, I'm happy. One thing to do would be to find the minimum tolerance for which we match numpy across that range of offsets.
Updated the algorithm for nanvar to use an adapted version of the Schubert and Gertz (2018) paper mentioned in #386, following discussion in #422.
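For reference, a hedged sketch of the pairwise-combine update at the heart of this kind of approach (the Chan et al. formula analysed by Schubert and Gertz; variable names are illustrative, not the PR's code):

```python
import numpy as np

def combine(a, b):
    # Each partial result is (sum, sum_of_squared_deviations, count).
    sum_a, m2_a, n_a = a
    sum_b, m2_b, n_b = b
    n = n_a + n_b
    delta = sum_b / n_b - sum_a / n_a  # difference of the partial means
    m2 = m2_a + m2_b + delta**2 * (n_a * n_b) / n
    return (sum_a + sum_b, m2, n)

# Big mean, small variance: the case that breaks a naive sum-of-squares.
x = np.random.default_rng(0).normal(1e9, 1.0, 1000)
parts = []
for chunk in np.split(x, 4):
    parts.append((chunk.sum(), ((chunk - chunk.mean()) ** 2).sum(), chunk.size))
total = parts[0]
for part in parts[1:]:
    total = combine(total, part)
print(np.isclose(total[1] / total[2], x.var()))  # True
```

Because each chunk's squared deviations are taken from that chunk's own mean, and the combine step only touches differences of means, the large offset never enters a cancellation-prone sum of raw squares.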