
Conversation

phofl
Member

@phofl phofl commented Feb 21, 2024

  • closes #xxxx (Replace xxxx with the GitHub issue number)
  • Tests added and passed if fixing a bug or adding a new feature
  • All code checks passed.
  • Added type annotations to new arguments/methods/functions.
  • Added an entry in the latest doc/source/whatsnew/vX.X.X.rst file if fixing a bug or adding a new feature.

This avoids a bunch of unnecessary checks; it might be worth it.

| Change   | Before [b2b1aae3] <merge~1>   | After [aebecfe9] <merge>   |   Ratio | Benchmark (Parameter)                                                                       |
|----------|-------------------------------|----------------------------|---------|---------------------------------------------------------------------------------------------|
| -        | 46.9±3μs                      | 41.9±0.5μs                 |    0.89 | join_merge.ConcatIndexDtype.time_concat_series('string[pyarrow]', 'monotonic', 0, True)     |
| -        | 210±7ms                       | 187±5ms                    |    0.89 | join_merge.I8Merge.time_i8merge('left')                                                     |
| -        | 2.11±0.2ms                    | 1.88±0.01ms                |    0.89 | join_merge.MergeEA.time_merge('Float32', False)                                             |
| -        | 17.1±3ms                      | 15.2±0.2ms                 |    0.89 | join_merge.MergeMultiIndex.time_merge_sorted_multiindex(('int64', 'int64'), 'inner')        |
| -        | 1.06±0.04ms                   | 907±20μs                   |    0.86 | join_merge.Merge.time_merge_dataframe_integer_2key(False)                                   |
| -        | 7.99±0.9ms                    | 6.82±0.06ms                |    0.85 | join_merge.ConcatIndexDtype.time_concat_series('string[pyarrow]', 'non_monotonic', 1, True) |
| -        | 224±10ms                      | 191±6ms                    |    0.85 | join_merge.I8Merge.time_i8merge('inner')                                                    |
| -        | 513±90μs                      | 425±5μs                    |    0.83 | join_merge.ConcatIndexDtype.time_concat_series('string[python]', 'monotonic', 0, False)     |
| -        | 519±100μs                     | 421±2μs                    |    0.81 | join_merge.ConcatIndexDtype.time_concat_series('string[python]', 'non_monotonic', 0, False) |
| -        | 1.18±0.6ms                    | 751±7μs                    |    0.63 | join_merge.ConcatIndexDtype.time_concat_series('int64[pyarrow]', 'has_na', 1, False)        |

@phofl phofl added Performance Memory or execution speed performance Reshaping Concat, Merge/Join, Stack/Unstack, Explode labels Feb 21, 2024
@phofl phofl requested a review from WillAyd as a code owner February 21, 2024 23:22
# ignore_na is True), we can skip the actual value, and
# replace the label with na_sentinel directly
labels[i] = na_sentinel
seen_na = True
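
For context, here is a minimal pure-Python sketch of the kind of loop this hunk lives in. This is illustrative only: the PR changes Cython hashtable code, and the function name and structure here are made up, apart from `labels`, `na_sentinel`, `ignore_na`, and `seen_na`, which appear in the hunks.

```python
import numpy as np

def factorize_with_sentinel(values, na_sentinel=-1, ignore_na=True):
    """Toy sentinel-based factorization (not the PR's Cython code)."""
    table = {}                                    # value -> integer label
    labels = np.empty(len(values), dtype=np.intp)
    seen_na = False

    for i, val in enumerate(values):
        if ignore_na and (val is None or val != val):   # val != val catches NaN
            # NA fast path from the hunk above: skip the hashtable and
            # write the sentinel into the labels array directly.
            labels[i] = na_sentinel
            seen_na = True
            continue
        idx = table.setdefault(val, len(table))
        labels[i] = idx

    return labels, seen_na
```
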
Member

Instead of trying to find this by iterating element by element, can we get the same performance by querying up front for missing values? That approach would work better in a future where we are more Arrow-based.

Member

Wouldn't we expect a two-pass implementation to be slower? I don't think we should be making decisions based on a potential Arrow-based future, since that would likely need a completely separate implementation anyway.

Member Author

Arrow strings use a completely different code path; this is just for our own strings.

Member

> Wouldn't we expect a two-pass implementation to be slower?

Generally I would expect a two-pass solution to be faster. Our null-checking implementations are pretty naive and always go element-wise with a boolean value. The Arrow implementation iterates 64 bits at a time, so checking for NA can be up to 64x as fast as this kind of loop. I'm not sure what NumPy does internally, but with a byte mask the same type of logic would be up to 8x as fast.

By separating that out into a has_na().any()-style check up front, you let other libraries determine that much faster than we can.
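
To illustrate the point (my sketch, not code from this PR): an up-front missing-value query can lean on vectorized or bitmap-based checks instead of branching per element.

```python
import numpy as np
import pyarrow as pa

values_np = np.array([1.0, np.nan, 3.0])
values_pa = pa.array([1.0, None, 3.0])

# Element-wise, as in the loop under review: one branch per value.
has_na_loop = any(v != v for v in values_np)

# Up front with NumPy: one vectorized pass producing a byte mask.
has_na_np = bool(np.isnan(values_np).any())

# Up front with Arrow: the validity bitmap is scanned a machine word
# at a time, and null_count may already be cached on the array.
has_na_pa = values_pa.null_count > 0
```
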

Member

To be clear, this is not a blocker for now; just something to think about as we make our code base more Arrow-friendly.

Member Author

> Arrow strings use a completely different code path; this is just for our own strings.

As I said, Arrow dispatches to pyarrow compute functions for those things, so it won't impact this part for now anyway.

labels[i] = idx

if return_inverse and return_labels_only:
    return labels.base, seen_na  # .base -> the underlying ndarray
Member

So the second value returned by this is either a bool or an ndarray, right? Instead of having dynamic return types like that, I think it would be better to just pass in a reference to seen_na; the caller can choose to ignore the result altogether if they'd like. That way you don't need a return_labels_only argument and can be consistent in what is returned.
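
Roughly the two shapes being discussed, as a hedged pure-Python sketch (the helper `_core` and both function names are made up; pandas' actual code is Cython, where seen_na could be passed by reference directly):

```python
import numpy as np

def _core(values):
    """Stand-in for the hashtable routine: (uniques, labels, seen_na)."""
    uniques = sorted({v for v in values if v == v})
    labels = np.array([uniques.index(v) if v == v else -1 for v in values],
                      dtype=np.intp)
    return np.asarray(uniques), labels, any(v != v for v in values)

# The PR's shape: the meaning of the return value depends on a flag.
def unique_dynamic(values, return_labels_only=False):
    uniques, labels, seen_na = _core(values)
    if return_labels_only:
        return labels, seen_na            # (ndarray, bool)
    return uniques, labels                # (ndarray, ndarray)

# The suggestion: one consistent return type, with seen_na reported
# through a mutable holder the caller can ignore.
def unique_consistent(values, na_info=None):
    uniques, labels, seen_na = _core(values)
    if na_info is not None:
        na_info["seen_na"] = seen_na
    return uniques, labels
```
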

Member Author

Yeah, this was my first idea as well, but it isn't a good one here. Even if we drop the returned array, we still set an internal variable signaling that an external view has been created, which triggers a copy later in the process because we call into the same hashtable again. Avoiding this copy is part of the performance improvement, so we can't use the same signature.
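
A toy illustration of the mechanism described here (my sketch, not pandas' actual hashtable code): once a view of the internal buffer has been handed out, a flag is set and the next mutating call has to copy before it can append.

```python
import numpy as np

class ToyUniquesVector:
    """Toy growable buffer mimicking the behaviour described above."""

    def __init__(self):
        self._buf = np.zeros(8, dtype=np.intp)
        self._n = 0
        self.external_view_exists = False

    def to_array(self):
        # Cheap: no copy, but the live buffer is now visible outside.
        self.external_view_exists = True
        return self._buf[: self._n]

    def append(self, value):
        if self.external_view_exists:
            # The caller may still hold the view returned above, so we
            # must not mutate that memory: copy first. This is the copy
            # the labels-only return path avoids triggering.
            self._buf = self._buf.copy()
            self.external_view_exists = False
        if self._n == len(self._buf):
            self._buf = np.resize(self._buf, 2 * len(self._buf))
        self._buf[self._n] = value
        self._n += 1
```
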

Member

I don't understand how passing the seen_na variable by reference as opposed to having a dynamic return type affects that. That seems like a code organization issue outside of this?

Member Author

@phofl phofl Mar 18, 2024

That doesn't matter; what matters is returning the same result as when return_labels_only=False, so we would need another if condition anyway.

@phofl
Member Author

phofl commented Mar 16, 2024

@WillAyd good to merge?

Contributor

This pull request is stale because it has been open for thirty days with no activity. Please update and respond to this comment if you're still interested in working on this.

@github-actions github-actions bot added the Stale label Apr 18, 2024
@mroeschke
Member

I think we would want this, but it appears to have gone stale, so I'm going to close it to clear the queue.

@mroeschke mroeschke closed this Apr 15, 2025