Conversation

frankmcsherry
Member

No description provided.

@frankmcsherry frankmcsherry force-pushed the unchained branch 4 times, most recently from f33488a to e2e49ec Compare July 18, 2025 00:38
Comment on lines +608 to +616
fn partition<I>(container: &mut Self::Container, builders: &mut [Self], mut index: I)
where
Self: for<'a> PushInto<<Self::Container as timely::Container>::Item<'a>>,
I: for<'a> FnMut(&<Self::Container as timely::Container>::Item<'a>) -> usize,
{
println!("Exchanging!");
for datum in container.drain() {
let index = index(&datum);
builders[index].push_into(datum);
}
container.clear();
}

Member

We might have to revisit how to exchange containers that cannot be drained or iterated here. As written, the implementation doesn't actually work, because container.drain() is unimplemented.

Member Author

Totally. I put it in only to see whether it panicked; I wasn't certain what the default implementation would do (pretty sure it would panic too, but I copy/pasted it to be sure). It's definitely within the PR's power to have a working implementation, and this is not it. :D
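For a stand-alone picture of what the snippet above intends, here is a minimal sketch of drain-based partitioning, using plain Vecs in place of the Container/PushInto machinery (the names and types here are illustrative, not the PR's actual traits):

```rust
/// Drain `container`, routing each item to one of `builders` according
/// to `index`. A plain-`Vec` stand-in for the `Container`/`PushInto`
/// machinery discussed in the PR.
fn partition<T, I>(container: &mut Vec<T>, builders: &mut [Vec<T>], mut index: I)
where
    I: FnMut(&T) -> usize,
{
    for datum in container.drain(..) {
        let target = index(&datum);
        builders[target].push(datum);
    }
    // `drain(..)` already empties the Vec, so no extra `clear()` is needed.
}

fn main() {
    let mut container = vec![3, 1, 4, 1, 5, 9, 2, 6];
    let mut builders = vec![Vec::new(), Vec::new()];
    // Route even numbers to builder 0 and odd numbers to builder 1.
    partition(&mut container, &mut builders, |x| (x % 2) as usize);
    assert!(container.is_empty());
    assert_eq!(builders[0], vec![4, 2, 6]);
    assert_eq!(builders[1], vec![3, 1, 1, 5, 9]);
}
```

The sketch only works because Vec::drain exists; the comment thread's point is exactly that some containers cannot offer such a drain.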

std::cmp::Ordering::Less => {
let lower = this_key_range.start;
gallop(this.keys.borrow(), &mut this_key_range, |x| x < that_key);
merged.extend_from_keys(&this, lower .. this_key_range.start);
Member

(here and below) extend_from_keys/_vals calls extend_from_self, but we don't (or can't!) presize the merged container. Pointing out that this is a potential performance regression if we have to reallocate often.
Since we can't know the final size ahead of time, we could allocate a container big enough to hold the sum of the two inputs. In the worst case, this would only waste virtual memory, I think.

But let's first measure and see whether it shows up.
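As a stand-alone illustration of the presizing idea, here is a plain-Vec sketch (hypothetical names, not the PR's actual extend_from_keys or gallop) that allocates the sum of the two input lengths up front, so none of the extends can reallocate:

```rust
use std::cmp::Ordering;

/// Advance `*lower` past all elements of `slice[..upper]` less than `key`.
/// A simple stand-in for the `gallop` in the diff: exponential probing
/// followed by a linear finish.
fn gallop(slice: &[u64], lower: &mut usize, upper: usize, key: u64) {
    let mut step = 1;
    while *lower + step < upper && slice[*lower + step] < key {
        *lower += step;
        step <<= 1;
    }
    while *lower < upper && slice[*lower] < key {
        *lower += 1;
    }
}

/// Merge two sorted key slices into an output presized to the sum of the
/// input lengths: the worst case keeps every key, so no extend reallocates.
fn merge(this: &[u64], that: &[u64]) -> Vec<u64> {
    let mut merged = Vec::with_capacity(this.len() + that.len());
    let (mut i, mut j) = (0, 0);
    while i < this.len() && j < that.len() {
        match this[i].cmp(&that[j]) {
            Ordering::Less => {
                // Copy the whole run of `this` keys below `that[j]` at once,
                // mirroring the `extend_from_keys` call in the diff.
                let lower = i;
                gallop(this, &mut i, this.len(), that[j]);
                merged.extend_from_slice(&this[lower..i]);
            }
            Ordering::Greater => {
                let lower = j;
                gallop(that, &mut j, that.len(), this[i]);
                merged.extend_from_slice(&that[lower..j]);
            }
            Ordering::Equal => {
                merged.push(this[i]);
                merged.push(that[j]);
                i += 1;
                j += 1;
            }
        }
    }
    merged.extend_from_slice(&this[i..]);
    merged.extend_from_slice(&that[j..]);
    merged
}

fn main() {
    let merged = merge(&[1, 2, 5, 7], &[3, 5, 8]);
    assert_eq!(merged, vec![1, 2, 3, 5, 5, 7, 8]);
    // Capacity was fixed up front at the sum of the input lengths.
    assert!(merged.capacity() >= 7);
}
```

If consolidation later drops tuples, some of that capacity goes unused, which is the trade-off the comment describes.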

Member Author

Elsewhere (datatoad) the presizing makes sense and seems to help a bit. One way to view merging and consolidation is as literally merging the tuples (no consolidation), for which the capacity could just be the sum of the capacities of the two layers, followed by a consolidation pass that filters out tuples (the ones whose diffs net to zero). Whether we should feel bad about the over-allocation depends on the setting: in a demand-paging world, I wouldn't feel too bad; in a world where capacities are limited even if you don't touch the data, it's more complicated.
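The "merge everything, then consolidate" view can be sketched in isolation. This is a hedged, plain-Vec sketch over (key, diff) pairs, with illustrative names rather than the actual trace APIs:

```rust
/// Consolidate sorted `(key, diff)` records in place: sum the diffs for
/// equal keys and drop records whose net diff is zero. A plain-`Vec`
/// sketch of the "merge, then consolidate" view described above.
fn consolidate(records: &mut Vec<(u64, i64)>) {
    records.sort_by_key(|&(k, _)| k);
    let (mut write, mut read) = (0, 0);
    while read < records.len() {
        let (key, mut diff) = records[read];
        read += 1;
        // Accumulate the run of records sharing this key.
        while read < records.len() && records[read].0 == key {
            diff += records[read].1;
            read += 1;
        }
        // Keep the record only if its diffs did not cancel out.
        if diff != 0 {
            records[write] = (key, diff);
            write += 1;
        }
    }
    records.truncate(write);
}

fn main() {
    // The concatenation of two layers' tuples; key 2's diffs cancel.
    let mut records = vec![(2, 1), (1, 1), (2, -1), (3, 2), (1, 1)];
    consolidate(&mut records);
    assert_eq!(records, vec![(1, 2), (3, 2)]);
    // The Vec keeps its original capacity: the over-allocation in question.
    assert!(records.capacity() >= 5);
}
```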
