
Commit 3f0b342

EmilyMatt and rluvaton authored
feat: Improve sort memory resilience (#19494)
## Which issue does this PR close?

Closes #19493.

## Rationale for this change

Greatly reduces the memory requested by `ExternalSorter` to perform sorts and adds much more granularity to the reservations, with minimal overhead, by merging the splitting and sorting processes.

## What changes are included in this PR?

The sort stream calculates the sort indices once, but performs the `take` in batches, producing `batch_size`-sized `RecordBatch`es whose `get_record_batch_size` results are very close to (if not exactly equal to) their sliced sizes. This removes the need to reserve a huge amount of memory as a precaution before the merge sort, which in turn allows more streams to be merged at the same time (see the sketch after this description).

## Are these changes tested?

Yes.

## Are there any user-facing changes?

There is a new `sort_batch_chunked` function, which returns a `Vec` of `RecordBatch` whose chunks are sized according to the provided `batch_size`. Some docs are updated.

---------

Co-authored-by: Raz Luvaton <[email protected]>
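As an illustration of that description, here is a minimal sketch of the compute-indices-once / take-in-chunks idea, using arrow's `lexsort_to_indices` and `take_record_batch` kernels. The function name, signature, and internals below are assumptions for exposition, not the PR's actual `sort_batch_chunked` implementation:

```rust
use arrow::compute::{lexsort_to_indices, take_record_batch, SortColumn};
use arrow::error::ArrowError;
use arrow::record_batch::RecordBatch;

/// Illustrative sketch only: sort once, then `take` in `batch_size` chunks,
/// so every output batch stays small and its reported memory size stays
/// close to its sliced size.
fn sort_batch_chunked_sketch(
    batch: &RecordBatch,
    sort_columns: &[SortColumn], // columns taken from `batch`, with sort options
    batch_size: usize,
) -> Result<Vec<RecordBatch>, ArrowError> {
    // Calculate the full set of sort indices exactly once.
    let indices = lexsort_to_indices(sort_columns, None)?;

    // Apply `take` one `batch_size`-sized slice of indices at a time.
    let mut chunks = Vec::with_capacity(indices.len().div_ceil(batch_size.max(1)));
    let mut offset = 0;
    while offset < indices.len() {
        let len = batch_size.min(indices.len() - offset);
        let chunk_indices = indices.slice(offset, len);
        chunks.push(take_record_batch(batch, &chunk_indices)?);
        offset += len;
    }
    Ok(chunks)
}
```

Because each output batch holds at most `batch_size` rows, the merge phase can reserve memory per small batch rather than per whole sorted run.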
Parent: 43567b4

File tree: 5 files changed, +586 -85 lines

datafusion/physical-plan/src/aggregates/row_hash.rs

Lines changed: 1 addition & 1 deletion

```diff
@@ -1198,7 +1198,7 @@ impl GroupedHashAggregateStream {
         // instead.
         // Spilling to disk and reading back also ensures batch size is consistent
         // rather than potentially having one significantly larger last batch.
-        self.spill()?;
+        self.spill()?; // TODO: use sort_batch_chunked instead?

         // Mark that we're switching to stream merging mode.
         self.spill_state.is_stream_merging = true;
```

datafusion/physical-plan/src/sorts/multi_level_merge.rs

Lines changed: 8 additions & 4 deletions

```diff
@@ -30,7 +30,7 @@ use arrow::datatypes::SchemaRef;
 use datafusion_common::Result;
 use datafusion_execution::memory_pool::MemoryReservation;

-use crate::sorts::sort::get_reserved_byte_for_record_batch_size;
+use crate::sorts::sort::get_reserved_bytes_for_record_batch_size;
 use crate::sorts::streaming_merge::{SortedSpillFile, StreamingMergeBuilder};
 use crate::stream::RecordBatchStreamAdapter;
 use datafusion_execution::{RecordBatchStream, SendableRecordBatchStream};
@@ -360,9 +360,13 @@ impl MultiLevelMergeBuilder {
         for spill in &self.sorted_spill_files {
             // For memory pools that are not shared this is good; for others it is not,
             // and there should be some upper limit to the memory reservation so we won't starve the system
-            match reservation.try_grow(get_reserved_byte_for_record_batch_size(
-                spill.max_record_batch_memory * buffer_len,
-            )) {
+            match reservation.try_grow(
+                get_reserved_bytes_for_record_batch_size(
+                    spill.max_record_batch_memory,
+                    // Size will be the same as the sliced size, because it is a spilled batch.
+                    spill.max_record_batch_memory,
+                ) * buffer_len,
+            ) {
                 Ok(_) => {
                     number_of_spills_to_read_for_current_phase += 1;
                 }
```
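For a feel of the reservation arithmetic above, the sketch below mirrors the admission loop using datafusion's `MemoryConsumer`/`MemoryReservation` API with a `GreedyMemoryPool`. The estimator function is a hypothetical stand-in (the real `get_reserved_bytes_for_record_batch_size` formula lives in `sorts::sort` and is not shown in this hunk), and the real loop's `Err` handling is elided as a simple `break`:

```rust
use std::sync::Arc;

use datafusion_execution::memory_pool::{GreedyMemoryPool, MemoryConsumer, MemoryPool};

// Hypothetical stand-in for `get_reserved_bytes_for_record_batch_size`;
// the real formula is not shown in this diff.
fn reserved_bytes_estimate(batch_memory: usize, _sliced_memory: usize) -> usize {
    batch_memory
}

fn main() {
    // A pool with a 1 MiB budget.
    let pool: Arc<dyn MemoryPool> = Arc::new(GreedyMemoryPool::new(1024 * 1024));
    let mut reservation = MemoryConsumer::new("multi_level_merge").register(&pool);

    let buffer_len = 2;
    // `max_record_batch_memory` of three hypothetical spill files.
    let spill_sizes = [200 * 1024, 300 * 1024, 400 * 1024];

    // Mirror the admission loop from the diff: grow the reservation per
    // spill file until the pool refuses, then merge only what was admitted.
    let mut spills_this_phase = 0;
    for size in spill_sizes {
        match reservation.try_grow(reserved_bytes_estimate(size, size) * buffer_len) {
            Ok(_) => spills_this_phase += 1,
            // The real code's `Err` arm is not shown in this hunk; stop here.
            Err(_) => break,
        }
    }
    println!(
        "merging {spills_this_phase} of {} spill files this phase",
        spill_sizes.len()
    );
}
```

With these numbers, the first two spills reserve 400 KiB + 600 KiB of the 1 MiB budget and the third is refused, so two files are merged in this phase and the rest wait for a later one.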
