Add spilling to RepartitionExec #18014
Conversation
Thanks @adriangb -- can you please add some "end to end" tests (aka that run a real query / LogicalPlan with a memory limit) and show that it still works?
Good call. I'll have to think about how we can force RepartitionExec specifically to spill. I've only experienced it with large datasets, lopsided hash keys, etc.
Should we mark this PR as a draft for now, before the spilling tests are added?
        (RepartitionBatch::Memory(batch), true)
    }
    Err(_) => {
        // We're memory limited - spill this single batch to its own file
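The pattern in this hunk can be sketched outside DataFusion with stand-in types (the `MemoryPool`, `route_batch`, and byte-vector "batches" below are hypothetical illustrations, not DataFusion's actual APIs): try to reserve memory for the batch, and on reservation failure write that single batch to its own temp file.

```rust
use std::path::PathBuf;

// Stand-ins for illustration only; not DataFusion types.
#[derive(Debug)]
enum RepartitionBatch {
    Memory(Vec<u8>),  // reservation succeeded, batch stays in memory
    Spilled(PathBuf), // batch written to its own spill file
}

struct MemoryPool {
    used: usize,
    limit: usize,
}

impl MemoryPool {
    // Try to reserve `bytes`; fail instead of exceeding the limit.
    fn try_grow(&mut self, bytes: usize) -> Result<(), ()> {
        if self.used + bytes <= self.limit {
            self.used += bytes;
            Ok(())
        } else {
            Err(())
        }
    }
}

fn route_batch(pool: &mut MemoryPool, batch: Vec<u8>, seq: usize) -> RepartitionBatch {
    match pool.try_grow(batch.len()) {
        Ok(()) => RepartitionBatch::Memory(batch),
        Err(()) => {
            // We're memory limited - spill this single batch to its own file.
            let path = std::env::temp_dir().join(format!("repartition-spill-{seq}.bin"));
            std::fs::write(&path, &batch).expect("spill write failed");
            RepartitionBatch::Spilled(path)
        }
    }
}
```

One consequence of the one-file-per-batch choice is that each spilled batch costs a full create/write/close cycle on a fresh file.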
Would we avoid the out-of-memory error but risk "too many open files" with this approach, or am I missing something?
Do you mean because of spilling in general, or the choice to spill each batch to its own file? In general DataFusion doesn't track the number of files. I'm not sure if we keep the files open between writing and reading; I'd guess not. If we close them, we only need as many file descriptors as files we spill concurrently, which should not be many (roughly the number of partitions).
Yes, having a file per batch can get the process killed by the OS due to too many open files, and there are about 4 system calls per batch involved.
Closing and then reopening files will bring at least 6 system calls per batch.
Ideally, if we could keep the file open and share offsets after each write, that would save a lot of syscalls, plus we could keep one file per partition. 🤔
yes, having a file per batch can get process to get killed (by the os) due to too many open files
I checked and indeed we close the files after writing, so this is not going to happen. The only thing we accumulate is PathBufs, not open file descriptors.
closing and then opening files for will bring at least 6 system calls per batch
Ideally if we can keep file open and then share offsets after write, would save lot of syscals
I think that would be nice but I can't think of a scheme that would make that work: reading from the end of the file isn't compatible with our use case because we need it to be FIFO but using a single file only really works if you do LIFO. Do you have any ideas on how we could do this in a relatively simple way?
Given that any query that would spill here would have previously errored out I'm not too worried about performance. And frankly I think if batch sizes are reasonable a couple extra sys calls won't be measurable.
That said since how the spilling happens is all very internal / private there's no reason we can't merge this and then come back and improve it later.
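For what it's worth, the "keep the file open and share offsets" idea could in principle support FIFO reads from a single file: append each batch, queue its (offset, length) pair, and read back from the front of the queue. A minimal std-only sketch of that scheme (an assumption about how it might look, not what this PR does; note it never reclaims space for already-read batches until the whole file is deleted, which is exactly the disk-usage trade-off discussed here):

```rust
use std::collections::VecDeque;
use std::fs::{File, OpenOptions};
use std::io::{self, Read, Seek, SeekFrom, Write};
use std::path::Path;

// One open file per partition; batches are appended and replayed in FIFO
// order via recorded (offset, length) pairs, so the fd count stays bounded.
struct SpillFile {
    file: File,
    queue: VecDeque<(u64, usize)>,
    write_pos: u64,
}

impl SpillFile {
    fn create(path: &Path) -> io::Result<Self> {
        let file = OpenOptions::new()
            .read(true)
            .write(true)
            .create(true)
            .truncate(true)
            .open(path)?;
        Ok(Self { file, queue: VecDeque::new(), write_pos: 0 })
    }

    // Append a batch and remember where it landed.
    fn push(&mut self, bytes: &[u8]) -> io::Result<()> {
        self.file.seek(SeekFrom::Start(self.write_pos))?;
        self.file.write_all(bytes)?;
        self.queue.push_back((self.write_pos, bytes.len()));
        self.write_pos += bytes.len() as u64;
        Ok(())
    }

    // Read back the oldest unread batch (FIFO), if any.
    fn pop(&mut self) -> io::Result<Option<Vec<u8>>> {
        let Some((offset, len)) = self.queue.pop_front() else {
            return Ok(None);
        };
        let mut buf = vec![0u8; len];
        self.file.seek(SeekFrom::Start(offset))?;
        self.file.read_exact(&mut buf)?;
        Ok(Some(buf))
    }
}
```

Reading by explicit offset is what avoids the LIFO constraint of popping from the end of the file: the queue, not the file position, decides order.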
Why would the file be LIFO if you send the offset? I'm not sure I understand. Writing a single batch to a file, opening and closing it each time, still seems excessive to me.
I'm not sure that keeping old batches on disk wouldn't be problematic. That ticket isn't for RepartitionExec but it does show that there is a tradeoff between trying to save on system calls and overall disk usage.
Shuffle exchange in Spark or Ballista will spill the whole repartition to disk, if I'm not mistaken. I guess spilling in this case would not be different, apart from the fact that here it is triggered by memory constraints.
Maybe we could learn something from new real-time mode in spark https://docs.google.com/document/d/1CvJvtlTGP6TwQIT4kW6GFT1JbdziAYOBvt60ybb7Dw8/edit?usp=sharing
This is the Spark JIRA https://issues.apache.org/jira/browse/SPARK-52330 with a link to the design document https://docs.google.com/document/d/1CvJvtlTGP6TwQIT4kW6GFT1JbdziAYOBvt60ybb7Dw8/edit?usp=sharing
One alternative could be to do a spill file per channel and have some sort of GC process where we say "if the spill file exceeds X GB and/or we have more than Y GB of junk space, we pay the price of copying over into a new spill file to keep disk usage from blowing up". That might be a general approach to handle cases like #18011 as well. But I'm not sure the complexity is worth it: if that happens even once, I feel it will vastly exceed the cost of the extra sys calls to create and delete files.
Just one idea: would it be possible to spill before the actual repartition? Try to acquire memory equal to the received batch; if that fails, do not do the repartition, spill the batch instead. That would produce just N files (where N is the number of partitions).
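A sketch of how that alternative might look (an assumption for discussion, not something this PR implements): each partition keeps one append-only spill file, and a batch that fails its memory reservation is appended there whole, length-prefixed so a reader can later split the file back into batches in order.

```rust
use std::fs::OpenOptions;
use std::io::{self, Write};
use std::path::PathBuf;

// Hypothetical per-partition spill: one file per partition, so at most
// N files exist for N partitions regardless of how many batches spill.
struct PartitionSpill {
    path: PathBuf,
    spilled_batches: usize,
}

impl PartitionSpill {
    fn new(path: PathBuf) -> Self {
        Self { path, spilled_batches: 0 }
    }

    // Append the whole batch, length-prefixed, so batches can be replayed
    // from the file in FIFO order later.
    fn spill(&mut self, batch: &[u8]) -> io::Result<()> {
        let mut f = OpenOptions::new().create(true).append(true).open(&self.path)?;
        f.write_all(&(batch.len() as u64).to_le_bytes())?;
        f.write_all(batch)?;
        self.spilled_batches += 1;
        Ok(())
    }
}
```

This caps the file count, at the cost of reopening the partition's file per spill (or holding N descriptors open if the handle is kept).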
Sure thing. Note that there are tests, just not e2e SLT tests.
Not sure how to do it from mobile. Will do next time I'm at my computer.
@alamb I cooked up an e2e test, let me know if it looks okay: https://github.com/apache/datafusion/pull/18014/files#diff-e8fa07bcbe7db22bdd43eacfa713d380acaf651770da26cb06957030034ebb93
Co-authored-by: Bruce Ritchie <[email protected]>
I have a question: are we assuming that it's not possible to make RepartitionExec memory-constant (i.e., O(n_partitions * batch_size) memory)? Is this due to an engineering limitation, or is it theoretically impossible?
I think it's theoretically impossible. If one partition is faster than another, the data flowing into the slow partition has to accumulate somewhere. I considered the possibility of making the channels bounded, but I'm afraid that would result in deadlocks.
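The bounded-channel concern can be seen with std's bounded channel (a minimal illustration, unrelated to the channel implementation RepartitionExec actually uses): once the buffer for a slow consumer fills, the producer must block, and if partitions wait on each other that block can become a deadlock.

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

// Shows that a third send on a capacity-2 bounded channel would block
// when the consumer makes no progress.
fn producer_would_stall() -> bool {
    let (tx, _rx) = sync_channel::<u64>(2); // bounded channel, capacity 2
    tx.try_send(1).unwrap();
    tx.try_send(2).unwrap();
    // Buffer full: a blocking `send` here would stall this producer until
    // the consumer drains; with interdependent partitions that stall can
    // turn into a deadlock, which is why the channels stay unbounded.
    matches!(tx.try_send(3), Err(TrySendError::Full(3)))
}
```

With unbounded channels the producer never stalls, so the accumulating data must either be accounted for in memory or, as in this PR, spilled to disk.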
Addresses #17334 (comment)
I ran into this using datafusion-distributed, which I think makes the issue of partition execution time skew even more likely to happen. As per that issue, it can also happen with non-distributed queries, e.g. if one partition's sort spills and others don't.

Due to the nature of RepartitionExec I don't think we can bound the channels; that could lead to deadlocks. So what I did was at least make queries that would previously have failed continue forward with disk spilling. I did not account for memory usage when reading batches back from disk, since DataFusion does not generally account for "in-flight" batches.

Written with help from Claude