Stream merge blocking downstream work. #3610
Conversation
The failing tests:
This test assumes lock step, but we cannot guarantee that for the first 2 chunks. If we extend the test, we can assert that we no longer pull more than 1 chunk ahead: allowing the first 2 values to be equal and requiring the values to strictly increase afterwards would prove this case (see the sketch below).
I will apply fixes to these tests if we agree that this change to merge is desired.
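A minimal sketch of that adjusted assertion, assuming the test observes a stream of counter values through the merge (the helper below is hypothetical, not code from the suite):

```scala
// Hypothetical helper, not part of the fs2 test suite: checks that merge
// pulls at most one chunk ahead. Lock step cannot be guaranteed for the
// first 2 chunks, so the first 2 observed values may coincide; after that
// every value must strictly increase.
def assertAtMostOneChunkAhead(observed: List[Int]): Unit = {
  val afterWarmup = observed.drop(2)
  afterWarmup.zip(afterWarmup.drop(1)).foreach { case (prev, next) =>
    assert(next > prev, s"expected strictly increasing values, got $prev then $next")
  }
}
```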
Looks good to me, good find! |
Thank you for the approval, I have fixed the tests. Not sure about the failed action; it seems unrelated.
Hm, I wonder if we should provide an alternative |
I agree about inclusion of the |
…only emit next chunk when first one is fully processed.
Stream merge does not block until the scope is closed, but only until the chunk is read from the output channel.
Merge was waiting for the resulting chunk to be fully processed. This introduced a scope (resource) into the stream which, if leased for further parallel processing, would cause the merge to wait forever (until the parallel processing finished).
Here I am changing that so we only guard the production of values from merge: instead of guarding the whole processing of the chunk, we guard it only up to the point where it is read from the output of the merge (a sketch of the previously problematic pattern follows).
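To illustrate, a hedged sketch (not code from this PR; the sources, the timings, and the use of `parEvalMap` as the downstream parallel stage are all invented for the example) of the kind of pipeline that could previously stall:

```scala
import scala.concurrent.duration._
import cats.effect.IO
import cats.effect.unsafe.implicits.global
import fs2.Stream

// Two infinite ticking sources; shapes and timings are invented.
val left  = Stream.awakeEvery[IO](10.millis).as("L")
val right = Stream.awakeEvery[IO](15.millis).as("R")

val pipeline =
  left
    .merge(right)                                  // old merge: waited here until the
    .parEvalMap(4)(s => IO.sleep(50.millis).as(s)) // chunk was fully processed downstream
    .take(20)
    .compile
    .toList

// pipeline.unsafeRunSync()
// With the old behaviour this kind of pipeline could wait indefinitely;
// now merge only waits until the chunk is read from its output.
```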
This also fixes #3598.
This is now aligned with the documentation of merge, where `Stream(this, that).parJoinUnbounded == this merge that`.
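A quick sketch (not from the PR) of that equivalence on finite inputs; since the interleaving order is nondeterministic, only the emitted elements are compared:

```scala
import cats.effect.IO
import cats.effect.unsafe.implicits.global
import fs2.Stream

val s1 = Stream.emits(1 to 5).covary[IO]
val s2 = Stream.emits(6 to 10).covary[IO]

// Interleaving is nondeterministic, so compare sorted results.
val viaMerge   = s1.merge(s2).compile.toList.map(_.sorted)
val viaParJoin = Stream(s1, s2).parJoinUnbounded.compile.toList.map(_.sorted)

// Both should yield List(1, 2, ..., 10):
// viaMerge.unsafeRunSync() == viaParJoin.unsafeRunSync()
```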