Add documentation to concurrent multipart upload utility methods #128821
Conversation
In the hope of making more sense of how the flux of byte buffers works. Also removes the synchronization on the input stream. Relates ES-11815
Pinging @elastic/es-distributed-coordination (Team:Distributed Coordination)
henningandersen
left a comment
LGTM.
// So this flux instantiates 128 ByteBuffer objects of DEFAULT_UPLOAD_BUFFERS_SIZE bytes in heap every time the NettyOutbound in
// the Azure's Netty event loop requests byte buffers to write to the network channel. That represents 128 * 64kb = 8 mb per
// flux. The creation of the ByteBuffer objects are forked to the repository_azure thread pool, which has a maximum of 15
// threads (most of the time, can be less than that for nodes with less than 750mb). It means that max. 15 * 8 = 120mb bytes are
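The heap figures quoted in the comment above can be checked with a small arithmetic sketch. This is not code from the PR; the constants simply mirror the comment's assumptions (128 ByteBuffers requested per flux, DEFAULT_UPLOAD_BUFFERS_SIZE of 64 KB, and a repository_azure pool of at most 15 threads):

```java
// Illustrative sketch only: constants are the assumptions stated in the
// code comment above, not values read from the Elasticsearch source.
public class UploadBufferMath {
    static final int BUFFERS_PER_FLUX = 128;                   // NettyOutbound request size
    static final int DEFAULT_UPLOAD_BUFFERS_SIZE = 64 * 1024;  // 64 KB per ByteBuffer
    static final int MAX_POOL_THREADS = 15;                    // repository_azure max threads

    public static void main(String[] args) {
        long perFluxBytes = (long) BUFFERS_PER_FLUX * DEFAULT_UPLOAD_BUFFERS_SIZE;
        long maxBytes = perFluxBytes * MAX_POOL_THREADS;
        System.out.println(perFluxBytes / (1024 * 1024)); // 8  (MB per flux)
        System.out.println(maxBytes / (1024 * 1024));     // 120 (MB across the pool)
    }
}
```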
Suggested change:
- // threads (most of the time, can be less than that for nodes with less than 750mb). It means that max. 15 * 8 = 120mb bytes are
+ // threads (most of the time, can be less than that for nodes with less than 750mb heap). It means that max. 15 * 8 = 120mb bytes are
?
Added in 680f4b6
 *
 * @param stream the input stream that needs to be converted
 * @param length the expected length in bytes of the input stream
 * @param byteBufferSize the size of the ByteBuffer to be created
Suggested change:
- * @param byteBufferSize the size of the ByteBuffer to be created
+ * @param byteBufferSize the size of the ByteBuffers to be created
Changed in 680f4b6
  }
  return ByteBuffer.wrap(buffer);
- })).doOnComplete(() -> {
+ }), 0 /* no prefetching needed for concatMap as the MonoSendMany subscriber requests 128 elements */).doOnComplete(() -> {
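As a synchronous, simplified sketch of what the flux does per chunk: read up to byteBufferSize bytes from the stream into an array and wrap it with ByteBuffer.wrap, as in the line quoted above. The class and method names here are illustrative, not the PR's actual utility (which is reactive and runs on the repository_azure pool):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the utility under review: chunk an InputStream of
// a known length into ByteBuffers of at most byteBufferSize bytes each.
public class StreamChunker {
    static List<ByteBuffer> toByteBuffers(InputStream stream, long length, int byteBufferSize) throws IOException {
        List<ByteBuffer> buffers = new ArrayList<>();
        long remaining = length;
        while (remaining > 0) {
            int toRead = (int) Math.min(byteBufferSize, remaining);
            byte[] buffer = new byte[toRead];
            int read = stream.readNBytes(buffer, 0, toRead);
            if (read <= 0) {
                break; // stream shorter than the expected length
            }
            buffers.add(ByteBuffer.wrap(buffer, 0, read));
            remaining -= read;
        }
        return buffers;
    }

    public static void main(String[] args) throws IOException {
        // 10 bytes chunked by 4 yields buffers of 4, 4 and 2 bytes.
        List<ByteBuffer> out = toByteBuffers(new ByteArrayInputStream(new byte[10]), 10, 4);
        System.out.println(out.size()); // 3
    }
}
```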
I am not entirely sure I understand this comment. Is it an important thing or a nit? It seems like we'd generate 128 elements regardless of this being 0 or the default? Is there a difference in when?
It's really a nit; I've been puzzled by the byte buffers being fetched 32 at a time instead of the 128 from the initial comment. This prefetching does not have much impact in our case, I think.