Add http stream content size handler (fixed #120246) #121095
Conversation
Pinging @elastic/es-distributed-coordination (Team:Distributed Coordination)
    if (isOversized) {
        if (isContinueExpected) {
            // Client is allowed to send content without waiting for Continue.
            // See https://www.rfc-editor.org/rfc/rfc9110.html#section-10.1.1-11.3
            // this content will result in HttpRequestDecoder failure and send downstream
            decoder.reset();
        }
        ctx.writeAndFlush(TOO_LARGE.retainedDuplicate()).addListener(ChannelFutureListener.CLOSE_ON_FAILURE);
The fix: no longer close the connection. Previous version:
    if (isOversized) {
        if (isContinueExpected) {
            // Client is allowed to send content without waiting for Continue.
            // See https://www.rfc-editor.org/rfc/rfc9110.html#section-10.1.1-11.3
            // this content will result in HttpRequestDecoder failure and send downstream
            decoder.reset();
            ctx.writeAndFlush(TOO_LARGE.retainedDuplicate()).addListener(ChannelFutureListener.CLOSE_ON_FAILURE);
        } else {
            // Client is sending oversized content, we cannot safely take it. Closing channel.
            ctx.writeAndFlush(TOO_LARGE_CLOSE.retainedDuplicate()).addListener(ChannelFutureListener.CLOSE);
        }
    }
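Comparing the two versions, the fixed behaviour can be summarized as a small decision function. This is a hedged sketch, not the actual Elasticsearch code: the names Action and OversizedContentPolicy are illustrative. After the fix, an oversized request is always answered with 413 while the channel stays open, and the decoder is reset only when the client announced Expect: 100-Continue.

```java
// Illustrative model of the fixed oversized-content handling (names are
// assumptions, not the real code).
enum Action { RESET_DECODER_THEN_REPLY_413, REPLY_413 }

final class OversizedContentPolicy {
    static Action decide(boolean isContinueExpected) {
        // A client may send content without waiting for 100 Continue
        // (RFC 9110 section 10.1.1); resetting the decoder keeps that
        // content from being parsed as the start of a new request.
        // In neither case is the channel force-closed anymore.
        return isContinueExpected ? Action.RESET_DECODER_THEN_REPLY_413 : Action.REPLY_413;
    }
}
```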
DaveCTurner left a comment:
LGTM
💔 Backport failed
You can use sqren/backport to backport manually.
Reapplying #120246 with a fix after it was reverted in #120934.
Fix: in the original PR, oversized requests without Expect: 100-Continue were rejected with a subsequent channel closure. This is not what Netty does, and it broke the elasticsearch-js client. The expected behaviour is that ES should reject the request and discard the body without closing the connection.

Description from the original PR:
Besides aggregating parts, Netty's HttpObjectAggregator also handles Expect: 100-Continue and chunked oversized requests. But HTTP stream handling does not use the aggregator, and we plan to remove it completely, so we need to handle these cases ourselves.
This PR introduces Netty4HttpContentSizeHandler, which handles expect-continue and oversized requests the same way HttpObjectAggregator does. Some parts are copied from Netty's code and simplified for our usage. This is a follow-up on #117787, split into smaller pieces.
Once we completely switch to HTTP streams, this handler will replace HttpObjectAggregator. For now there is conditional logic to choose between streaming and aggregation.
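The conditional wiring can be pictured as a choice between two pipeline shapes. The sketch below is an assumption for illustration only: the handler names and the streamMode flag are made up, not the real pipeline-construction code.

```java
import java.util.List;

// Hypothetical sketch of the stream-vs-aggregation choice described above.
final class HttpPipelineSketch {
    static List<String> handlers(boolean streamMode) {
        return streamMode
                // streaming path: the new content size handler enforces the limit
                ? List.of("requestDecoder", "contentSizeHandler")
                // aggregated path: HttpObjectAggregator enforces it while aggregating
                : List.of("requestDecoder", "aggregator");
    }
}
```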
There is an interesting interaction between HttpRequestDecoder and HttpObjectAggregator. When the aggregator responds with 413 Too Large to an Expect: 100-Continue request, it resets the HttpRequestDecoder through a user event in the pipeline: the aggregator fires the event and the decoder resets its state.
This reset is required to avoid treating a subsequent request as content of the rejected request. But that event handling is private code; the public interface is HttpRequestDecoder#reset(), so Netty4HttpContentSizeHandler needs explicit access to the decoder in order to reset it.
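The reason the reset matters can be shown with a toy model. This is not Netty code, just a minimal stand-in: once a request with a body has been rejected, the decoder must forget that it is waiting for content, or the next request line would be misread as body bytes.

```java
// Toy decoder (an illustration, not HttpRequestDecoder): it alternates
// between expecting a request line and expecting body content.
final class ToyRequestDecoder {
    private boolean awaitingContent = false;

    String feed(String bytes) {
        if (awaitingContent) {
            return "CONTENT:" + bytes; // consumed as body of the previous request
        }
        awaitingContent = true;        // a request line implies a body follows
        return "REQUEST:" + bytes;
    }

    // Analogous to the public HttpRequestDecoder#reset() mentioned above.
    void reset() {
        awaitingContent = false;
    }
}
```

Without the reset() call after rejecting the first request, the second request line would come back as "CONTENT:...", i.e. it would be swallowed as the body of the request that was already rejected.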