Add http stream content size handler #120246
Conversation
```diff
-    // ensures that oversized chunked encoded request has no limits at http layer
-    // rest handler is responsible for oversized requests
-    public void testOversizedChunkedEncodingNoLimits() throws Exception {
+    // ensures that oversized chunked encoded request has maxContentLength limit and returns 413
+    public void testOversizedChunkedEncoding() throws Exception {
```
With the stream introduction we dropped limits for chunked content. This brings them back.
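A minimal, self-contained sketch of the limit being reinstated here; the class and method names are hypothetical (not from the PR), but the idea matches the diff above: content size is accumulated across chunks, and crossing the configured maximum maps to a 413 response instead of passing the oversized request through to the REST handler.

```java
// Hypothetical stand-in for the size tracking that a content-size handler
// performs at the HTTP layer; names are illustrative, not copied from the PR.
public class ChunkedSizeLimit {
    private final long maxContentLength;
    private long received;

    public ChunkedSizeLimit(long maxContentLength) {
        this.maxContentLength = maxContentLength;
    }

    /** Returns the HTTP status to respond with, or -1 to keep streaming. */
    public int onChunk(int chunkBytes) {
        received += chunkBytes;
        // 413 Request Entity Too Large once the accumulated size crosses the limit
        return received > maxContentLength ? 413 : -1;
    }
}
```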
Pinging @elastic/es-distributed-coordination (Team:Distributed Coordination)
DaveCTurner left a comment
Looks good, just a couple of small points around testing.
```java
} else {
    decoder.reset();
    ctx.writeAndFlush(EXPECTATION_FAILED.retainedDuplicate()).addListener(ChannelFutureListener.CLOSE_ON_FAILURE);
    return;
```
I don't think we cover this branch in the unit tests - could you add a test? Also, just resetting the decoder seems quite a weak response; I would expect that in this situation we just don't know how to proceed with subsequent data, so we should give up and close the connection.
Will add a test. Subsequent data will result in a decoding failure, trip HttpPipeliningHandler, and very likely close the connection. If there is no data, the connection can be reused. I kind of agree that closing makes sense, since the 'Expect' header supports only "100-Continue", while other values seem suspicious.
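A self-contained sketch of the decision being discussed, with hypothetical names not taken from the PR: only "100-continue" is a defined Expect value, so any other value yields 417 Expectation Failed, and (as the reviewer suggests) the connection is then closed because we no longer know how to interpret subsequent bytes.

```java
public class ExpectDecision {
    /** HTTP status to answer an Expect header with; only "100-continue" is defined. */
    public static int statusFor(String expectValue, boolean oversized) {
        if ("100-continue".equalsIgnoreCase(expectValue)) {
            return oversized ? 413 : 100; // reject oversized, otherwise invite the body
        }
        return 417; // Expectation Failed for any unknown expectation
    }

    /** After 417 we no longer know how to parse subsequent bytes, so close. */
    public static boolean shouldClose(String expectValue) {
        return "100-continue".equalsIgnoreCase(expectValue) == false;
    }
}
```

Note the case-insensitive comparison: clients may send "100-Continue" or "100-continue" and both must be honoured.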
```java
@ESIntegTestCase.ClusterScope(numDataNodes = 1)
public class Netty4IncrementalRequestHandlingIT extends ESNetty4IntegTestCase {

    private static final int maxContentLength = 50 * 1024 * 1024;
```
nit: suggest

```diff
-    private static final int maxContentLength = 50 * 1024 * 1024;
+    private static final int TEST_MAX_CONTENT_LENGTH_BYTES = ByteSizeUnit.MB.toIntBytes(50);
```

`SHOUTY_SNAKE_CASE` for a constant, plus avoiding `* 1024 * 1024`.
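For reference, the suggested `ByteSizeUnit.MB.toIntBytes(50)` and the literal `50 * 1024 * 1024` denote the same value. A quick self-contained check, inlining the conversion since `ByteSizeUnit` is an Elasticsearch class (the helper name here is hypothetical):

```java
public class ByteSizeCheck {
    // Inlined equivalent of the megabytes-to-bytes conversion for this comparison.
    public static int mbToIntBytes(int mb) {
        return Math.toIntExact(mb * 1024L * 1024L); // throws if the result overflows int
    }
}
```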
```java
channel.writeInbound(encode(part1));
((HttpContent) channel.readInbound()).release();

var part2 = httpContent(MAX_CONTENT_LENGTH);
```
nit: this should fail at 1 byte, no need to send double the limit here. But also, could we send a wider variety of chunk sizes rather than just one max-sized chunk? Something like this?
```java
int contentBytesSent = 0;
do {
    var thisPartSize = between(1, MAX_CONTENT_LENGTH * 2);
    channel.writeInbound(encode(httpContent(thisPartSize)));
    contentBytesSent += thisPartSize;
    if (contentBytesSent <= MAX_CONTENT_LENGTH) {
        ((HttpContent) channel.readInbound()).release();
    } else {
        break;
    }
} while (true);
```
The test covers both invariants of content tracking, before reaching the limit and after, and also the precise mark of MAX_CONTENT_LENGTH to cover the off-by-one case. The chunking itself has no meaning here. I'm fine to change it, though.
DaveCTurner left a comment
LGTM
```java
 */
public void testExpectationFailed() {
    var sendRequest = httpRequest();
    sendRequest.headers().set(HttpHeaderNames.EXPECT, "Potato");
```
:) maybe `randomValueOtherThan("100-Continue", ESTestCase::randomIdentifier)` would be a more descriptive way to say what we're trying to do here
fancy
@DaveCTurner, CI caught an interesting exception. I'm wondering: should we always close the connection when the request is oversized, including expect-continue?
That sounds like invalid client behaviour to me. The whole point of `Expect: 100-continue` is that the client waits for the server's go-ahead before sending the body. I think the test should instead simulate correct client behaviour: send the headers but no body, check it gets a 413 back, and then ideally try to send another (valid) request to check the decoder gets left in the right state to handle it.
I think it's correct behaviour based on the RFC; the Expect header is not very strict: https://www.rfc-editor.org/rfc/rfc9110.html#section-10.1.1-11.3
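The cited RFC 9110 passage says a client that sent `Expect: 100-continue` is not required to wait for any specific length of time for the interim response and may proceed to send the content anyway. A self-contained sketch of that client-side rule (names and the patience parameter are hypothetical):

```java
public class ContinueWaitPolicy {
    /**
     * Per RFC 9110 section 10.1.1, a client that sent Expect: 100-continue need
     * not wait indefinitely: it may send the body once it has received
     * 100 Continue, or after some client-chosen wait even without a response.
     */
    public static boolean maySendBody(boolean received100Continue, long waitedMillis, long patienceMillis) {
        return received100Continue || waitedMillis >= patienceMillis;
    }
}
```

This is why the server side cannot assume a rejected expect-continue request will never be followed by content bytes.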
I will add this one.
nicktindall left a comment
LGTM
Added a mixed content test in 6080e50. It assumes a well-behaving client that patiently awaits the Expect response and does not send content if it receives a rejection.
Interesting, I missed that. In which case I think we should reinstate the test behaviour that sends the content straight away too (maybe in a follow-up). The spec doesn't say how to proceed after sending a response other than `100 Continue`.
I don't like the spec much here. The implementation seems non-trivial, and the decoder has to be very smart about it. I think keeping Netty's current behaviour (reset the decoder and don't close the connection) should be OK, and we should not test this use case.
💔 Backport failed
You can use sqren/backport to manually backport by running the backport command
I'm ok with that too.
This reverts commit 43e3e24 (elastic#120934).
Netty's `HttpObjectAggregator` handles `Expect: 100-Continue` and chunked oversized requests, besides aggregating parts. But HTTP stream handling does not use the aggregator, and we plan to remove it completely. That means we need to handle these cases ourselves.

This PR introduces `Netty4HttpContentSizeHandler`, which handles expect-continue and oversized requests in the same way as `HttpObjectAggregator`. Some parts are copied from Netty's code and simplified for our usage. This is a follow-up on #117787, split into smaller pieces. Once we completely switch to HTTP streams, this handler will replace `HttpObjectAggregator`. For now there is conditional logic between stream and aggregation.
There is an interesting interaction between `HttpRequestDecoder` and `HttpObjectAggregator`. When the aggregator responds with `413 Request Entity Too Large` to `expect-100-continue`, it resets the `HttpRequestDecoder` through a user event in the pipeline: the aggregator sends the event, and the decoder resets its state. This reset is required to avoid treating the subsequent request as content of the rejected request. But this happens in private code; the public interface is `HttpRequestDecoder#reset()`, so `Netty4HttpContentSizeHandler` requires explicit access to the decoder in order to reset it.
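The interaction described above can be sketched with toy stand-ins for the Netty classes (all names below are illustrative, not the actual Netty or Elasticsearch code): after rejecting an oversized expect-continue request, the size handler must reset the decoder so the next request's bytes are parsed as headers rather than as leftover content of the rejected request.

```java
public class DecoderResetSketch {
    /** Toy stand-in for HttpRequestDecoder's headers-vs-content state. */
    public static final class Decoder {
        private boolean expectingContent;
        public void startRequest(boolean hasContent) { expectingContent = hasContent; }
        public boolean isExpectingContent() { return expectingContent; }
        /** Mirrors the public HttpRequestDecoder#reset(): back to reading headers. */
        public void reset() { expectingContent = false; }
    }

    /** Toy stand-in for a size handler holding an explicit decoder reference. */
    public static final class SizeHandler {
        private final Decoder decoder;
        public SizeHandler(Decoder decoder) { this.decoder = decoder; }

        /** Reject an oversized expect-continue request and reset the decoder. */
        public int rejectOversized() {
            decoder.reset(); // without this, the next request would be read as content
            return 413;
        }
    }
}
```

The explicit constructor-injected decoder reference models why `Netty4HttpContentSizeHandler` needs direct access: the reset cannot be triggered through the aggregator's private user-event path.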