Linkerd's proxy does not currently support retries on requests with
payloads. This precludes Linkerd from retrying gRPC requests, which is a
substantial limitation.
This PR adds support for retries on requests with bodies, if and only if
the request has a `Content-Length` header and the content length is <=
64 KB.
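A minimal sketch of that eligibility check, assuming a 64 KB cap as described above (the constant and function names here are hypothetical, not the proxy's actual code):

```rust
// Hypothetical cap on how much body data we are willing to buffer.
const MAX_BUFFERED_BODY: u64 = 64 * 1024;

/// Returns true if a request with the given `Content-Length` header value
/// is eligible for retry-buffering.
fn can_retry(content_length: Option<&str>) -> bool {
    match content_length.and_then(|v| v.parse::<u64>().ok()) {
        // Eligible only when the length is known and within the cap.
        Some(len) => len <= MAX_BUFFERED_BODY,
        // No (or unparseable) Content-Length: the body may be unbounded,
        // so it cannot safely be buffered for replay.
        None => false,
    }
}

fn main() {
    assert!(can_retry(Some("1024")));
    assert!(can_retry(Some("65536")));
    assert!(!can_retry(Some("65537")));
    assert!(!can_retry(None));
}
```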
In order to retry requests with payloads, we need to buffer the body
data for that request so that it can be sent again if a retry is
necessary. This is implemented by wrapping profile requests in a new
`Body` type which lazily buffers each chunk of data polled from the
inner `Body`. The buffered data is shared with a clone of the request,
and when the original body is dropped, ownership of the buffered data is
transferred to the clone. When the cloned request is sent, polling its
body will yield the buffered data.
If the server returns an error _before_ an entire streaming body has
been read, the replay body will continue reading from the initial
request body after playing back the buffered portion.
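The mechanism can be illustrated with a much-simplified, synchronous sketch: an iterator of chunks stands in for the real `http_body::Body`, and an `Arc<Mutex<_>>` models the buffer shared between the original body and its clone. All names here are illustrative, not the proxy's actual types.

```rust
use std::sync::{Arc, Mutex};

// State shared between the original body and any retry clones.
struct Shared {
    // Chunks already read from the initial body, kept for replay.
    buffered: Vec<Vec<u8>>,
    // Chunks of the initial body that have not been read yet.
    rest: Vec<Vec<u8>>,
}

struct ReplayBody {
    shared: Arc<Mutex<Shared>>,
    // How many buffered chunks this handle has already yielded.
    replayed: usize,
}

impl ReplayBody {
    fn new(chunks: Vec<Vec<u8>>) -> Self {
        ReplayBody {
            shared: Arc::new(Mutex::new(Shared { buffered: Vec::new(), rest: chunks })),
            replayed: 0,
        }
    }

    // A retry clone shares the buffer but starts replaying from the top.
    fn clone_for_retry(&self) -> Self {
        ReplayBody { shared: Arc::clone(&self.shared), replayed: 0 }
    }

    fn next_chunk(&mut self) -> Option<Vec<u8>> {
        let mut s = self.shared.lock().unwrap();
        if self.replayed < s.buffered.len() {
            // Play back a previously buffered chunk.
            let chunk = s.buffered[self.replayed].clone();
            self.replayed += 1;
            Some(chunk)
        } else if s.rest.is_empty() {
            None
        } else {
            // Buffered data exhausted: continue from the initial body,
            // buffering each new chunk as it is read.
            let chunk = s.rest.remove(0);
            s.buffered.push(chunk.clone());
            self.replayed += 1;
            Some(chunk)
        }
    }
}

fn main() {
    let mut body = ReplayBody::new(vec![b"he".to_vec(), b"llo".to_vec()]);
    // The first request reads only one chunk before the server errors.
    assert_eq!(body.next_chunk(), Some(b"he".to_vec()));
    // The retry replays the buffered "he", then continues with "llo".
    let mut retry = body.clone_for_retry();
    assert_eq!(retry.next_chunk(), Some(b"he".to_vec()));
    assert_eq!(retry.next_chunk(), Some(b"llo".to_vec()));
    assert_eq!(retry.next_chunk(), None);
}
```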
Data is buffered by calling `Buf::copy_to_bytes` on each chunk of data.
Although we call this method on an arbitrary `Buf` type, all data chunks
in the proxy are actually `Bytes`, so their `copy_to_bytes`
implementation is just a cheap reference-count bump on the `Bytes`.
After calling `copy_to_bytes`, we can clone the returned `Bytes` and
store it in a vector. This allows us to buffer the body without actually
copying the bytes: we just increase the reference count on the original
buffer.
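The zero-copy property can be illustrated with `std::sync::Arc` from the standard library, which models how cloning a `Bytes` handle bumps a reference count rather than copying the payload (this is an analogy, not the `bytes` crate itself):

```rust
use std::sync::Arc;

fn main() {
    // Stand-in for `Bytes`: an atomically reference-counted buffer.
    let chunk: Arc<[u8]> = Arc::from(&b"request payload"[..]);

    // "Buffering" the chunk stores another handle, not another copy.
    let mut buffered: Vec<Arc<[u8]>> = Vec::new();
    buffered.push(Arc::clone(&chunk));

    // Both handles point at the same allocation; only the refcount grew.
    assert!(Arc::ptr_eq(&chunk, &buffered[0]));
    assert_eq!(Arc::strong_count(&chunk), 2);
}
```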
This buffering strategy also has the advantage of allowing us to write
out the entire buffered body in one big `writev` call. Because we store
the buffered body as a list of distinct buffers for each chunk, we can
expand the buffered body to a large number of scatter-gather buffers in
`Buf::bytes_vectored`. This should make replaying the body more
efficient, as we don't have to make a separate `write` call for each
chunk.
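The scatter-gather idea can be sketched with the standard library's `std::io::Write::write_vectored` (a `Vec<u8>` stands in for the socket here; the real proxy goes through the `Buf` vectored API, not this exact call):

```rust
use std::io::{IoSlice, Write};

fn main() -> std::io::Result<()> {
    // Buffered chunks kept as separate buffers, one per data frame,
    // rather than copied into a single contiguous allocation.
    let chunks: Vec<Vec<u8>> = vec![
        b"field=val".to_vec(),
        b"&other=1".to_vec(),
        b"&end=2".to_vec(),
    ];

    // Expand every chunk into a scatter-gather slice...
    let slices: Vec<IoSlice<'_>> = chunks.iter().map(|c| IoSlice::new(c)).collect();

    // ...and hand them all to the writer in one writev-style call.
    let mut out: Vec<u8> = Vec::new();
    let written = out.write_vectored(&slices)?;

    assert_eq!(written, chunks.iter().map(Vec::len).sum::<usize>());
    assert_eq!(out, b"field=val&other=1&end=2");
    Ok(())
}
```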
I've also added several tests for the new buffering body. In particular,
there are tests for a number of potential edge cases, including:
- a retry starting before the entire initial body has been read
(i.e., the server returns an error before the request completes),
- a retry body being cloned multiple times, including before the
client's body has completed, and
- clones being dropped prior to completion.
Closes linkerd/linkerd2#6130.
Signed-off-by: Eliza Weisman <[email protected]>