Describe the bug
When user code implements “redirect follow” for an HTTP/1.1 upstream by returning `Err` from `ProxyHttp::response_filter` after seeing a 3xx (e.g. with `Location`), and relies on the outer `process_request` retry loop (`while retries < max_retries` → `proxy_to_upstream`) with `error.retry == true`, the upstream keep-alive connection is not returned to the pool for reuse.
A 3xx with a body is not a transport failure; correct HTTP/1.1 behavior is to fully consume the redirect response on the same TCP connection, then issue the follow-up request on that connection when host/port match (or after resolving a relative `Location` against the same origin). Today, erroring out of `response_filter` causes `proxy_1to1`’s `try_join!` to fail, yielding `client_reuse = false`, so `release_http_session` is not called for the H1 session and the next retry opens a new upstream connection. Under high RPS this hurts latency and pool efficiency.
Relevant code: `pingora-proxy/src/proxy_h1.rs` (`h1_response_filter` → `response_filter`, `proxy_1to1` / `try_join!`), `pingora-proxy/src/lib.rs` (`proxy_to_upstream` and the `client_reuse` branch that calls `Connector::release_http_session`).
Pingora info
Please include the following information about your environment:
Pingora version: e.g. 0.8.x or commit hash ________________
Rust version: output of `rustc --version` / `cargo --version`
Operating system version: e.g. Ubuntu 22.04, Debian 12.4, macOS 15.x
Steps to reproduce
- Run an origin (or mock) where `GET /a` returns `302 Found` with `Location: /b` and optionally a small body, and `GET /b` returns `200 OK`.
- Implement a `ProxyHttp` whose `response_filter` detects the 3xx + `Location` and returns `Err` with retry enabled (per your usual pattern), so `process_request` runs `proxy_to_upstream` again on the next iteration.
- Observe upstream connection behavior (e.g. new TCP connect per logical client request, pool miss, or tracing in the connector).
Example (conceptual) — adapt to your crate; the error type used here is just a placeholder:

```rust
// In response_filter: on 3xx with Location, trigger the outer retry loop.
if upstream_response.status.is_redirection() {
    // Build an error with retry enabled; the user may also mutate the
    // session/ctx here to rewrite the path for the next attempt.
    let mut e = Error::explain(ErrorType::InternalError, "follow redirect");
    e.set_retry(true);
    return Err(e);
}
```
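To make the repro self-contained, here is a std-only mock (no Pingora involved; addresses, paths, and bodies are illustrative) demonstrating the same-connection behavior this report expects the proxy to preserve: read the `302` to completion, including its `Content-Length` body, then send the follow-up request on the very same TCP stream:

```rust
// Mock origin: GET /a -> 302 + body, GET /b -> 200, both on ONE connection.
use std::io::{BufRead, BufReader, Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

fn serve(listener: TcpListener) {
    // Handle one connection, but two requests on it (keep-alive semantics).
    let (stream, _) = listener.accept().unwrap();
    let mut reader = BufReader::new(stream.try_clone().unwrap());
    let mut stream = stream;
    for _ in 0..2 {
        // Read the request line; we only branch on the path.
        let mut line = String::new();
        reader.read_line(&mut line).unwrap();
        let path = line.split_whitespace().nth(1).unwrap_or("/").to_string();
        // Discard headers up to the blank line.
        loop {
            let mut h = String::new();
            reader.read_line(&mut h).unwrap();
            if h == "\r\n" { break; }
        }
        let resp = if path == "/a" {
            // Redirect WITH a body: the client must drain it before reuse.
            "HTTP/1.1 302 Found\r\nLocation: /b\r\nContent-Length: 5\r\n\r\nmoved"
        } else {
            "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
        };
        stream.write_all(resp.as_bytes()).unwrap();
    }
}

fn read_response(reader: &mut BufReader<TcpStream>) -> (String, String) {
    // Parse status line and headers (for Content-Length), then drain the body
    // so the connection's H1 framing stays intact for the next request.
    let mut status = String::new();
    reader.read_line(&mut status).unwrap();
    let mut len = 0usize;
    loop {
        let mut h = String::new();
        reader.read_line(&mut h).unwrap();
        if h == "\r\n" { break; }
        if let Some(v) = h.to_ascii_lowercase().strip_prefix("content-length:") {
            len = v.trim().parse().unwrap();
        }
    }
    let mut body = vec![0u8; len];
    reader.read_exact(&mut body).unwrap();
    (status.trim().to_string(), String::from_utf8(body).unwrap())
}

fn follow_redirect_on_same_conn(addr: &str) -> (String, String) {
    let stream = TcpStream::connect(addr).unwrap();
    let mut reader = BufReader::new(stream.try_clone().unwrap());
    let mut stream = stream;
    stream.write_all(b"GET /a HTTP/1.1\r\nHost: x\r\n\r\n").unwrap();
    let (status, _body) = read_response(&mut reader); // drains the 302 body
    assert!(status.contains("302"));
    // Same TCP connection: issue the follow-up request for /b.
    stream.write_all(b"GET /b HTTP/1.1\r\nHost: x\r\n\r\n").unwrap();
    read_response(&mut reader)
}

fn main() {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap().to_string();
    thread::spawn(move || serve(listener));
    let (status, body) = follow_redirect_on_same_conn(&addr);
    println!("{} {}", status, body);
}
```

This is what a curl-style client does for same-origin redirects; the bug is that the framework cannot express it without dropping the pooled session.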
Expected results
For the same logical downstream request, when policy allows following the redirect on the same origin: the 3xx response body (if any) is fully read on the same H1 connection, then the follow-up request is sent on that connection; the connection remains eligible for pooling via `release_http_session` when still healthy.
Alternatively, the framework offers a supported first-class path (API or internal loop) for “same-connection redirect follow” without treating it as a fatal proxy error that drops `client_reuse`.
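Such a first-class path might look roughly like this (pseudocode only; none of these names exist in Pingora today, they are invented for illustration):

```
// inside the per-request proxy loop, before handing the response downstream
while response.is_redirection() && policy.follow_same_origin(&response) {
    drain_body(&mut upstream_session);            // keep H1 framing intact
    request = rewrite_target(request, &response); // resolve Location
    send_on_same_session(&mut upstream_session, &request);
    response = read_response(&mut upstream_session);
}
// fall through: normal response path; session stays pool-eligible
```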
Observed results
After `response_filter` returns `Err`, the H1 path does not complete with `client_reuse == true` for that session, so the upstream session is not released to the pool for keep-alive reuse. The retry iteration typically uses a new upstream connection.
Additional context
- Scope: HTTP/1.1 to origin only; HTTP/2 upstream is out of scope for this report unless you expand it.
- Security: Cross-origin redirects should remain under explicit user policy; this issue is about connection reuse semantics, not blind following.
- RFC-ish constraint: Do not reuse the connection if the 3xx body was not fully consumed (unless the connection is closing anyway).
- Workarounds: Any approach that avoids returning `Err` from `response_filter` for this path, or avoids the outer retry for an “internal” redirect, may behave differently; document any workaround you use.
- Optional: Link to a PR that adds docs (e.g. `ProxyHttp::response_filter` rustdoc) or a full fix (internal `proxy_1to1` loop / new API).