Releases · linkerd/linkerd2-proxy
v2.103.0
This release increases the default buffer size to match the proxy's in-flight request limit. This reduces contention in overload situations, especially under high concurrency, substantially reducing tail latency.
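As a rough illustration of the sizing change (a minimal sketch, not the proxy's actual code; `MAX_IN_FLIGHT` and the channel-based dispatch queue are assumptions for the example):

```rust
// Illustrative sketch: a dispatch queue whose capacity matches the
// in-flight request limit, so callers only block once the concurrency
// limit itself is reached rather than on a smaller buffer in front of it.
use tokio::sync::mpsc;

const MAX_IN_FLIGHT: usize = 10_000; // hypothetical in-flight request limit

struct Request; // stand-in for an HTTP request type

fn dispatch_channel() -> (mpsc::Sender<Request>, mpsc::Receiver<Request>) {
    // Sizing the buffer to MAX_IN_FLIGHT removes a second, smaller point of
    // contention ahead of the concurrency limit.
    mpsc::channel(MAX_IN_FLIGHT)
}
```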
v2.102.0
This release fixes a regression that could cause service profile lookups to be retried indefinitely, despite the server returning an `InvalidArgument` response (which indicates the proxy should not retry).
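The retry decision described above can be sketched as follows (an illustrative example using tonic's status codes; the function name and structure are assumptions, not the proxy's implementation):

```rust
// Hedged sketch: decide whether a service profile lookup should be retried
// based on the gRPC status returned by the server.
use tonic::{Code, Status};

fn should_retry_profile_lookup(status: &Status) -> bool {
    match status.code() {
        // InvalidArgument means the lookup will never succeed, so give up.
        Code::InvalidArgument => false,
        // Transient failures may be retried (with backoff in practice).
        Code::Unavailable | Code::DeadlineExceeded => true,
        _ => false,
    }
}
```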
v2.101.0
This release primarily features an upgrade of the proxy's underlying Tokio runtime and its related libraries. We've observed lower latencies in initial benchmarks, but further testing and burn-in are warranted. Also, the proxy now honors the `LINKERD_PROXY_LOG_FORMAT=json` configuration to enable JSON-formatted logging.
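For illustration, honoring a log-format environment variable might look like the following sketch using `tracing-subscriber` (with its `json` feature enabled); this is an assumption-laden example, not the proxy's actual initialization code:

```rust
// Hedged sketch: pick a JSON or plain-text log formatter based on the
// LINKERD_PROXY_LOG_FORMAT environment variable.
fn init_logging() {
    let format = std::env::var("LINKERD_PROXY_LOG_FORMAT").unwrap_or_default();
    if format.eq_ignore_ascii_case("json") {
        tracing_subscriber::fmt().json().init();
    } else {
        tracing_subscriber::fmt().init();
    }
}
```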
v2.100.0
This change modifies the inbound gateway caching so that requests may be routed to multiple leaves of a traffic split.
v2.99.0
The proxy can now operate as a gateway, routing requests from its inbound proxy to the outbound proxy without passing them to a local application. This supports Linkerd's multicluster feature by adding a `Forwarded` header to propagate the original client identity and assist in loop detection.
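Attaching such a header could look like the sketch below (the `client_id` parameter and the header contents are illustrative assumptions, not the exact format the proxy emits):

```rust
// Hedged sketch: add an RFC 7239 `Forwarded` header carrying the original
// client identity before the request is forwarded onward.
use http::{header::HeaderValue, Request};

fn add_forwarded_header<B>(req: &mut Request<B>, client_id: &str) {
    let value = format!("by=gateway.example;for={client_id};proto=https");
    if let Ok(value) = HeaderValue::from_str(&value) {
        req.headers_mut().insert("forwarded", value);
    }
}
```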
v2.98.0
In some ingress setups, the proxy could be tricked into looping requests through the outbound proxy. We now detect these loops and fail these requests with a 502, saving your precious CPU.
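A loop check in this spirit might look like the following (a hedged illustration that reuses the `Forwarded` header idea above; the header and identity value are assumptions, not the proxy's exact mechanism):

```rust
// Hedged sketch: if a request has already passed through this proxy,
// short-circuit with a 502 instead of forwarding it again.
use http::{Request, Response, StatusCode};

fn check_for_loop<B: Default>(req: &Request<B>, local_id: &str) -> Option<Response<B>> {
    let looped = req
        .headers()
        .get_all("forwarded")
        .iter()
        .filter_map(|v| v.to_str().ok())
        .any(|v| v.contains(local_id));
    looped.then(|| {
        Response::builder()
            .status(StatusCode::BAD_GATEWAY)
            .body(B::default())
            .expect("valid response")
    })
}
```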
v2.97.0
This release adds special handling for I/O errors in HTTP responses so that the proxy's metrics include an `errno` label describing the underlying error.
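Deriving such a label from an I/O error might look like this minimal sketch (the label formatting here is illustrative, not necessarily what the proxy reports):

```rust
// Hedged sketch: map an I/O error to an `errno` label value for metrics.
use std::io;

fn errno_label(err: &io::Error) -> String {
    match err.raw_os_error() {
        // `raw_os_error` exposes the underlying errno value when one exists.
        Some(code) => code.to_string(),
        None => "unknown".to_string(),
    }
}
```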
v2.96.0
This release reduces latency and CPU consumption, especially for high-concurrency use cases.
v2.95.0
This release modifies Linkerd's internal buffering to avoid idling out a service just as a request arrives. Previously, this could cause failures for requests sent exactly once per minute, such as Prometheus scrapes.
v2.94.0
This release improves gRPC-aware error handling so that a `grpc-status` of `UNAVAILABLE` is set when a response stream is interrupted by a transport error. This is consistent with the error-handling behavior of common gRPC implementations.
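Synthesizing such a status might look like the sketch below (an illustrative example, not the proxy's implementation):

```rust
// Hedged sketch: when a stream is cut off by a transport error, produce
// trailers carrying `grpc-status: 14` (UNAVAILABLE) so gRPC clients see a
// proper status instead of a bare stream reset.
use http::{header::HeaderValue, HeaderMap};

fn unavailable_trailers(message: &str) -> HeaderMap {
    let mut trailers = HeaderMap::new();
    // 14 is the numeric code for UNAVAILABLE in the gRPC status table.
    trailers.insert("grpc-status", HeaderValue::from_static("14"));
    if let Ok(msg) = HeaderValue::from_str(message) {
        trailers.insert("grpc-message", msg);
    }
    trailers
}
```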