Releases: linkerd/linkerd2-proxy
v2.165.1
This release fixes an issue in v2.165.0 where clients were not configured with trust roots until identity was provisioned. This prevented the identity client from establishing TLS with the identity controller, so proxies could never become ready.
v2.165.0
This release improves retries so that requests without a `content-length` header can be retried. This should permit requests emitted by grpc-go to be retried. Discovery diagnostics have also been improved: service discovery updates are now logged at the DEBUG level, where previously they were only emitted at TRACE.
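To be retryable, a request body with no `content-length` must be buffered so it can be replayed if the first attempt fails. The sketch below illustrates that idea only; the function name and the 64 KiB cap are assumptions for this example, not the proxy's actual types or limits:

```rust
use std::io::{self, Read};

/// Illustrative cap on how much body we are willing to hold for a retry.
const MAX_BUFFERED: usize = 64 * 1024;

/// Buffers a request body of unknown length so it can be replayed on retry.
/// Returns `None` when the body is too large to buffer (not retryable).
fn buffer_for_retry<R: Read>(body: &mut R) -> io::Result<Option<Vec<u8>>> {
    let mut buf = Vec::new();
    let mut chunk = [0u8; 4096];
    loop {
        let n = body.read(&mut chunk)?;
        if n == 0 {
            return Ok(Some(buf)); // fully buffered: safe to replay
        }
        if buf.len() + n > MAX_BUFFERED {
            return Ok(None); // too large to buffer: give up on retryability
        }
        buf.extend_from_slice(&chunk[..n]);
    }
}

fn main() -> io::Result<()> {
    // A gRPC-style framed body whose total length is not known up front.
    let mut body: &[u8] = b"\x00\x00\x00\x00\x05hello";
    let buffered = buffer_for_retry(&mut body)?.expect("small body is retryable");
    println!("buffered {} bytes", buffered.len());
    Ok(())
}
```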
v2.164.0
This release changes the default allocator (on linux/x86_64) to be `jemalloc`. In tests, jemalloc proves to use less memory without noticeable impacts on CPU utilization or latency. This change also includes dependency updates, including an update of Rust to v1.56.0.
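In Rust, the global allocator is swapped with the `#[global_allocator]` attribute; the proxy presumably does this through a jemalloc binding crate (an assumption about its internals). The mechanism is shown here with the standard library's `System` allocator standing in, so the example is self-contained:

```rust
use std::alloc::System;

// With a jemalloc crate this line would name that crate's allocator type
// instead (hypothetical for this sketch); `System` stands in so the example
// compiles with the standard library alone.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    // Every heap allocation in the program now goes through GLOBAL.
    let v: Vec<u64> = (0..1024).collect();
    println!("allocated {} elements", v.len());
}
```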
v2.161.1
This release backports several fixes onto the proxy version used in the Linkerd stable-2.11.0 release. These changes include:
- A fix for an issue that could cause the proxy to loop infinitely and fail liveness probes when handling errors on meshed HTTP/1 connections
- An upgrade of the `h2` crate to support large header values
- An update to reduce memory overhead on linux/x86_64 by adopting `jemalloc` as the default allocator
v2.163.0
This release fixes a bug where the outbound proxy could loop infinitely while handling errors on meshed HTTP/1 connections. This would typically cause proxies to fail health checks and be restarted. Furthermore, the proxy now requires identity: proxies will log an error and fail to start if identity is disabled.
v2.162.0
This release updates the `h2` crate to support HTTP/2 messages with large header values. Legacy support for TLSv1.2 has been removed. Now the proxy only uses TLSv1.3 for mTLS communication. Also, gateway proxies now only support clients that use the `transport.l5d.io` protocol, negotiated via ALPN. With these changes, older clients (before ~v2.133.0) are no longer supported by new servers.
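The gateway-side requirement amounts to checking that the client's ALPN negotiation offered the `transport.l5d.io` protocol. This is a simplified sketch of that policy only; real ALPN selection happens inside the TLS handshake, not as a standalone check:

```rust
/// The mesh transport protocol that gateway clients must offer via ALPN.
const TRANSPORT_PROTOCOL: &str = "transport.l5d.io";

/// Accept a gateway client only if its ALPN offer includes the mesh
/// transport protocol. Illustrative sketch, not the proxy's TLS code.
fn gateway_accepts(offered_alpn: &[&str]) -> bool {
    offered_alpn.iter().any(|p| *p == TRANSPORT_PROTOCOL)
}

fn main() {
    assert!(gateway_accepts(&["transport.l5d.io"]));
    assert!(!gateway_accepts(&["h2", "http/1.1"])); // older client: rejected
    println!("ok");
}
```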
v2.161.0
This release fixes a bug where HTTP load balancers could continue trying to establish connections to endpoints that were removed from service discovery. This could happen when an idle load balancer had other ready endpoints. The old endpoint would only be dropped once a new request was issued on the service. Now these clients do not attempt to reconnect unless prompted by the load balancer.
v2.160.0
This release improves the proxy's error handling, introducing a new `l5d-proxy-connection` header that an inbound proxy uses to signal when its peer's outbound connections should be torn down. Furthermore, the `l5d-proxy-error` header is now only sent to trusted peers: the inbound proxy emits this header only when its client is meshed, and the outbound proxy can be configured to disable these headers entirely.
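The gating described above reduces to a simple predicate (names here are illustrative, not the proxy's actual code): the error header is attached only when the client is meshed and the headers have not been disabled by configuration:

```rust
/// Whether to attach the `l5d-proxy-error` header to a response.
/// Illustrative sketch of the policy; the real proxy's types differ.
fn emit_error_header(client_is_meshed: bool, headers_disabled: bool) -> bool {
    !headers_disabled && client_is_meshed
}

fn main() {
    assert!(emit_error_header(true, false));   // meshed client: header sent
    assert!(!emit_error_header(false, false)); // unmeshed client: suppressed
    assert!(!emit_error_header(true, true));   // disabled by configuration
    println!("ok");
}
```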
v2.159.0
This release includes a change to how inbound connections are forwarded on the local pod: instead of sending traffic to the application on 127.0.0.1, the proxy now only forwards traffic on the pod's public IP address. This protects services that are only bound on the loopback interface from being exposed to other pods. This release also improves authorization metrics to include TLS labels (including the client's identity, if available). The `/tasks` admin endpoint has been removed, as it no longer works. It will be replaced with an admin endpoint supporting tokio's `console` in a future release.
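The forwarding change can be pictured as target selection: instead of rewriting the destination to `127.0.0.1`, the inbound proxy keeps the pod's own IP, so an application that listens only on the loopback interface stays unreachable through the proxy. This is a sketch under assumed names, not the proxy's actual routing code:

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

/// Choose where to forward an inbound connection on the local pod.
/// Older proxies rewrote the target to loopback; newer proxies keep the
/// pod's IP so loopback-only servers are not exposed. Illustrative only.
fn forward_target(pod_ip: IpAddr, port: u16, legacy_loopback: bool) -> SocketAddr {
    if legacy_loopback {
        SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), port)
    } else {
        SocketAddr::new(pod_ip, port)
    }
}

fn main() {
    let pod_ip: IpAddr = "10.42.0.7".parse().unwrap();
    // New behavior: traffic reaches the app only via the pod's IP.
    println!("{}", forward_target(pod_ip, 8080, false));
}
```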
v2.158.0
This release features improved memory utilization, especially for TCP forwarding. Previously 128KB was allocated for each proxied connection. This has been reduced to 16KB. This release also includes updates to the inbound policy system: connections are now always permitted from localhost and logging has been improved to make it easier to debug policy issues.
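Per-connection forwarding cost is dominated by the copy buffer. A fixed 16 KiB buffer per direction, as described above, can be sketched like this (the buffer size matches the release note; the surrounding code is illustrative, not the proxy's implementation):

```rust
use std::io::{self, Read, Write};

/// Copy bytes between two streams using a fixed 16 KiB buffer, rather
/// than a large per-connection allocation. Illustrative sketch.
fn forward<R: Read, W: Write>(src: &mut R, dst: &mut W) -> io::Result<u64> {
    let mut buf = [0u8; 16 * 1024];
    let mut total = 0u64;
    loop {
        let n = src.read(&mut buf)?;
        if n == 0 {
            return Ok(total);
        }
        dst.write_all(&buf[..n])?;
        total += n as u64;
    }
}

fn main() -> io::Result<()> {
    let mut src: &[u8] = &[7u8; 40 * 1024]; // 40 KiB payload
    let mut dst = Vec::new();
    let copied = forward(&mut src, &mut dst)?;
    println!("copied {copied} bytes");
    Ok(())
}
```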