keps/sig-network/1669-proxy-terminating-endpoints/README.md (4 additions & 4 deletions)
@@ -67,10 +67,10 @@ API now includes terminating endpoints, kube-proxy strictly forwards traffic to
 terminating endpoints can lead to traffic loss. It's worth diving into one specific scenario described in [this issue](https://github.com/kubernetes/kubernetes/issues/85643):
 
 When using Service Type=LoadBalancer w/ externalTrafficPolicy=Local, the availability of node backend is determined by the healthCheckNodePort served by kube-proxy.
-Kube-proxy returns a "200 OK" http response on this endpoint if there is a local ready endpoint for a Serivce, otherwise it returns 500 http response signalling to the load balancer that the node should be removed
+Kube-proxy returns a "200 OK" http response on this endpoint if there is a local ready endpoint for a Service, otherwise it returns 500 http response signalling to the load balancer that the node should be removed
 from the backend pool. Upon performing a rolling update of a Deployment, there can be a small window of time where old pods on a node are terminating (hence not "Ready") but the load balancer
 has not probed kube-proxy's healthCheckNodePort yet. In this event, there is traffic loss because the load balancer is routing traffic to a node where the proxy rules will blackhole
-the traffic due to a lack of local endpoints. The likihood of this traffic loss is impacted by two factors: the number of local endpoints on the node and the interval between health checks
+the traffic due to a lack of local endpoints. The likelihood of this traffic loss is impacted by two factors: the number of local endpoints on the node and the interval between health checks
 from the load balancer. The worse case scenario is a node with 1 local endpoint and a load balancer with a long health check interval.
 
 Currently there are several workarounds that users can leverage:
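
The health-check contract the hunk above describes is simple: report healthy only while the node has at least one local, ready endpoint for the Service. A minimal sketch of that behavior in Go (illustrative only, not kube-proxy's actual implementation; the /healthz path, port 30000, and the localReadyEndpoints counter are placeholders):

```go
package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
)

// localReadyEndpoints stands in for kube-proxy's count of ready, node-local
// endpoints for the Service; the real proxy derives this from EndpointSlices.
var localReadyEndpoints atomic.Int64

// healthCheck mimics the healthCheckNodePort contract: 200 while the node
// still has a local ready endpoint, 500 once it does not, so the load
// balancer can drop the node from its backend pool.
func healthCheck(w http.ResponseWriter, r *http.Request) {
	if localReadyEndpoints.Load() > 0 {
		w.WriteHeader(http.StatusOK)
		fmt.Fprintln(w, "local endpoints available")
		return
	}
	w.WriteHeader(http.StatusInternalServerError)
	fmt.Fprintln(w, "no local endpoints")
}

func main() {
	http.HandleFunc("/healthz", healthCheck)
	// ":30000" is a placeholder for the Service's assigned healthCheckNodePort.
	http.ListenAndServe(":30000", nil)
}
```

The traffic-loss window described in the linked issue is exactly the gap between that counter reaching zero and the load balancer's next probe of the health check port.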
@@ -82,7 +82,7 @@ While some of these solutions help, there's more that Kubernetes can do to handl
 
 ### Goals
 
-* Reduce potential traffic loss from kube-proxy that occurs on rolling updates because trafffic is sent to Pods that are terminating.
+* Reduce potential traffic loss from kube-proxy that occurs on rolling updates because traffic is sent to Pods that are terminating.
 
 ### Non-Goals
 
@@ -132,7 +132,7 @@ until either one of the conditions are satisfied.
 ### Risks and Mitigations
 
 There are scalability implications to tracking termination state in EndpointSlice. For now we are assuming that the performance trade-offs are worthwhile but
-future testing may change this decision. See KEP 1672 for more details.
+future testing may change this decision. See [KEP 1672](../1672-tracking-terminating-endpoints) for more details.
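
For context on the termination state being tracked here, a hedged sketch of how a consumer could use the ready/serving/terminating conditions that EndpointSlice exposes, preferring ready endpoints and falling back to terminating-but-still-serving ones (illustrative only; selectEndpoints and isTrue are not kube-proxy code, and real consumers must honor the API's defaulting for unset conditions):

```go
package endpointselection

import discoveryv1 "k8s.io/api/discovery/v1"

// isTrue treats an unset condition as false for brevity.
func isTrue(c *bool) bool { return c != nil && *c }

// selectEndpoints prefers ready endpoints and, only when none exist, falls
// back to endpoints that are terminating but still serving traffic.
func selectEndpoints(slice *discoveryv1.EndpointSlice) []discoveryv1.Endpoint {
	var ready, terminatingServing []discoveryv1.Endpoint
	for _, ep := range slice.Endpoints {
		switch {
		case isTrue(ep.Conditions.Ready):
			ready = append(ready, ep)
		case isTrue(ep.Conditions.Serving) && isTrue(ep.Conditions.Terminating):
			terminatingServing = append(terminatingServing, ep)
		}
	}
	if len(ready) > 0 {
		return ready
	}
	return terminatingServing
}
```

This is only meant to make concrete what "termination state" refers to; the open performance question above is about the cost of carrying that extra state in EndpointSlice.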