
Commit bea551e

kep-1669: fix some typos
Signed-off-by: Andrew Sy Kim <[email protected]>
1 parent: afa2ffc

File tree

  • keps/sig-network/1669-proxy-terminating-endpoints

1 file changed (+4, -4 lines)

keps/sig-network/1669-proxy-terminating-endpoints/README.md

Lines changed: 4 additions & 4 deletions
@@ -67,10 +67,10 @@ API now includes terminating endpoints, kube-proxy strictly forwards traffic to
 terminating endpoints can lead to traffic loss. It's worth diving into one specific scenario described in [this issue](https://github.com/kubernetes/kubernetes/issues/85643):

 When using Service Type=LoadBalancer w/ externalTrafficPolicy=Local, the availability of node backend is determined by the healthCheckNodePort served by kube-proxy.
-Kube-proxy returns a "200 OK" http response on this endpoint if there is a local ready endpoint for a Serivce, otherwise it returns 500 http response signalling to the load balancer that the node should be removed
+Kube-proxy returns a "200 OK" http response on this endpoint if there is a local ready endpoint for a Service, otherwise it returns 500 http response signalling to the load balancer that the node should be removed
 from the backend pool. Upon performing a rolling update of a Deployment, there can be a small window of time where old pods on a node are terminating (hence not "Ready") but the load balancer
 has not probed kube-proxy's healthCheckNodePort yet. In this event, there is traffic loss because the load balancer is routing traffic to a node where the proxy rules will blackhole
-the traffic due to a lack of local endpoints. The likihood of this traffic loss is impacted by two factors: the number of local endpoints on the node and the interval between health checks
+the traffic due to a lack of local endpoints. The likelihood of this traffic loss is impacted by two factors: the number of local endpoints on the node and the interval between health checks
 from the load balancer. The worse case scenario is a node with 1 local endpoint and a load balancer with a long health check interval.

 Currently there are several workarounds that users can leverage:
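
To make the healthCheckNodePort contract in the hunk above concrete, here is a minimal, illustrative Go sketch (not kube-proxy's actual implementation): it answers 200 while the node still has at least one local ready endpoint for the Service and 500 otherwise, so the load balancer can drop the node from its backend pool. The `/healthz` path, port 30021, and the `localReadyEndpoints` counter are placeholders invented for the example.

```go
// Illustrative sketch only -- not kube-proxy's implementation. It mimics the
// healthCheckNodePort semantics described above: answer 200 while the node has
// at least one local ready endpoint for the Service, otherwise answer 500 so
// the load balancer removes the node from the backend pool.
package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
)

// localReadyEndpoints is a hypothetical stand-in for the count a proxy would
// derive from EndpointSlices for the Service.
var localReadyEndpoints int64

func healthCheckHandler(w http.ResponseWriter, r *http.Request) {
	n := atomic.LoadInt64(&localReadyEndpoints)
	if n > 0 {
		w.WriteHeader(http.StatusOK) // 200: keep this node in the pool
		fmt.Fprintf(w, "%d local ready endpoints\n", n)
		return
	}
	// 500: signal the load balancer that the node should be removed.
	w.WriteHeader(http.StatusInternalServerError)
	fmt.Fprintln(w, "no local ready endpoints")
}

func main() {
	http.HandleFunc("/healthz", healthCheckHandler)
	// 30021 is an arbitrary example port standing in for the allocated healthCheckNodePort.
	http.ListenAndServe(":30021", nil)
}
```

In the rolling-update race described above, `localReadyEndpoints` drops to zero before the load balancer's next probe, so the 500 response (and the node's removal from the pool) lags behind reality and traffic is blackholed in the meantime.
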
@@ -82,7 +82,7 @@ While some of these solutions help, there's more that Kubernetes can do to handl

 ### Goals

-* Reduce potential traffic loss from kube-proxy that occurs on rolling updates because trafffic is sent to Pods that are terminating.
+* Reduce potential traffic loss from kube-proxy that occurs on rolling updates because traffic is sent to Pods that are terminating.

 ### Non-Goals

@@ -132,7 +132,7 @@ until either one of the conditions are satisfied.
 ### Risks and Mitigations

 There are scalability implications to tracking termination state in EndpointSlice. For now we are assuming that the performance trade-offs are worthwhile but
-future testing may change this decision. See KEP 1672 for more details.
+future testing may change this decision. See [KEP 1672](../1672-tracking-terminating-endpoints) for more details.

 ## Design Details
