
Commit 83d07ce

Add documentation for Service.spec.trafficDistribution
1 parent fe2efe0 commit 83d07ce

2 files changed: +144 −0 lines changed

Lines changed: 128 additions & 0 deletions
@@ -0,0 +1,128 @@
---
reviewers:
- gauravkg
title: Service Traffic Distribution
content_type: concept
weight: 130
description: >-
  The `spec.trafficDistribution` field within a Kubernetes Service allows you to
  express routing preferences for service endpoints. This can optimize network
  traffic patterns for performance, cost, or reliability.
---

<!-- overview -->

{{< feature-state for_k8s_version="v1.30" state="alpha" >}}

The `spec.trafficDistribution` field within a Kubernetes Service allows you to
express preferences for how traffic should be routed to Service endpoints. The
field acts as a hint; implementations are encouraged to consider the preference
but are not strictly bound by it.

<!-- body -->

## Using Service Traffic Distribution

You can influence how a Kubernetes Service routes traffic by setting the
optional `.spec.trafficDistribution` field. Currently, the following value is
supported:

* `PreferClose`: Indicates a preference for routing traffic to endpoints that
  are topologically proximate to the client. The interpretation of
  "topologically proximate" may vary across implementations and could encompass
  endpoints within the same node, rack, zone, or even region. Setting this value
  gives implementations permission to make different tradeoffs, for example
  optimizing for proximity rather than equal distribution of load. Do not set
  this value if such tradeoffs are not acceptable.

If the field is not set, the implementation applies its default routing strategy.
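For illustration, a minimal Service manifest that opts into the preference might
look like the sketch below. The name, selector, and ports are placeholders, and
the cluster needs the alpha `ServiceTrafficDistribution` feature gate enabled for
the field to be accepted.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                  # placeholder name
spec:
  selector:
    app.kubernetes.io/name: MyApp   # placeholder selector
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  # Hint: prefer topologically proximate endpoints when possible.
  trafficDistribution: PreferClose
```

With this set, implementations that understand the hint may keep traffic from
clients in a given zone within that zone whenever endpoints are available there.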
## How it Works

Implementations like kube-proxy use the `spec.trafficDistribution` field as a
guideline. The behavior associated with a given preference may subtly differ
between implementations.

* `PreferClose` with kube-proxy: For kube-proxy, this means prioritizing
  endpoints within the same zone as the client. The EndpointSlice controller
  updates EndpointSlices with hints to communicate this preference, which
  kube-proxy then uses for routing decisions. If a client's zone does not have
  any available endpoints, traffic will be routed cluster-wide for that client.

In the absence of any value for `trafficDistribution`, the default routing
strategy for kube-proxy is to distribute traffic to any endpoint in the cluster.
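To make the mechanism concrete, below is a sketch of what a hinted EndpointSlice
could look like. The object name, address, and zone are invented for the example,
and the exact hints the controller writes may differ.

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-abc12                 # hypothetical generated name
  labels:
    kubernetes.io/service-name: my-service
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 8080
endpoints:
- addresses:
  - "10.1.2.3"                           # example Pod IP
  zone: us-west-2a                       # zone the endpoint runs in
  hints:
    forZones:
    - name: us-west-2a                   # kube-proxy in this zone prefers this endpoint
```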
### Comparison with `service.kubernetes.io/topology-mode=Auto`

The `trafficDistribution` field with `PreferClose` and the
`service.kubernetes.io/topology-mode=Auto` annotation share the common goal of
prioritizing same-zone traffic. However, there are key differences in their
approaches:

* `service.kubernetes.io/topology-mode=Auto`: Attempts to distribute traffic
  proportionally across zones based on allocatable CPU resources. This heuristic
  includes safeguards (like
  [these](/docs/concepts/services-networking/topology-aware-routing/#three-or-more-endpoints-per-zone))
  and could lead to the feature being disabled in certain scenarios for
  load-balancing reasons. This approach sacrifices some predictability in favor
  of potential load balancing.

* `trafficDistribution: PreferClose`: This approach aims to be slightly simpler
  and more predictable: "If there are endpoints in the zone, they will receive
  all traffic for that zone; if there are no endpoints in a zone, the traffic
  will be distributed to other zones." While the approach may offer more
  predictability, it does mean that you are responsible for managing a
  [potential overload](#important-considerations).

If the `service.kubernetes.io/topology-mode` annotation is set to `Auto`, it
will take precedence over `trafficDistribution`. (The annotation may be deprecated
in the future in favor of the `trafficDistribution` field.)
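For contrast, the annotation lives in `metadata` rather than `spec`. A sketch
(placeholder names only) of a Service carrying both settings, where the
annotation wins, might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                              # placeholder name
  annotations:
    service.kubernetes.io/topology-mode: Auto   # takes precedence while set
spec:
  selector:
    app.kubernetes.io/name: MyApp               # placeholder selector
  ports:
  - port: 80
  trafficDistribution: PreferClose              # effectively ignored while the annotation is Auto
```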
### Interaction with `externalTrafficPolicy` and `internalTrafficPolicy`

Compared to the `trafficDistribution` field, the traffic policy fields are meant
to express stricter traffic locality requirements. Here's how
`trafficDistribution` interacts with them:

* Precedence of Traffic Policies: If either `externalTrafficPolicy` or
  `internalTrafficPolicy` is set to `Local`, it takes precedence over
  `trafficDistribution: PreferClose` (see the sketch after this list). Here's how
  this behavior impacts traffic routing:

  * `internalTrafficPolicy: Local`: Traffic is restricted to endpoints on
    the same node as the originating pod. If no node-local endpoints exist,
    the traffic is dropped.

  * `externalTrafficPolicy: Local`: Traffic originating outside the cluster
    is routed to a node-local endpoint to preserve the client source IP. If no
    node-local endpoints exist, kube-proxy does not forward any traffic
    for the relevant Service.

* `trafficDistribution` Influence: If either `externalTrafficPolicy` or
  `internalTrafficPolicy` is set to `Cluster` (the default), or if these fields
  are not set, then `trafficDistribution: PreferClose` guides the routing
  behavior. This means that an attempt will be made to route traffic to an
  endpoint that is topologically proximate to the client.
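As a sketch of the precedence rule above (placeholder names only), a Service
that combines a `Local` traffic policy with `trafficDistribution` is governed by
the stricter policy:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                   # placeholder name
spec:
  selector:
    app.kubernetes.io/name: MyApp    # placeholder selector
  ports:
  - port: 80
  internalTrafficPolicy: Local       # stricter policy wins: node-local endpoints only
  trafficDistribution: PreferClose   # only guides routing when the policies are Cluster (or unset)
```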
## Important Considerations

* Potential Overload: The `PreferClose` preference might increase the risk of
  endpoint overload in certain zones if traffic patterns within a zone are
  heavily skewed. To mitigate this, consider the following strategies:

  * [Pod Topology Spread
    Constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/):
    Use Pod Topology Spread Constraints to distribute your pods more evenly
    across zones, as shown in the sketch after this list.

  * Zone-Specific Deployments: If traffic skew is expected, create separate
    deployments per zone, allowing for independent scaling.

* Preferences, Not Guarantees: The `trafficDistribution` field provides hints to
  influence routing, but it does not enforce strict behavior.
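Below is a sketch of the first mitigation; the Deployment name, labels, replica
count, and image are placeholders. It spreads Pods across zones with a topology
spread constraint so that no single zone carries a disproportionate share of the
endpoints.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                              # placeholder name
spec:
  replicas: 6
  selector:
    matchLabels:
      app.kubernetes.io/name: MyApp         # placeholder label
  template:
    metadata:
      labels:
        app.kubernetes.io/name: MyApp
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                          # allow at most one Pod of imbalance between zones
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway   # prefer spreading, but do not block scheduling
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: MyApp
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9    # placeholder image
```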
## What's Next

* Explore [Topology Aware Routing](/docs/concepts/services-networking/topology-aware-routing/) for related concepts.
* Read about [Service Internal Traffic Policy](/docs/concepts/services-networking/service-traffic-policy/).
* Read about [Service External Traffic Policy](/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip).
Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
---
title: ServiceTrafficDistribution
content_type: feature_gate

_build:
  list: never
  render: false

stages:
  - stage: alpha
    defaultValue: false
    fromVersion: "1.30"
---
Allows usage of the optional `spec.trafficDistribution` field in Services. The
field offers a way to express preferences for how traffic is distributed to
Service endpoints.
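Because the field is alpha in v1.30, the `ServiceTrafficDistribution` feature
gate has to be enabled on the relevant cluster components before the field is
usable. As one possible illustration only (the kind tool and the file name are
assumptions, not part of this change), a local kind cluster could enable the
gate like this:

```yaml
# kind-config.yaml (hypothetical file name) for a local test cluster
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  ServiceTrafficDistribution: true   # alpha in v1.30, off by default
```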
