---
reviewers:
- thockin
- dwinship
min-kubernetes-server-version: v1.29
title: Extend Service IP Ranges
content_type: task
---

<!-- overview -->

{{< feature-state state="alpha" for_k8s_version="v1.29" >}}

This document shares how to extend the existing Service IP range assigned to a cluster.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}

{{< version-check >}}

<!-- steps -->

## API

Kubernetes clusters with kube-apiservers that have enabled the `MultiCIDRServiceAllocator`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and the
`networking.k8s.io/v1alpha1` API create a ServiceCIDR object that takes the well-known
name `kubernetes`, and that uses an IP address range based on the value of the
`--service-cluster-ip-range` command-line argument to kube-apiserver.
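
This feature is alpha, so both the feature gate and the `networking.k8s.io/v1alpha1`
API group usually have to be switched on explicitly when starting kube-apiserver.
A minimal sketch of the relevant flags (the exact invocation depends on how your
control plane is deployed):

```sh
# Sketch: only the flags relevant to this feature are shown.
kube-apiserver \
  --service-cluster-ip-range=10.96.0.0/28 \
  --feature-gates=MultiCIDRServiceAllocator=true \
  --runtime-config=networking.k8s.io/v1alpha1=true
```

With that in place, you can list the ServiceCIDR objects: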

```sh
kubectl get servicecidr
```

```
NAME         CIDRS          AGE
kubernetes   10.96.0.0/28   17d
```

The well-known `kubernetes` Service, which exposes the kube-apiserver endpoint to the
Pods, takes the first IP address from the default ServiceCIDR range and uses that IP
address as its cluster IP address.

```sh
kubectl get service kubernetes
```

```
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17d
```

The default Service, in this case, uses the ClusterIP 10.96.0.1, which has the
corresponding IPAddress object.

```sh
kubectl get ipaddress 10.96.0.1
```

```
NAME        PARENTREF
10.96.0.1   services/default/kubernetes
```

ServiceCIDRs are protected with {{<glossary_tooltip text="finalizers" term_id="finalizer">}}
to avoid leaving Service ClusterIPs orphaned; the finalizer is only removed if there is
another subnet that contains the existing IPAddresses, or there are no IPAddresses
belonging to the subnet.
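
You can inspect that finalizer directly on a ServiceCIDR object; for example (a sketch,
assuming the controller has added the finalizer shown later on this page):

```sh
# Print the finalizers on the default ServiceCIDR.
kubectl get servicecidr kubernetes -o jsonpath='{.metadata.finalizers}'
```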

## Extend the number of available IPs for Services

There are cases where users need to increase the number of addresses available to
Services. Previously, increasing the Service range was a disruptive operation that could
also cause data loss. With this new feature, users only need to add a new ServiceCIDR to
increase the number of available addresses.
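
Before extending the range, it can help to check how close to exhaustion the existing
range is. One way to do that (a sketch, assuming the `networking.k8s.io/v1alpha1` API
is enabled) is to count the IPAddress objects:

```sh
# Number of allocated cluster IPs; compare against the size of the ServiceCIDR ranges.
kubectl get ipaddresses --no-headers | wc -l
```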
| 108 | + |
| 109 | +<!-- |
| 110 | +### Adding a new ServiceCIDR |
| 111 | +
|
| 112 | +On a cluster with a 10.96.0.0/28 range for Services, there is only 2^(32-28) - 2 = 14 IP addresses available. The `kubernetes.default` Service is always created; for this example, that leaves you with only 13 possible Services. |
| 113 | +--> |
| 114 | +### 添加新的 ServiceCIDR {#adding-a-new-servicecidr} |
| 115 | + |
| 116 | +对于 Service 范围为 10.96.0.0/28 的集群,只有 2^(32-28) - 2 = 14 个可用的 IP 地址。 |
| 117 | +`kubernetes.default` Service 始终会被创建;在这个例子中,你只剩下了 13 个可能的 Service。 |

```sh
for i in $(seq 1 13); do kubectl create service clusterip "test-$i" --tcp 80 -o json | jq -r .spec.clusterIP; done
```

```
10.96.0.11
10.96.0.5
10.96.0.12
10.96.0.13
10.96.0.14
10.96.0.2
10.96.0.3
10.96.0.4
10.96.0.6
10.96.0.7
10.96.0.8
10.96.0.9
error: failed to create ClusterIP service: Internal error occurred: failed to allocate a serviceIP: range is full
```

You can increase the number of IP addresses available for Services by creating a new
ServiceCIDR that extends or adds new IP address ranges.

```sh
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1alpha1
kind: ServiceCIDR
metadata:
  name: newcidr1
spec:
  cidrs:
  - 10.96.0.0/24
EOF
```

```
servicecidr.networking.k8s.io/newcidr1 created
```

This allows you to create new Services with ClusterIPs that are picked from this new
range.

```sh
for i in $(seq 13 16); do kubectl create service clusterip "test-$i" --tcp 80 -o json | jq -r .spec.clusterIP; done
```

```
10.96.0.48
10.96.0.200
10.96.0.121
10.96.0.144
```

### Deleting a ServiceCIDR

You cannot delete a ServiceCIDR if there are IPAddresses that depend on the ServiceCIDR.
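
To see which IPAddresses are holding references, you can list them along with their
parent objects; a sketch (the `parentRef` field is part of the v1alpha1 IPAddress spec):

```sh
# Show each allocated IP and the Service it belongs to.
kubectl get ipaddresses -o custom-columns='IP:.metadata.name,PARENT:.spec.parentRef.name'
```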
| 185 | + |
| 186 | +```sh |
| 187 | +kubectl delete servicecidr newcidr1 |
| 188 | +``` |
| 189 | + |
| 190 | +``` |
| 191 | +servicecidr.networking.k8s.io "newcidr1" deleted |
| 192 | +``` |
| 193 | + |
| 194 | +<!-- |
| 195 | +Kubernetes uses a finalizer on the ServiceCIDR to track this dependent relationship. |
| 196 | +--> |
| 197 | +Kubernetes 在 ServiceCIDR 上使用一个终结器来跟踪这种依赖关系。 |
| 198 | + |
| 199 | +```sh |
| 200 | +kubectl get servicecidr newcidr1 -o yaml |
| 201 | +``` |
| 202 | + |
| 203 | +```yaml |
| 204 | +apiVersion: networking.k8s.io/v1alpha1 |
| 205 | +kind: ServiceCIDR |
| 206 | +metadata: |
| 207 | + creationTimestamp: "2023-10-12T15:11:07Z" |
| 208 | + deletionGracePeriodSeconds: 0 |
| 209 | + deletionTimestamp: "2023-10-12T15:12:45Z" |
| 210 | + finalizers: |
| 211 | + - networking.k8s.io/service-cidr-finalizer |
| 212 | + name: newcidr1 |
| 213 | + resourceVersion: "1133" |
| 214 | + uid: 5ffd8afe-c78f-4e60-ae76-cec448a8af40 |
| 215 | +spec: |
| 216 | + cidrs: |
| 217 | + - 10.96.0.0/24 |
| 218 | +status: |
| 219 | + conditions: |
| 220 | + - lastTransitionTime: "2023-10-12T15:12:45Z" |
| 221 | + message: |
| 222 | + There are still IPAddresses referencing the ServiceCIDR, please remove |
| 223 | + them or create a new ServiceCIDR |
| 224 | + reason: OrphanIPAddress |
| 225 | + status: "False" |
| 226 | + type: Ready |
| 227 | +``` |
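
Rather than reading the whole object, you can query the `Ready` condition directly;
for example:

```sh
# Print the message explaining why the ServiceCIDR is not Ready.
kubectl get servicecidr newcidr1 -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'
```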

Remove the Services containing the IP addresses that are blocking the deletion of the
ServiceCIDR:

```sh
for i in $(seq 13 16); do kubectl delete service "test-$i" ; done
```

```
service "test-13" deleted
service "test-14" deleted
service "test-15" deleted
service "test-16" deleted
```

The control plane notices the removal. It then removes the finalizer, so that the
ServiceCIDR that was pending deletion is actually removed.

```sh
kubectl get servicecidr newcidr1
```

```
Error from server (NotFound): servicecidrs.networking.k8s.io "newcidr1" not found
```