# Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
## Choosing a network model to use
Azure CNI offers two IP addressing options for pods: the traditional configuration that assigns VNet IPs to pods, and Overlay networking. The choice of which option to use for your AKS cluster is a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model might be the most appropriate.

**Use Overlay networking when**:

Azure CNI Overlay has the following limitations:

- You can't use Application Gateway as an Ingress Controller (AGIC) for an Overlay cluster.
- Virtual Machine Availability Sets (VMAS) aren't supported for Overlay.
- You can't use [DCsv2-series](/azure/virtual-machines/dcv2-series) virtual machines in node pools. To meet Confidential Computing requirements, consider using [DCasv5 or DCadsv5-series confidential VMs](/azure/virtual-machines/dcasv5-dcadsv5-series) instead.
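
The `az aks create` command that follows reads the cluster name, resource group, and location from shell variables. A minimal setup sketch (the values are placeholders; substitute your own):

```azurecli-interactive
# Placeholder values used by the commands that follow.
clusterName="myOverlayCluster"
resourceGroup="myResourceGroup"
location="westcentralus"
```
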

```azurecli-interactive
az aks create -n $clusterName -g $resourceGroup \
    --location $location \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16
```
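
To confirm the cluster is running in Overlay mode, you can query its network profile. This is a quick sanity check rather than a required step, and it assumes the same shell variables as above:

```azurecli-interactive
# Prints "overlay" for a cluster created with --network-plugin-mode overlay.
az aks show -n $clusterName -g $resourceGroup --query networkProfile.networkPluginMode -o tsv
```
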
## Upgrade an existing cluster to CNI Overlay
> Prior to Windows OS Build 20348.1668, there was a limitation around Windows Overlay pods incorrectly SNATing packets from host network pods, which had a more detrimental effect for clusters upgrading to Overlay. To avoid this issue, **use Windows OS Build greater than or equal to 20348.1668**.

> [!WARNING]
> If using a custom azure-ip-masq-agent config to include additional IP ranges that shouldn't SNAT packets from pods, upgrading to Azure CNI Overlay can break connectivity to these ranges. Pod IPs from the overlay space won't be reachable by anything outside the cluster nodes.
> Additionally, for sufficiently old clusters, there might be a ConfigMap left over from a previous version of azure-ip-masq-agent. If this ConfigMap, named `azure-ip-masq-agent-config`, exists and isn't intentionally in place, it should be deleted before running the update command.
> If not using a custom ip-masq-agent config, only the `azure-ip-masq-agent-config-reconciled` ConfigMap should exist with respect to Azure ip-masq-agent ConfigMaps, and it's updated automatically during the upgrade process.
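
As a quick pre-upgrade check, you can list the agent's ConfigMaps. This sketch assumes they live in the `kube-system` namespace, which is where AKS runs azure-ip-masq-agent:

```bash-interactive
# Only azure-ip-masq-agent-config-reconciled should normally be present.
kubectl get configmaps -n kube-system | grep azure-ip-masq-agent

# Delete a leftover custom config only if it isn't intentionally in place.
kubectl delete configmap azure-ip-masq-agent-config -n kube-system
```
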
The upgrade process triggers each node pool to be re-imaged simultaneously. Upgrading each node pool separately to Overlay isn't supported. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade where each node in a node pool is re-imaged.
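
A minimal sketch of the upgrade command, assuming the same `$clusterName` and `$resourceGroup` variables as earlier; the pod CIDR is only an example and must follow the overlap rules described below:

```azurecli-interactive
az aks update --name $clusterName \
    --resource-group $resourceGroup \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16
```
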
The `--pod-cidr` parameter is required when upgrading from legacy CNI because the pods need to get IPs from a new overlay space, which doesn't overlap with the existing node subnet. The pod CIDR also can't overlap with any VNet address of the node pools. For example, if your VNet address is *10.0.0.0/8*, and your nodes are in the subnet *10.240.0.0/16*, the `--pod-cidr` can't overlap with *10.0.0.0/8* or the existing service CIDR on the cluster.

## Dual-stack networking (Preview)

You can deploy your AKS clusters in a dual-stack mode when using Overlay networking and a dual-stack Azure virtual network. In this configuration, nodes receive both an IPv4 and IPv6 address from the Azure virtual network subnet. Pods receive both an IPv4 and IPv6 address from a logically different address space to the Azure virtual network subnet of the nodes. Network address translation (NAT) is then configured so that the pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT'd to the node's primary IP address of the same family (IPv4 to IPv4 and IPv6 to IPv6).

[!INCLUDE [preview features callout](includes/preview/preview-callout.md)]

### Prerequisites
- You must have Azure CLI 2.48.0 or later installed.
- You must register the `AzureOverlayDualStackPreview` feature flag in the `Microsoft.ContainerService` namespace.
- You must use Kubernetes version 1.26.3 or later (a quick way to check these version prerequisites is sketched after this list).
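
A quick check of the version prerequisites (a sketch; the `<region>` value is a placeholder):

```azurecli-interactive
# Installed Azure CLI version; must be 2.48.0 or later.
az version --query '"azure-cli"' --output tsv

# Kubernetes versions available in your region; 1.26.3 or later is required.
az aks get-versions --location <region> --output table
```
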
### Limitations
The following features aren't supported with dual-stack networking:

- Windows node pools
- Azure network policies
- Calico network policies
- NAT Gateway
- Virtual nodes add-on
## Deploy a dual-stack AKS cluster
The following parameters are provided to support dual-stack clusters; an example that sets them explicitly follows the list:

* **`--ip-families`**: Takes a comma-separated list of IP families to enable on the cluster.
  * Only `ipv4` or `ipv4,ipv6` are supported.
* **`--pod-cidrs`**: Takes a comma-separated list of CIDR notation IP ranges to assign pod IPs from.
  * The count and order of ranges in this list must match the value provided to `--ip-families`.
  * If no values are supplied, the default value `10.244.0.0/16,fd12:3456:789a::/64` is used.
* **`--service-cidrs`**: Takes a comma-separated list of CIDR notation IP ranges to assign service IPs from.
  * The count and order of ranges in this list must match the value provided to `--ip-families`.
  * If no values are supplied, the default value `10.0.0.0/16,fd12:3456:789a:1::/108` is used.
  * The IPv6 subnet assigned to `--service-cidrs` can be no larger than a /108.

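For example, a dual-stack create command that sets these parameters explicitly might look like the following sketch; the CIDR ranges simply restate the defaults, and the placeholders are yours to fill in:

```azurecli-interactive
az aks create -l <region> -g <resourceGroupName> -n <clusterName> \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --ip-families ipv4,ipv6 \
    --pod-cidrs 10.244.0.0/16,fd12:3456:789a::/64 \
    --service-cidrs 10.0.0.0/16,fd12:3456:789a:1::/108
```
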
### Register the `AzureOverlayDualStackPreview` feature flag
1. Register the `AzureOverlayDualStackPreview` feature flag using the [`az feature register`][az-feature-register] command. It takes a few minutes for the status to show *Registered*.

    ```azurecli-interactive
    az feature register --namespace "Microsoft.ContainerService" --name "AzureOverlayDualStackPreview"
    ```

2. Verify the registration status using the [`az feature show`][az-feature-show] command.

    ```azurecli-interactive
    az feature show --namespace "Microsoft.ContainerService" --name "AzureOverlayDualStackPreview"
    ```

3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.

    ```azurecli-interactive
    az provider register --namespace Microsoft.ContainerService
    ```

### Create a dual-stack AKS cluster
1. Create an Azure resource group for the cluster using the [`az group create`][az-group-create] command.

    ```azurecli-interactive
    az group create -l <region> -n <resourceGroupName>
    ```

2. Create a dual-stack AKS cluster using the [`az aks create`][az-aks-create] command with the `--ip-families` parameter set to `ipv4,ipv6`.

    ```azurecli-interactive
    az aks create -l <region> -g <resourceGroupName> -n <clusterName> \
        --network-plugin azure \
        --network-plugin-mode overlay \
        --ip-families ipv4,ipv6
    ```

---
## Create an example workload
Once the cluster has been created, you can deploy your workloads. This article walks you through an example workload deployment of an NGINX web server.
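
Before running the `kubectl` commands in this section, make sure your kubeconfig has credentials for the cluster. A minimal sketch using the placeholders from the previous steps:

```azurecli-interactive
az aks get-credentials -g <resourceGroupName> -n <clusterName>
```
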
### Deploy an NGINX web server
# [kubectl](#tab/kubectl)
1. Create an NGINX web server using the `kubectl create deployment nginx` command.
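
    A minimal sketch of the command, using the same image and replica count as the manifest shown in the YAML tab:

    ```bash-interactive
    kubectl create deployment nginx --image=nginx:latest --replicas=3
    ```
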
2. View the pod resources using the `kubectl get pods` command.

    ```bash-interactive
    kubectl get pods -o custom-columns="NAME:.metadata.name,IPs:.status.podIPs[*].ip,NODE:.spec.nodeName,READY:.status.conditions[?(@.type=='Ready')].status"
    ```

The output shows the pods have both IPv4 and IPv6 addresses. The pods don't show IP addresses until they're ready.

# [YAML](#tab/yaml)

1. Create an NGINX web server using the following YAML manifest.

    ```yml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: nginx
      name: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx:latest
            name: nginx
    ```

2. View the pod resources using the `kubectl get pods` command.

    ```bash-interactive
    kubectl get pods -o custom-columns="NAME:.metadata.name,IPs:.status.podIPs[*].ip,NODE:.spec.nodeName,READY:.status.conditions[?(@.type=='Ready')].status"
    ```

The output shows the pods have both IPv4 and IPv6 addresses. The pods don't show IP addresses until they're ready.

---

## Expose the workload via a `LoadBalancer` type service
> [!IMPORTANT]
> There are currently **two limitations** pertaining to IPv6 services in AKS.
>
> 1. Azure Load Balancer sends health probes to IPv6 destinations from a link-local address. In Azure Linux node pools, this traffic can't be routed to a pod, so traffic flowing to IPv6 services deployed with `externalTrafficPolicy: Cluster` fails. IPv6 services must be deployed with `externalTrafficPolicy: Local`, which causes `kube-proxy` to respond to the probe on the node.
> 2. Prior to Kubernetes version 1.27, only the first IP address for a service is provisioned to the load balancer, so a dual-stack service only receives a public IP for its first-listed IP family. To provide a dual-stack service for a single deployment, create two services targeting the same selector, one for IPv4 and one for IPv6. This is no longer a limitation in Kubernetes 1.27 or later.

# [kubectl](#tab/kubectl)
1. Expose the NGINX deployment using the `kubectl expose deployment nginx` command.
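
    A sketch of the two expose commands; the IPv6 service is selected here with a `--overrides` patch on the generated Service spec, and the service names match the output below:

    ```bash-interactive
    kubectl expose deployment nginx --name=nginx-ipv4 --port=80 --type=LoadBalancer
    kubectl expose deployment nginx --name=nginx-ipv6 --port=80 --type=LoadBalancer --overrides='{"spec":{"ipFamilies": ["IPv6"]}}'
    ```
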
    You receive an output that shows the services have been exposed.

    ```output
    service/nginx-ipv4 exposed
    service/nginx-ipv6 exposed
    ```

2. Once the deployment is exposed and the `LoadBalancer` services are fully provisioned, get the IP addresses of the services using the `kubectl get services` command.
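
    For example (a minimal sketch; both services should eventually report an `EXTERNAL-IP`):

    ```bash-interactive
    kubectl get services nginx-ipv4 nginx-ipv6
    ```
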
3. Verify functionality via a command-line web request from an IPv6 capable host. Azure Cloud Shell isn't IPv6 capable.

    ```bash-interactive
    SERVICE_IP=$(kubectl get services nginx-ipv6 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://[${SERVICE_IP}]" | head -n5
    ```

    ```html
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    ```

# [YAML](#tab/yaml)
1. Expose the NGINX deployment using the following YAML manifest.

    ```yml
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: nginx
      name: nginx-ipv4
    spec:
      externalTrafficPolicy: Cluster
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: nginx
      type: LoadBalancer
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: nginx
      name: nginx-ipv6
    spec:
      externalTrafficPolicy: Cluster
      ipFamilies:
      - IPv6
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: nginx
      type: LoadBalancer
    ```

2. Once the deployment is exposed and the `LoadBalancer` services are fully provisioned, get the IP addresses of the services using the `kubectl get services` command.

---

To learn how to use AKS with your own Container Network Interface (CNI) plugin, see [Bring your own Container Network Interface (CNI) plugin](use-byo-cni.md).