Commit 95b5247

Merge pull request #119514 from tyler-lloyd/patch-7

docs: kubenet to overlay upgrade

2 parents e095f0f + abb521d

articles/aks/azure-cni-overlay.md (2 additions, 6 deletions)

````diff
@@ -173,11 +173,7 @@ az aks update --name $clusterName \
 The `--pod-cidr` parameter is required when upgrading from legacy CNI because the pods need to get IPs from a new overlay space, which doesn't overlap with the existing node subnet. The pod CIDR also can't overlap with any VNet address of the node pools. For example, if your VNet address is *10.0.0.0/8*, and your nodes are in the subnet *10.240.0.0/16*, the `--pod-cidr` can't overlap with *10.0.0.0/8* or the existing service CIDR on the cluster.
 
 
-### Kubenet Cluster Upgrade (Preview)
-
-[!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
-
-You must have the latest aks-preview Azure CLI extension installed and register the `Microsoft.ContainerService` `AzureOverlayPreview` feature flag.
+### Kubenet Cluster Upgrade
 
 Update an existing Kubenet cluster to use Azure CNI Overlay using the [`az aks update`][az-aks-update] command.
 
````
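The hunk header above truncates the `az aks update` command it annotates. For reference, a full legacy-CNI-to-overlay upgrade invocation might look like the sketch below. The `--resource-group` and `--network-plugin` flags and the example pod CIDR are assumptions based on common `az aks update` usage, not lines visible in this diff; `$clusterName` and `$resourceGroup` are placeholder shell variables.

```azurecli
# Sketch: upgrade a legacy (non-overlay) Azure CNI cluster to overlay mode.
# Assumes $clusterName and $resourceGroup are already set. The pod CIDR below
# is an illustrative value that avoids the 10.0.0.0/8 VNet range and the
# service CIDR, per the requirement described in the paragraph above.
az aks update --name $clusterName \
    --resource-group $resourceGroup \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16
```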
````diff
@@ -192,7 +188,7 @@ az aks update --name $clusterName \
     --network-plugin-mode overlay
 ```
 
-Since the cluster is already using a private CIDR for pods, you don't need to specify the `--pod-cidr` parameter and the Pod CIDR will remain the same.
+Since the cluster is already using a private CIDR for pods which doesn't overlap with the VNet IP space, you don't need to specify the `--pod-cidr` parameter and the Pod CIDR will remain the same.
 
 > [!NOTE]
 > When upgrading from Kubenet to CNI Overlay, the route table will no longer be required for pod routing. If the cluster is using a customer provided route table, the routes which were being used to direct pod traffic to the correct node will automatically be deleted during the migration operation. If the cluster is using a managed route table (the route table was created by AKS and lives in the node resource group) then that route table will be deleted as part of the migration.
````
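This second hunk shows only the tail of the Kubenet upgrade command. A plausible reconstruction of the full invocation follows; everything above the `--network-plugin-mode overlay` line is an assumption filled in from typical `az aks update` usage rather than from the diff itself. Note that no `--pod-cidr` is passed, matching the changed sentence: the existing pod CIDR is kept.

```azurecli
# Sketch: migrate an existing Kubenet cluster to Azure CNI Overlay.
# Omitting --pod-cidr keeps the cluster's existing private pod CIDR,
# which already doesn't overlap with the VNet IP space.
az aks update --name $clusterName \
    --resource-group $resourceGroup \
    --network-plugin azure \
    --network-plugin-mode overlay
```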
