
Commit 6cb2dfd

Added the last set of updates. Included region info
1 parent 47106e4 commit 6cb2dfd

2 files changed: 8 additions & 4 deletions


articles/aks/TOC.yml

Lines changed: 1 addition & 1 deletion
@@ -339,7 +339,7 @@
     href: configure-kubenet-dual-stack.md
 - name: Use Azure-CNI
   href: configure-azure-cni.md
-- name: Use Azure-CNI Overlay
+- name: Use Azure-CNI Overlay (Preview)
   href: azure-cni-overlay.md
 - name: Use API Server VNet Integration
   href: api-server-vnet-integration.md
articles/aks/azure-cni-overlay.md

Lines changed: 7 additions & 3 deletions
@@ -1,17 +1,21 @@
 ---
-title: Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
+title: Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS) (Preview)
 description: Learn how to configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet.
 services: container-service
 ms.topic: article
-ms.date: 08/23/2022
+ms.custom: references_regions
+ms.date: 08/29/2022
 ---

 # Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)

-The traditional [Azure Container Networking Interface (CNI)](https://docs.microsoft.com/azure/aks/configure-azure-cni) assigns a VNet IP address to every Pod either from a pre-reserved set of IPs on every node or from a separate subnet reserved for pods. This approach requires IP address planning and could lead to address exhaustion and difficulties in scaling your clusters as your application demands grow.
+The traditional [Azure Container Networking Interface (CNI)](./configure-azure-cni.md) assigns a VNet IP address to every Pod either from a pre-reserved set of IPs on every node or from a separate subnet reserved for pods. This approach requires IP address planning and could lead to address exhaustion and difficulties in scaling your clusters as your application demands grow.

 With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the pods. Pod and node traffic within the cluster use an overlay network, and Network Address Translation (via the node's IP address) is used to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS.

+> [!NOTE]
+> Azure CNI Overlay is currently available only in the West Central US region.
+
 ## Overview of overlay networking

 In overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR that is provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Additional nodes that are created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.
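The last paragraph of the changed article describes the address arithmetic behind overlay mode: the pod CIDR supplied at cluster creation is carved into per-node /24 blocks. As a rough sketch of that bookkeeping (the 192.168.0.0/16 value and the Python snippet are illustrative assumptions, not part of this commit), a /16 pod CIDR yields 2^(24-16) = 256 blocks of /24, enough to back 256 nodes:

import ipaddress

# Hypothetical pod CIDR passed at cluster creation time (illustrative value only).
pod_cidr = ipaddress.ip_network("192.168.0.0/16")

# Overlay mode hands each node one /24 carved out of this CIDR.
node_blocks = list(pod_cidr.subnets(new_prefix=24))
print(f"{pod_cidr} -> {len(node_blocks)} x /24 blocks (one per node)")

# Pods scheduled on the first node would draw their IPs from the first block.
first_block = node_blocks[0]
print(f"node 0 pod range: {first_block}, {first_block.num_addresses} addresses")

Scaling out the cluster simply assigns the next unused /24 to each new node, which is why the pod CIDR can stay fixed while the cluster grows and why the same private range can be reused across clusters.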
