
Commit 2042575

Learn Build Service GitHub App authored and committed
Merging changes synced from https://github.com/MicrosoftDocs/azure-docs-pr (branch live)
2 parents bd19542 + 1510e14 commit 2042575

File tree

60 files changed

+1062
-369
lines changed


articles/ai-studio/includes/create-projects.md

Lines changed: 1 addition & 1 deletion

@@ -20,7 +20,7 @@ To create a project in [Azure AI Studio](https://ai.azure.com), follow these ste

     :::image type="content" source="../media/how-to/projects/projects-create-details.png" alt-text="Screenshot of the project details page within the create project dialog." lightbox="../media/how-to/projects/projects-create-details.png":::

     > [!NOTE]
-    > To create a hub, you must have **Owner** or **Contributor** permissions on the selected resource group. It's recommended to share a hub with your team. This lets you share configurations like data connections with all projects, and centrally manage security settings and spend. For more options to create a hub, see [how to create and manage an Azure AI Studio hub](../how-to/create-azure-ai-resource.md).
+    > To create a hub, you must have **Owner** or **Contributor** permissions on the selected resource group. It's recommended to share a hub with your team. This lets you share configurations like data connections with all projects, and centrally manage security settings and spend. For more options to create a hub, see [how to create and manage an Azure AI Studio hub](../how-to/create-azure-ai-resource.md). A project name must be unique between projects that share the same hub.

 1. If you're creating a new hub, enter a name.

articles/ai-studio/quickstarts/get-started-code.md

Lines changed: 2 additions & 2 deletions

@@ -7,7 +7,7 @@ ms.service: azure-ai-studio
 ms.custom:
   - build-2024
 ms.topic: how-to
-ms.date: 5/21/2024
+ms.date: 5/30/2024
 ms.reviewer: dantaylo
 ms.author: eur
 author: eric-urban

@@ -140,7 +140,7 @@ Activating the Python environment means that when you run ```python``` or ```pip

 ## Install the prompt flow SDK

-In this section, we use prompt flow to build our application. [https://microsoft.github.io/promptflow/](Prompt flow) is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring.
+In this section, we use prompt flow to build our application. [Prompt flow](https://microsoft.github.io/promptflow/) is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring.

 Use pip to install the prompt flow SDK into the virtual environment that you created.
 ```

articles/aks/TOC.yml

Lines changed: 12 additions & 2 deletions

@@ -132,10 +132,20 @@
       href: cis-windows.md
   - name: Networking
     items:
-    - name: Concepts
+    - name: Networking concepts
      href: concepts-network.md
    - name: CNI networking
-      href: azure-cni-overview.md
+      items:
+      - name: CNI networking overview
+        href: concepts-network-cni-overview.md
+      - name: Azure CNI Overlay
+        href: concepts-network-azure-cni-overlay.md
+      - name: Azure CNI Pod subnet
+        href: concepts-network-azure-cni-pod-subnet.md
+      - name: Legacy CNI options
+        href: concepts-network-legacy-cni.md
+      - name: IP address planning
+        href: concepts-network-ip-address-planning.md
    - name: Services
      href: concepts-network-services.md
    - name: Advanced Container Networking Services

articles/aks/azure-cni-overview.md

Lines changed: 27 additions & 86 deletions
Large diffs are not rendered by default.

articles/aks/concepts-clusters-workloads.md

Lines changed: 3 additions & 1 deletion

@@ -145,7 +145,7 @@ Reserved memory in AKS includes the sum of two values:
 * If the VM provides 8GB of memory and the node supports up to 30 pods, AKS reserves *20MB * 30 Max Pods + 50MB = 650MB* for kube-reserved. `Allocatable space = 8GB - 0.65GB (kube-reserved) - 0.1GB (eviction threshold) = 7.25GB or 90.625% allocatable.`
 * If the VM provides 4GB of memory and the node supports up to 70 pods, AKS reserves *25% * 4GB = 1000MB* for kube-reserved, as this is less than *20MB * 70 Max Pods + 50MB = 1450MB*.

-For more information, see [Configure maximum pods per node in an AKS cluster](./azure-cni-overview.md#maximum-pods-per-node).
+For more information, see [Configure maximum pods per node in an AKS cluster][maximum-pods].

 **AKS versions prior to 1.29**

@@ -439,3 +439,5 @@ For more information on core Kubernetes and AKS concepts, see the following arti
 [aks-support]: support-policies.md#user-customization-of-agent-nodes
 [intro-azure-linux]: ../azure-linux/intro-azure-linux.md
 [fully-managed-resource-group]: ./node-resource-group-lockdown.md
+[maximum-pods]: concepts-network-ip-address-planning.md#maximum-pods-per-node
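The memory-reservation rule in the hunk above (for AKS 1.29+: the lesser of 25% of VM memory or 20MB per max pod + 50MB for kube-reserved, plus a 100MB eviction threshold) can be sketched in Python. This is an illustrative calculation only; the function name and MB-based units are not part of AKS.

```python
def aks_allocatable_memory_mb(vm_memory_mb: int, max_pods: int) -> int:
    """Approximate allocatable memory on an AKS 1.29+ node (illustrative sketch)."""
    # kube-reserved is the lesser of 25% of VM memory or 20MB per max pod + 50MB.
    kube_reserved = min(0.25 * vm_memory_mb, 20 * max_pods + 50)
    eviction_threshold = 100  # MB, fixed eviction hard threshold
    return int(vm_memory_mb - kube_reserved - eviction_threshold)

# 8GB node, 30 max pods: 8000 - 650 - 100 = 7250 MB (the doc's 7.25GB example)
print(aks_allocatable_memory_mb(8000, 30))  # 7250
# 4GB node, 70 max pods: the 25% rule wins (1000MB < 1450MB): 4000 - 1000 - 100
print(aks_allocatable_memory_mb(4000, 70))  # 2900
```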
articles/aks/concepts-network-azure-cni-overlay.md

Lines changed: 115 additions & 0 deletions
---
title: Concepts - Azure CNI Overlay networking in AKS
description: Learn about Azure CNI Overlay in Azure Kubernetes Service (AKS)
ms.topic: conceptual
ms.date: 05/14/2024
author: schaffererin
ms.author: schaffererin

ms.custom: fasttrack-edit
---

# Azure Container Networking Interface (CNI) Overlay networking

With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network (VNet) subnet. Pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster use an Overlay network. Network Address Translation (NAT) uses the node's IP address to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to scale your cluster to large sizes. An extra advantage is that you can reuse the private CIDR in different AKS clusters, which extends the IP space available for containerized applications in Azure Kubernetes Service (AKS).

## Overview of Overlay networking

In Overlay networking, only the Kubernetes cluster nodes are assigned IPs from subnets. Pods receive IPs from a private CIDR provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Extra nodes created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.

A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an Overlay network for direct communication between pods. There's no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods, which provides connectivity performance between pods on par with VMs in a VNet. Workloads running within the pods aren't even aware that network address manipulation is happening.

:::image type="content" source="media/azure-cni-Overlay/azure-cni-overlay.png" alt-text="A diagram showing two nodes with three pods each running in an Overlay network. Pod traffic to endpoints outside the cluster is routed via NAT.":::

Communication with endpoints outside the cluster, such as on-premises and peered VNets, happens using the node IP through NAT. Azure CNI translates the source IP (Overlay IP of the pod) of the traffic to the primary IP address of the VM, which enables the Azure Networking stack to route the traffic to the destination. Endpoints outside the cluster can't connect to a pod directly. You have to publish the pod's application as a Kubernetes Load Balancer service to make it reachable on the VNet.

You can provide outbound (egress) connectivity to the internet for Overlay pods using a [Standard SKU Load Balancer](./egress-outboundtype.md#outbound-type-of-loadbalancer) or [Managed NAT Gateway](./nat-gateway.md). You can also control egress traffic by directing it to a firewall using [User Defined Routes on the cluster subnet](./egress-outboundtype.md#outbound-type-of-userdefinedrouting).

You can configure ingress connectivity to the cluster using an ingress controller, such as Nginx or [HTTP application routing](./http-application-routing.md). You can't configure ingress connectivity using Azure Application Gateway. For details, see [Limitations with Azure CNI Overlay](#limitations-with-azure-cni-overlay).

## Differences between kubenet and Azure CNI Overlay

The following table provides a detailed comparison between kubenet and Azure CNI Overlay:

| Area | Azure CNI Overlay | kubenet |
|------|-------------------|---------|
| Cluster scale | 5000 nodes and 250 pods/node | 400 nodes and 250 pods/node |
| Network configuration | Simple - no extra configurations required for pod networking | Complex - requires route tables and UDRs on cluster subnet for pod networking |
| Pod connectivity performance | Performance on par with VMs in a VNet | Extra hop adds minor latency |
| Kubernetes Network Policies | Azure Network Policies, Calico, Cilium | Calico |
| OS platforms supported | Linux and Windows Server 2022, 2019 | Linux only |

## IP address planning

### Cluster nodes

When setting up your AKS cluster, make sure your VNet subnets have enough room to grow for future scaling. You can assign each node pool to a dedicated subnet.

A `/24` subnet can fit up to 251 nodes, since the first three IP addresses are reserved for management tasks.
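That node count can be checked with Python's `ipaddress` module. The sketch below assumes the usual Azure subnet accounting: the network and broadcast addresses plus the three management addresses are unusable, leaving 251 of the 256 addresses in a `/24`.

```python
import ipaddress

def usable_azure_subnet_ips(cidr: str) -> int:
    """Usable IPs in an Azure subnet: the network address, the broadcast
    address, and the first three usable addresses are all reserved."""
    return ipaddress.ip_network(cidr).num_addresses - 5

print(usable_azure_subnet_ips("10.0.0.0/24"))  # 251
```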
50+
51+
### Pods
52+
53+
The Overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node.
54+
55+
When planning IP address space for pods, consider the following factors:
56+
57+
* Ensure the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion.
58+
* The same pod CIDR space can be used on multiple independent AKS clusters in the same VNet.
59+
* Pod CIDR space must not overlap with the cluster subnet range.
60+
* Pod CIDR space must not overlap with directly connected networks (like VNet peering, ExpressRoute, or VPN). If external traffic has source IPs in the podCIDR range, it needs translation to a non-overlapping IP via SNAT to communicate with the cluster.
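The sizing and overlap constraints above are easy to verify programmatically. This sketch uses Python's `ipaddress` module with hypothetical CIDR values (not defaults from the article) to compute how many `/24` node blocks a pod CIDR yields and to confirm it doesn't overlap the cluster subnet:

```python
import ipaddress

pod_cidr = ipaddress.ip_network("192.168.0.0/16")    # example pod CIDR
node_subnet = ipaddress.ip_network("10.224.0.0/24")  # example cluster subnet

# Each node consumes one /24 from the pod CIDR, so a /16 yields 2^(24-16) = 256 nodes.
max_nodes = 2 ** (24 - pod_cidr.prefixlen)
print(max_nodes)  # 256

# The pod CIDR must not overlap the cluster subnet (or any peered/on-premises range).
print(pod_cidr.overlaps(node_subnet))  # False
```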
61+
62+
### Kubernetes service address range
63+
64+
The size of the service address CIDR depends on the number of cluster services you plan to create. It must be smaller than `/12`. This range shouldn't overlap with the pod CIDR range, cluster subnet range, and IP range used in peered VNets and on-premises networks.
65+
66+
### Kubernetes DNS service IP address
67+
68+
This IP address is within the Kubernetes service address range used by cluster service discovery. Don't use the first IP address in your address range, as this address is used for the `kubernetes.default.svc.cluster.local` address.
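The service-range and DNS-IP rules above can be validated the same way. The ranges below are hypothetical examples; the checks mirror the stated constraints (range smaller than `/12`, no overlap with the pod CIDR, DNS IP inside the range but not its first address):

```python
import ipaddress

service_cidr = ipaddress.ip_network("10.0.0.0/16")  # example service range
pod_cidr = ipaddress.ip_network("192.168.0.0/16")   # example pod CIDR
dns_service_ip = ipaddress.ip_address("10.0.0.10")  # example DNS service IP

# Service range must be smaller than /12 (longer prefix) and must not overlap.
assert service_cidr.prefixlen > 12
assert not service_cidr.overlaps(pod_cidr)

# The DNS service IP must sit inside the service range, but the first usable
# address is taken by kubernetes.default.svc.cluster.local.
first_usable = service_cidr.network_address + 1
assert dns_service_ip in service_cidr and dns_service_ip != first_usable
print("service address plan OK")
```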
## Network security groups

Pod to pod traffic with Azure CNI Overlay isn't encapsulated, and subnet [network security group][nsg] rules are applied. If the subnet NSG contains deny rules that would impact the pod CIDR traffic, make sure the following rules are in place to ensure proper cluster functionality (in addition to all [AKS egress requirements][aks-egress]):

- Traffic from the node CIDR to the node CIDR on all ports and protocols
- Traffic from the node CIDR to the pod CIDR on all ports and protocols (required for service traffic routing)
- Traffic from the pod CIDR to the pod CIDR on all ports and protocols (required for pod to pod and pod to service traffic, including DNS)

Traffic from a pod to any destination outside of the pod CIDR block utilizes SNAT to set the source IP to the IP of the node where the pod runs.

If you wish to restrict traffic between workloads in the cluster, we recommend using [network policies][aks-network-policies].
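To make the three required allow rules concrete, here's a toy model in Python (not an NSG implementation; the CIDRs are hypothetical). It represents each rule as a source/destination CIDR pair and evaluates sample flows against them:

```python
import ipaddress

node_cidr = ipaddress.ip_network("10.224.0.0/16")  # example node CIDR
pod_cidr = ipaddress.ip_network("192.168.0.0/16")  # example pod CIDR

# The three required allow rules, as (source CIDR, destination CIDR) pairs.
# All ports and protocols are allowed, so only the CIDRs matter here.
allow_rules = [(node_cidr, node_cidr), (node_cidr, pod_cidr), (pod_cidr, pod_cidr)]

def is_required_flow(src: str, dst: str) -> bool:
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return any(s in rule_src and d in rule_dst for rule_src, rule_dst in allow_rules)

print(is_required_flow("192.168.1.4", "192.168.2.9"))  # True: pod to pod
print(is_required_flow("10.224.0.5", "192.168.1.4"))   # True: node to pod
print(is_required_flow("192.168.1.4", "10.224.0.5"))   # False: not in the required list
```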
## Maximum pods per node

You can configure the maximum number of pods per node at the time of cluster creation or when you add a new node pool. The default and maximum value for Azure CNI Overlay is 250, and the minimum value is 10. The maximum pods per node value configured during creation of a node pool applies only to the nodes in that node pool.

## Choosing a network model to use

Azure CNI offers two IP addressing options for pods: the traditional configuration that assigns VNet IPs to pods, and Overlay networking. The choice of which option to use for your AKS cluster is a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model might be the most appropriate.

**Use Overlay networking when**:

- You would like to scale to a large number of pods, but have limited IP address space in your VNet.
- Most of the pod communication is within the cluster.
- You don't need advanced AKS features, such as virtual nodes.

**Use the traditional VNet option when**:

- You have available IP address space.
- Most of the pod communication is to resources outside of the cluster.
- Resources outside the cluster need to reach pods directly.
- You need AKS advanced features, such as virtual nodes.

## Limitations with Azure CNI Overlay

Azure CNI Overlay has the following limitations:

- You can't use Application Gateway as an Ingress Controller (AGIC).
- Virtual Machine Availability Sets (VMAS) aren't supported.
- You can't use [DCsv2-series](/azure/virtual-machines/dcv2-series) virtual machines in node pools. To meet Confidential Computing requirements, consider using [DCasv5 or DCadsv5-series confidential VMs](/azure/virtual-machines/dcasv5-dcadsv5-series) instead.
- If you're using your own subnet to deploy the cluster, the names of the subnet, VNet, and resource group containing the VNet must be 63 characters or less. These names are used as labels on AKS worker nodes and are subject to [Kubernetes label syntax rules](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set).

<!-- LINKS - Internal -->
[aks-egress]: limit-egress-traffic.md
[aks-network-policies]: use-network-policies.md
[nsg]: ../virtual-network/network-security-groups-overview.md
