articles/aks/concepts-network.md
## Kubernetes networking basics
Kubernetes employs a virtual networking layer to manage access within and between your applications or their components:
- **Kubernetes nodes and virtual network**: Kubernetes nodes are connected to a virtual network. This setup enables pods (basic units of deployment in Kubernetes) to have both inbound and outbound connectivity.
- **Kube-proxy component**: kube-proxy runs on each node and is responsible for providing the necessary network features.
Regarding specific Kubernetes functionalities:
- **Services**: Services logically group pods, allowing direct access to them through a specific IP address or DNS name on a designated port.
- **Service types**: Specifies the kind of Service you wish to create.
- **Load balancer**: You can use a load balancer to distribute network traffic evenly across various resources.
- **Ingress controllers**: These facilitate Layer 7 routing, which is essential for directing application traffic.
- **Egress traffic control**: Kubernetes allows you to manage and control outbound traffic from cluster nodes.
## Services
To simplify the network configuration for application workloads, Kubernetes uses *Services* to logically group a set of pods together and provide network connectivity. You can specify a Kubernetes *ServiceType* to define the type of Service you want, for example to expose a Service on an external IP address outside of your cluster. For more information, see the Kubernetes documentation on [Publishing Services (ServiceTypes)][service-types].
The following ServiceTypes are available:
* **ClusterIP**
  ClusterIP creates an internal IP address for use within the AKS cluster. The ClusterIP Service is good for *internal-only applications* that support other workloads within the cluster. ClusterIP is the default used if you don't explicitly specify a type for a Service.
![Diagram showing ClusterIP traffic flow in an AKS cluster][aks-clusterip]
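As a minimal sketch, a ClusterIP Service that groups pods by label could look like the following manifest (all names and ports here are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-app        # illustrative name
spec:
  type: ClusterIP           # the default if type is omitted
  selector:
    app: internal-app       # selects pods labeled app=internal-app
  ports:
    - port: 80              # port exposed on the cluster-internal IP
      targetPort: 8080      # container port the traffic is forwarded to
```

Other workloads inside the cluster can then reach the selected pods through the Service's internal IP address or its cluster DNS name on port 80.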
### Azure CNI (advanced) networking
With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node are then reserved up front. This approach can lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow, so it's important to plan properly. To avoid these planning challenges, it's possible to enable the feature [Azure CNI networking for dynamic allocation of IPs and enhanced subnet support][configure-azure-cni-dynamic-ip-allocation].
> [!NOTE]
> Due to Kubernetes limitations, the Resource Group name, the Virtual Network name and the subnet name must be 63 characters or less.
Unlike kubenet, traffic to endpoints in the same virtual network isn't translated (NAT) to the node's primary IP. The source address for traffic inside the virtual network is the pod IP. Traffic that's external to the virtual network still NATs to the node's primary IP.
Nodes use the [Azure CNI][cni-networking] Kubernetes plugin.
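As a sketch (the resource group, cluster name, and subnet ID are placeholders), creating a cluster that uses Azure CNI with an existing subnet might look like:

```azurecli
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --vnet-subnet-id <subnet-resource-id>
```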
### Azure CNI Overlay networking
[Azure CNI Overlay][azure-cni-overlay] represents an evolution of Azure CNI, addressing scalability and planning challenges arising from the assignment of virtual network IPs to pods. Azure CNI Overlay assigns private CIDR IPs to pods. The private IPs are separate from the virtual network and can be reused across multiple clusters. Azure CNI Overlay can scale beyond the 400 node limit enforced in kubenet clusters. Azure CNI Overlay is the recommended option for most clusters.
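As an illustrative sketch (names and the pod CIDR are placeholders), enabling Overlay mode at cluster creation might look like:

```azurecli
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16
```

The pod CIDR only needs to avoid overlapping with address ranges the cluster communicates with, since pod IPs aren't drawn from the virtual network.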
### Azure CNI Powered by Cilium
[Azure CNI Powered by Cilium][azure-cni-powered-by-cilium] uses [Cilium](https://cilium.io) to provide high-performance networking, observability, and network policy enforcement. It integrates natively with [Azure CNI Overlay][azure-cni-overlay] for scalable IP address management (IPAM).
Additionally, Cilium enforces network policies by default, without requiring a separate network policy engine. Azure CNI Powered by Cilium can scale beyond [Azure Network Policy Manager's limits of 250 nodes / 20,000 pods][use-network-policies] by using eBPF programs and a more efficient API object structure.
Azure CNI Powered by Cilium is the recommended option for clusters that require network policy enforcement.
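As a hedged sketch (resource names are placeholders), selecting the Cilium data plane on an Overlay cluster at creation time might look like:

```azurecli
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --network-dataplane cilium
```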
### Bring your own CNI
It's possible to install in AKS a non-Microsoft CNI using the [Bring your own CNI][use-byo-cni] feature.
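As a sketch (names are placeholders), a cluster can be created without a CNI plugin so you can install your own afterwards:

```azurecli
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin none
```

Until a CNI is installed, nodes in such a cluster report as not ready and pods can't be scheduled.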
### Compare network models
| Network model | When to use |
|---------------|-------------|
|**Kubenet**| • IP address space conservation is a priority. </br> • Simple configuration. </br> • Fewer than 400 nodes per cluster. </br> • Kubernetes internal or external load balancers are sufficient for reaching pods from outside the cluster. </br> • Manually managing and maintaining user defined routes is acceptable. |
|**Azure CNI**| • Full virtual network connectivity is required for pods. </br> • Advanced AKS features (such as virtual nodes) are needed. </br> • Sufficient IP address space is available. </br> • Pod to pod and pod to virtual machine connectivity is needed. </br> • External resources need to reach pods directly. </br> • AKS network policies are required. |
|**Azure CNI Overlay**| • IP address shortage is a concern. </br> • Scaling up to 1,000 nodes and 250 pods per node is sufficient. </br> • Extra hop for pod connectivity is acceptable. </br> • Simpler network configuration. </br> • AKS egress requirements can be met. |
The following behavior differences exist between kubenet and Azure CNI:
### SSL/TLS termination
SSL/TLS termination is another common feature of Ingress. On large web applications accessed via HTTPS, the Ingress resource handles the TLS termination rather than within the application itself. To provide automatic TLS certification generation and configuration, you can configure the Ingress resource to use providers such as "Let's Encrypt."
For more information on configuring an NGINX ingress controller with Let's Encrypt, see [Ingress and TLS][aks-ingress-tls].
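As a sketch, an Ingress resource that terminates TLS using a certificate stored in a Kubernetes secret might look like the following (the host, secret, and Service names are illustrative, and an NGINX ingress controller is assumed to be installed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app               # illustrative name
spec:
  ingressClassName: nginx     # assumes an NGINX ingress controller
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls     # secret holding the TLS certificate and key
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app # backend Service receiving decrypted traffic
                port:
                  number: 80
```

With this configuration the ingress controller presents the certificate and decrypts HTTPS traffic, so the backend pods only ever see plain HTTP.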