
Commit 7aca14a

acrolinx

1 parent e7e0bc9 commit 7aca14a

File tree

1 file changed: +14 −14 lines changed

articles/aks/concepts-network.md

Lines changed: 14 additions & 14 deletions
@@ -24,16 +24,16 @@ This article introduces the core concepts that provide networking to your applic

 ## Kubernetes networking basics

-Kubernetes employs a virtual networking layer to manage access within and between your applications or their components. This involves the following key aspects:
+Kubernetes employs a virtual networking layer to manage access within and between your applications or their components:

 - **Kubernetes nodes and virtual network**: Kubernetes nodes are connected to a virtual network. This setup enables pods (basic units of deployment in Kubernetes) to have both inbound and outbound connectivity.

-- **Kube-proxy component**: Running on each node, kube-proxy is responsible for providing the necessary network features.
+- **Kube-proxy component**: kube-proxy runs on each node and is responsible for providing the necessary network features.

 Regarding specific Kubernetes functionalities:

-- **Services**: These are used to logically group pods, allowing direct access to them through a specific IP address or DNS name on a designated port.
-- **Service types**: This feature lets you specify the kind of Service you wish to create.
+- **Services**: Services are used to logically group pods, allowing direct access to them through a specific IP address or DNS name on a designated port.
+- **Service types**: Specifies the kind of Service you wish to create.
 - **Load balancer**: You can use a load balancer to distribute network traffic evenly across various resources.
 - **Ingress controllers**: These facilitate Layer 7 routing, which is essential for directing application traffic.
 - **Egress traffic control**: Kubernetes allows you to manage and control outbound traffic from cluster nodes.
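To make the **Services**, **Service types**, and **Load balancer** items in the list above concrete, here's a minimal sketch of a Service manifest. The names, labels, and ports are hypothetical, not taken from the article; `type: LoadBalancer` asks AKS to provision an Azure load balancer with an external IP.

```yaml
# Hypothetical example: logically group the pods labeled app: store-front
# and expose them through a load balancer on port 80.
apiVersion: v1
kind: Service
metadata:
  name: store-front          # also reachable in-cluster via this DNS name
spec:
  type: LoadBalancer         # the ServiceType; omit it to get the default, ClusterIP
  selector:
    app: store-front         # pods with this label become the Service endpoints
  ports:
    - port: 80               # port the Service listens on
      targetPort: 8080       # container port the traffic is forwarded to
```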
@@ -48,13 +48,13 @@ In the context of the Azure platform:

 ## Services

-To simplify the network configuration for application workloads, Kubernetes uses *Services* to logically group a set of pods together and provide network connectivity. You can specify a Kubernetes *ServiceType* to specify what kind of Service you want, for example if you want to expose a Service onto an external IP address that's outside of your cluster. For more information, see the Kubernetes documentation for [Publishing Services (ServiceTypes)][service-types].
+To simplify the network configuration for application workloads, Kubernetes uses *Services* to logically group a set of pods together and provide network connectivity. You can specify a Kubernetes *ServiceType* to define the type of Service you want, for example to expose a Service on an external IP address outside of your cluster. For more information, see the Kubernetes documentation on [Publishing Services (ServiceTypes)][service-types].

 The following ServiceTypes are available:

 * **ClusterIP**

-    ClusterIP creates an internal IP address for use within the AKS cluster. This Service is good for *internal-only applications* that support other workloads within the cluster. This is the default that's used if you don't explicitly specify a type for a Service.
+    ClusterIP creates an internal IP address for use within the AKS cluster. The ClusterIP Service is good for *internal-only applications* that support other workloads within the cluster. ClusterIP is the default used if you don't explicitly specify a type for a Service.

     ![Diagram showing ClusterIP traffic flow in an AKS cluster][aks-clusterip]
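As a companion sketch (again with hypothetical names), omitting `type` yields the default ClusterIP Service, which is reachable only from inside the cluster:

```yaml
# Hypothetical example: an internal-only Service. No type is specified,
# so Kubernetes defaults to ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: orders-api
spec:
  selector:
    app: orders-api
  ports:
    - port: 8080             # other workloads reach it at orders-api:8080
```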

@@ -114,12 +114,12 @@ For more information, see [Configure kubenet networking for an AKS cluster][aks-

 ### Azure CNI (advanced) networking

-With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node are then reserved up front. This approach can lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow, so it's important to plan properly. To avoid these planning challenges, it is possible to enable the feature [Azure CNI networking for dynamic allocation of IPs and enhanced subnet support][configure-azure-cni-dynamic-ip-allocation].
+With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node are then reserved up front. This approach can lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow, so it's important to plan properly. To avoid these planning challenges, you can enable [Azure CNI networking for dynamic allocation of IPs and enhanced subnet support][configure-azure-cni-dynamic-ip-allocation].

 > [!NOTE]
 > Due to Kubernetes limitations, the Resource Group name, the Virtual Network name, and the subnet name must be 63 characters or less.

-Unlike kubenet, traffic to endpoints in the same virtual network isn't NAT'd to the node's primary IP. The source address for traffic inside the virtual network is the pod IP. Traffic that's external to the virtual network still NATs to the node's primary IP.
+Unlike kubenet, traffic to endpoints in the same virtual network isn't translated (NAT) to the node's primary IP. The source address for traffic inside the virtual network is the pod IP. Traffic that's external to the virtual network still NATs to the node's primary IP.

 Nodes use the [Azure CNI][cni-networking] Kubernetes plugin.

@@ -129,19 +129,19 @@ For more information, see [Configure Azure CNI for an AKS cluster][aks-configure

 ### Azure CNI Overlay networking

-[Azure CNI Overlay][azure-cni-overlay] represents an evolution of Azure CNI, addressing scalability and planning challenges arising from the assignment of VNet IPs to pods. It achieves this by assigning private CIDR IPs to pods, which are separate from the VNet and can be reused across multiple clusters. Additionally, Azure CNI Overlay can scale beyond the 400 node limit enforced in Kubenet clusters. Azure CNI Overlay is the recommended option for most clusters.
+[Azure CNI Overlay][azure-cni-overlay] represents an evolution of Azure CNI, addressing scalability and planning challenges arising from the assignment of virtual network IPs to pods. Azure CNI Overlay assigns private CIDR IPs to pods. The private IPs are separate from the virtual network and can be reused across multiple clusters. Azure CNI Overlay can scale beyond the 400-node limit enforced in kubenet clusters. Azure CNI Overlay is the recommended option for most clusters.

 ### Azure CNI Powered by Cilium

-[Azure CNI Powered by Cilium][azure-cni-powered-by-cilium] uses [Cilium](https://cilium.io) to provide high-performance networking, observability, and network policy enforcement. It integrates natively with [Azure CNI Overlay][azure-cni-overlay] for scalable IP address management (IPAM)
+[Azure CNI Powered by Cilium][azure-cni-powered-by-cilium] uses [Cilium](https://cilium.io) to provide high-performance networking, observability, and network policy enforcement. It integrates natively with [Azure CNI Overlay][azure-cni-overlay] for scalable IP address management (IPAM).

-Additionally, Cilium enforces network policies by default, without requiring a separate network policy engine. Using eBPF programs and a more efficient API object structure, Azure CNI Powered by Cilium can scale beyond [Azure Network Policy Manager's limits of 250 nodes / 20K pod][use-network-policies].
+Additionally, Cilium enforces network policies by default, without requiring a separate network policy engine. Azure CNI Powered by Cilium can scale beyond [Azure Network Policy Manager's limits of 250 nodes / 20,000 pods][use-network-policies] by using eBPF programs and a more efficient API object structure.

 Azure CNI Powered by Cilium is the recommended option for clusters that require network policy enforcement.
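As an illustrative sketch of the policy enforcement described above (the labels and names are hypothetical, not from the article), here's a standard Kubernetes NetworkPolicy that Cilium can enforce without a separate policy engine:

```yaml
# Hypothetical example: allow ingress to the orders-api pods
# only from pods labeled app: store-front, on TCP 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-store-front-to-orders
spec:
  podSelector:
    matchLabels:
      app: orders-api        # the policy applies to these pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: store-front
      ports:
        - protocol: TCP
          port: 8080
```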

 ### Bring your own CNI

-It is possible to install in AKS a third party CNI using the [Bring your own CNI][use-byo-cni] feature.
+It's possible to install a non-Microsoft CNI in AKS using the [Bring your own CNI][use-byo-cni] feature.

 ### Compare network models

@@ -163,7 +163,7 @@ Both kubenet and Azure CNI provide network connectivity for your AKS clusters. H
 |---------------|-------------|
 | **Kubenet** | • IP address space conservation is a priority. </br> • Simple configuration. </br> • Fewer than 400 nodes per cluster. </br> • Kubernetes internal or external load balancers are sufficient for reaching pods from outside the cluster. </br> • Manually managing and maintaining user defined routes is acceptable. |
 | **Azure CNI** | • Full virtual network connectivity is required for pods. </br> • Advanced AKS features (such as virtual nodes) are needed. </br> • Sufficient IP address space is available. </br> • Pod to pod and pod to virtual machine connectivity is needed. </br> • External resources need to reach pods directly. </br> • AKS network policies are required. |
-| **Azure CNI Overlay** | • IP address shortage is a concern. </br> • Scaling up to 1000 nodes and 250 pods per node is sufficient. </br> • Additional hop for pod connectivity is acceptable. </br> • Simpler network configuration. </br> • AKS egress requirements can be met. |
+| **Azure CNI Overlay** | • IP address shortage is a concern. </br> • Scaling up to 1,000 nodes and 250 pods per node is sufficient. </br> • Extra hop for pod connectivity is acceptable. </br> • Simpler network configuration. </br> • AKS egress requirements can be met. |

 The following behavior differences exist between kubenet and Azure CNI:

@@ -217,7 +217,7 @@ To learn more about the AGIC add-on for AKS, see [What is Application Gateway In

 ### SSL/TLS termination

-SSL/TLS termination is another common feature of Ingress. On large web applications accessed via HTTPS, the Ingress resource handles the TLS termination rather than within the application itself. To provide automatic TLS certification generation and configuration, you can configure the Ingress resource to use providers such as "Let's Encrypt".
+SSL/TLS termination is another common feature of Ingress. On large web applications accessed via HTTPS, TLS termination is handled by the Ingress resource rather than within the application itself. To provide automatic TLS certificate generation and configuration, you can configure the Ingress resource to use providers such as "Let's Encrypt."

 For more information on configuring an NGINX ingress controller with Let's Encrypt, see [Ingress and TLS][aks-ingress-tls].
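As a hedged sketch of TLS termination at the Ingress, here's a minimal manifest. The host name, secret name, ingress class, and the cert-manager issuer annotation are hypothetical and assume cert-manager is installed with a ClusterIssuer named `letsencrypt`; none of these names come from the article.

```yaml
# Hypothetical example: terminate TLS at the ingress controller.
# Assumes cert-manager is installed and a ClusterIssuer named
# "letsencrypt" exists to obtain the certificate automatically.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: store-front
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - shop.example.com
      secretName: shop-example-com-tls   # cert-manager stores the issued cert here
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: store-front
                port:
                  number: 80
```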
