articles/aks/concepts-network.md (9 additions, 3 deletions)
@@ -159,6 +159,12 @@ Both kubenet and Azure CNI provide network connectivity for your AKS clusters. H
* Pods get full virtual network connectivity and can be directly reached via their private IP address from connected networks.
* Requires more IP address space.

+| Network model | When to use |
+|---------------|-------------|
+| **Kubenet** | • IP address space conservation is a priority. <br/> • Simple configuration. <br/> • Fewer than 400 nodes per cluster. <br/> • Kubernetes internal or external load balancers are sufficient for reaching pods from outside the cluster. <br/> • Manually managing and maintaining user-defined routes is acceptable. |
+| **Azure CNI** | • Full virtual network connectivity is required for pods. <br/> • Advanced AKS features (such as virtual nodes) are needed. <br/> • Sufficient IP address space is available. <br/> • Pod-to-pod and pod-to-VM connectivity is needed. <br/> • External resources need to reach pods directly. <br/> • AKS network policies are required. |
+| **Azure CNI Overlay** | • IP address shortage is a concern. <br/> • Scaling to 1,000 nodes and 250 pods per node is sufficient. <br/> • An additional hop for pod connectivity is acceptable. <br/> • A simpler network configuration is preferred. <br/> • AKS egress requirements can be met. |
+
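In practice, the network model in the table above is selected when the cluster is created. The following is a minimal Azure CLI sketch, not part of this diff; the resource group and cluster names are placeholders, and real clusters need more parameters (subnet IDs, CIDR ranges, and so on):

```bash
# Kubenet: pods draw IPs from a pod CIDR, conserving virtual network address space.
az aks create --resource-group myResourceGroup --name myAKSCluster \
    --network-plugin kubenet

# Azure CNI: pods get routable IPs directly from the cluster subnet.
az aks create --resource-group myResourceGroup --name myAKSCluster \
    --network-plugin azure

# Azure CNI Overlay: Azure CNI with pod IPs assigned from a private overlay CIDR.
az aks create --resource-group myResourceGroup --name myAKSCluster \
    --network-plugin azure --network-plugin-mode overlay
```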
The following behavior differences exist between kubenet and Azure CNI:
@@ -168,10 +174,10 @@ The following behavior differences exist between kubenet and Azure CNI:
| Pod-VM connectivity; VM in the same virtual network | Works when initiated by pod | Works both ways | Works when initiated by pod | Works when initiated by pod |
| Pod-VM connectivity; VM in peered virtual network | Works when initiated by pod | Works both ways | Works when initiated by pod | Works when initiated by pod |
| On-premises access using VPN or ExpressRoute | Works when initiated by pod | Works both ways | Works when initiated by pod | Works when initiated by pod |
-| Expose Kubernetes services using a load balancer service, App Gateway, or ingress controller | Supported | Supported | [No Application Gateway Ingress Controller (AGIC) support][azure-cni-overlay-limitations] | Same limitations when using Overlay mode |
+| Access to resources secured by service endpoints | Supported | Supported | Supported | Supported |
+| Expose Kubernetes services using a load balancer service, App Gateway, or ingress controller | Supported | Supported | Supported | Same limitations when using Overlay mode |
| Support for Windows node pools | Not supported | Supported | Supported | [Available only for Linux and not for Windows.][azure-cni-powered-by-cilium-limitations] |
-
-For both kubenet and Azure CNI plugins, the DNS service is provided by CoreDNS, a deployment running in AKS with its own autoscaler. For more information on CoreDNS on Kubernetes, see [Customizing DNS Service](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/). CoreDNS by default is configured to forward unknown domains to the DNS functionality of the Azure Virtual Network where the AKS cluster is deployed. Hence, Azure DNS and Private Zones will work for pods running in AKS.
+| Default Azure DNS and Private Zones | Supported | Supported | Supported | Supported |
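The new row restates the removed CoreDNS paragraph in table form: CoreDNS forwards unknown domains to the virtual network's Azure DNS, so private zone records resolve from pods. A quick spot-check from inside the cluster, sketched with a throwaway pod (the test record name is a placeholder; the image comes from the upstream Kubernetes DNS debugging guide):

```bash
# Start a short-lived pod that has DNS tooling installed.
kubectl run dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 \
    --restart=Never -- sleep infinity

# Cluster-internal names resolve through CoreDNS.
kubectl exec -it dnsutils -- nslookup kubernetes.default

# Unknown domains are forwarded to Azure DNS, so a record in a private zone
# linked to the cluster's virtual network should resolve as well.
kubectl exec -it dnsutils -- nslookup myrecord.contoso.internal

# Clean up.
kubectl delete pod dnsutils
```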

For more information on Azure CNI and kubenet, and for help determining which option is best for you, see [Configure Azure CNI networking in AKS][azure-cni-aks] and [Use kubenet networking in AKS][aks-configure-kubenet-networking].
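For the "Expose Kubernetes services using a load balancer service" row, which this diff now marks as Supported for Azure CNI Overlay as well, a minimal sketch (the deployment name is arbitrary; the sample image is from the AKS documentation):

```bash
# Create a sample deployment and publish it through an Azure load balancer;
# AKS provisions a public IP for the service's external endpoint.
kubectl create deployment hello --image=mcr.microsoft.com/azuredocs/aks-helloworld:v1
kubectl expose deployment hello --type=LoadBalancer --port=80 --target-port=80

# Wait until EXTERNAL-IP changes from <pending> to a real address.
kubectl get service hello --watch
```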