
Commit dd98072

Updated image text.
1 parent 08a3201 commit dd98072

File tree

1 file changed: 0 additions, 1 deletion

1 file changed

+0
-1
lines changed

articles/aks/azure-cni-overlay.md

Lines changed: 0 additions & 1 deletion
@@ -23,7 +23,6 @@ In overlay networking, only the Kubernetes cluster nodes are assigned IPs from a
 A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an overlay network for direct communication between pods. There is no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods. This provides connectivity performance between pods on par with VMs in a VNet.

 :::image type="content" source="media/azure-cni-overlay/AzureCNI-Overlay.png" alt-text="A diagram showing two nodes with three pods each running in an overlay network. Pod traffic to endpoints outside the cluster is routed via NAT.":::
-![Azure CNI Overlay network model with an AKS cluster](media/azure-cni-overlay/AzureCNI-Overlay.png)

 Communication with endpoints outside the cluster, such as on-premises and peered VNets, happens using the node IP through Network Address Translation. Azure CNI translates the source IP (overlay IP of the pod) of the traffic to the primary IP address of the VM, which enables the Azure Networking stack to route the traffic to the destination. Endpoints outside the cluster can't connect to a pod directly. You will have to publish the pod's application as a Kubernetes Load Balancer service to make it reachable on the VNet.
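As background for the last paragraph in the diff (not part of this commit): publishing a pod's application as a LoadBalancer service can be done declaratively with a manifest or programmatically. Below is a minimal sketch using the official Kubernetes Python client; the service name, selector labels, ports, and namespace are hypothetical placeholders, and the internal-load-balancer annotation is included only to illustrate exposing the service on the VNet rather than publicly.

```python
# Minimal sketch, assuming the official `kubernetes` Python client is installed
# and kubeconfig points at the AKS cluster. Names/labels/ports are placeholders.
from kubernetes import client, config


def expose_app() -> None:
    config.load_kube_config()  # use config.load_incluster_config() when running inside a pod

    service = client.V1Service(
        metadata=client.V1ObjectMeta(
            name="demo-app",
            # Hedged assumption: this annotation requests an internal (VNet) frontend on AKS.
            annotations={"service.beta.kubernetes.io/azure-load-balancer-internal": "true"},
        ),
        spec=client.V1ServiceSpec(
            type="LoadBalancer",
            selector={"app": "demo-app"},  # must match the labels on the target pods
            ports=[client.V1ServicePort(port=80, target_port=8080)],
        ),
    )

    # Create the service; AKS then provisions an Azure load balancer frontend for it.
    client.CoreV1Api().create_namespaced_service(namespace="default", body=service)


if __name__ == "__main__":
    expose_app()
```

Once the service has an external (or internal) IP assigned, clients on the VNet reach the pods through that frontend while pod-originated traffic to endpoints outside the cluster is still source-NATed to the node IP, as described above.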
