articles/aks/use-network-policies.md (+1 −1)

@@ -198,7 +198,7 @@ Create the AKS cluster and specify `--network-plugin azure`, and `--network-poli
  If you plan on adding Windows node pools to your cluster, include the `windows-admin-username` and `windows-admin-password` parameters that meet the [Windows Server password requirements][windows-server-password].

  > [!IMPORTANT]
- > At this time, using Calico network policies with Windows nodes is available on new clusters by using Kubernetes version 1.20 or later with Calico 3.17.2 and requires that you use Azure CNI networking. Windows nodes on AKS clusters with Calico enabled also have [Direct Server Return (DSR)][dsr] enabled by default.
+ > At this time, using Calico network policies with Windows nodes is available on new clusters by using Kubernetes version 1.20 or later with Calico 3.17.2 and requires that you use Azure CNI networking. Windows nodes on AKS clusters with Calico enabled also have Floating IP enabled by default.
  >
  > For clusters with only Linux node pools running Kubernetes 1.20 with earlier versions of Calico, the Calico version automatically upgrades to 3.17.2.
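For context, the cluster creation this hunk's header refers to can be sketched with the Azure CLI. This is a minimal sketch, not taken from the diff: the resource group and cluster names are hypothetical, and the password placeholder must be replaced with a value that meets the linked Windows Server requirements.

```azurecli
# Hypothetical names; --network-policy calico pairs with --network-plugin azure as required above.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 1 \
    --network-plugin azure \
    --network-policy calico \
    --windows-admin-username azureuser \
    --windows-admin-password <password-meeting-Windows-Server-requirements> \
    --generate-ssh-keys
```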
articles/aks/windows-best-practices.md (+2 −2)

@@ -35,7 +35,7 @@ You might want to containerize existing applications and run them using Windows
  AKS uses Windows Server 2019 and Windows Server 2022 as the host OS versions and only supports process isolation. AKS doesn't support container images built by other versions of Windows Server. For more information, see [Windows container version compatibility](/virtualization/windowscontainers/deploy-containers/version-compatibility).

- Windows Server 2022 is the default OS for Kubernetes version 1.25 and later. Windows Server 2019 will retire after Kubernetes version 1.32 reaches end of life (EOL). Windows Server 2022 will retire after Kubernetes version 1.34 reaches its end of life (EOL). For more information, see [AKS release notes][aks-release-notes]. To stay up to date on the latest Windows Server OS versions and learn more about our roadmap of what's planned for support on AKS, see our [AKS public roadmap](https://github.com/azure/aks/projects/1).
+ Windows Server 2022 is the default OS for Kubernetes version 1.25 and later. Windows Server 2019 will retire after Kubernetes version 1.32 reaches end of life. Windows Server 2022 will retire after Kubernetes version 1.34 reaches its end of life. For more information, see [AKS release notes][aks-release-notes]. To stay up to date on the latest Windows Server OS versions and learn more about our roadmap of what's planned for support on AKS, see our [AKS public roadmap](https://github.com/azure/aks/projects/1).

  ## Networking

@@ -63,7 +63,7 @@ To help you decide which networking mode to use, see [Choosing a network model][
  When managing traffic between pods, you should apply the principle of least privilege. The Network Policy feature in Kubernetes allows you to define and enforce ingress and egress traffic rules between the pods in your cluster. For more information, see [Secure traffic between pods using network policies in AKS][network-policies-aks].

- Windows pods on AKS clusters that use the Calico Network Policy enable [Floating IP][dsr] by default.
+ Windows pods on AKS clusters that use the Calico Network Policy enable Floating IP by default.
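To make the Windows Server version guidance in this hunk concrete, adding a Windows Server 2022 node pool can be sketched with the Azure CLI. All names here are hypothetical and not part of the diff.

```azurecli
# Hypothetical names; --os-sku selects the Windows Server version for the pool.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name npwin \
    --os-type Windows \
    --os-sku Windows2022 \
    --node-count 1
```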
articles/load-balancer/load-balancer-multivip-overview.md (+41 −80)

@@ -6,119 +6,80 @@ services: load-balancer
  author: mbender-ms
  ms.service: load-balancer
  ms.topic: conceptual
- ms.date: 12/04/2023
+ ms.date: 04/12/2024
  ms.author: mbender
  ms.custom: template-concept
  ---

  # Multiple frontends for Azure Load Balancer
- Azure Load Balancer allows you to load balance services on multiple ports, multiple IP addresses, or both. You can use a public or internal load balancer to load balance traffic across a set of services like virtual machine scale sets or virtual machines (VMs).
+ Azure Load Balancer allows you to load balance services on multiple frontend IPs. You can use a public or internal load balancer to load balance traffic across a set of services like virtual machine scale sets or virtual machines (VMs).
- This article describes the fundamentals of load balancing across multiple IP addresses using the same port and protocol. If you only intend to expose services on one IP address, you can find simplified instructions for [public](./quickstart-load-balancer-standard-public-portal.md) or [internal](./quickstart-load-balancer-standard-internal-portal.md) load balancer configurations. Adding multiple frontends is incremental to a single frontend configuration. Using the concepts in this article, you can expand a simplified configuration at any time.
+ This article describes the fundamentals of load balancing across multiple frontend IP addresses. If you only intend to expose services on one IP address, you can find simplified instructions for [public](./quickstart-load-balancer-standard-public-portal.md) or [internal](./quickstart-load-balancer-standard-internal-portal.md) load balancer configurations. Adding multiple frontends is incremental to a single frontend configuration. Using the concepts in this article, you can expand a simplified configuration at any time.
- When you define an Azure Load Balancer, a frontend and a backend pool configuration are connected with a load balancing rule. The health probe referenced by the load balancing rule is used to determine the health of a VM on a certain port and protocol. Based on the health probe results, new flows are sent to VMs in the backend pool. The frontend is defined using a three-tuple comprised of an IP address (public or internal), a transport protocol (UDP or TCP), and a port number from the load balancing rule. The backend pool is a collection of Virtual Machine IP configurations (part of the NIC resource) which reference the Load Balancer backend pool.
+ When you define an Azure Load Balancer, a frontend and a backend pool configuration are connected with a load balancing rule. The health probe referenced by the load balancing rule is used to determine the health of a VM on a certain port and protocol. Based on the health probe results, new flows are sent to VMs in the backend pool. The frontend is defined using a three-tuple comprised of a frontend IP address (public or internal), a protocol, and a port number from the load balancing rule. The backend pool is a collection of Virtual Machine IP configurations. Load balancing rules can deliver traffic to the same backend pool instance on different ports by varying the destination port on the load balancing rule.
- The following table contains some example frontend configurations:
+ You can use multiple frontends (and the associated load balancing rules) to load balance to the same backend port or a different backend port. If you want to load balance to the same backend port, you must enable [Azure Load Balancer Floating IP configuration](load-balancer-floating-ip.md) as part of the load balancing rules for each frontend.
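The added paragraph above states that load balancing multiple frontends to the same backend port requires Floating IP on each rule. As a hedged sketch (all resource names are hypothetical, not from this diff), each per-frontend rule would enable it like this:

```azurecli
# One rule per frontend; both deliver TCP/80 to the same backend pool and
# backend port, so Floating IP is enabled on each rule (hypothetical names).
az network lb rule create \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myRuleForFrontend1 \
    --protocol Tcp \
    --frontend-ip-name myFrontend1 \
    --frontend-port 80 \
    --backend-pool-name myBackendPool \
    --backend-port 80 \
    --floating-ip true
```

A second rule would repeat this with `--frontend-ip-name myFrontend2`; without `--floating-ip true`, the two rules could not share backend port 80 on the same backend instances.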
- | Frontend | IP address | protocol | port |
- | --- | --- | --- | --- |
- | 1 | 65.52.0.1 | TCP | 80 |
- | 2 | 65.52.0.1 | TCP | *8080* |
- | 3 | 65.52.0.1 | *UDP* | 80 |
- | 4 | *65.52.0.2* | TCP | 80 |
+ ## Add Load Balancer frontend
+ In this example, add another frontend to your Load Balancer.
- The table shows four different frontend configurations. Frontends #1, #2, and #3 use the same IP address, but the port or protocol is different for each frontend. Frontends #1 and #4 are an example of multiple frontends, where the same frontend protocol and port are reused across multiple frontend IPs.
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
- Azure Load Balancer provides flexibility in defining the load balancing rules. A load balancing rule declares how an address and port on the frontend is mapped to the destination address and port on the backend. Whether or not backend ports are reused across rules depends on the type of the rule. Each type of rule has specific requirements that can affect host configuration and probe design. There are two types of rules:
+ 2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
- 1. The default rule with no backend port reuse.
- 2. The Floating IP rule where backend ports are reused.
+ 3. Select **myLoadBalancer** or your load balancer.
- Azure Load Balancer allows you to mix both rule types on the same load balancer configuration. The load balancer can use them simultaneously for a given VM, or any combination, if you abide by the constraints of the rule. The rule type you choose depends on the requirements of your application and the complexity of supporting that configuration. You should evaluate which rule types are best for your scenario. We explore these scenarios further by starting with the default behavior.
+ 4. On the load balancer page, select **Frontend IP configuration** in **Settings**.
- ## Rule type #1: No backend port reuse
- :::image type="content" source="media/load-balancer-multivip-overview/load-balancer-multivip.png" alt-text="Diagram of Load Balancer traffic with no backend port reuse.":::
+ 5. Select **+ Add** in **Frontend IP configuration** to add a frontend.
- In this scenario, the frontends are configured as follows:
+ 6. Enter or select the following information in **Add frontend IP configuration**.
+    If **myLoadBalancer** is a _Public_ Load Balancer:
+    | Setting | Value |
+    | --- | --- |
+    | IP type | Select **IP address** or **IP prefix**. |
+    | Public IP address | Select an existing Public IP address or create a new one. |
+    If **myLoadBalancer** is an _Internal_ Load Balancer:
- The backend instance IP (BIP) is the IP address of the backend service in the backend pool. Each VM exposes the desired service on a unique port on the backend instance IP. This service is associated with the frontend IP (FIP) through a rule definition.
+ Next, you must associate the frontend IP configuration you created with an appropriate load balancing rule. See [Manage rules for Azure Load Balancer](manage-rules-how-to.md#load-balancing-rules) for more information.
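The portal steps added above can also be sketched with the Azure CLI. This is a hedged equivalent, not part of the diff; the resource group, load balancer, frontend, and public IP names are hypothetical.

```azurecli
# Add a frontend IP configuration that references an existing public IP (hypothetical names).
az network lb frontend-ip create \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myNewFrontend \
    --public-ip-address myPublicIP
```

For an internal load balancer, the frontend would instead reference a virtual network subnet and a private IP address rather than a public IP resource.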
- The complete mapping in Azure Load Balancer is now as follows:
+ ## Remove a frontend
- | Rule | Frontend IP address | protocol | port | Destination | port |
+ In this example, you remove a frontend from your Load Balancer.
- Each rule must produce a flow with a unique combination of destination IP address and destination port. Multiple load balancing rules can deliver flows to the same backend instance IP on different ports by varying the destination port of the flow.
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
- Health probes are always directed to the backend instance IP of a VM. You must ensure that your probe reflects the health of the VM.
+ 2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
- ## Rule type #2: backend port reuse by using Floating IP
+ 3. Select **myLoadBalancer** or your load balancer.
- Azure Load Balancer provides the flexibility to reuse the frontend port across multiple frontend configurations. Additionally, some application scenarios prefer or require the same port to be used by multiple application instances on a single VM in the backend pool. Common examples of port reuse include clustering for high availability, network virtual appliances, and exposing multiple TLS endpoints without re-encryption.
+ 4. On the load balancer page, select **Frontend IP configuration** in **Settings**.
- If you want to reuse the backend port across multiple rules, you must enable Floating IP in the load balancing rule definition.
+ 5. Select the delete icon next to the frontend you would like to remove.
- *Floating IP* is Azure's terminology for a portion of what is known as Direct Server Return (DSR). DSR consists of two parts: a flow topology and an IP address mapping scheme. At a platform level, Azure Load Balancer always operates in a DSR flow topology regardless of whether Floating IP is enabled or not. This means that the outbound part of a flow is always correctly rewritten to flow directly back to the origin.
+ 6. Note the associated resources that will also be deleted, then select the checkbox that says 'I have read and understood that this frontend IP configuration as well as the associated resources listed above will be deleted'.
- With the default rule type, Azure exposes a traditional load balancing IP address mapping scheme for ease of use. Enabling Floating IP changes the IP address mapping scheme to allow for more flexibility.
+ 7. Select **Delete**.
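The removal steps above likewise have a CLI sketch. This is a hedged illustration, not part of the diff; the names are hypothetical and carried over from the earlier add-frontend example.

```azurecli
# Delete the frontend IP configuration by name (hypothetical names).
az network lb frontend-ip delete \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myNewFrontend
```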
- :::image type="content" source="media/load-balancer-multivip-overview/load-balancer-multivip-dsr.png" alt-text="Diagram of load balancer traffic for multiple frontend IPs with floating IP.":::
- For this scenario, every VM in the backend pool has three network interfaces:
- * Backend IP: a Virtual NIC associated with the VM (IP configuration of Azure's NIC resource).
- * Frontend 1 (FIP1): a loopback interface within guest OS that is configured with IP address of FIP1.
- * Frontend 2 (FIP2): a loopback interface within guest OS that is configured with IP address of FIP2.
- Let's assume the same frontend configuration as in the previous scenario:
- | 1 | FIP1:80 | FIP1:80 (in VM1 and VM2) |
- | 2 | FIP2:80 | FIP2:80 (in VM1 and VM2) |
- The following table shows the complete mapping in the load balancer:
- | Rule | Frontend IP address | protocol | port | Destination | port |
- | --- | --- | --- | --- | --- | --- |
- | 1 | 65.52.0.1 | TCP | 80 | same as frontend (65.52.0.1) | same as frontend (80) |
- | 2 | 65.52.0.2 | TCP | 80 | same as frontend (65.52.0.2) | same as frontend (80) |
- The destination of the inbound flow is now the frontend IP address on the loopback interface in the VM. Each rule must produce a flow with a unique combination of destination IP address and destination port. Port reuse is possible on the same VM by varying the destination IP address to the frontend IP address of the flow. Your service is exposed to the load balancer by binding it to the frontend's IP address and port of the respective loopback interface.
- You notice the destination port doesn't change in the example. In floating IP scenarios, Azure Load Balancer also supports defining a load balancing rule to change the backend destination port and to make it different from the frontend destination port.
- The Floating IP rule type is the foundation of several load balancer configuration patterns. One example that is currently available is the [Configure one or more Always On availability group listeners](/azure/azure-sql/virtual-machines/windows/availability-group-listener-powershell-configure) configuration. Over time, we'll document more of these scenarios.
- > [!NOTE]
- > For more detailed information on the specific Guest OS configurations required to enable Floating IP, please refer to [Azure Load Balancer Floating IP configuration](load-balancer-floating-ip.md).
  ## Limitations

- * Multiple frontend configurations are only supported with IaaS VMs and virtual machine scale sets.
- * With the Floating IP rule, your application must use the primary IP configuration for outbound SNAT flows. If your application binds to the frontend IP address configured on the loopback interface in the guest OS, Azure's outbound SNAT won't rewrite the outbound flow, and the flow fails. Review [outbound scenarios](load-balancer-outbound-connections.md).
+ * With the Floating IP rule, your application must use the primary IP configuration of the network interface of your virtual machine for outbound flows. If your application binds to the frontend IP address configured on the loopback interface in the guest OS, Azure's outbound SNAT won't rewrite the outbound flow, and the flow fails. Review [outbound scenarios](load-balancer-outbound-connections.md).
  * Floating IP isn't currently supported on secondary IP configurations.
  * Public IP addresses have an effect on billing. For more information, see [IP Address pricing](https://azure.microsoft.com/pricing/details/ip-addresses/).
  * Subscription limits apply. For more information, see [Service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).