articles/load-balancer/load-balancer-floating-ip.md (6 additions, 1 deletion)
@@ -28,14 +28,19 @@ If you want to reuse the backend port across multiple rules, you must enable Flo
When Floating IP is enabled, Azure changes the IP address mapping to the frontend IP address of the Load Balancer instead of the backend instance's IP. Without Floating IP, Azure exposes the VM instances' IP addresses. This change allows for more flexibility. Learn more [here](load-balancer-multivip-overview.md).
In the diagrams below, you see how IP address mapping works before and after enabling Floating IP:
:::image type="content" source="media/load-balancer-floating-ip/load-balancer-floating-ip-before.png" alt-text="This diagram shows network traffic through a load balancer before Floating IP is enabled.":::
:::image type="content" source="media/load-balancer-floating-ip/load-balancer-floating-ip-after.png" alt-text="This diagram shows network traffic through a load balancer after Floating IP is enabled.":::
Floating IP can be configured on a Load Balancer rule via the Azure portal, REST API, CLI, PowerShell, or other client. In addition to the rule configuration, you must also configure your virtual machine's Guest OS in order to use Floating IP.
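For example, here's a minimal Azure CLI sketch that enables Floating IP on an existing load-balancing rule. The resource group, load balancer, and rule names are hypothetical placeholders:

```azurecli
# Enable Floating IP on an existing load-balancing rule (hypothetical names).
az network lb rule update \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --name MyHTTPRule \
  --floating-ip true
```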
## Floating IP Guest OS configuration
For Floating IP to function, the Guest OS of the virtual machine must be configured to receive all traffic bound for the frontend IP and port of the load balancer. Accomplishing this requires:
* adding a loopback network interface
* configuring the loopback with the frontend IP address of the load balancer
* ensuring the system can send/receive packets on interfaces that don't have the IP address assigned to that interface (on Windows, this requires setting interfaces to use the "weak host" model; on Linux, this model is normally used by default)
The host firewall also needs to allow traffic on the frontend IP's port, as in the sketch below.
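As a minimal sketch of these steps on Linux, assuming a hypothetical frontend IP of 203.0.113.10 and a rule on TCP port 80:

```bash
# Bind the load balancer's frontend IP to the loopback interface so the
# Guest OS accepts traffic addressed to it (hypothetical IP and port).
sudo ip addr add 203.0.113.10/32 dev lo

# Open the host firewall for the rule's frontend port.
sudo iptables -A INPUT -d 203.0.113.10 -p tcp --dport 80 -j ACCEPT
```

On Windows, the equivalent steps are assigning the frontend IP to a loopback adapter and switching the interfaces to the "weak host" model, for example with `netsh interface ipv4 set interface`.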
articles/load-balancer/load-balancer-ha-ports-overview.md (1 addition, 1 deletion)
@@ -40,7 +40,7 @@ For NVA HA scenarios, HA ports offer the following advantages:
The following diagram presents a hub-and-spoke virtual network deployment. The spokes force-tunnel their traffic to the hub virtual network and through the NVA before leaving the trusted space. The NVAs are behind an internal Standard Load Balancer with an HA ports configuration. All traffic can be processed and forwarded accordingly. When configured as shown in the following diagram, an HA ports load-balancing rule additionally provides flow symmetry for ingress and egress traffic.
:::image type="content" source="./media/load-balancer-ha-ports-overview/nvahathmb.png" alt-text="Diagram of hub-and-spoke virtual network, with NVAs deployed in HA mode." lightbox="media/load-balancer-ha-ports-overview/nvaha.png":::
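As a hedged sketch, an HA ports configuration is a load-balancing rule that uses protocol `All` with frontend and backend port `0` on an internal Standard Load Balancer. The names below are hypothetical:

```azurecli
# Create an HA ports rule: protocol All, frontend and backend port 0
# (hypothetical names; assumes an existing internal Standard Load Balancer).
az network lb rule create \
  --resource-group MyResourceGroup \
  --lb-name MyInternalLoadBalancer \
  --name MyHAPortsRule \
  --protocol All \
  --frontend-port 0 \
  --backend-port 0
```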
>[!NOTE]
> If you're using NVAs, confirm with their providers how best to use HA ports and which scenarios are supported.
articles/load-balancer/load-balancer-tcp-reset.md (8 additions, 8 deletions)
@@ -16,29 +16,29 @@ ms.author: mbender
# Load Balancer TCP Reset and Idle Timeout
You can use [Standard Load Balancer](./load-balancer-overview.md) to create more predictable application behavior for your scenarios by enabling TCP Reset on Idle for a given rule. Load Balancer's default behavior is to silently drop flows when the idle timeout of a flow is reached. Enabling TCP reset causes Load Balancer to send bidirectional TCP Resets (TCP RST packets) on idle timeout, informing your application endpoints that the connection has timed out and is no longer usable. Endpoints can immediately establish a new connection if needed.
You can change this default behavior and enable sending TCP Resets on idle timeout for inbound NAT rules, load-balancing rules, and [outbound rules](./load-balancer-outbound-connections.md#outboundrules). When enabled per rule, Load Balancer sends bidirectional TCP Resets (TCP RST packets) to both client and server endpoints at the time of idle timeout for all matching flows.
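For example, here's a minimal Azure CLI sketch that turns on TCP reset for an existing load-balancing rule, with hypothetical names:

```azurecli
# Send bidirectional TCP RST packets to both endpoints on idle timeout.
az network lb rule update \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --name MyRule \
  --enable-tcp-reset true
```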
Endpoints receiving TCP RST packets close the corresponding socket immediately. This immediately notifies the endpoints that the connection has been released and that any future communication on the same TCP connection will fail. Applications can purge connections when the socket closes and reestablish connections as needed, without waiting for the TCP connection to eventually time out.
For many scenarios, TCP reset may reduce the need to send TCP (or application layer) keepalives to refresh the idle timeout of a flow.
If your idle durations exceed configuration limits, or your application shows undesirable behavior with TCP Resets enabled, you may still need to use TCP keepalives, or application layer keepalives, to monitor the liveness of the TCP connections. Further, keepalives can remain useful when the connection is proxied somewhere in the path, particularly application layer keepalives.
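As one illustrative sketch on Linux, an application that enables `SO_KEEPALIVE` on its sockets can be paired with kernel settings that send probes before the load balancer's 4-minute default idle timeout expires. The values here are assumptions, not recommendations:

```bash
# Start keepalive probes after 3 minutes of idle time, then every 30 seconds,
# so an idle-but-alive flow never reaches a 4-minute idle timeout.
sudo sysctl -w net.ipv4.tcp_keepalive_time=180
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=30
```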
By carefully examining the entire end-to-end scenario, you can determine whether you benefit from enabling TCP Resets and from adjusting the idle timeout, and whether more steps are required to ensure the desired application behavior.
## Configurable TCP idle timeout
Azure Load Balancer has the following idle timeout range:
- 4 minutes to 100 minutes for Outbound Rules
- 4 minutes to 30 minutes for Load Balancer rules and Inbound NAT rules
By default, it's set to 4 minutes. If a period of inactivity is longer than the timeout value, there's no guarantee that the TCP or HTTP session is maintained between the client and your cloud service.
When the connection is closed, your client application may receive the following error message: "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server."
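To lengthen the window instead, here's a minimal Azure CLI sketch that raises the idle timeout of an existing rule to 15 minutes (hypothetical names):

```azurecli
# Raise the idle timeout from the 4-minute default to 15 minutes.
az network lb rule update \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --name MyRule \
  --idle-timeout 15
```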
@@ -52,8 +52,8 @@ TCP keep-alive works for scenarios where battery life isn't a constraint. It isn
## Limitations
- TCP reset is only sent during a TCP connection in the ESTABLISHED state.
- TCP idle timeout doesn't affect load-balancing rules on UDP protocol.
- TCP reset isn't supported for ILB HA ports when a network virtual appliance is in the path. A workaround could be to use an outbound rule with TCP reset from the NVA.