For API guidance for retrieving multi-dimensional metric definitions and values, see [Azure Monitoring REST API walkthrough](https://docs.microsoft.com/azure/monitoring-and-diagnostics/monitoring-rest-api-walkthrough#retrieve-metric-definitions-multi-dimensional-api). These metrics can be written to a storage account via the 'All Metrics' option only.
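As a sketch of the REST pattern the walkthrough describes, the following builds a metrics request URL for a load balancer resource. The subscription, resource group, load balancer name, and `api-version` value are hypothetical placeholders, and the metric names are assumptions; consult the walkthrough for current values.

```python
# Sketch: build an Azure Monitor metrics request URL for a Standard Load Balancer.
# The resource identifiers and api-version below are illustrative placeholders.
from urllib.parse import urlencode

def build_metrics_url(subscription_id, resource_group, lb_name,
                      metric_names, api_version="2018-01-01"):
    resource_uri = (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Network/loadBalancers/{lb_name}"
    )
    query = urlencode({
        "api-version": api_version,
        "metricnames": ",".join(metric_names),
    })
    return (f"https://management.azure.com{resource_uri}"
            f"/providers/microsoft.insights/metrics?{query}")

url = build_metrics_url("my-subscription-id", "my-rg", "my-lb",
                        ["VipAvailability", "DipAvailability"])
```

The returned URL is what you would issue a GET against with a bearer token, as shown in the walkthrough.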
### Configure alerts for multi-dimensional metrics
Azure Standard Load Balancer supports easily configurable alerts for multi-dimensional metrics. Configure custom thresholds for specific metrics to trigger alerts with varying levels of severity, enabling a touchless resource monitoring experience.
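As a minimal sketch of the threshold logic an alert rule applies (not the portal's actual implementation), the following maps a metric value to a hypothetical severity. The threshold values and severity labels are illustrative choices, not Azure defaults.

```python
# Sketch: map a metric value to an alert severity using custom thresholds.
# Thresholds and severity labels are illustrative, not Azure-defined values.
def evaluate_alert(metric_value, warn_threshold, critical_threshold):
    """Return a severity string for a 'greater than' alert condition."""
    if metric_value > critical_threshold:
        return "Sev1"   # critical
    if metric_value > warn_threshold:
        return "Sev3"   # warning
    return None         # no alert fires

# Example: alert on a failed SNAT connection count crossing both thresholds.
severity = evaluate_alert(metric_value=120, warn_threshold=50, critical_threshold=100)
```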
>[!NOTE]
>The alert condition configuration window shows a time series for the signal history. You can filter this time series by dimensions such as Backend IP. This filters the time series graph but **not** the alert itself. You can't configure alerts for specific backend IP addresses.
### <a name="DiagnosticScenarios"></a>Common diagnostic scenarios and recommended views
#### Is the data path up and available for my load balancer VIP?
<details>
<summary>Click to learn how to answer with metrics!</summary>
The VIP availability metric describes the health of the data path within the region to the compute host where your VMs are located. The metric is a reflection of the health of the Azure infrastructure. You can use the metric to:
- Monitor the external availability of your service
#### Are the back-end instances for my VIP responding to probes?
<details>
<summary>Click to learn how to answer with metrics!</summary>
The health probe status metric describes the health of your application deployment, as determined by the health probe you configure for your load balancer. The load balancer uses the status of the health probe to determine where to send new flows. Health probes originate from an Azure infrastructure address and are visible within the guest OS of the VM.
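To illustrate how an averaged probe status value can be read, here is a sketch assuming each backend instance reports 100 when probes succeed and 0 when they fail, so the average reflects the fraction of healthy instances; that per-instance convention is an assumption for illustration.

```python
# Sketch: interpret an averaged health probe status value.
# Assumes each backend instance reports 100 (probes succeeding) or 0
# (probes failing), so the average is the fraction of healthy instances.
def healthy_fraction(per_instance_status):
    """per_instance_status: list of 0/100 values, one per backend instance."""
    if not per_instance_status:
        return 0.0
    return sum(per_instance_status) / (100 * len(per_instance_status))

# Three of four instances answering probes -> 0.75 of the pool is healthy.
fraction = healthy_fraction([100, 100, 100, 0])
```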
To get the health probe status for your Standard Load Balancer resources:
#### How do I check my outbound connection statistics?
<details>
<summary>Click to learn how with metrics!</summary>
The SNAT connections metric describes the volume of successful and failed connections for [outbound flows](https://aka.ms/lboutbound).
A failed connections volume greater than zero indicates SNAT port exhaustion. You must investigate further to determine what may be causing these failures. SNAT port exhaustion manifests as a failure to establish an [outbound flow](https://aka.ms/lboutbound). Review the article about outbound connections to understand the scenarios and mechanisms at work, and to learn how to mitigate and design to avoid SNAT port exhaustion.
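The check described above can be sketched as a simple scan of the failed-connection time series; the sample values and function name here are illustrative, not from any Azure SDK.

```python
# Sketch: flag SNAT port exhaustion risk when any sample of the failed
# outbound connection count is greater than zero.
def has_snat_failures(failed_connection_samples):
    return any(count > 0 for count in failed_connection_samples)

# A nonzero failed count at any point warrants further investigation.
needs_investigation = has_snat_failures([0, 0, 3, 0])
```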
#### How do I check my SNAT port usage and allocation?
<details>
<summary>Click to learn how with metrics!</summary>
The SNAT usage metric indicates how many unique flows are established between an internet source and a backend VM or virtual machine scale set that is behind a load balancer and doesn't have a public IP address. By comparing this with the SNAT allocation metric, you can determine whether your service is experiencing or at risk of SNAT exhaustion and the resulting outbound flow failure.
If your metrics indicate a risk of [outbound flow](https://aka.ms/lboutbound) failure, see the referenced article and take steps to mitigate the risk to ensure service health.
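Comparing the two metrics can be sketched as a simple ratio check. The 90% cutoff below is an arbitrary illustrative threshold, not an Azure-defined limit.

```python
# Sketch: compare SNAT port usage against allocation to estimate exhaustion risk.
# The 0.9 threshold is an illustrative cutoff, not an Azure-defined value.
def snat_exhaustion_risk(used_ports, allocated_ports, threshold=0.9):
    if allocated_ports == 0:
        return True  # no ports allocated: any outbound need would fail
    return used_ports / allocated_ports >= threshold

# 980 of 1024 allocated ports in use -> above the illustrative 90% cutoff.
at_risk = snat_exhaustion_risk(used_ports=980, allocated_ports=1024)
```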
#### How do I check inbound/outbound connection attempts for my service?
<details>
<summary>Click to learn how with metrics!</summary>
The SYN packets metric describes the volume of TCP SYN packets that have arrived, or were sent (for [outbound flows](https://aka.ms/lboutbound)), and that are associated with a specific front end. You can use this metric to understand TCP connection attempts to your service.
Use **Total** as the aggregation for most scenarios.
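Summing the per-interval SYN counts over a window is what the **Total** aggregation does; a sketch of that reading, with illustrative sample values:

```python
# Sketch: apply a 'Total'-style aggregation to per-minute SYN packet counts
# to get the TCP connection attempts over a window.
def total_syn_count(per_minute_counts):
    return sum(per_minute_counts)

# Four one-minute samples summed over the window.
attempts = total_syn_count([120, 95, 300, 210])
```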
#### How do I check my network bandwidth consumption?
<details>
<summary>Click to learn how with metrics!</summary>
The bytes and packet counters metric describes the volume of bytes and packets that are sent or received by your service on a per-front-end basis.
Use **Total** as the aggregation for most scenarios.
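As a worked example of converting a totaled byte count into bandwidth, the sketch below assumes a 60-second metric granularity; adjust the interval to match your chart.

```python
# Sketch: convert a per-interval total byte count into average megabits/s.
# Assumes a 60-second metric granularity; adjust interval_seconds to match.
def average_mbps(total_bytes, interval_seconds=60):
    return (total_bytes * 8) / (interval_seconds * 1_000_000)

# 750 MB transferred in one minute -> 100 Mbps average.
mbps = average_mbps(750_000_000)
```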
#### <a name="vipavailabilityandhealthprobes"></a>How do I diagnose my load balancer deployment?
<details>
<summary>Click to learn how with metrics!</summary>
By using a combination of the VIP availability and health probe metrics on a single chart, you can identify where to look for the problem and resolve it. You can confirm that Azure is working correctly and use this knowledge to conclusively determine whether the configuration or the application is the root cause.
You can use health probe metrics to understand how Azure views the health of your deployment, according to the configuration you've provided. Looking at health probes is always a great first step in monitoring or determining a cause.
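The decision logic this section describes can be sketched as follows; the 99% health threshold and the verdict strings are illustrative choices, not Azure-defined values.

```python
# Sketch: combine data path (VIP) availability and health probe (DIP) status
# averages to localize a problem, following the reasoning in this section.
# The 99.0 threshold and verdict strings are illustrative.
def diagnose(vip_availability_avg, dip_availability_avg, healthy=99.0):
    vip_ok = vip_availability_avg >= healthy
    dip_ok = dip_availability_avg >= healthy
    if vip_ok and dip_ok:
        return "healthy"
    if vip_ok and not dip_ok:
        return "check application: probes failing while the data path is up"
    if not vip_ok and dip_ok:
        return "check Azure infrastructure: data path degraded, backends healthy"
    return "both degraded: investigate infrastructure first, then the application"

verdict = diagnose(vip_availability_avg=100.0, dip_availability_avg=50.0)
```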