Commit 2b3fafd

Make summaries more relevant
1 parent eeab8a8 commit 2b3fafd

File tree

1 file changed: +9 -9 lines changed


articles/load-balancer/load-balancer-standard-diagnostics.md

Lines changed: 9 additions & 9 deletions
@@ -64,8 +64,6 @@ To view the metrics for your Standard Load Balancer resources:
 
 For API guidance for retrieving multi-dimensional metric definitions and values, see [Azure Monitoring REST API walkthrough](https://docs.microsoft.com/azure/monitoring-and-diagnostics/monitoring-rest-api-walkthrough#retrieve-metric-definitions-multi-dimensional-api). These metrics can be written to a storage account via the 'All Metrics' option only.
 
-### <a name = "DiagnosticScenarios"></a>Common diagnostic scenarios and recommended views
-
 ### Configure alerts for multi-dimensional metrics ###
 
 Azure Standard Load Balancer supports easily configurable alerts for multi-dimensional metrics. Configure custom thresholds for specific metrics to trigger alerts with varying levels of severity to empower a touchless resource monitoring experience.
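
The REST walkthrough linked in this hunk reduces to plain HTTPS calls against Azure Resource Manager. As a minimal Python sketch, assuming placeholder IDs and a pre-acquired Azure AD bearer token (none of these values come from the commit), listing the multi-dimensional metric definitions for a load balancer looks like this:

```python
import requests

# Hypothetical placeholders: substitute your own subscription, resource group,
# load balancer name, and a valid Azure AD bearer token.
RESOURCE = ("/subscriptions/<sub-id>/resourceGroups/<rg>"
            "/providers/Microsoft.Network/loadBalancers/<lb-name>")
TOKEN = "<aad-bearer-token>"

# List every metric definition the load balancer exposes, including the
# dimensions that make each metric multi-dimensional.
resp = requests.get(
    f"https://management.azure.com{RESOURCE}"
    "/providers/microsoft.insights/metricDefinitions",
    params={"api-version": "2018-01-01"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
for definition in resp.json()["value"]:
    dimensions = [d["value"] for d in definition.get("dimensions", [])]
    print(definition["name"]["value"], dimensions)
```
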
@@ -81,9 +79,11 @@ To configure alerts:
 >[!NOTE]
 >Alert condition configuration window will show time series for signal history. There is an option to filter this time series by dimensions such as Backend IP. This will filter the time series graph but **not** the alert itself. You cannot configure alerts for specific Backend IP addresses.
 
+### <a name = "DiagnosticScenarios"></a>Common diagnostic scenarios and recommended views
+
 #### Is the data path up and available for my load balancer VIP?
 <details>
-<summary>Click to expand!</summary>
+<summary>Click to learn how to answer with metrics!</summary>
 
 The VIP availability metric describes the health of the data path within the region to the compute host where your VMs are located. The metric is a reflection of the health of the Azure infrastructure. You can use the metric to:
 - Monitor the external availability of your service
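
A minimal sketch of pulling this metric through the same REST endpoint; `VipAvailability` is, as I recall, the Azure Monitor name behind the VIP availability metric (confirm it against the metricDefinitions call above), and the resource path and token remain placeholders:

```python
import requests

RESOURCE = ("/subscriptions/<sub-id>/resourceGroups/<rg>"
            "/providers/Microsoft.Network/loadBalancers/<lb-name>")

resp = requests.get(
    f"https://management.azure.com{RESOURCE}/providers/microsoft.insights/metrics",
    params={
        "api-version": "2018-01-01",
        "metricnames": "VipAvailability",
        "aggregation": "Average",  # Average, per the article's recommendation
        "interval": "PT5M",        # 5-minute grain over the default 1-hour window
    },
    headers={"Authorization": "Bearer <aad-bearer-token>"},
)
resp.raise_for_status()
for point in resp.json()["value"][0]["timeseries"][0]["data"]:
    print(point["timeStamp"], point.get("average"))
```
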
@@ -115,7 +115,7 @@ Use **Average** as the aggregation for most scenarios.
 
 #### Are the back-end instances for my VIP responding to probes?
 <details>
-<summary>Click to expand!</summary>
+<summary>Click to learn how to answer with metrics!</summary>
 The health probe status metric describes the health of your application deployment as configured by you when you configure the health probe of your load balancer. The load balancer uses the status of the health probe to determine where to send new flows. Health probes originate from an Azure infrastructure address and are visible within the guest OS of the VM.
 
 To get the health probe status for your Standard Load Balancer resources:
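
A hedged sketch of the same query for probe health, split per backend instance; `DipAvailability` is the assumed metric name and `BackendIPAddress` an assumed dimension name, both of which should be verified against the metricDefinitions output:

```python
import requests

RESOURCE = ("/subscriptions/<sub-id>/resourceGroups/<rg>"
            "/providers/Microsoft.Network/loadBalancers/<lb-name>")

resp = requests.get(
    f"https://management.azure.com{RESOURCE}/providers/microsoft.insights/metrics",
    params={
        "api-version": "2018-01-01",
        "metricnames": "DipAvailability",      # assumed name for health probe status
        "aggregation": "Average",
        "$filter": "BackendIPAddress eq '*'",  # one time series per backend instance
    },
    headers={"Authorization": "Bearer <aad-bearer-token>"},
)
resp.raise_for_status()
for series in resp.json()["value"][0]["timeseries"]:
    backend = {m["name"]["value"]: m["value"]
               for m in series.get("metadatavalues", [])}
    values = [p.get("average") for p in series["data"]]
    print(backend, values[-3:])  # most recent samples per backend
```
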
@@ -131,7 +131,7 @@ Use **Average** as the aggregation for most scenarios.
 
 #### How do I check my outbound connection statistics?
 <details>
-<summary>Click to expand!</summary>
+<summary>Click to learn how with metrics!</summary>
 The SNAT connections metric describes the volume of successful and failed connections for [outbound flows](https://aka.ms/lboutbound).
 
 A failed connections volume of greater than zero indicates SNAT port exhaustion. You must investigate further to determine what may be causing these failures. SNAT port exhaustion manifests as a failure to establish an [outbound flow](https://aka.ms/lboutbound). Review the article about outbound connections to understand the scenarios and mechanisms at work, and to learn how to mitigate and design to avoid SNAT port exhaustion.
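
A sketch along the same lines for the failed-connection check described above; `SnatConnectionCount` is the assumed metric name, and the `ConnectionState eq 'failed'` filter an assumed dimension/value pair, so confirm both before relying on them:

```python
import requests

RESOURCE = ("/subscriptions/<sub-id>/resourceGroups/<rg>"
            "/providers/Microsoft.Network/loadBalancers/<lb-name>")

resp = requests.get(
    f"https://management.azure.com{RESOURCE}/providers/microsoft.insights/metrics",
    params={
        "api-version": "2018-01-01",
        "metricnames": "SnatConnectionCount",
        "aggregation": "Total",                # sum of connections per interval
        # Assumed dimension and value names for isolating failed flows.
        "$filter": "ConnectionState eq 'failed'",
    },
    headers={"Authorization": "Bearer <aad-bearer-token>"},
)
resp.raise_for_status()
failed = sum(p.get("total") or 0
             for s in resp.json()["value"][0]["timeseries"] for p in s["data"])
print("Failed SNAT connections in window:", failed)  # > 0 suggests exhaustion
```
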
@@ -148,7 +148,7 @@ To get SNAT connection statistics:
 
 #### How do I check my SNAT port usage and allocation?
 <details>
-<summary>Click to expand!</summary>
+<summary>Click to learn how with metrics!</summary>
 The SNAT Usage metric indicates how many unique flows are established between an internet source and a backend VM or virtual machine scale set that is behind a load balancer and does not have a public IP address. By comparing this with the SNAT Allocation metric, you can determine if your service is experiencing or at risk of SNAT exhaustion and resulting outbound flow failure.
 
 If your metrics indicate risk of [outbound flow](https://aka.ms/lboutbound) failure, reference the article and take steps to mitigate this to ensure service health.
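
The usage-versus-allocation comparison can be done in one request; `UsedSnatPorts` and `AllocatedSnatPorts` are the assumed Azure Monitor names for SNAT usage and allocation, with the usual placeholder caveats:

```python
import requests

RESOURCE = ("/subscriptions/<sub-id>/resourceGroups/<rg>"
            "/providers/Microsoft.Network/loadBalancers/<lb-name>")

resp = requests.get(
    f"https://management.azure.com{RESOURCE}/providers/microsoft.insights/metrics",
    params={
        "api-version": "2018-01-01",
        "metricnames": "UsedSnatPorts,AllocatedSnatPorts",
        "aggregation": "Average",
    },
    headers={"Authorization": "Bearer <aad-bearer-token>"},
)
resp.raise_for_status()
metrics = {m["name"]["value"]: m["timeseries"][0]["data"]
           for m in resp.json()["value"]}
for used, allocated in zip(metrics["UsedSnatPorts"], metrics["AllocatedSnatPorts"]):
    if allocated.get("average"):
        ratio = (used.get("average") or 0) / allocated["average"]
        print(used["timeStamp"], f"{ratio:.0%} of allocated SNAT ports in use")
```
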
@@ -174,7 +174,7 @@ To view SNAT port usage and allocation:
 
 #### How do I check inbound/outbound connection attempts for my service?
 <details>
-<summary>Click to expand!</summary>
+<summary>Click to learn how with metrics!</summary>
 A SYN packets metric describes the volume of TCP SYN packets, which have arrived or were sent (for [outbound flows](https://aka.ms/lboutbound)) that are associated with a specific front end. You can use this metric to understand TCP connection attempts to your service.
 
 Use **Total** as the aggregation for most scenarios.
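
The same query pattern with the assumed `SYNCount` metric name and the **Total** aggregation the article recommends:

```python
import requests

RESOURCE = ("/subscriptions/<sub-id>/resourceGroups/<rg>"
            "/providers/Microsoft.Network/loadBalancers/<lb-name>")

resp = requests.get(
    f"https://management.azure.com{RESOURCE}/providers/microsoft.insights/metrics",
    params={
        "api-version": "2018-01-01",
        "metricnames": "SYNCount",
        "aggregation": "Total",
        "interval": "PT1M",
    },
    headers={"Authorization": "Bearer <aad-bearer-token>"},
)
resp.raise_for_status()
for point in resp.json()["value"][0]["timeseries"][0]["data"]:
    print(point["timeStamp"], point.get("total"))  # SYN packets per minute
```
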
@@ -187,7 +187,7 @@ Use **Total** as the aggregation for most scenarios.
 
 #### How do I check my network bandwidth consumption?
 <details>
-<summary>Click to expand!</summary>
+<summary>Click to learn how with metrics!</summary>
 The bytes and packet counters metric describes the volume of bytes and packets that are sent or received by your service on a per-front-end basis.
 
 Use **Total** as the aggregation for most scenarios.
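
A sketch for the per-front-end counters; `ByteCount`/`PacketCount` are the assumed metric names and `FrontendIPAddress` an assumed dimension name for splitting the totals by front end:

```python
import requests

RESOURCE = ("/subscriptions/<sub-id>/resourceGroups/<rg>"
            "/providers/Microsoft.Network/loadBalancers/<lb-name>")

resp = requests.get(
    f"https://management.azure.com{RESOURCE}/providers/microsoft.insights/metrics",
    params={
        "api-version": "2018-01-01",
        "metricnames": "ByteCount,PacketCount",
        "aggregation": "Total",
        "$filter": "FrontendIPAddress eq '*'",  # assumed dimension name
    },
    headers={"Authorization": "Bearer <aad-bearer-token>"},
)
resp.raise_for_status()
for metric in resp.json()["value"]:
    total = sum(p.get("total") or 0
                for s in metric["timeseries"] for p in s["data"])
    print(metric["name"]["value"], total)
```
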
@@ -205,7 +205,7 @@ To get byte or packet count statistics:
 
 #### <a name = "vipavailabilityandhealthprobes"></a>How do I diagnose my load balancer deployment?
 <details>
-<summary>Click to expand!</summary>
+<summary>Click to learn how with metrics!</summary>
 By using a combination of the VIP availability and health probe metrics on a single chart you can identify where to look for the problem and resolve the problem. You can gain assurance that Azure is working correctly and use this knowledge to conclusively determine that the configuration or application is the root cause.
 
 You can use health probe metrics to understand how Azure views the health of your deployment as per the configuration you have provided. Looking at health probes is always a great first step in monitoring or determining a cause.
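
The single-chart comparison translates to one request for both metrics (same assumed metric names and placeholder values as in the sketches above):

```python
import requests

RESOURCE = ("/subscriptions/<sub-id>/resourceGroups/<rg>"
            "/providers/Microsoft.Network/loadBalancers/<lb-name>")

resp = requests.get(
    f"https://management.azure.com{RESOURCE}/providers/microsoft.insights/metrics",
    params={
        "api-version": "2018-01-01",
        "metricnames": "VipAvailability,DipAvailability",
        "aggregation": "Average",
        "interval": "PT5M",
    },
    headers={"Authorization": "Bearer <aad-bearer-token>"},
)
resp.raise_for_status()
# Low DipAvailability alongside healthy VipAvailability points at the
# application or probe configuration; both low points at the data path.
for metric in resp.json()["value"]:
    series = metric["timeseries"][0]["data"]
    latest = next((p["average"] for p in reversed(series)
                   if p.get("average") is not None), None)
    print(metric["name"]["value"], "latest average:", latest)
```
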
