- | CPU percentage | CPU utilization across all nodes for the data warehouse | Maximum |
- | Data IO percentage | IO utilization across all nodes for the data warehouse | Maximum |
- | Memory percentage | Memory utilization (SQL Server) across all nodes for the data warehouse | Maximum |
- | Successful Connections | Number of successful connections to the data warehouse | Total |
- | Failed Connections | Number of failed connections to the data warehouse | Total |
- | Blocked by Firewall | Number of logins to the data warehouse that were blocked | Total |
- | DWU limit | Service level objective of the data warehouse | Maximum |
- | DWU percentage | Maximum between CPU percentage and Data IO percentage | Maximum |
- | DWU used | DWU limit * DWU percentage | Maximum |
- | Cache hit percentage | (cache hits / cache miss) * 100, where cache hits is the sum of all columnstore segment hits in the local SSD cache and cache miss is the sum of all columnstore segment misses in the local SSD cache across all nodes | Maximum |
- | Cache used percentage | (cache used / cache capacity) * 100, where cache used is the sum of all bytes in the local SSD cache across all nodes and cache capacity is the sum of the storage capacity of the local SSD cache across all nodes | Maximum |
- | Local tempdb percentage | Local tempdb utilization across all compute nodes; values are emitted every five minutes | Maximum |
-
- > Things to consider when viewing metrics and setting alerts:
- >
- > - Failed and successful connections are reported for a particular data warehouse, not for the logical server
- > - Memory percentage reflects utilization even if the data warehouse is in an idle state; it does not reflect active workload memory consumption. Use and track this metric along with others (tempdb, Gen2 cache) to make a holistic decision on whether scaling for additional cache capacity will increase workload performance to meet your requirements.
+ | CPU percentage | CPU utilization across all nodes for the data warehouse | Avg, Min, Max |
+ | Data IO percentage | IO utilization across all nodes for the data warehouse | Avg, Min, Max |
+ | Memory percentage | Memory utilization (SQL Server) across all nodes for the data warehouse | Avg, Min, Max |
+ | Active Queries | Number of active queries executing on the system | Sum |
+ | Queued Queries | Number of queued queries waiting to start executing | Sum |
+ | Successful Connections | Number of successful connections to the data warehouse | Sum, Count |
+ | Failed Connections | Number of failed connections to the data warehouse | Sum, Count |
+ | Blocked by Firewall | Number of logins to the data warehouse that were blocked | Sum, Count |
+ | DWU limit | Service level objective of the data warehouse | Avg, Min, Max |
+ | DWU percentage | Maximum between CPU percentage and Data IO percentage | Avg, Min, Max |
+ | DWU used | DWU limit * DWU percentage | Avg, Min, Max |
+ | Cache hit percentage | (cache hits / cache miss) * 100, where cache hits is the sum of all columnstore segment hits in the local SSD cache and cache miss is the sum of all columnstore segment misses in the local SSD cache across all nodes | Avg, Min, Max |
+ | Cache used percentage | (cache used / cache capacity) * 100, where cache used is the sum of all bytes in the local SSD cache across all nodes and cache capacity is the sum of the storage capacity of the local SSD cache across all nodes | Avg, Min, Max |
+ | Local tempdb percentage | Local tempdb utilization across all compute nodes; values are emitted every five minutes | Avg, Min, Max |
+
+ Things to consider when viewing metrics and setting alerts:
+
+ - Failed and successful connections are reported for a particular data warehouse, not for the logical server
+ - Memory percentage reflects utilization even if the data warehouse is in an idle state; it does not reflect active workload memory consumption. Use and track this metric along with others (tempdb, Gen2 cache) to make a holistic decision on whether scaling for additional cache capacity will increase workload performance to meet your requirements.
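The derived metrics in the table follow directly from their stated formulas. A minimal sketch in Python, using illustrative input values (the function names are ours for illustration, not part of any SDK):

```python
def dwu_percentage(cpu_percentage, data_io_percentage):
    """DWU percentage: the maximum of CPU percentage and Data IO percentage."""
    return max(cpu_percentage, data_io_percentage)

def dwu_used(dwu_limit, cpu_percentage, data_io_percentage):
    """DWU used: DWU limit * DWU percentage (treating the percentage as a fraction)."""
    return dwu_limit * dwu_percentage(cpu_percentage, data_io_percentage) / 100

def cache_used_percentage(cache_used_bytes, cache_capacity_bytes):
    """Cache used percentage: (cache used / cache capacity) * 100."""
    return cache_used_bytes / cache_capacity_bytes * 100

# Illustrative values (not from any real instance): DWU limit 500,
# 60% CPU, 45% Data IO, and a half-full local SSD cache.
print(dwu_percentage(60, 45))            # 60
print(dwu_used(500, 60, 45))             # 300.0
print(cache_used_percentage(512, 1024))  # 50.0
```

Because DWU percentage takes the maximum of the two utilization metrics, a workload that is IO-bound but CPU-idle still drives DWU used up.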
The chart shows that with 25% workload isolation, only 10% is being used on average. In this case, the `MIN_PERCENTAGE_RESOURCE` parameter value could be lowered to a value between 10 and 15, allowing other workloads on the system to consume the freed resources.
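The arithmetic behind that recommendation can be sketched as follows; the 25% isolation and 10% average usage figures come from the chart described above, and the variable names are purely illustrative:

```python
min_percentage_resource = 25  # current MIN_PERCENTAGE_RESOURCE for the workload group
avg_used_percent = 10         # average utilization observed in the chart

# Isolated resources that sit idle and could serve other workloads.
idle_isolated = min_percentage_resource - avg_used_percent
print(f"Idle isolated resources: {idle_isolated}%")  # Idle isolated resources: 15%

# Any new MIN_PERCENTAGE_RESOURCE between 10 and 15 still covers the
# observed average usage while returning the remainder to the shared pool.
for candidate in (10, 15):
    assert candidate >= avg_used_percent
```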
The chart shows that with a 9% cap on resources, the workload group is more than 90% utilized (from the *Workload group allocation by max resource percent* metric). There is a steady queue of queries, as shown by the *Workload group queued queries* metric. In this case, increasing `CAP_PERCENTAGE_RESOURCE` to a value higher than 9% allows more queries to execute concurrently. Increasing `CAP_PERCENTAGE_RESOURCE` assumes that enough resources are available and not isolated by other workload groups. Verify that the cap increased by checking the *Effective cap resource percent* metric. If more throughput is desired, also consider increasing `REQUEST_MIN_RESOURCE_GRANT_PERCENT` to a value greater than 3; a larger minimum grant could allow queries to run faster.
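As a rough sketch of the concurrency arithmetic implied here, assuming (as with dedicated SQL pool workload groups) that concurrency is bounded by the effective cap divided by the per-request minimum grant; the function name is ours:

```python
import math

def max_concurrency(cap_percentage_resource, request_min_resource_grant_percent):
    # Each running query needs at least the minimum grant, carved out of the cap.
    return math.floor(cap_percentage_resource / request_min_resource_grant_percent)

# With the 9% cap from the text and a 3% minimum grant, at most 3 queries
# run at once, consistent with the steady queue of queued queries.
print(max_concurrency(9, 3))   # 3

# Raising CAP_PERCENTAGE_RESOURCE (given free, un-isolated resources)
# raises the bound and lets more queries execute concurrently.
print(max_concurrency(18, 3))  # 6
```

Note the trade-off: raising `REQUEST_MIN_RESOURCE_GRANT_PERCENT` gives each query more resources but lowers this concurrency bound for a fixed cap.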