Commit d095544

Merge pull request #97363 from dagiro/freshness76 (freshness76)

2 parents 09887f6 + 8763890

File tree

7 files changed: +8 −8 lines changed


articles/hdinsight/hdinsight-key-scenarios-to-monitor.md

Lines changed: 8 additions & 8 deletions
@@ -5,9 +5,9 @@ author: hrasheed-msft
 ms.author: hrasheed
 ms.reviewer: jasonh
 ms.service: hdinsight
-ms.custom: hdinsightactive
 ms.topic: conceptual
-ms.date: 05/29/2019
+ms.custom: hdinsightactive
+ms.date: 11/27/2019
 ---

 # Monitor cluster performance in Azure HDInsight
@@ -26,9 +26,9 @@ To get a high-level look at the nodes of your cluster and their loading, sign in
 | --- | --- |
 | Red | At least one master component on the host is down. Hover to see a tooltip that lists affected components. |
 | Orange | At least one secondary component on the host is down. Hover to see a tooltip that lists affected components. |
-| Yellow | Ambari Server has not received a heartbeat from the host for more than 3 minutes. |
+| Yellow | Ambari Server hasn't received a heartbeat from the host for more than 3 minutes. |
 | Green | Normal running state. |
-
+
 You'll also see columns showing the number of cores and amount of RAM for each host, and the disk usage and load average.

 ![Apache Ambari hosts tab overview](./media/hdinsight-key-scenarios-to-monitor/apache-ambari-hosts-tab.png)
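The host-status colors above map onto statuses that Ambari also exposes through its REST API (`GET /api/v1/clusters/<cluster>/hosts?fields=Hosts/host_status`). As a minimal sketch of checking those statuses programmatically, assuming a response of the usual `items`/`Hosts` shape (the sample payload below is illustrative, not real cluster output):

```python
from collections import Counter

def summarize_host_status(ambari_hosts_response):
    """Count hosts by Ambari host_status (e.g. HEALTHY, UNHEALTHY, ALERT, UNKNOWN)."""
    statuses = [item["Hosts"]["host_status"]
                for item in ambari_hosts_response.get("items", [])]
    return Counter(statuses)

# Illustrative shape of a response from
# GET /api/v1/clusters/<cluster>/hosts?fields=Hosts/host_status
sample = {
    "items": [
        {"Hosts": {"host_name": "hn0", "host_status": "HEALTHY"}},
        {"Hosts": {"host_name": "wn0", "host_status": "HEALTHY"}},
        {"Hosts": {"host_name": "wn1", "host_status": "UNKNOWN"}},  # no heartbeat
    ]
}

print(summarize_host_status(sample))
```

Counting anything other than `HEALTHY` could feed an alert, mirroring what the colored dots show in the UI.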
@@ -47,7 +47,7 @@ YARN divides the two responsibilities of the JobTracker, resource management and

 The Resource Manager is a *pure scheduler*, and solely arbitrates available resources between all competing applications. The Resource Manager ensures that all resources are always in use, optimizing for various constants such as SLAs, capacity guarantees, and so forth. The ApplicationMaster negotiates resources from the Resource Manager, and works with the NodeManager(s) to execute and monitor the containers and their resource consumption.

-When multiple tenants share a large cluster, there is competition for the cluster's resources. The CapacityScheduler is a pluggable scheduler that assists in resource sharing by queueing up requests. The CapacityScheduler also supports *hierarchical queues* to ensure that resources are shared between the sub-queues of an organization, before other applications' queues are allowed to use free resources.
+When multiple tenants share a large cluster, there's competition for the cluster's resources. The CapacityScheduler is a pluggable scheduler that assists in resource sharing by queueing up requests. The CapacityScheduler also supports *hierarchical queues* to ensure that resources are shared between the sub-queues of an organization, before other applications' queues are allowed to use free resources.

 YARN allows us to allocate resources to these queues, and shows you whether all of your available resources are assigned. To view information about your queues, sign in to the Ambari Web UI, then select **YARN Queue Manager** from the top menu.
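The hierarchical queues the hunk above describes are configured in `capacity-scheduler.xml`. A minimal sketch, assuming two top-level queues with one of them split into sub-queues (the queue names `engineering`, `etl`, and `adhoc` are invented for illustration; capacities at each level must sum to 100):

```xml
<!-- Hypothetical capacity-scheduler.xml fragment; queue names are examples. -->
<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,engineering</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>40</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.engineering.capacity</name>
    <value>60</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.engineering.queues</name>
    <value>etl,adhoc</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.engineering.etl.capacity</name>
    <value>70</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.engineering.adhoc.capacity</name>
    <value>30</value>
  </property>
</configuration>
```

With this layout, free capacity in `etl` flows first to its sibling `adhoc` before being offered to `default`, which is the sharing behavior the paragraph describes.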
@@ -67,11 +67,11 @@ In the Resource Manager UI, select **Scheduler** from the left-hand menu. You se

 ## Storage throttling

-A cluster's performance bottleneck can happen at the storage level. This type of bottleneck is most often due to *blocking* input/output (IO) operations, which happen when your running tasks send more IO than the storage service can handle. This blocking creates a queue of IO requests waiting to be processed until after current IOs are processed. The blocks are due to *storage throttling*, which is not a physical limit, but rather a limit imposed by the storage service by a service level agreement (SLA). This limit ensures that no single client or tenant can monopolize the service. The SLA limits the number of IOs per second (IOPS) for Azure Storage - for details, see [Azure Storage Scalability and Performance Targets](https://docs.microsoft.com/azure/storage/storage-scalability-targets).
+A cluster's performance bottleneck can happen at the storage level. This type of bottleneck is most often because of *blocking* input/output (IO) operations, which happen when your running tasks send more IO than the storage service can handle. This blocking creates a queue of IO requests waiting to be processed until after current IOs are processed. The blocks are because of *storage throttling*, which isn't a physical limit, but rather a limit imposed by the storage service by a service level agreement (SLA). This limit ensures that no single client or tenant can monopolize the service. The SLA limits the number of IOs per second (IOPS) for Azure Storage - for details, see [Azure Storage Scalability and Performance Targets](https://docs.microsoft.com/azure/storage/storage-scalability-targets).

-If you are using Azure Storage, for information on monitoring storage-related issues, including throttling, see [Monitor, diagnose, and troubleshoot Microsoft Azure Storage](https://docs.microsoft.com/azure/storage/storage-monitoring-diagnosing-troubleshooting).
+If you're using Azure Storage, for information on monitoring storage-related issues, including throttling, see [Monitor, diagnose, and troubleshoot Microsoft Azure Storage](https://docs.microsoft.com/azure/storage/storage-monitoring-diagnosing-troubleshooting).

-If your cluster's backing store is Azure Data Lake Storage (ADLS), your throttling is most likely due to bandwidth limits. Throttling in this case could be identified by observing throttling errors in task logs. For ADLS, see the throttling section for the appropriate service in these articles:
+If your cluster's backing store is Azure Data Lake Storage (ADLS), your throttling is most likely because of bandwidth limits. Throttling in this case could be identified by observing throttling errors in task logs. For ADLS, see the throttling section for the appropriate service in these articles:

 * [Performance tuning guidance for Apache Hive on HDInsight and Azure Data Lake Storage](../data-lake-store/data-lake-store-performance-tuning-hive.md)
 * [Performance tuning guidance for MapReduce on HDInsight and Azure Data Lake Storage](../data-lake-store/data-lake-store-performance-tuning-mapreduce.md)
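Clients typically cope with the throttling described above by retrying with exponential backoff rather than immediately re-sending IO. A minimal sketch, assuming a throttled call surfaces as a catchable error (`ThrottledError` and `flaky_io` below are stand-ins, not a real SDK API):

```python
import random
import time

class ThrottledError(Exception):
    """Stand-in for a storage throttling error (e.g. an HTTP 503 server-busy response)."""

def with_backoff(do_io, max_retries=5, base_delay=0.5):
    """Retry do_io with exponential backoff plus jitter when throttled."""
    for attempt in range(max_retries):
        try:
            return do_io()
        except ThrottledError:
            # Doubling the delay each attempt, with jitter, spreads retries
            # out so clients don't re-stampede the throttled service.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
    return do_io()  # final attempt; let any error propagate

# Demo stand-in that is throttled twice before succeeding.
calls = {"n": 0}
def flaky_io():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ThrottledError("server busy")
    return "ok"

print(with_backoff(flaky_io, base_delay=0.01))  # ok
```

Backoff only smooths transient spikes; sustained throttling usually means the workload needs fewer parallel tasks or a storage account with higher limits.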
Six binary image files changed (123 KB, 79 KB, 21.6 KB, 103 KB, 122 KB, 95.3 KB); previews omitted.

0 commit comments