articles/hdinsight/hdinsight-autoscale-clusters.md (45 additions, 49 deletions)

ms.reviewer: jasonh
ms.service: hdinsight
ms.topic: conceptual
ms.custom: hdinsightactive,seoapr2020
ms.date: 04/29/2020
---

# Automatically scale Azure HDInsight clusters

Azure HDInsight's free Autoscale feature can automatically increase or decrease the number of worker nodes in your cluster based on previously set criteria. Other types of nodes in the cluster can't be scaled currently. You set a minimum and maximum number of worker nodes during cluster creation, establish the scaling criteria using a day-time schedule or specific performance metrics, and the HDInsight platform does the rest. There's no additional charge for this feature.
## How it works
The Autoscale feature uses two types of conditions to trigger scaling events: thresholds for various cluster performance metrics (called *load-based scaling*) and time-based triggers (called *schedule-based scaling*). Load-based scaling changes the number of nodes in your cluster, within a range that you set, to ensure optimal CPU usage and minimize running cost. Schedule-based scaling changes the number of nodes in your cluster based on operations that you associate with specific dates and times.

### Choosing load-based or schedule-based scaling

Consider the following factors when choosing a scaling type:

* Load variance: does the load of the cluster follow a consistent pattern at specific times, on specific days? If not, load-based scaling is a better option.
* SLA requirements: Autoscale scaling is reactive rather than predictive. Will there be a sufficient delay between when the load starts to increase and when the cluster needs to be at its target size? If there are strict SLA requirements and the load is a fixed, known pattern, schedule-based scaling is a better option.

### Cluster metrics

Autoscale continuously monitors the cluster and collects the following metrics:

| Metric | Description |
|---|---|
|Used Memory per Node|The load on a worker node. A worker node on which 10 GB of memory is used is considered under more load than a worker with 2 GB of used memory.|
|Number of Application Masters per Node|The number of Application Master (AM) containers running on a worker node. A worker node that hosts two AM containers is considered more important than a worker node that hosts zero AM containers.|

The above metrics are checked every 60 seconds. You can set up scaling operations for your cluster using any of these metrics.

### Load-based scale conditions

For scale-up, Autoscale issues a scale-up request to add the required number of nodes.

For scale-down, Autoscale issues a request to remove a certain number of nodes. The scale-down is based on the number of AM containers per node, and on the current CPU and memory requirements. The service also detects which nodes are candidates for removal based on current job execution. The scale-down operation first decommissions the nodes, and then removes them from the cluster.

### Cluster compatibility

> [!Important]
> The Azure HDInsight Autoscale feature was released for general availability on November 7th, 2019 for Spark and Hadoop clusters and included improvements not available in the preview version of the feature. If you created a Spark cluster prior to November 7th, 2019 and want to use the Autoscale feature on your cluster, the recommended path is to create a new cluster, and enable Autoscale on the new cluster.
>
> Autoscale for Interactive Query (LLAP) and HBase clusters is still in preview. Autoscale is only available on Spark, Hadoop, Interactive Query, and HBase clusters.

The following table describes the cluster types and versions that are compatible with the Autoscale feature.

| Version | Spark | Hive | LLAP | HBase | Kafka | Storm | ML |
|---|---|---|---|---|---|---|---|
| HDInsight 3.6 without ESP | Yes | Yes | Yes | Yes* | No | No | No |
| HDInsight 4.0 without ESP | Yes | Yes | Yes | Yes* | No | No | No |
| HDInsight 3.6 with ESP | Yes | Yes | Yes | Yes* | No | No | No |
| HDInsight 4.0 with ESP | Yes | Yes | Yes | Yes* | No | No | No |

\* HBase clusters can only be configured for schedule-based scaling, not load-based.

## Get started

### Create a cluster with load-based Autoscaling

Use the appropriate parameters in the request payload. The JSON payload below could be used to set up schedule-based Autoscale.

See the previous section on [enabling load-based autoscale](#load-based-autoscaling) for a full description of all payload parameters.
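As an illustration only, a schedule-based Autoscale payload might look like the following sketch. The field names (`recurrence`, `timeAndCapacity`, and so on), the time zone, and the capacity values are assumptions, not confirmed by this diff; check the payload-parameter description linked above before relying on any of them.

```shell
# Hypothetical schedule-based Autoscale payload: scale the worker-node
# count to 4 at 09:00 on weekdays. All field names are assumptions.
AUTOSCALE_PAYLOAD=$(cat <<'JSON'
{
  "autoscale": {
    "recurrence": {
      "timeZone": "Pacific Standard Time",
      "schedule": [
        {
          "days": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
          "timeAndCapacity": {
            "time": "09:00",
            "minInstanceCount": 4,
            "maxInstanceCount": 4
          }
        }
      ]
    }
  }
}
JSON
)
# Print the payload so it can be inspected or piped into a REST call.
echo "$AUTOSCALE_PAYLOAD"
```

A payload like this would typically be sent in the body of the Autoscale REST request described above.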

## Monitoring Autoscale activities

### Cluster status

Select **Metrics** under **Monitoring**. Then select **Add metric** and **Number of Active Workers** from the drop-down.

### Consider the latency of scale-up or scale-down operations

It can take 10 to 20 minutes for a scaling operation to complete. When setting up a customized schedule, plan for this delay. For example, if you need the cluster size to be 20 at 9:00 AM, set the schedule trigger to an earlier time such as 8:30 AM so that the scaling operation has completed by 9:00 AM.

### Preparation for scaling down

During the cluster scaling-down process, Autoscale decommissions nodes to meet the target size. If tasks are running on those nodes, Autoscale waits until they're completed. Since each worker node also serves a role in HDFS, the temporary data is shifted to the remaining nodes, so make sure there's enough space on the remaining nodes to host all the temporary data.
Running jobs will continue. Pending jobs will wait for scheduling with fewer available worker nodes.

### Minimum cluster size

Don't scale your cluster down to fewer than three nodes. Scaling your cluster to fewer than three nodes can result in it getting stuck in safe mode because of insufficient file replication. For more information, see [Getting stuck in safe mode](./hdinsight-scaling-best-practices.md#getting-stuck-in-safe-mode).

## Next steps

Read about guidelines for scaling clusters manually in [Scaling guidelines](hdinsight-scaling-best-practices.md).

---

articles/hdinsight/kafka/apache-kafka-get-started.md (15 additions, 15 deletions)

ms.reviewer: jasonh
ms.service: hdinsight
ms.topic: quickstart
ms.custom: mvc
ms.date: 04/29/2020
#Customer intent: I need to create a Kafka cluster so that I can use it to process streaming data
---

# Quickstart: Create Apache Kafka cluster in Azure HDInsight using Azure portal
[Apache Kafka](./apache-kafka-introduction.md) is an open-source, distributed streaming platform. It's often used as a message broker, as it provides functionality similar to a publish-subscribe message queue.

In this Quickstart, you learn how to create an Apache Kafka cluster using the Azure portal. You also learn how to use included utilities to send and receive messages using Apache Kafka. For in-depth explanations of available configurations, see [Set up clusters in HDInsight](../hdinsight-hadoop-provision-linux-clusters.md). For additional information regarding the use of the portal to create clusters, see [Create clusters in the portal](../hdinsight-hadoop-create-linux-clusters-portal.md).

The Apache Kafka API can only be accessed by resources inside the same virtual network. In this Quickstart, you access the cluster directly using SSH. To connect other services, networks, or virtual machines to Apache Kafka, you must first create a virtual network and then create the resources within the network. For more information, see the [Connect to Apache Kafka using a virtual network](apache-kafka-connect-vpn-gateway.md) document. For more general information on planning virtual networks for HDInsight, see [Plan a virtual network for Azure HDInsight](../hdinsight-plan-virtual-network-deployment.md).

If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.

To create an Apache Kafka cluster on HDInsight, use the following steps:

|Property|Description|
|---|---|
|Primary storage type|Use the default value **Azure Storage**.|
|Selection method|Use the default value **Select from list**.|
|Primary storage account|Use the drop-down list to select an existing storage account, or select **Create new**. If you create a new account, the name must be between 3 and 24 characters in length, and can include numbers and lowercase letters only.|
|Container|Use the auto-populated value.|

![Azure portal create cluster storage](./media/apache-kafka-get-started/azure-portal-cluster-storage.png "Set up cluster storage")

1. Select the **Security + networking** tab.

1. For this Quickstart, leave the default security settings. To learn more about the Enterprise Security Package, visit [Configure a HDInsight cluster with Enterprise Security Package by using Azure Active Directory Domain Services](../domain-joined/apache-domain-joined-configure-using-azure-adds.md). To learn how to use your own key for Apache Kafka Disk Encryption, visit [Customer-managed key disk encryption](../disk-encryption.md).

If you would like to connect your cluster to a virtual network, select a virtual network from the **Virtual network** dropdown.


In this section, you get the host information from the Apache Ambari REST API on the cluster.

> [!Note]
> This command requires Ambari access. If your cluster is behind an NSG, run this command from a machine that can access Ambari.

1. To verify that the environment variable is set correctly, use the following command:
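The verification command itself isn't shown in this diff. A minimal check along these lines would simply print the value; the variable name `KAFKAZKHOSTS` is an assumption carried over from the standard HDInsight Kafka quickstart, where it holds the Zookeeper host list.

```shell
# KAFKAZKHOSTS is assumed to have been exported by an earlier (elided) step;
# an empty line here means that step didn't succeed.
echo "$KAFKAZKHOSTS"
```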
> [!Note]
> This command requires Ambari access. If your cluster is behind an NSG, run this command from a machine that can access Ambari.

1. To verify that the environment variable is set correctly, use the following command:
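Again, the verification command is elided from this diff. A minimal check would print the value; the variable name `KAFKABROKERS` is an assumption carried over from the standard HDInsight Kafka quickstart, where it holds the broker host list.

```shell
# KAFKABROKERS is assumed to have been exported by an earlier (elided) step;
# an empty line here means that step didn't succeed.
echo "$KAFKABROKERS"
```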

Kafka stores streams of data in *topics*. You can use the `kafka-topics.sh` utility to manage topics.

* Each partition is replicated across three worker nodes in the cluster.
* If you created the cluster in an Azure region that provides three fault domains, use a replication factor of 3. Otherwise, use a replication factor of 4.
* In regions with three fault domains, a replication factor of 3 allows replicas to be spread across the fault domains. In regions with two fault domains, a replication factor of 4 spreads the replicas evenly across the domains.
* For information on the number of fault domains in a region, see the [Availability of Linux virtual machines](../../virtual-machines/windows/manage-availability.md#use-managed-disks-for-vms-in-an-availability-set) document.
* Apache Kafka is not aware of Azure fault domains. When creating partition replicas for topics, it may not distribute replicas properly for high availability.
* To ensure high availability, use the [Apache Kafka partition rebalance tool](https://github.com/hdinsight/hdinsight-kafka-tools). This tool must be run from an SSH connection to the head node of your Apache Kafka cluster.
* For the highest availability of your Apache Kafka data, you should rebalance the partition replicas for your topic when:
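Putting the guidance above together, topic creation with `kafka-topics.sh` might look like the following sketch. The utility path, the `$KAFKAZKHOSTS` variable, and the topic name `test` are assumptions drawn from the standard HDInsight quickstart, not confirmed by this diff; pick a replication factor of 3 or 4 based on your region's fault domains, as discussed above.

```shell
# Hypothetical example: create a topic with 8 partitions and a replication
# factor of 3 (for a region with three fault domains). The path and
# $KAFKAZKHOSTS are assumptions.
KAFKA_TOPICS_SH=/usr/hdp/current/kafka-broker/bin/kafka-topics.sh
if [ -x "$KAFKA_TOPICS_SH" ]; then
  "$KAFKA_TOPICS_SH" --create \
      --replication-factor 3 \
      --partitions 8 \
      --topic test \
      --zookeeper "$KAFKAZKHOSTS"
else
  # Running off-cluster: the utility isn't present, so just report that.
  echo "kafka-topics.sh not found at $KAFKA_TOPICS_SH" >&2
fi
```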