
Commit 03e1e52

seo updates

committed, 1 parent 5cd4102, commit 03e1e52


2 files changed: +37 -39 lines changed


articles/hdinsight/hdinsight-autoscale-clusters.md

Lines changed: 22 additions & 24 deletions
@@ -7,36 +7,16 @@ ms.reviewer: jasonh
 ms.service: hdinsight
 ms.topic: conceptual
 ms.custom: hdinsightactive,seoapr2020
-ms.date: 04/07/2020
+ms.date: 04/29/2020
 ---

 # Automatically scale Azure HDInsight clusters

-> [!Important]
-> The Azure HDInsight Autoscale feature was released for general availability on November 7th, 2019 for Spark and Hadoop clusters and included improvements not available in the preview version of the feature. If you created a Spark cluster prior to November 7th, 2019 and want to use the Autoscale feature on your cluster, the recommended path is to create a new cluster, and enable Autoscale on the new cluster.
->
-> Autoscale for Interactive Query (LLAP) and HBase clusters is still in preview. Autoscale is only available on Spark, Hadoop, Interactive Query, and HBase clusters.
-
-Azure HDInsight's cluster Autoscale feature automatically scales the number of worker nodes in a cluster up and down. Other types of nodes in the cluster can't be scaled currently. During the creation of a new HDInsight cluster, a minimum and maximum number of worker nodes can be set. Autoscale then monitors the resource requirements of the analytics load and scales the number of worker nodes up or down. There's no additional charge for this feature.
-
-## Cluster compatibility
-
-The following table describes the cluster types and versions that are compatible with the Autoscale feature.
-
-| Version | Spark | Hive | LLAP | HBase | Kafka | Storm | ML |
-|---|---|---|---|---|---|---|---|
-| HDInsight 3.6 without ESP | Yes | Yes | Yes | Yes* | No | No | No |
-| HDInsight 4.0 without ESP | Yes | Yes | Yes | Yes* | No | No | No |
-| HDInsight 3.6 with ESP | Yes | Yes | Yes | Yes* | No | No | No |
-| HDInsight 4.0 with ESP | Yes | Yes | Yes | Yes* | No | No | No |
-
-\* HBase clusters can only be configured for schedule-based scaling, not load-based.
+Azure HDInsight's free Autoscale feature can automatically increase or decrease the number of worker nodes in your cluster based on previously set criteria. You set a minimum and maximum number of nodes during cluster creation, establish the scaling criteria using a day-time schedule or specific performance metrics, and the HDInsight platform does the rest.

 ## How it works

-You can choose load-based scaling or schedule-based scaling for your HDInsight cluster. Load-based scaling changes the number of nodes in your cluster, within a range that you set, to ensure optimal CPU usage and minimize running cost.
-
-Schedule-based scaling changes the number of nodes in your cluster based on conditions that take effect at specific times. These conditions scale the cluster to an intended number of nodes.
+The Autoscale feature uses two types of conditions to trigger scaling events: thresholds for various cluster performance metrics (called *load-based scaling*) and time-based triggers (called *schedule-based scaling*). Load-based scaling changes the number of nodes in your cluster, within a range that you set, to ensure optimal CPU usage and minimize running cost. Schedule-based scaling changes the number of nodes in your cluster based on operations that you associate with specific dates and times.

 ### Metrics monitoring

@@ -51,7 +31,7 @@ Autoscale continuously monitors the cluster and collects the following metrics:
 |Used Memory per Node|The load on a worker node. A worker node on which 10 GB of memory is used, is considered under more load than a worker with 2 GB of used memory.|
 |Number of Application Masters per Node|The number of Application Master (AM) containers running on a worker node. A worker node that is hosting two AM containers, is considered more important than a worker node that is hosting zero AM containers.|

-The above metrics are checked every 60 seconds. Autoscale makes decisions based on these metrics.
+The above metrics are checked every 60 seconds. You can setup scaling operations for your cluster using any of these metrics.

 ### Load-based scale conditions

@@ -114,6 +94,24 @@ Your subscription has a capacity quota for each region. The total number of core

 For more information on HDInsight cluster creation using the Azure portal, see [Create Linux-based clusters in HDInsight using the Azure portal](hdinsight-hadoop-create-linux-clusters-portal.md).

+## Cluster compatibility
+
+> [!Important]
+> The Azure HDInsight Autoscale feature was released for general availability on November 7th, 2019 for Spark and Hadoop clusters and included improvements not available in the preview version of the feature. If you created a Spark cluster prior to November 7th, 2019 and want to use the Autoscale feature on your cluster, the recommended path is to create a new cluster, and enable Autoscale on the new cluster.
+>
+> Autoscale for Interactive Query (LLAP) and HBase clusters is still in preview. Autoscale is only available on Spark, Hadoop, Interactive Query, and HBase clusters.
+
+The following table describes the cluster types and versions that are compatible with the Autoscale feature.
+
+| Version | Spark | Hive | LLAP | HBase | Kafka | Storm | ML |
+|---|---|---|---|---|---|---|---|
+| HDInsight 3.6 without ESP | Yes | Yes | Yes | Yes* | No | No | No |
+| HDInsight 4.0 without ESP | Yes | Yes | Yes | Yes* | No | No | No |
+| HDInsight 3.6 with ESP | Yes | Yes | Yes | Yes* | No | No | No |
+| HDInsight 4.0 with ESP | Yes | Yes | Yes | Yes* | No | No | No |
+
+\* HBase clusters can only be configured for schedule-based scaling, not load-based.
+
 ### Create a cluster with a Resource Manager template

 #### Load-based autoscaling
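
Alongside the load-based and schedule-based modes described above (and the Resource Manager template headings that close this hunk), a minimal Azure CLI sketch of enabling each mode follows. The `az hdinsight autoscale` parameter names shown are assumptions about the current CLI rather than text from this commit; confirm them with `az hdinsight autoscale create --help`. Resource group and cluster names are placeholders.

```bash
# Sketch only: enabling Autoscale from the Azure CLI on an existing cluster.
# Parameter names are assumptions -- verify with: az hdinsight autoscale create --help

# Load-based: scale between a minimum and maximum worker-node count.
az hdinsight autoscale create \
    --resource-group MyResourceGroup \
    --cluster-name MySparkCluster \
    --type Load \
    --min-workernode-count 3 \
    --max-workernode-count 10

# Schedule-based: the only mode supported for HBase clusters (see the footnote above).
az hdinsight autoscale create \
    --resource-group MyResourceGroup \
    --cluster-name MyHBaseCluster \
    --type Schedule \
    --timezone "Pacific Standard Time" \
    --days Monday Tuesday Wednesday Thursday Friday \
    --time 09:00 \
    --workernode-count 8
```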

articles/hdinsight/kafka/apache-kafka-get-started.md

Lines changed: 15 additions & 15 deletions
@@ -7,19 +7,19 @@ ms.reviewer: jasonh
 ms.service: hdinsight
 ms.topic: quickstart
 ms.custom: mvc
-ms.date: 02/24/2020
+ms.date: 04/29/2020
 #Customer intent: I need to create a Kafka cluster so that I can use it to process streaming data
 ---

 # Quickstart: Create Apache Kafka cluster in Azure HDInsight using Azure portal

 [Apache Kafka](./apache-kafka-introduction.md) is an open-source, distributed streaming platform. It's often used as a message broker, as it provides functionality similar to a publish-subscribe message queue.

-In this quickstart, you learn how to create an Apache Kafka cluster using the Azure portal. You also learn how to use included utilities to send and receive messages using Apache Kafka. For in depth explanations of available configurations, see [Set up clusters in HDInsight](../hdinsight-hadoop-provision-linux-clusters.md). For additional information regarding the use of the portal to create clusters, see [Create clusters in the portal](../hdinsight-hadoop-create-linux-clusters-portal.md).
+In this Quickstart, you learn how to create an Apache Kafka cluster using the Azure portal. You also learn how to use included utilities to send and receive messages using Apache Kafka. For in depth explanations of available configurations, see [Set up clusters in HDInsight](../hdinsight-hadoop-provision-linux-clusters.md). For additional information regarding the use of the portal to create clusters, see [Create clusters in the portal](../hdinsight-hadoop-create-linux-clusters-portal.md).

 [!INCLUDE [delete-cluster-warning](../../../includes/hdinsight-delete-cluster-warning.md)]

-The Apache Kafka API can only be accessed by resources inside the same virtual network. In this quickstart, you access the cluster directly using SSH. To connect other services, networks, or virtual machines to Apache Kafka, you must first create a virtual network and then create the resources within the network. For more information, see the [Connect to Apache Kafka using a virtual network](apache-kafka-connect-vpn-gateway.md) document.
+The Apache Kafka API can only be accessed by resources inside the same virtual network. In this Quickstart, you access the cluster directly using SSH. To connect other services, networks, or virtual machines to Apache Kafka, you must first create a virtual network and then create the resources within the network. For more information, see the [Connect to Apache Kafka using a virtual network](apache-kafka-connect-vpn-gateway.md) document. For more general information on planning virtual networks for HDInsight, see [Plan a virtual network for Azure HDInsight](../hdinsight-plan-virtual-network-deployment.md).

 If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
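
A quick note on the SSH access mentioned above: the quickstart connects to the cluster's head node directly. A minimal sketch, assuming the default `sshuser` login chosen at cluster creation and a placeholder cluster name (neither value comes from this commit):

```bash
# Connect to the Kafka cluster's primary head node over SSH.
# Replace sshuser and CLUSTERNAME with the values chosen when the cluster was created.
ssh sshuser@CLUSTERNAME-ssh.azurehdinsight.net
```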

@@ -68,13 +68,13 @@ To create an Apache Kafka cluster on HDInsight, use the following steps:
 |Primary storage type|Use the default value **Azure Storage**.|
 |Selection method|Use the default value **Select from list**.|
 |Primary storage account|Use the drop-down list to select an existing storage account, or select **Create new**. If you create a new account, the name must be between 3 and 24 characters in length, and can include numbers and lowercase letters only|
-|Container|Use the autopopulated value.|
+|Container|Use the auto-populated value.|

 ![HDInsight Linux get started provide cluster storage values](./media/apache-kafka-get-started/azure-portal-cluster-storage.png "Provide storage values for creating an HDInsight cluster")

 Select the **Security + networking** tab.

-1. For this quickstart, leave the default security settings. To learn more about Enterprise Security package, visit [Configure a HDInsight cluster with Enterprise Security Package by using Azure Active Directory Domain Services](../domain-joined/apache-domain-joined-configure-using-azure-adds.md). To learn how to use your own key for Apache Kafka Disk Encryption, visit [Customer-managed key disk encryption](../disk-encryption.md)
+1. For this Quickstart, leave the default security settings. To learn more about Enterprise Security package, visit [Configure a HDInsight cluster with Enterprise Security Package by using Azure Active Directory Domain Services](../domain-joined/apache-domain-joined-configure-using-azure-adds.md). To learn how to use your own key for Apache Kafka Disk Encryption, visit [Customer-managed key disk encryption](../disk-encryption.md)

 If you would like to connect your cluster to a virtual network, select a virtual network from the **Virtual network** dropdown.

@@ -163,7 +163,7 @@ In this section, you get the host information from the Apache Ambari REST API on
 ```

 > [!Note]
-> This command requires Ambari access. If your cluster is behind an NSG, run this command from a machine that can access Ambari.
+> This command requires Ambari access. If your cluster is behind an NSG, run this command from a machine that can access Ambari.

 1. To verify that the environment variable is set correctly, use the following command:

@@ -182,7 +182,7 @@ In this section, you get the host information from the Apache Ambari REST API on
 ```

 > [!Note]
-> This command requires Ambari access. If your cluster is behind an NSG, run this command from a machine that can access Ambari.
+> This command requires Ambari access. If your cluster is behind an NSG, run this command from a machine that can access Ambari.

 1. To verify that the environment variable is set correctly, use the following command:

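Both notes above refer to commands that call the Ambari REST API to populate environment variables; the commands themselves fall outside the hunk context shown here. A representative sketch of the pattern for the Zookeeper host list, assuming `jq` is installed, the cluster login is `admin`, and `$CLUSTERNAME` holds the cluster name:

```bash
# Query Ambari for the Zookeeper hosts and keep the first two entries (requires jq and Ambari access).
# Prompts for the cluster login (admin) password; run from a machine that can reach Ambari.
export KAFKAZKHOSTS=$(curl -sS -u admin -G "https://$CLUSTERNAME.azurehdinsight.net/api/v1/clusters/$CLUSTERNAME/services/ZOOKEEPER/components/ZOOKEEPER_SERVER" \
    | jq -r '["\(.host_components[].HostRoles.host_name):2181"] | join(",")' \
    | cut -d',' -f1,2)

# Verify that the environment variable is set correctly.
echo $KAFKAZKHOSTS
```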
@@ -210,21 +210,21 @@ Kafka stores streams of data in *topics*. You can use the `kafka-topics.sh` util

 * Each partition is replicated across three worker nodes in the cluster.

-If you created the cluster in an Azure region that provides three fault domains, use a replication factor of 3. Otherwise, use a replication factor of 4.
+* If you created the cluster in an Azure region that provides three fault domains, use a replication factor of 3. Otherwise, use a replication factor of 4.

-In regions with three fault domains, a replication factor of 3 allows replicas to be spread across the fault domains. In regions with two fault domains, a replication factor of four spreads the replicas evenly across the domains.
+* In regions with three fault domains, a replication factor of 3 allows replicas to be spread across the fault domains. In regions with two fault domains, a replication factor of four spreads the replicas evenly across the domains.

-For information on the number of fault domains in a region, see the [Availability of Linux virtual machines](../../virtual-machines/windows/manage-availability.md#use-managed-disks-for-vms-in-an-availability-set) document.
+* For information on the number of fault domains in a region, see the [Availability of Linux virtual machines](../../virtual-machines/windows/manage-availability.md#use-managed-disks-for-vms-in-an-availability-set) document.

-Apache Kafka is not aware of Azure fault domains. When creating partition replicas for topics, it may not distribute replicas properly for high availability.
+* Apache Kafka is not aware of Azure fault domains. When creating partition replicas for topics, it may not distribute replicas properly for high availability.

-To ensure high availability, use the [Apache Kafka partition rebalance tool](https://github.com/hdinsight/hdinsight-kafka-tools). This tool must be ran from an SSH connection to the head node of your Apache Kafka cluster.
+* To ensure high availability, use the [Apache Kafka partition rebalance tool](https://github.com/hdinsight/hdinsight-kafka-tools). This tool must be ran from an SSH connection to the head node of your Apache Kafka cluster.

-For the highest availability of your Apache Kafka data, you should rebalance the partition replicas for your topic when:
+* For the highest availability of your Apache Kafka data, you should rebalance the partition replicas for your topic when:

-* You create a new topic or partition
+   * You create a new topic or partition

-* You scale up a cluster
+   * You scale up a cluster

 * **To list topics**, use the following command:

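The create and list commands referenced above also fall outside the hunk context. For reference, a sketch of the standard invocations on an HDInsight Kafka head node; the binary path, Zookeeper port, topic name, and partition count are common defaults used as assumptions here, not text taken from this file:

```bash
# Create a topic with 8 partitions and a replication factor of 3
# (matching the three-fault-domain guidance in the bullets above).
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create \
    --replication-factor 3 --partitions 8 --topic test --zookeeper $KAFKAZKHOSTS

# List the topics on the cluster.
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --list --zookeeper $KAFKAZKHOSTS
```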
