articles/hdinsight/hdinsight-custom-ambari-db.md (10 additions, 7 deletions)
@@ -4,13 +4,13 @@ description: Learn how to create HDInsight clusters with your own custom Apache
ms.service: azure-hdinsight
ms.custom: hdinsightactive
ms.topic: how-to
- ms.date: 09/06/2024
+ ms.date: 12/27/2024
---

# Set up HDInsight clusters with a custom Ambari DB
Apache Ambari simplifies the management and monitoring of an Apache Hadoop cluster. Ambari provides an easy-to-use web UI and REST API. Ambari is included on HDInsight clusters and is used to monitor the cluster and make configuration changes.
- In normal cluster creation, as described in other articles such as [Set up clusters in HDInsight](hdinsight-hadoop-provision-linux-clusters.md), Ambari is deployed in an [S0 Azure SQL Database](/azure/azure-sql/database/resource-limits-dtu-single-databases#standard-service-tier)that is managed by HDInsight and is not accessible to users.
+ In normal cluster creation, as described in other articles such as [Set up clusters in HDInsight](hdinsight-hadoop-provision-linux-clusters.md), Ambari is deployed in an [S0 Azure SQL Database](/azure/azure-sql/database/resource-limits-dtu-single-databases#standard-service-tier) managed by HDInsight and isn't accessible to users.
The custom Ambari DB feature allows you to deploy a new cluster and set up Ambari in an external database that you manage. The deployment is done with an Azure Resource Manager template. This feature has the following benefits:
@@ -25,11 +25,11 @@ The remainder of this article discusses the following points:
## Custom Ambari DB requirements
- You can deploy a custom Ambari DB with all cluster types and versions. Multiple clusters cannot use the same Ambari DB.
+ You can deploy a custom Ambari DB with all cluster types and versions. Multiple clusters can't use the same Ambari DB.
The custom Ambari DB has the following other requirements:
- - The name of the database cannot contain hyphens or spaces
+ - The name of the database can't contain hyphens or spaces
- You must have an existing Azure SQL DB server and database (see the provisioning sketch after this list).
- The database that you provide for Ambari setup must be empty. There should be no tables in the default dbo schema.
- The user account used to connect to the database should have **SELECT, CREATE TABLE, INSERT, UPDATE, DELETE, ALTER ON SCHEMA and REFERENCES ON SCHEMA** permissions on the database.
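As a minimal sketch, you can provision an empty database that meets these requirements with the Azure CLI. The server name, resource group, credentials, and the S2 service objective are placeholders; see the sizing guidance later in this article.

```azurecli
# Placeholder names and tier; adjust to the sizing guidance for your cluster.
az sql server create \
    --resource-group MyResourceGroup \
    --name myambarisqlserver \
    --location eastus \
    --admin-user sqladmin \
    --admin-password '<strong-password>'

# Create an empty database for Ambari. Don't use hyphens or spaces in the name.
az sql db create \
    --resource-group MyResourceGroup \
    --server myambarisqlserver \
    --name ambaridb \
    --service-objective S2
```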
@@ -50,7 +50,11 @@ When you host your Apache Ambari DB in an external database, remember the follow
- You're responsible for the extra costs of the Azure SQL DB that holds Ambari.
- Back up your custom Ambari DB periodically. Azure SQL Database generates backups automatically, but the backup retention time frame varies. For more information, see [Learn about automatic SQL Database backups](/azure/azure-sql/database/automated-backups-overview).
- - Don't change the custom Ambari DB password after the HDInsight cluster reaches the **Running** state. It is not supported.
+ - Don't change the custom Ambari DB password after the HDInsight cluster reaches the **Running** state. It isn't supported.
+
+ > [!NOTE]
+ > You can use Managed Identity to authenticate with the SQL database for Ambari. For more information, see [Use Managed Identity for SQL Database authentication in Azure HDInsight](./use-managed-identity-for-sql-database-authentication-in-azure-hdinsight.md).
+
## Deploy clusters with a custom Ambari DB
@@ -69,10 +73,9 @@ az deployment group create --name HDInsightAmbariDBDeployment \
> [!WARNING]
- > Please use the following recommended SQL DB and Headnode VM for your HDInsight cluster. Please don't use default Ambari DB (S0) for any production environment.
+ > Use the following recommended SQL DB and Headnode VM for your HDInsight cluster. Don't use the default Ambari DB (S0) for any production environment.
>
-
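As a hedged illustration, the deployment command referenced in this section can be invoked as follows; the resource group and the template and parameter file names are placeholder assumptions.

```azurecli
# Placeholder resource group and file names; the template is the Resource Manager
# template for a cluster with a custom Ambari DB, as described in this article.
az deployment group create --name HDInsightAmbariDBDeployment \
    --resource-group MyResourceGroup \
    --template-file ./azuredeploy.json \
    --parameters ./azuredeploy.parameters.json
```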
## Database and Headnode sizing
The following table provides guidelines on which Azure SQL DB tier to select based on the size of your HDInsight cluster.
articles/hdinsight/hdinsight-hadoop-provision-linux-clusters.md

# Set up clusters in HDInsight with Apache Hadoop, Apache Spark, Apache Kafka, and more
@@ -16,9 +16,9 @@ Learn how to set up and configure Apache Hadoop, Apache Spark, Apache Kafka, Int
A Hadoop cluster consists of several virtual machines (nodes) that are used for distributed processing of tasks. Azure HDInsight handles implementation details of installation and configuration of individual nodes, so you only have to provide general configuration information.
> [!IMPORTANT]
- > HDInsight cluster billing starts once a cluster is created and stops when the cluster is deleted. Billing is pro-rated per minute, so you should always delete your cluster when it is no longer in use. Learn how to [delete a cluster.](hdinsight-delete-cluster.md)
+ > HDInsight cluster billing starts once a cluster is created and stops when the cluster is deleted. Billing is pro-rated per minute, so you should always delete your cluster when it's no longer in use. Learn how to [delete a cluster](hdinsight-delete-cluster.md).
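For example, deleting a cluster from the Azure CLI is a single command; the cluster and resource group names here are placeholders.

```azurecli
# Stop billing for a cluster you no longer need by deleting it.
az hdinsight delete --name mycluster --resource-group MyResourceGroup
```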
- If you're using multiple clusters together, you'll want to create a virtual network, and if you're using a Spark cluster you'll also want to use the Hive Warehouse Connector. For more information, see [Plan a virtual network for Azure HDInsight](./hdinsight-plan-virtual-network-deployment.md) and [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](interactive-query/apache-hive-warehouse-connector.md).
+ If you're using multiple clusters together, you want to create a virtual network, and if you're using a Spark cluster, you also want to use the Hive Warehouse Connector. For more information, see [Plan a virtual network for Azure HDInsight](./hdinsight-plan-virtual-network-deployment.md) and [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](interactive-query/apache-hive-warehouse-connector.md).
## Cluster setup methods
@@ -64,7 +64,7 @@ You don't need to specify the cluster location explicitly: The cluster is in the
Azure HDInsight currently provides the following cluster types, each with a set of components to provide certain functionalities.
> [!IMPORTANT]
- > HDInsight clusters are available in various types, each for a single workload or technology. There is no supported method to create a cluster that combines multiple types, such HBase on one cluster. If your solution requires technologies that are spread across multiple HDInsight cluster types, an [Azure virtual network](../virtual-network/index.yml) can connect the required cluster types.
+ > HDInsight clusters are available in various types, each for a single workload or technology. There's no supported method to create a cluster that combines multiple types, such as HBase on one cluster. If your solution requires technologies that are spread across multiple HDInsight cluster types, an [Azure virtual network](../virtual-network/index.yml) can connect the required cluster types.
| Cluster type | Functionality |
| --- | --- |
@@ -82,7 +82,7 @@ Choose the version of HDInsight for this cluster. For more information, see [Sup
With HDInsight clusters, you can configure two user accounts during cluster creation:
- * Cluster login username: The default username is *admin*. It uses the basic configuration on the Azure portal. Sometimes it's called "Cluster user," or "HTTP user."
+ * Cluster login username: The default username is *admin*. It uses the basic configuration on the Azure portal. Also called the "Cluster user" or "HTTP user."
* Secure Shell (SSH) username: Used to connect to the cluster through SSH. For more information, see [Use SSH with HDInsight](hdinsight-hadoop-linux-use-ssh-unix.md).
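As a sketch, these two accounts map to Azure CLI parameters when you create a cluster from the command line. The cluster name, passwords, and storage account are placeholders, and the defaults noted in the comments are assumptions.

```azurecli
# Placeholder names and passwords. --http-user corresponds to the cluster login
# username (default: admin); --ssh-user is the SSH account described above.
az hdinsight create \
    --name mycluster \
    --resource-group MyResourceGroup \
    --type spark \
    --http-user admin \
    --http-password '<cluster-login-password>' \
    --ssh-user sshuser \
    --ssh-password '<ssh-password>' \
    --storage-account mystorageacct
```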
The HTTP username has the following restrictions:
@@ -113,14 +113,14 @@ HDInsight clusters can use the following storage options:
For more information on storage options with HDInsight, see [Compare storage options for use with Azure HDInsight clusters](hdinsight-hadoop-compare-storage-options.md).
> [!WARNING]
- > Using an additional storage account in a different location from the HDInsight cluster is not supported.
+ > Using another storage account in a different location from the HDInsight cluster isn't supported.

- During configuration, for the default storage endpoint you specify a blob container of an Azure Storage account or Data Lake Storage. The default storage contains application and system logs. Optionally, you can specify additional linked Azure Storage accounts and Data Lake Storage accounts that the cluster can access. The HDInsight cluster and the dependent storage accounts must be in the same Azure location.
+ During configuration, for the default storage endpoint you specify a blob container of an Azure Storage account or Data Lake Storage. The default storage contains application and system logs. Optionally, you can specify more linked Azure Storage accounts and Data Lake Storage accounts that the cluster can access. The HDInsight cluster and the dependent storage accounts must be in the same Azure location.
- > Enabling secure storage transfer after creating a cluster can result in errors using your storage account and is not recommended. It is better to create a new cluster using a storage account with secure transfer already enabled.
+ > Enabling secure storage transfer after creating a cluster can result in errors using your storage account and isn't recommended. It's better to create a new cluster using a storage account with secure transfer already enabled.
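As an illustration, you can create a storage account with secure transfer already enabled before you create the cluster; the names, region, and SKU below are placeholders, and `--https-only` is the flag that requires secure transfer.

```azurecli
# Placeholder names; create the account before the cluster so secure transfer
# is enabled from the start.
az storage account create \
    --name mystorageacct \
    --resource-group MyResourceGroup \
    --location eastus \
    --sku Standard_LRS \
    --kind StorageV2 \
    --https-only true
```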
> [!Note]
> Azure HDInsight does not automatically transfer, move or copy your data stored in Azure Storage from one region to another.
@@ -132,27 +132,33 @@ You can create optional Hive or Apache Oozie metastores. However, not all cluste
For more information, see [Use external metadata stores in Azure HDInsight](./hdinsight-use-external-metadata-stores.md).
> [!IMPORTANT]
- > When you create a custom metastore, don't use dashes, hyphens, or spaces in the database name. This can cause the cluster creation process to fail.
+ > When you create a custom metastore, don't use dashes, hyphens, or spaces in the database name. Such characters can cause the cluster creation process to fail.
#### SQL database for Hive
If you want to retain your Hive tables after you delete an HDInsight cluster, use a custom metastore. You can then attach the metastore to another HDInsight cluster.
An HDInsight metastore that is created for one HDInsight cluster version can't be shared across different HDInsight cluster versions. For a list of HDInsight versions, see [Supported HDInsight versions](hdinsight-component-versioning.md#supported-hdinsight-versions).
+ You can use Managed Identity to authenticate with the SQL database for Hive. For more information, see [Use Managed Identity for SQL Database authentication in Azure HDInsight](./use-managed-identity-for-sql-database-authentication-in-azure-hdinsight.md).
+
> [!IMPORTANT]
> The default metastore provides an Azure SQL Database with a **basic tier 5 DTU limit (not upgradeable)**. It's suitable for basic testing purposes. For large or production workloads, we recommend migrating to an external metastore.
#### SQL database for Oozie
To increase performance when using Oozie, use a custom metastore. A metastore can also provide access to Oozie job data after you delete your cluster.
+ You can use Managed Identity to authenticate with the SQL database for Oozie. For more information, see [Use Managed Identity for SQL Database authentication in Azure HDInsight](./use-managed-identity-for-sql-database-authentication-in-azure-hdinsight.md).
+
#### SQL database for Ambari
- Ambari is used to monitor HDInsight clusters, make configuration changes, and store cluster management information as well as job history. The custom Ambari DB feature allows you to deploy a new cluster and setup Ambari in an external database that you manage. For more information, see [Custom Ambari DB](./hdinsight-custom-ambari-db.md).
+ Ambari is used to monitor HDInsight clusters, make configuration changes, and store cluster management information and job history. The custom Ambari DB feature allows you to deploy a new cluster and set up Ambari in an external database that you manage. For more information, see [Custom Ambari DB](./hdinsight-custom-ambari-db.md).
+
+ You can use Managed Identity to authenticate with the SQL database for Ambari. For more information, see [Use Managed Identity for SQL Database authentication in Azure HDInsight](./use-managed-identity-for-sql-database-authentication-in-azure-hdinsight.md).
> [!IMPORTANT]
- > You cannot reuse a custom Oozie metastore. To use a custom Oozie metastore, you must provide an empty Azure SQL Database when creating the HDInsight cluster.
+ > You can't reuse a custom Oozie metastore. To use a custom Oozie metastore, you must provide an empty Azure SQL Database when creating the HDInsight cluster.
## Security + networking
@@ -194,7 +200,7 @@ For more information, see [Managed identities in Azure HDInsight](./hdinsight-ma
:::image type="content" source="./media/hdinsight-hadoop-provision-linux-clusters/azure-portal-cluster-configuration-disk-attach.png" alt-text="HDInsight choose your node size.":::
- You're billed for node usage for as long as the cluster exists. Billing starts when a cluster is created and stops when the cluster is deleted. Clusters can't be de-allocated or put on hold.
+ You're billed for node usage for as long as the cluster exists. Billing starts when a cluster is created and stops when the cluster is deleted. Clusters can't be deallocated or put on hold.
### Node configuration
@@ -208,7 +214,7 @@ Each cluster type has its own number of nodes, terminology for nodes, and defaul
For more information, see [Default node configuration and virtual machine sizes for clusters](hdinsight-supported-node-configuration.md) in "What are the Hadoop components and versions in HDInsight?"
- The cost of HDInsight clusters is determined by the number of nodes and the virtual machines sizes for the nodes.
+ The cost of HDInsight clusters is determined by the number of nodes and the virtual machine sizes for the nodes.
Different cluster types have different node types, numbers of nodes, and node sizes:
@@ -245,9 +251,9 @@ For more information, see [Sizes for virtual machines](/azure/virtual-machines/s
> The added disks are only configured for node manager local directories and **not for datanode directories**
- HDInsight cluster comes with pre-defined disk space based on SKU. If you run some large applications, can lead to insufficient disk space, with disk full error - `LinkId=221672#ERROR_NOT_ENOUGH_DISK_SPACE` and job failures.
+ HDInsight clusters come with predefined disk space based on the SKU. Running large applications can lead to insufficient disk space, a disk full error (`LinkId=221672#ERROR_NOT_ENOUGH_DISK_SPACE`), and job failures.
- More discs can be added to the cluster using the new feature **NodeManager**’s local directory. At the time of Hive and Spark cluster creation, the number of discs can be selected and added to the worker nodes. The selected disk, which will be of size 1TB each, would be part of **NodeManager**'s local directories.
+ More disks can be added to the cluster by using the new **NodeManager** local directory feature. When you create a Hive or Spark cluster, you can select the number of disks to add to the worker nodes. Each selected disk is 1 TB in size and becomes part of the **NodeManager**'s local directories.
1. From **Configuration + pricing** tab
1. Select **Enable managed disk** option
@@ -260,18 +266,18 @@ You can verify the number of disks from **Review + create** tab, under **Cluster
An HDInsight application is an application that users can install on a Linux-based HDInsight cluster. You can use applications provided by Microsoft or third parties, or applications that you develop yourself. For more information, see [Install third-party Apache Hadoop applications on Azure HDInsight](hdinsight-apps-install-applications.md).
- Most of the HDInsight applications are installed on an empty edge node. An empty edge node is a Linux virtual machine with the same client tools installed and configured as in the head node. You can use the edge node for accessing the cluster, testing your client applications, and hosting your client applications. For more information, see [Use empty edge nodes in HDInsight](hdinsight-apps-use-edge-node.md).
+ Most of the HDInsight applications are installed on an empty edge node. An empty edge node is a Linux virtual machine with the same client tools installed and configured as in the head node. You can use the edge node for accessing the cluster, testing your client applications, and hosting your client applications. For more information, see [Use empty edge nodes in HDInsight](hdinsight-apps-use-edge-node.md).
### Script actions
- You can install additional components or customize cluster configuration by using scripts during creation. Such scripts are invoked via **Script Action**, which is a configuration option that can be used from the Azure portal, HDInsight Windows PowerShell cmdlets, or the HDInsight .NET SDK. For more information, see [Customize HDInsight cluster using Script Action](hdinsight-hadoop-customize-cluster-linux.md).
+ You can install more components or customize cluster configuration by using scripts during creation. Such scripts are invoked via **Script Action**, which is a configuration option that can be used from the Azure portal, HDInsight Windows PowerShell cmdlets, or the HDInsight .NET SDK. For more information, see [Customize HDInsight cluster using Script Action](hdinsight-hadoop-customize-cluster-linux.md).
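A script action can also be applied from the Azure CLI, as in this hedged sketch; the script name, script URI, and roles are placeholders.

```azurecli
# Placeholder values; --persist-on-success keeps the script so it also runs
# on worker nodes added later when the cluster is scaled up.
az hdinsight script-action execute \
    --cluster-name mycluster \
    --resource-group MyResourceGroup \
    --name install-custom-component \
    --script-uri https://example.blob.core.windows.net/scripts/install.sh \
    --roles headnode workernode \
    --persist-on-success
```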
Some native Java components, like Apache Mahout and Cascading, can be run on the cluster as Java Archive (JAR) files. These JAR files can be distributed to Azure Storage and submitted to HDInsight clusters with Hadoop job submission mechanisms. For more information, see [Submit Apache Hadoop jobs programmatically](hadoop/submit-apache-hadoop-jobs-programmatically.md).
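For instance, after connecting to the cluster over SSH, a JAR can be submitted with the standard Hadoop mechanism; the JAR path, class name, and storage paths below are placeholders.

```bash
# Run from an SSH session on the cluster head node. Placeholder JAR, class, and paths.
hadoop jar /example/jars/myapp.jar com.example.MyMainClass \
    'wasbs:///example/data/input' 'wasbs:///example/data/output'
```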
> [!NOTE]
> If you have issues deploying JAR files to HDInsight clusters, or calling JAR files on HDInsight clusters, contact [Microsoft Support](https://azure.microsoft.com/support/options/).
>
- > Cascading is not supported by HDInsight and is not eligible for Microsoft Support. For lists of supported components, see [What's new in the cluster versions provided by HDInsight](hdinsight-component-versioning.md).
+ > Cascading isn't supported by HDInsight and isn't eligible for Microsoft Support. For lists of supported components, see [What's new in the cluster versions provided by HDInsight](hdinsight-component-versioning.md).
Sometimes, you want to configure the following configuration files during the creation process: