articles/hdinsight/hdinsight-custom-ambari-db.md
@@ -10,7 +10,7 @@ ms.date: 12/27/2024
Apache Ambari simplifies the management and monitoring of an Apache Hadoop cluster. Ambari provides an easy to use web UI and REST API. Ambari is included on HDInsight clusters, and is used to monitor the cluster and make configuration changes.
-In normal cluster creation, as described in other articles such as [Set up clusters in HDInsight](hdinsight-hadoop-provision-linux-clusters.md), Ambari is deployed in an [S0 Azure SQL Database](/azure/azure-sql/database/resource-limits-dtu-single-databases#standard-service-tier)that is managed by HDInsight and is not accessible to users.
+In normal cluster creation, as described in other articles such as [Set up clusters in HDInsight](hdinsight-hadoop-provision-linux-clusters.md), Ambari is deployed in an [S0 Azure SQL Database](/azure/azure-sql/database/resource-limits-dtu-single-databases#standard-service-tier) managed by HDInsight and isn't accessible to users.
The custom Ambari DB feature allows you to deploy a new cluster and set up Ambari in an external database that you manage. The deployment is done with an Azure Resource Manager template. This feature has the following benefits:
@@ -25,11 +25,11 @@ The remainder of this article discusses the following points:
## Custom Ambari DB requirements
-You can deploy a custom Ambari DB with all cluster types and versions. Multiple clusters cannot use the same Ambari DB.
+You can deploy a custom Ambari DB with all cluster types and versions. Multiple clusters can't use the same Ambari DB.
The custom Ambari DB has the following other requirements:
-- The name of the database cannot contain hyphens or spaces
+- The name of the database can't contain hyphens or spaces
- You must have an existing Azure SQL DB server and database.
- The database that you provide for Ambari setup must be empty. There should be no tables in the default dbo schema.
- The user used to connect to the database should have **SELECT, CREATE TABLE, INSERT, UPDATE, DELETE, ALTER ON SCHEMA and REFERENCES ON SCHEMA** permissions on the database.
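As a minimal sketch of meeting these requirements with the Azure CLI (the server name, resource group, credentials, database name, and service objective below are placeholders), you can create an empty database whose name contains no hyphens or spaces:

```azurecli
# Create a logical SQL server to host the custom Ambari DB (placeholder names and credentials).
az sql server create \
    --name contosoambarisrv \
    --resource-group MyResourceGroup \
    --location eastus \
    --admin-user sqladmin \
    --admin-password '<strong-password>'

# Create an empty database for Ambari; the name contains no hyphens or spaces.
az sql db create \
    --resource-group MyResourceGroup \
    --server contosoambarisrv \
    --name ambaridb \
    --service-objective S2
```

Before cluster creation, grant the connecting user the permissions listed above on this database.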
@@ -50,7 +50,7 @@ When you host your Apache Ambari DB in an external database, remember the follow
- You're responsible for the extra costs of the Azure SQL DB that holds Ambari.
- Back up your custom Ambari DB periodically. Azure SQL Database generates backups automatically, but the backup retention time-frame varies. For more information, see [Learn about automatic SQL Database backups](/azure/azure-sql/database/automated-backups-overview).
-- Don't change the custom Ambari DB password after the HDInsight cluster reaches the **Running** state. It is not supported.
+- Don't change the custom Ambari DB password after the HDInsight cluster reaches the **Running** state. It isn't supported.
> [!NOTE]
> You can use Managed Identity to authenticate with SQL database for Ambari. For more information, see [Use Managed Identity for SQL Database authentication in Azure HDInsight](./use-managed-identity-for-sql-database-authentication-in-azure-hdinsight.md)
@@ -73,7 +73,7 @@ az deployment group create --name HDInsightAmbariDBDeployment \
> [!WARNING]
-> Please use the following recommended SQL DB and Headnode VM for your HDInsight cluster. Please don't use default Ambari DB (S0) for any production environment.
+> Use the following recommended SQL DB and Headnode VM for your HDInsight cluster. Don't use the default Ambari DB (S0) for any production environment.
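The deployment itself is driven by an Azure Resource Manager template through `az deployment group create`. A rough sketch, assuming a local template and parameter file (file names and resource group are placeholders):

```azurecli
# Deploy an HDInsight cluster that uses the custom Ambari DB from an ARM template
# (the template and parameter file names are placeholders).
az deployment group create \
    --name HDInsightAmbariDBDeployment \
    --resource-group MyResourceGroup \
    --template-file azuredeploy.json \
    --parameters @azuredeploy.parameters.json
```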
# Set up clusters in HDInsight with Apache Hadoop, Apache Spark, Apache Kafka, and more
@@ -16,9 +16,9 @@ Learn how to set up and configure Apache Hadoop, Apache Spark, Apache Kafka, Int
A Hadoop cluster consists of several virtual machines (nodes) that are used for distributed processing of tasks. Azure HDInsight handles implementation details of installation and configuration of individual nodes, so you only have to provide general configuration information.
> [!IMPORTANT]
-> HDInsight cluster billing starts once a cluster is created and stops when the cluster is deleted. Billing is pro-rated per minute, so you should always delete your cluster when it is no longer in use. Learn how to [delete a cluster.](hdinsight-delete-cluster.md)
+> HDInsight cluster billing starts once a cluster is created and stops when the cluster is deleted. Billing is pro-rated per minute, so you should always delete your cluster when it's no longer in use. Learn how to [delete a cluster.](hdinsight-delete-cluster.md)
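For example, an idle cluster can be deleted with the Azure CLI (cluster name and resource group are placeholders); data in linked storage accounts isn't removed:

```azurecli
# Delete the cluster to stop the per-minute billing; linked storage accounts are not deleted.
az hdinsight delete --name mycluster --resource-group MyResourceGroup
```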
-If you're using multiple clusters together, you'll want to create a virtual network, and if you're using a Spark cluster you'll also want to use the Hive Warehouse Connector. For more information, see [Plan a virtual network for Azure HDInsight](./hdinsight-plan-virtual-network-deployment.md) and [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](interactive-query/apache-hive-warehouse-connector.md).
+If you're using multiple clusters together, you want to create a virtual network, and if you're using a Spark cluster you also want to use the Hive Warehouse Connector. For more information, see [Plan a virtual network for Azure HDInsight](./hdinsight-plan-virtual-network-deployment.md) and [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](interactive-query/apache-hive-warehouse-connector.md).
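A rough sketch of creating such a virtual network with the Azure CLI (names, location, and address ranges are placeholders):

```azurecli
# Create a virtual network and a subnet that multiple HDInsight clusters can share
# (placeholder names and address ranges).
az network vnet create \
    --name hdivnet \
    --resource-group MyResourceGroup \
    --location eastus \
    --address-prefixes 10.0.0.0/16 \
    --subnet-name default \
    --subnet-prefixes 10.0.0.0/24
```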
## Cluster setup methods
@@ -64,7 +64,7 @@ You don't need to specify the cluster location explicitly: The cluster is in the
Azure HDInsight currently provides the following cluster types, each with a set of components to provide certain functionalities.
> [!IMPORTANT]
-> HDInsight clusters are available in various types, each for a single workload or technology. There is no supported method to create a cluster that combines multiple types, such HBase on one cluster. If your solution requires technologies that are spread across multiple HDInsight cluster types, an [Azure virtual network](../virtual-network/index.yml) can connect the required cluster types.
+> HDInsight clusters are available in various types, each for a single workload or technology. There's no supported method to create a cluster that combines multiple types, such as HBase on one cluster. If your solution requires technologies that are spread across multiple HDInsight cluster types, an [Azure virtual network](../virtual-network/index.yml) can connect the required cluster types.
| Cluster type | Functionality |
| --- | --- |
@@ -82,7 +82,7 @@ Choose the version of HDInsight for this cluster. For more information, see [Sup
With HDInsight clusters, you can configure two user accounts during cluster creation:
-* Cluster login username: The default username is *admin*. It uses the basic configuration on the Azure portal. Sometimes it's called "Cluster user," or "HTTP user."
+* Cluster login username: The default username is *admin*. It uses the basic configuration on the Azure portal. It's also called the "Cluster user" or "HTTP user."
* Secure Shell (SSH) username: Used to connect to the cluster through SSH. For more information, see [Use SSH with HDInsight](hdinsight-hadoop-linux-use-ssh-unix.md).
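Both accounts can be supplied when creating a cluster from the command line. A minimal sketch with the Azure CLI (cluster name, storage account, and passwords are placeholders):

```azurecli
# Create a Spark cluster, setting the cluster login (HTTP) account and the SSH account
# (placeholder names, passwords, and storage account).
az hdinsight create \
    --name mycluster \
    --resource-group MyResourceGroup \
    --type spark \
    --http-user admin \
    --http-password '<cluster-login-password>' \
    --ssh-user sshuser \
    --ssh-password '<ssh-password>' \
    --storage-account mystorageaccount
```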
The HTTP username has the following restrictions:
@@ -113,14 +113,14 @@ HDInsight clusters can use the following storage options:
For more information on storage options with HDInsight, see [Compare storage options for use with Azure HDInsight clusters](hdinsight-hadoop-compare-storage-options.md).
> [!WARNING]
-> Using an additional storage account in a different location from the HDInsight cluster is not supported.
+> Using an additional storage account in a different location from the HDInsight cluster isn't supported.
-During configuration, for the default storage endpoint you specify a blob container of an Azure Storage account or Data Lake Storage. The default storage contains application and system logs. Optionally, you can specify additional linked Azure Storage accounts and Data Lake Storage accounts that the cluster can access. The HDInsight cluster and the dependent storage accounts must be in the same Azure location.
+During configuration, for the default storage endpoint you specify a blob container of an Azure Storage account or Data Lake Storage. The default storage contains application and system logs. Optionally, you can specify more linked Azure Storage accounts and Data Lake Storage accounts that the cluster can access. The HDInsight cluster and the dependent storage accounts must be in the same Azure location.
-> Enabling secure storage transfer after creating a cluster can result in errors using your storage account and is not recommended. It is better to create a new cluster using a storage account with secure transfer already enabled.
+> Enabling secure storage transfer after creating a cluster can result in errors using your storage account and isn't recommended. It's better to create a new cluster using a storage account with secure transfer already enabled.
> [!Note]
> Azure HDInsight does not automatically transfer, move or copy your data stored in Azure Storage from one region to another.
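As an illustrative sketch (account name, resource group, and location are placeholders), you can create the default storage account with secure transfer already enabled, in the same region that the cluster will use:

```azurecli
# Create a storage account with secure transfer (HTTPS only) enabled before the cluster exists,
# so the cluster never has to be switched over later (placeholder values).
az storage account create \
    --name mydefaultstorage \
    --resource-group MyResourceGroup \
    --location eastus \
    --sku Standard_LRS \
    --kind StorageV2 \
    --https-only true
```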
@@ -132,7 +132,7 @@ You can create optional Hive or Apache Oozie metastores. However, not all cluste
For more information, see [Use external metadata stores in Azure HDInsight](./hdinsight-use-external-metadata-stores.md).
> [!IMPORTANT]
-> When you create a custom metastore, don't use dashes, hyphens, or spaces in the database name. This can cause the cluster creation process to fail.
+> When you create a custom metastore, don't use dashes, hyphens, or spaces in the database name. Such characters can cause the cluster creation process to fail.
#### SQL database for Hive
@@ -153,12 +153,12 @@ You can use Managed Identity to authenticate with SQL database for Oozie. For mo
#### SQL database for Ambari
-Ambari is used to monitor HDInsight clusters, make configuration changes, and store cluster management information as well as job history. The custom Ambari DB feature allows you to deploy a new cluster and setup Ambari in an external database that you manage. For more information, see [Custom Ambari DB](./hdinsight-custom-ambari-db.md).
+Ambari is used to monitor HDInsight clusters, make configuration changes, and store cluster management information and job history. The custom Ambari DB feature allows you to deploy a new cluster and set up Ambari in an external database that you manage. For more information, see [Custom Ambari DB](./hdinsight-custom-ambari-db.md).
You can use Managed Identity to authenticate with SQL database for Ambari. For more information, see [Use Managed Identity for SQL Database authentication in Azure HDInsight](./use-managed-identity-for-sql-database-authentication-in-azure-hdinsight.md)
> [!IMPORTANT]
-> You cannot reuse a custom Oozie metastore. To use a custom Oozie metastore, you must provide an empty Azure SQL Database when creating the HDInsight cluster.
+> You can't reuse a custom Oozie metastore. To use a custom Oozie metastore, you must provide an empty Azure SQL Database when creating the HDInsight cluster.
## Security + networking
@@ -253,7 +253,7 @@ For more information, see [Sizes for virtual machines](/azure/virtual-machines/s
An HDInsight cluster comes with predefined disk space based on the SKU. Running some large applications can lead to insufficient disk space, a disk full error (`LinkId=221672#ERROR_NOT_ENOUGH_DISK_SPACE`), and job failures.
-More discs can be added to the cluster using the new feature **NodeManager**’s local directory. At the time of Hive and Spark cluster creation, the number of discs can be selected and added to the worker nodes. The selected disk, which will be of size 1TB each, would be part of **NodeManager**'s local directories.
+More disks can be added to the cluster by using the new **NodeManager** local directory feature. At the time of Hive and Spark cluster creation, the number of disks can be selected and added to the worker nodes. Each selected disk is 1 TB in size and becomes part of **NodeManager**'s local directories.
1. From **Configuration + pricing** tab
1. Select **Enable managed disk** option
@@ -266,18 +266,18 @@ You can verify the number of disks from **Review + create** tab, under **Cluster
An HDInsight application is an application that users can install on a Linux-based HDInsight cluster. You can use applications provided by Microsoft or third parties, or applications developed by you. For more information, see [Install third-party Apache Hadoop applications on Azure HDInsight](hdinsight-apps-install-applications.md).
Most of the HDInsight applications are installed on an empty edge node. An empty edge node is a Linux virtual machine with the same client tools installed and configured as in the head node. You can use the edge node for accessing the cluster, testing your client applications, and hosting your client applications. For more information, see [Use empty edge nodes in HDInsight](hdinsight-apps-use-edge-node.md).
### Script actions
-You can install additional components or customize cluster configuration by using scripts during creation. Such scripts are invoked via **Script Action**, which is a configuration option that can be used from the Azure portal, HDInsight Windows PowerShell cmdlets, or the HDInsight .NET SDK. For more information, see [Customize HDInsight cluster using Script Action](hdinsight-hadoop-customize-cluster-linux.md).
+You can install more components or customize cluster configuration by using scripts during creation. Such scripts are invoked via **Script Action**, which is a configuration option that can be used from the Azure portal, HDInsight Windows PowerShell cmdlets, or the HDInsight .NET SDK. For more information, see [Customize HDInsight cluster using Script Action](hdinsight-hadoop-customize-cluster-linux.md).
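Script actions can also be run against an existing cluster from the Azure CLI. A rough sketch (cluster name, action name, and script URI are placeholders):

```azurecli
# Run a script action on the head and worker nodes of an existing cluster and persist it,
# so it's reapplied to worker nodes that are added later (placeholder names and URI).
az hdinsight script-action execute \
    --cluster-name mycluster \
    --resource-group MyResourceGroup \
    --name installcustompkg \
    --script-uri https://example.blob.core.windows.net/scripts/install.sh \
    --roles headnode workernode \
    --persist-on-success
```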
Some native Java components, like Apache Mahout and Cascading, can be run on the cluster as Java Archive (JAR) files. These JAR files can be distributed to Azure Storage and submitted to HDInsight clusters with Hadoop job submission mechanisms. For more information, see [Submit Apache Hadoop jobs programmatically](hadoop/submit-apache-hadoop-jobs-programmatically.md).
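For example, after connecting to the cluster head node over SSH, a JAR can be submitted with `yarn jar`; the example JAR path and the `wasbs://` input and output paths below are assumptions and vary by cluster version and storage setup:

```bash
# Connect to the cluster head node (placeholder cluster name).
ssh sshuser@mycluster-ssh.azurehdinsight.net

# On the cluster: submit the word-count example that ships with Hadoop.
# The JAR path and the wasbs:// paths are illustrative.
yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar \
    wordcount \
    wasbs:///example/data/gutenberg/davinci.txt \
    wasbs:///example/wordcountoutput
```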
> [!NOTE]
> If you have issues deploying JAR files to HDInsight clusters, or calling JAR files on HDInsight clusters, contact [Microsoft Support](https://azure.microsoft.com/support/options/).
>
-> Cascading is not supported by HDInsight and is not eligible for Microsoft Support. For lists of supported components, see [What's new in the cluster versions provided by HDInsight](hdinsight-component-versioning.md).
+> Cascading isn't supported by HDInsight and isn't eligible for Microsoft Support. For lists of supported components, see [What's new in the cluster versions provided by HDInsight](hdinsight-component-versioning.md).
Sometimes, you want to configure the following configuration files during the creation process: