Common capacity issue errors and mitigation techniques are provided in this section.
#### Error: The deployment would exceed the quota of '800'
Azure has a quota limit of 800 deployments per resource group. Different quotas are applied per resource group, subscription, account, or other scopes. For example, your subscription may be configured to limit the number of cores for a region. If you try to deploy a virtual machine that has more cores than the permitted amount, you receive an error message that states that the quota was exceeded.
To resolve this issue, delete the deployments that are no longer needed by using the Azure portal, CLI, or PowerShell.
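Assuming you use the Azure CLI, cleaning up old deployment records could look like the following sketch; the resource group and deployment names are placeholders:

```shell
# List existing deployments in a resource group, with their timestamps,
# to decide which records are no longer needed
az deployment group list --resource-group myResourceGroup \
    --query "[].{name:name, timestamp:properties.timestamp}" --output table

# Delete a deployment record that is no longer needed
# (this removes only the deployment history entry, not the deployed resources)
az deployment group delete --resource-group myResourceGroup --name myOldDeployment
```

Deleting deployment records in a script like this frees room under the 800-deployments-per-resource-group limit without touching the resources themselves.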
For more information, see [Resolve errors for resource quotas](https://docs.microsoft.com/azure/azure-resource-manager/resource-manager-quota-errors).
#### Error: The maximum node exceeded the available cores in this region
Your subscription may be configured to limit the number of cores for a region. If you try to deploy a resource that has more cores than the permitted amount, you receive an error message that states that the quota was exceeded.
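Before requesting a quota increase, you can check how close you are to the regional limits. Assuming you use the Azure CLI, a sketch could look like this (the region name is a placeholder):

```shell
# Show compute core quotas and current usage for a region
az vm list-usage --location eastus --output table

# Show HDInsight-specific core usage for the same region
az hdinsight list-usage --location eastus
```

Comparing the `currentValue` and `limit` columns tells you whether the failed deployment would have exceeded the regional core quota.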
To request a quota increase, follow these steps:
* Subscription: The subscription that you want to modify
* Quota type: HDInsight
For more information, see [Create a support ticket to increase cores](https://docs.microsoft.com/azure/hdinsight/hdinsight-capacity-planning#quotas).
### What are the various types of nodes in an HDInsight cluster?
Support for individual components can also vary by cluster type. For example, Spark is not supported on a Kafka cluster, and vice-versa.
For applications or services outside the cluster creation process, contact the respective vendor or service provider for support. You can also use community sites, such as the [MSDN forum for HDInsight](https://social.msdn.microsoft.com/Forums/home?forum=hdinsight) and [Stack Overflow](https://stackoverflow.com/questions/tagged/hdinsight). Apache projects also have project sites on the [Apache website](https://apache.org/); one example is [Hadoop](https://hadoop.apache.org/).
For more questions that are related to Azure support, review the [Azure Support FAQ](https://azure.microsoft.com/en-us/support/faq/).
3. In the User Settings window, select the new timezone from the Timezone drop-down list, and then click Save.
For a default metastore: The default metastore is part of the cluster lifecycle.
For a custom metastore: The lifecycle of the metastore is not tied to a cluster’s lifecycle. Therefore, you can create and delete clusters without losing metadata. Metadata such as your Hive schemas persists even after you delete and re-create the HDInsight cluster.
For more information, see [Use external metadata stores in Azure HDInsight](https://docs.microsoft.com/azure/hdinsight/hdinsight-use-external-metadata-stores).
### Does migrating a Hive metastore also migrate the default policies of the Ranger database?
### How can I estimate the size of a Hive metastore database?
A Hive metastore stores the metadata for the data sources that are used by the Hive server, so its size requirements depend on the number and complexity of those data sources and cannot be estimated upfront. As outlined in [Hive metastore best practices](https://docs.microsoft.com/azure/hdinsight/hdinsight-use-external-metadata-stores#hive-metastore-best-practices), you can start at an S2 tier, which provides 50 DTU and 250 GB of storage. If you encounter a bottleneck, you can scale up the database.
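If the metastore database does become a bottleneck, scaling it up can be done in place. Assuming you use the Azure CLI, and with placeholder resource names, a sketch could look like this:

```shell
# Check the current service objective (tier) of the metastore database
az sql db show --resource-group myResourceGroup --server mysqlserver \
    --name myhivemetastore --query "currentServiceObjectiveName"

# Scale the database up one tier, for example from S2 to S3
az sql db update --resource-group myResourceGroup --server mysqlserver \
    --name myhivemetastore --service-objective S3
```

The scale operation is online, so attached clusters can keep using the metastore while it completes.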
### Do you support any database other than Azure SQL Database as an external metastore?
No, Microsoft supports only Azure SQL Database as an external custom metastore.
### Can I share a metastore across multiple clusters?
### How can I pull login activity shown in Ranger?
For auditing requirements, Microsoft recommends enabling Azure Monitor logs as described in [Use Azure Monitor logs to monitor HDInsight clusters](https://docs.microsoft.com/azure/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial).
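Once Azure Monitor logs are enabled, you can query the collected audit records from the CLI. In the sketch below, the workspace GUID is a placeholder, and the `ranger_audits_CL` table name is an assumption that depends on how your cluster's monitoring is configured:

```shell
# Query the Log Analytics workspace for recent Ranger audit records
# (workspace ID and table name are placeholders; verify the table name
# in your own workspace before relying on this query)
az monitor log-analytics query \
    --workspace "00000000-0000-0000-0000-000000000000" \
    --analytics-query "ranger_audits_CL | take 50" \
    --output table
```

The same query can be run interactively in the Log Analytics blade of the Azure portal if you prefer not to use the CLI.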
### Can I disable Clamscan on my cluster?
You can transfer files between a blob container and an HDInsight head node by running a shell script that resembles the following on your head node:
```
# Download each file listed in filenames.txt from blob storage to the head node
for i in $(cat filenames.txt)
do
  hadoop fs -get $i <local destination>
done
```
### Are there any Ranger plugins for storage?
Currently, no Ranger plugins exist for blob storage, Azure Data Lake Storage (ADLS) Gen1, or Azure Data Lake Storage Gen2. For ESP clusters, you should use ADLS as a best practice, because you can at least set fine-grained permissions manually at the file system level by using HDFS tools. Also, ESP clusters do some of the file system access control by using Azure Active Directory at the cluster level when you use ADLS.
You should be able to use Azure Storage Explorer to assign data access policies to the security groups where your users are located.
For information about your subscription after it's cancelled, see [What happens after I cancel my subscription?](https://docs.microsoft.com/azure/billing/billing-how-to-cancel-azure-subscription#what-happens-after-i-cancel-my-subscription)
## Hive

### Why does the Hive version appear as 1.2.1000 instead of 2.1 in the Ambari UI even though I am running an HDInsight 3.6 cluster?
Although only 1.2 appears in the Ambari UI, HDInsight 3.6 contains both Hive 1.2 and Hive 2.1.
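Assuming you have an SSH session open to a cluster head node, you can verify this yourself. The `/usr/hdp/current` path below reflects the standard HDP component layout on HDInsight 3.6 images; exact directory names may vary by cluster image:

```shell
# Print the Hive client version installed on the head node
# (expected to report the 1.2.x line on HDInsight 3.6)
hive --version

# List the HDP component links; both Hive builds typically appear here
ls /usr/hdp/current/ | grep -i hive
```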
## Other FAQ
### What does HDInsight offer in terms of real-time stream processing capabilities?
### Is there a way to dynamically terminate the head node of the cluster when the cluster is idle for a specific period?
No, you cannot dynamically terminate the head node of a cluster. For this scenario, consider Azure Data Factory, which can create on-demand HDInsight clusters and delete them after an idle time-to-live period.