# Query Apache Hive through the JDBC driver in HDInsight
**Symptoms**: HDInsight unexpectedly drops the connection when you try to download a large amount of data (say, several GBs) through JDBC/ODBC.
**Cause**: This error is caused by a limitation on the Gateway nodes. When you get data through JDBC/ODBC, all data passes through the Gateway node. However, the gateway isn't designed to transfer large amounts of data, so it might close the connection if it can't handle the traffic.
**Resolution**: Avoid using the JDBC/ODBC driver to download large amounts of data. Copy the data directly from blob storage instead.
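For example, the following is a minimal sketch (not from the article) of pulling query output files straight from the cluster's default storage account with the `azure-storage-blob` Python package; the connection string, container name, and folder prefix are placeholders you would replace with your own values.

```python
# Minimal sketch: download query output files directly from blob storage
# instead of streaming rows through the HDInsight gateway via JDBC/ODBC.
# The connection string, container, and prefix below are placeholders.
from azure.storage.blob import ContainerClient

conn_str = "<storage-account-connection-string>"   # placeholder
container = "<cluster-default-container>"          # placeholder
prefix = "hive/warehouse/mydb.db/mytable/"         # hypothetical output folder

client = ContainerClient.from_connection_string(conn_str, container)

for blob in client.list_blobs(name_starts_with=prefix):
    # Stream each file from storage to local disk, bypassing the gateway.
    local_name = blob.name.replace("/", "_")
    with open(local_name, "wb") as f:
        client.download_blob(blob.name).readinto(f)
```

AzCopy or the `az storage` CLI works equally well; the point is that bulk data is read from storage directly rather than funneled through the gateway nodes.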
**articles/hdinsight/hadoop/apache-hadoop-run-custom-programs.md**
description: When and how to run custom Apache MapReduce programs on Azure HDInsight
ms.service: azure-hdinsight
ms.topic: how-to
ms.custom: hdinsightactive
ms.date: 02/03/2025
---
# Run custom MapReduce programs
Apache Hadoop-based big data systems such as HDInsight enable data processing using a range of query mechanisms, summarized in the following table:
| Query mechanism | Advantages | Considerations |
| --- | --- | --- |
|**Apache Hive using HiveQL**| <ul><li>An excellent solution for batch processing and analysis of large amounts of immutable data, for data summarization, and for on-demand querying. It uses a familiar SQL-like syntax.</li><li>It can be used to produce persistent tables of data that can be easily partitioned and indexed.</li><li>Multiple external tables and views can be created over the same data.</li><li>It supports a simple data warehouse implementation that provides massive scale-out and fault-tolerance capabilities for data storage and processing.</li></ul> | <ul><li>It requires the source data to have at least some identifiable structure.</li><li>It isn't suitable for real-time queries and row-level updates. It's best used for batch jobs over large sets of data.</li><li>It might not be able to carry out some types of complex processing tasks.</li></ul> |
|**Apache Pig using Pig Latin**| <ul><li>An excellent solution for manipulating data as sets, merging and filtering datasets, applying functions to records or groups of records, and for restructuring data by defining columns, by grouping values, or by converting columns to rows.</li><li>It can use a workflow-based approach as a sequence of operations on data.</li></ul> | <ul><li>SQL users may find Pig Latin less familiar and more difficult to use than HiveQL.</li><li>The default output is usually a text file, so it can be more difficult to use with visualization tools such as Excel. Typically, you'll layer a Hive table over the output.</li></ul> |
|**Custom MapReduce**| <ul><li>It provides full control over the map and reduce phases, and over execution.</li><li>It allows queries to be optimized to achieve maximum performance from the cluster, or to minimize the load on the servers and the network.</li><li>The components can be written in a range of well-known languages.</li></ul> | <ul><li>It's more difficult than using Pig or Hive because you must create your own map and reduce components.</li><li>Processes that require joining sets of data are more difficult to implement.</li><li>Even though there are test frameworks available, debugging code is more complex than a normal application because the code runs as a batch job under the control of the Hadoop job scheduler.</li></ul> |
|`Apache HCatalog`| <ul><li>It abstracts the path details of storage, making administration easier and removing the need for users to know where the data is stored.</li><li>It enables notification of events such as data availability, allowing other tools such as Oozie to detect when operations have occurred.</li><li>It exposes a relational view of data, including partitioning by key, and makes the data easy to access.</li></ul> | <ul><li>It supports RCFile, CSV text, JSON text, SequenceFile, and ORC file formats by default, but you may need to write a custom SerDe for other formats.</li><li>`HCatalog` isn't thread-safe.</li><li>There are some restrictions on the data types for columns when using the `HCatalog` loader in Pig scripts. For more information, see [HCatLoader Data Types](https://cwiki.apache.org/confluence/display/Hive/HCatalog%20LoadStore#HCatalogLoadStore-HCatLoaderDataTypes) in the Apache `HCatalog` documentation.</li></ul> |
Typically, you use the simplest of these approaches that can provide the results you require. For example, you may be able to achieve such results by using just Hive, but for more complex scenarios you may need to use Pig, or even write your own map and reduce components. You may also decide, after experimenting with Hive or Pig, that custom map and reduce components can provide better performance by allowing you to fine-tune and optimize the processing.
## Custom MapReduce components
MapReduce code consists of two separate functions implemented as **map** and **reduce** components. The **map** component is run in parallel on multiple cluster nodes, each node applying the mapping to the node's own subset of the data. The **reduce** component collates and summarizes the results from all the map functions. For more information on these two components, see [Use MapReduce in Hadoop on HDInsight](hdinsight-use-mapreduce.md).
In most HDInsight processing scenarios, it's simpler and more efficient to use a higher-level abstraction such as Pig or Hive. You can also create custom map and reduce components for use within Hive scripts to perform more sophisticated processing.
Custom MapReduce components are typically written in Java. Hadoop also provides a streaming interface that allows you to use components developed in other languages such as C#, F#, Visual Basic, Python, and JavaScript.
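As a minimal illustration of the streaming model (a generic word count, not taken from the walkthrough linked below), a mapper and reducer in Python could look like the following; Hadoop streaming feeds input records to the mapper on stdin and pipes the sorted mapper output into the reducer. The file names `mapper.py` and `reducer.py` are placeholders.

```python
#!/usr/bin/env python3
# mapper.py -- emit "word<TAB>1" for every word read from stdin.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- sum the counts for each word. Hadoop streaming delivers the
# mapper output sorted by key, so identical words arrive on adjacent lines.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)
if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

You would typically submit scripts like these with the Hadoop streaming JAR that ships on the cluster, passing them as the `-mapper` and `-reducer` arguments of the streaming job.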
* For a walkthrough on developing custom Java MapReduce programs, see [Develop Java MapReduce programs for Hadoop on HDInsight](apache-hadoop-develop-deploy-java-mapreduce-linux.md).
Consider creating your own map and reduce components in the following situations:
* You need to process completely unstructured data by parsing it and applying custom logic to obtain structured information from it.
* You want to perform complex tasks that are difficult (or impossible) to express in Pig or Hive without resorting to creating a UDF. For example, you might need to use an external geocoding service to convert latitude and longitude coordinates or IP addresses in the source data to geographical location names.
* You want to reuse your existing .NET, Python, or JavaScript code in MapReduce components by using the Hadoop streaming interface.
When an application is installed on a cluster (either on an existing cluster or on a newly created one), an installation script runs on the cluster.
The installation script must have the following characteristics:
* The script is idempotent. Multiple calls to the script produce the same result.
* The script is properly versioned. Use a different location for the script when you're upgrading or testing changes. This ensures that customers who are installing the application aren't affected by your updates or testing.
* The script has adequate logging at each point. Usually, script logs are the only way to debug application installation issues.
* Calls to external services or resources have adequate retries so that the installation isn't affected by transient network issues (a minimal retry sketch appears after this list).
* If your script starts services on the nodes, services are monitored and configured to start automatically if a node reboot occurs.
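To illustrate the retry guidance above: HDInsight installation scripts are typically Bash script actions, but the pattern is language-agnostic, so here is a minimal sketch in Python with a placeholder URL and destination path.

```python
# Minimal sketch: bounded retries with exponential backoff for a call to an
# external resource. The URL and destination path are placeholders; a real
# installation script would also log every attempt.
import time
import urllib.request

def download_with_retries(url: str, dest: str, attempts: int = 5) -> None:
    for attempt in range(1, attempts + 1):
        try:
            urllib.request.urlretrieve(url, dest)   # overwriting keeps the step idempotent
            print(f"downloaded {url} on attempt {attempt}")
            return
        except OSError as err:                      # URLError and socket errors derive from OSError
            print(f"attempt {attempt} failed: {err}")
            if attempt == attempts:
                raise
            time.sleep(2 ** attempt)                # back off 2, 4, 8, 16 seconds

download_with_retries("https://example.com/app-installer.tar.gz", "/tmp/app-installer.tar.gz")
```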
## Package the application
To publish an HDInsight application:
2. In the left menu, select **Solution templates**.
3. Enter a title, and then select **Create a new solution template**.
4. If you haven't already registered your organization, select **Create Dev Center account and join the Azure program**. For more information, see [Create a Microsoft Developer account](../marketplace/overview.md).
5. Select **Define some Topologies to get Started**. A solution template is a "parent" to all its topologies. You can define multiple topologies in one offer or solution template. When an offer is pushed to staging, it's pushed with all its topologies.
6. Enter a topology name, and then select **+**.
7. Enter a new version, and then select **+**.
8. Upload the .zip file you created when you packaged the application.
**articles/hdinsight/hdinsight-upgrade-cluster.md**
description: Learn guidelines to migrate your Azure HDInsight cluster to a newer version
ms.service: azure-hdinsight
ms.topic: how-to
ms.custom: hdinsightactive
ms.date: 02/03/2025
---
# Migrate HDInsight cluster to a newer version
To take advantage of the latest HDInsight features, we recommend that HDInsight clusters be regularly migrated to the latest version. HDInsight doesn't support in-place upgrades where an existing cluster is upgraded to a newer component version. You must create a new cluster with the desired component and platform version and then migrate your applications to use the new cluster. Follow these guidelines to migrate your HDInsight cluster versions.
> [!NOTE]
> If you're creating a Hive cluster with a primary storage container, copy it from an existing HDInsight cluster. Don't copy the complete content. Copy only the data folders that are configured.
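As a minimal sketch of that kind of selective copy (assuming the `azure-storage-blob` Python package; the connection strings, container names, and folder prefix are placeholders, and AzCopy works just as well), a server-side copy of a single configured data folder might look like this:

```python
# Minimal sketch: copy only one configured data folder from the old cluster's
# container to the new cluster's container. All names below are placeholders.
from azure.storage.blob import ContainerClient

source = ContainerClient.from_connection_string(
    "<old-storage-connection-string>", "<old-cluster-container>")
target = ContainerClient.from_connection_string(
    "<new-storage-connection-string>", "<new-cluster-container>")

prefix = "hive/warehouse/salesdb.db/"   # hypothetical configured data folder

for blob in source.list_blobs(name_starts_with=prefix):
    # start_copy_from_url performs an asynchronous, server-side copy.
    # If the source container is private, append a SAS token to src_url.
    src_url = f"{source.url}/{blob.name}"
    target.get_blob_client(blob.name).start_copy_from_url(src_url)
```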
## Migration tasks
The workflow to upgrade an HDInsight cluster is as follows:
3. Copy existing jobs, data sources, and sinks to the new environment.
4. Perform validation testing to make sure that your jobs work as expected on the new cluster.
Once you have verified that everything works as expected, schedule downtime for the migration. During this downtime, do the following actions:
1. Back up any transient data stored locally on the cluster nodes (for example, data stored directly on a head node).
1. [Delete the existing cluster](./hdinsight-delete-cluster.md).
1. Create a cluster in the same VNET subnet with the latest (or a supported) HDInsight version, using the same default data store that the previous cluster used. This allows the new cluster to continue working against your existing production data.
1. Import any transient data you backed up.
As mentioned above, Microsoft recommends that HDInsight clusters be regularly migrated to the latest version. Keep the following considerations in mind when you migrate:
* **Third-party software**. Customers can install third-party software on their HDInsight clusters; however, we recommend recreating the cluster if that software breaks existing functionality.
* **Multiple workloads on the same cluster**. In HDInsight 4.0, the Hive Warehouse Connector needs separate clusters for Spark and Interactive Query workloads. [Follow these steps to set up both clusters in Azure HDInsight](interactive-query/apache-hive-warehouse-connector.md). Similarly, integrating [Spark with HBase](hdinsight-using-spark-query-hbase.md) requires two different clusters.
* **Custom Ambari DB password changed**. The Ambari DB password is set during cluster creation, and there's no current mechanism to update it. If a customer deploys the cluster with a [custom Ambari DB](hdinsight-custom-ambari-db.md), they can change the DB password on the SQL DB; however, there's no way to update this password for a running HDInsight cluster.
* **Modifying HDInsight Load Balancers**. The HDInsight load balancers that are automatically deployed for Ambari and SSH access **should not** be modified or deleted. If you modify the HDInsight load balancers and cluster functionality breaks, you're advised to redeploy the cluster.
* **Reusing Ranger 4.X Databases in 5.X**. HDInsight 5.1 has [Apache Ranger version 2.3.0](https://cwiki.apache.org/confluence/display/RANGER/Apache+Ranger+2.3.0+-+Release+Notes), which is a major version upgrade from 1.2.0 in HDInsight 4.X clusters. Reusing an HDInsight 4.X Ranger database in HDInsight 5.1 would prevent the Ranger service from starting because of differences in the DB schema. You need to create an empty Ranger database to successfully deploy HDInsight 5.1 ESP clusters.
* You can specify how often you want this cluster to automatically check for updates. Default: `-s "*/1 * * * *" -h 0` (in this example, the Spark cron job runs every minute, while the HBase cron job doesn't run).
* Since the HBase cron job isn't set up by default, you need to rerun this script when you scale your HBase cluster. If your HBase cluster scales often, you may choose to set up the HBase cron job automatically. For example, `-s '*/1 * * * *' -h '*/30 * * * *' -d "securehadooprc"` configures the script to perform HBase checks every 30 minutes. This runs the HBase cron schedule periodically to automate downloading new HBase information from the common storage account to the local node.
> [!NOTE]
>These scripts work only on HDI 5.0 and HDI 5.1 clusters.