---
title: Upload data for Apache Hadoop jobs in HDInsight
description: Learn how to upload and access data for Apache Hadoop jobs in HDInsight using the Azure classic CLI, Azure Storage Explorer, Azure PowerShell, the Hadoop command line, or Sqoop.
author: hrasheed-msft
ms.author: hrasheed
ms.reviewer: jasonh
ms.service: hdinsight
ms.custom: hdiseo17may2017
ms.topic: conceptual
ms.date: 06/03/2019
---

# Upload data for Apache Hadoop jobs in HDInsight

Azure HDInsight provides a full-featured Hadoop distributed file system (HDFS) over Azure Storage and Azure Data Lake Storage (Gen1 and Gen2). Azure Storage and Data Lake Storage Gen1 and Gen2 are designed as HDFS extensions to provide a seamless experience to customers. They enable the full set of components in the Hadoop ecosystem to operate directly on the data they manage. Azure Storage, Data Lake Storage Gen1, and Data Lake Storage Gen2 are distinct file systems, each optimized for storing data and for computations on that data. For information about the benefits of using Azure Storage, see [Use Azure Storage with HDInsight](hdinsight-hadoop-use-blob-storage.md), [Use Data Lake Storage Gen1 with HDInsight](hdinsight-hadoop-use-data-lake-store.md), and [Use Data Lake Storage Gen2 with HDInsight](hdinsight-hadoop-use-data-lake-storage-gen2.md).

## Prerequisites

Note the following requirements before you begin:

* An Azure HDInsight cluster. For instructions, see [Get started with Azure HDInsight](hadoop/apache-hadoop-linux-tutorial-get-started.md) or [Create HDInsight clusters](hdinsight-hadoop-provision-linux-clusters.md).
* Knowledge of the following articles:

  - [Use Azure Storage with HDInsight](hdinsight-hadoop-use-blob-storage.md)
  - [Use Data Lake Storage Gen1 with HDInsight](hdinsight-hadoop-use-data-lake-store.md)
  - [Use Data Lake Storage Gen2 with HDInsight](hdinsight-hadoop-use-data-lake-storage-gen2.md)

For example, `hadoop fs -copyFromLocal data.txt /example/data/data.txt`

Because the default file system for HDInsight is in Azure Storage, `/example/data/data.txt` is actually in Azure Storage. You can also refer to the file as:

    wasbs:///example/data/data.txt

or

    wasbs://<ContainerName>@<StorageAccountName>.blob.core.windows.net/example/data/data.txt
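
As a quick check, the following commands, run from an SSH session connected to the cluster, read the same file through both URI forms; the container and storage account names are placeholders for your own values:

```bash
# Short form: resolves against the cluster's default storage container
hadoop fs -cat wasbs:///example/data/data.txt

# Fully qualified form: names the container and storage account explicitly
hadoop fs -cat "wasbs://<ContainerName>@<StorageAccountName>.blob.core.windows.net/example/data/data.txt"
```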

For a list of other Hadoop commands that work with files, see [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html).

The Azure Data Factory service is a fully managed service for composing data storage, movement, and processing services into automated data pipelines.

### <a id="sqoop"></a>Apache Sqoop
Sqoop is a tool designed to transfer data between Hadoop and relational databases. You can use it to import data from a relational database management system (RDBMS), such as SQL Server, MySQL, or Oracle, into the Hadoop distributed file system (HDFS), transform the data in Hadoop with MapReduce or Hive, and then export the data back into an RDBMS.

For more information, see [Use Sqoop with HDInsight](hadoop/hdinsight-use-sqoop.md).
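
As an illustration only (the server, database, table, and user names below are hypothetical placeholders, not values from this article), a Sqoop import from SQL Server into the cluster's default storage might look like the following:

```bash
# Hypothetical example: import one SQL Server table into HDFS with a single mapper.
# -P prompts for the database password instead of placing it on the command line.
sqoop import \
    --connect "jdbc:sqlserver://myserver.database.windows.net:1433;database=mydb" \
    --username sqluser -P \
    --table Customers \
    --target-dir /example/data/customers \
    -m 1
```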

### Development SDKs
Azure Storage can also be accessed using an Azure SDK from the following programming languages:

    hadoop fs -D fs.azure.write.request.size=4194304 -copyFromLocal test_large_file
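
A complete form of that command might look like the following; the local file name and destination path are placeholders, not values from this article:

```bash
# Upload a local file, using a 4 MB write block size for this command only
# (test_large_file.bin and /example/data are placeholder names)
hadoop fs -D fs.azure.write.request.size=4194304 -copyFromLocal test_large_file.bin /example/data
```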

You can also increase the value of `fs.azure.write.request.size` globally by using Apache Ambari. To change the value in the Ambari Web UI, use the following steps:

1. In your browser, go to the Ambari Web UI for your cluster at `https://CLUSTERNAME.azurehdinsight.net`, where `CLUSTERNAME` is the name of your cluster.

    When prompted, enter the admin name and password for the cluster.

2. From the left side of the screen, select **HDFS**, and then select the **Configs** tab.
3. In the **Filter...** field, enter `fs.azure.write.request.size`. The field and its current value appear in the middle of the page.
4. Change the value from 262144 (256 KB) to the new value, for example, 4194304 (4 MB).

    
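
After you save the change and restart the affected services, you can confirm the value the Hadoop client resolves from an SSH session on a cluster node. This assumes the `hdfs` command is on the path, as it is on HDInsight cluster nodes:

```bash
# Print the effective value of the setting as the Hadoop client resolves it
hdfs getconf -confKey fs.azure.write.request.size
```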

For more information on using Ambari, see [Manage HDInsight clusters using the Apache Ambari Web UI](hdinsight-hadoop-manage-ambari.md).

## Next steps
Now that you understand how to get data into HDInsight, read the following articles to learn how to perform analysis:

* [Get started with Azure HDInsight](hadoop/apache-hadoop-linux-tutorial-get-started.md)
* [Submit Apache Hadoop jobs programmatically](hadoop/submit-apache-hadoop-jobs-programmatically.md)
* [Use Apache Hive with HDInsight](hadoop/hdinsight-use-hive.md)
* [Use Apache Pig with HDInsight](hadoop/hdinsight-use-pig.md)