In this tutorial, you learn how to create an [Apache Hadoop](./hadoop/apache-hadoop-introduction.md) cluster, on demand, in Azure HDInsight using Azure Data Factory. You then use data pipelines in Azure Data Factory to run Hive jobs and delete the cluster. By the end of this tutorial, you learn how to operationalize a big data job run where cluster creation, job run, and cluster deletion are performed on a schedule.
This tutorial covers the following tasks:
## Create preliminary Azure objects
In this section, you create various objects that will be used for the HDInsight cluster you create on demand. The created storage account will contain the sample [HiveQL](https://cwiki.apache.org/confluence/display/Hive/LanguageManual) script, `partitionweblogs.hql`, that you use to simulate a sample Apache Hive job that runs on the cluster.
This section uses an Azure PowerShell script to create the storage account and copy the required files into it. The Azure PowerShell sample script in this section performs the following tasks:
4. Creates a Blob container in the storage account
5. Copies the sample HiveQL script (**partitionweblogs.hql**) to the Blob container. The script is available at [https://hditutorialdata.blob.core.windows.net/adfhiveactivity/script/partitionweblogs.hql](https://hditutorialdata.blob.core.windows.net/adfhiveactivity/script/partitionweblogs.hql). The sample script is already available in another public Blob container. The PowerShell script below makes a copy of these files into the Azure Storage account it creates.
### Create storage account and copy files
> [!IMPORTANT]
> Specify names for the Azure resource group and the Azure storage account that will be created by the script.
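The full PowerShell script is longer than the sketch below, but the tasks listed above can be outlined with the Az PowerShell module as follows. This is a condensed, illustrative sketch, not the tutorial's actual script; the resource group and storage account names are placeholders you must replace, and it omits the prompts and error handling a production script needs:

```powershell
# Illustrative sketch only -- replace the placeholder names below.
Connect-AzAccount

$resourceGroup  = "myadfrg"        # placeholder resource group name
$storageAccount = "myadfstorage"   # placeholder storage account name
$location       = "East US"

# 1-3. Create the resource group and the storage account
New-AzResourceGroup -Name $resourceGroup -Location $location
New-AzStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccount `
    -Location $location -SkuName Standard_LRS

# 4. Create a Blob container in the new storage account
$ctx = (Get-AzStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccount).Context
New-AzStorageContainer -Name "adfgetstarted" -Context $ctx

# 5. Copy the sample Hive script from the public tutorial container
$srcCtx = New-AzStorageContext -StorageAccountName "hditutorialdata" -Anonymous
Start-AzStorageBlobCopy -SrcContainer "adfhiveactivity" -SrcBlob "script/partitionweblogs.hql" `
    -Context $srcCtx -DestContainer "adfgetstarted" `
    -DestBlob "hivescripts/partitionweblogs.hql" -DestContext $ctx

Write-Host "`nScript completed" -ForegroundColor Green
```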
### Verify storage account
1. Sign in to the [Azure portal](https://portal.azure.com).
1. From the left, navigate to **All services** > **General** > **Resource groups**.
In Azure Data Factory, a data factory can have one or more data pipelines. A data pipeline has one or more activities. There are two types of activities:
* [Data Movement Activities](../data-factory/copy-activity-overview.md). You use data movement activities to move data from a source data store to a destination data store.
* [Data Transformation Activities](../data-factory/transform-data.md). You use data transformation activities to transform or process data. HDInsight Hive Activity is one of the transformation activities supported by Data Factory. You use the Hive transformation activity in this tutorial.
In this article, you configure the Hive activity to create an on-demand HDInsight Hadoop cluster. When the activity runs to process data, here is what happens:
1. An HDInsight Hadoop cluster is automatically created for you just-in-time to process the slice.
2. The input data is processed by running a HiveQL script on the cluster. In this tutorial, the HiveQL script associated with the Hive activity performs the following actions:
    * Uses the existing table (*hivesampletable*) to create another table **HiveSampleOut**.
    * Populates the **HiveSampleOut** table with only specific columns from the original *hivesampletable*.
3. The HDInsight Hadoop cluster is deleted after the processing is complete and the cluster has been idle for the configured amount of time (the timeToLive setting). If the next data slice is available for processing within this timeToLive idle time, the same cluster is used to process the slice.
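Expressed as Data Factory JSON, a pipeline implementing this flow looks roughly like the sketch below. This is illustrative only, not the exact JSON the authoring UI generates for you later in this tutorial; the linked service names match the ones you create in the following sections, and the `Output` value is a placeholder:

```json
{
  "name": "MyHivePipeline",
  "properties": {
    "activities": [
      {
        "name": "MyHiveActivity",
        "type": "HDInsightHive",
        "linkedServiceName": {
          "referenceName": "HDInsightLinkedService",
          "type": "LinkedServiceReference"
        },
        "typeProperties": {
          "scriptPath": "adfgetstarted/hivescripts/partitionweblogs.hql",
          "scriptLinkedService": {
            "referenceName": "HDIStorageLinkedService",
            "type": "LinkedServiceReference"
          },
          "defines": {
            "Output": "wasbs://adfgetstarted@<StorageAccount>.blob.core.windows.net/outputfolder/"
          }
        }
      }
    ]
  }
}
```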
## Create a data factory
|Property |Value |
|---------|---------|
|Name | Enter a name for the data factory. This name must be globally unique.|
|Version | Leave at **V2**. |
|Subscription | Select your Azure subscription. |
|Resource group | Select the resource group you created using the PowerShell script. |
|Location | The location is automatically set to the location you specified while creating the resource group earlier. For this tutorial, the location is set to **East US**. |
|Enable GIT|Uncheck this box.|

4. Select **Create**. Creating a data factory might take between 2 and 4 minutes.
In this section, you author two linked services within your data factory.
* An **Azure Storage linked service** that links an Azure storage account to the data factory. This storage is used by the on-demand HDInsight cluster. It also contains the Hive script that is run on the cluster.
* An **on-demand HDInsight linked service**. Azure Data Factory automatically creates an HDInsight cluster and runs the Hive script. It then deletes the HDInsight cluster after the cluster is idle for a preconfigured time.
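The UI steps in the next two sections generate JSON linked service definitions behind the scenes. For orientation, they look roughly like the sketch below. This is illustrative, not the exact output of the UI; all values in angle brackets are placeholders, and properties such as `clusterSize`, `version`, and the service principal details depend on your own choices:

```json
{
  "name": "HDIStorageLinkedService",
  "properties": {
    "type": "AzureBlobStorage",
    "typeProperties": {
      "connectionString": "DefaultEndpointsProtocol=https;AccountName=<StorageAccount>;AccountKey=<key>"
    }
  }
}
```

```json
{
  "name": "HDInsightLinkedService",
  "properties": {
    "type": "HDInsightOnDemand",
    "typeProperties": {
      "clusterType": "hadoop",
      "clusterSize": 4,
      "timeToLive": "00:15:00",
      "version": "4.0",
      "linkedServiceName": {
        "referenceName": "HDIStorageLinkedService",
        "type": "LinkedServiceReference"
      },
      "hostSubscriptionId": "<subscription-id>",
      "tenant": "<tenant-id>",
      "servicePrincipalId": "<service-principal-id>",
      "servicePrincipalKey": { "type": "SecureString", "value": "<service-principal-key>" },
      "clusterResourceGroup": "<resource-group>"
    }
  }
}
```

The `timeToLive` value controls how long the on-demand cluster stays idle before Data Factory deletes it.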
### Create an Azure Storage linked service

1. In the **Activities** toolbox, expand **HDInsight**, and drag the **Hive** activity to the pipeline designer surface. In the **General** tab, provide a name for the activity.

1. Make sure you have the Hive activity selected, select the **HDI Cluster** tab, and from the **HDInsight Linked Service** drop-down list, select the linked service you created earlier, **HDInsightLinkedService**, for HDInsight.

1. Select the **Script** tab and complete the following steps:
1. For **Script Linked Service**, select **HDIStorageLinkedService** from the drop-down list. This value is the storage linked service you created earlier.
1. For **File Path**, select **Browse Storage** and navigate to the location where the sample Hive script is available. If you ran the PowerShell script earlier, this location should be `adfgetstarted/hivescripts/partitionweblogs.hql`.

1. Under **Advanced** > **Parameters**, select **Auto-fill from script**. This option looks for any parameters in the Hive script that require values at runtime.
1. In the **value** text box, add the existing folder in the format `wasbs://adfgetstarted@<StorageAccount>.blob.core.windows.net/outputfolder/`. The path is case-sensitive. This is the path where the output of the script will be stored. The `wasbs` schema is necessary because storage accounts now have secure transfer required enabled by default.

1. Select **Validate** to validate the pipeline. Select the **>>** (right arrow) button to close the validation window.

2. Select **OK** in the pop-up side bar.
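For reference, a Hive script wired up this way typically has the shape sketched below. This is a hypothetical simplification, not the actual contents of `partitionweblogs.hql`; the column names are taken from the default `hivesampletable` that ships on HDInsight clusters, and `${hiveconf:Output}` is the parameter that **Auto-fill from script** detected earlier:

```hql
-- Hypothetical sketch only; the real partitionweblogs.hql differs.
DROP TABLE IF EXISTS HiveSampleOut;

-- Write results to the folder passed in through the "Output" parameter
CREATE EXTERNAL TABLE HiveSampleOut (
    clientid    STRING,
    devicemake  STRING,
    devicemodel STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '${hiveconf:Output}';

-- Keep only specific columns from the default sample table
INSERT OVERWRITE TABLE HiveSampleOut
SELECT clientid, devicemake, devicemodel
FROM hivesampletable;
```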
## Monitor a pipeline
1. To verify the output, in the Azure portal navigate to the storage account that you used for this tutorial. You should see the following folders or containers:
    * You see an **adfgetstarted/outputfolder** that contains the output of the Hive script that was run as part of the pipeline.
    * You see an **adfhdidatafactory-\<linked-service-name>-\<timestamp>** container. This container is the default storage location of the HDInsight cluster that was created as part of the pipeline run.
    * You see an **adfjobs** container that has the Azure Data Factory job logs.

1. Enter the resource group name to confirm deletion, and then select **Delete**.
## Next steps
In this article, you learned how to use Azure Data Factory to create an on-demand HDInsight cluster and run [Apache Hive](https://hive.apache.org/) jobs. Advance to the next article to learn how to create HDInsight clusters with custom configuration.
> [!div class="nextstepaction"]
> [Create Azure HDInsight clusters with custom configuration](hdinsight-hadoop-provision-linux-clusters.md)
`articles/synapse-analytics/sql-data-warehouse/load-data-from-azure-blob-storage-using-polybase.md`
## Create a blank database
A SQL pool is created with a defined set of [compute resources](memory-concurrency-limits.md). The database is created within an [Azure resource group](../../azure-resource-manager/management/overview.md) and in an [Azure SQL logical server](../../sql-database/sql-database-features.md).