articles/hdinsight/hdinsight-for-vscode.md — 18 additions & 18 deletions
@@ -1,19 +1,19 @@
 ---
 title: Azure HDInsight for Visual Studio Code
-description: Learn how to use the Spark & Hive Tools (Azure HDInsight) for Visual Studio Code to create and submit queries and scripts.
+description: Learn how to use the Spark & Hive Tools (Azure HDInsight) for Visual Studio Code. Use the tools to create and submit queries and scripts.
 author: hrasheed-msft
 ms.author: hrasheed
 ms.reviewer: jasonh
 ms.service: hdinsight
 ms.topic: conceptual
-ms.date: 10/11/2019
+ms.date: 04/07/2020
 ---

 # Use Spark & Hive Tools for Visual Studio Code

-Learn how to use Spark & Hive Tools for Visual Studio Code to create and submit Apache Hive batch jobs, interactive Hive queries, and PySpark scripts for Apache Spark. First we'll describe how to install Spark & Hive Tools in Visual Studio Code, and then we'll walk through how to submit jobs to Spark & Hive Tools.
+Learn how to use Apache Spark & Hive Tools for Visual Studio Code. Use the tools to create and submit Apache Hive batch jobs, interactive Hive queries, and PySpark scripts for Apache Spark. First we'll describe how to install Spark & Hive Tools in Visual Studio Code. Then we'll walk through how to submit jobs to Spark & Hive Tools.

-Spark & Hive Tools can be installed on platforms that are supported by Visual Studio Code, which include Windows, Linux, and macOS. Note the following prerequisites for different platforms.
+Spark & Hive Tools can be installed on platforms that are supported by Visual Studio Code. Note the following prerequisites for different platforms.

 ## Prerequisites
@@ -45,7 +45,7 @@ After you meet the prerequisites, you can install Spark & Hive Tools for Visual
 To open a work folder and to create a file in Visual Studio Code, follow these steps:

-1. From the menu bar, navigate to to **File** > **Open Folder...** > **C:\HD\HDexample**, and then select the **Select Folder** button. The folder appears in the **Explorer** view on the left.
+1. From the menu bar, navigate to **File** > **Open Folder...** > **C:\HD\HDexample**, and then select the **Select Folder** button. The folder appears in the **Explorer** view on the left.

 2. In **Explorer** view, select the **HDexample** folder, and then select the **New File** icon next to the work folder:
@@ -65,13 +65,13 @@ For a national cloud user, follow these steps to set the Azure environment first
 ## Connect to an Azure account

-Before you can submit scripts to your clusters from Visual Studio Code, you must either connect to your Azure account or link a cluster (using Apache Ambari username and password credentials or a domain-joined account). Follow these steps to connect to Azure:
+Before you can submit scripts to your clusters from Visual Studio Code, you must either connect to your Azure account or link a cluster. Use Apache Ambari username and password credentials or a domain-joined account. Follow these steps to connect to Azure:

 1. From the menu bar, navigate to **View** > **Command Palette...**, and enter **Azure: Sign In**:

-2. Follow the sign-in instructions to sign in to Azure. After you're connected, your Azure account name is shown on the status bar at the bottom of the Visual Studio Code window.
+2. Follow the sign-in instructions to sign in to Azure. After you're connected, your Azure account name shows on the status bar at the bottom of the Visual Studio Code window.

 ## Link a cluster
@@ -255,7 +255,7 @@ After you submit a Python job, submission logs appear in the **OUTPUT** window i
 ## Apache Livy configuration

-[Apache Livy](https://livy.incubator.apache.org/) configuration is supported. You can configure it in the **.VSCode\settings.json** file in the workspace folder. Currently, Livy configuration only supports Python script. For more details, see [Livy README](https://github.com/cloudera/livy/blob/master/README.rst).
+[Apache Livy](https://livy.incubator.apache.org/) configuration is supported. You can configure it in the **.VSCode\settings.json** file in the workspace folder. Currently, Livy configuration only supports Python script. For more information, see [Livy README](https://github.com/cloudera/livy/blob/master/README.rst).

 <a id="triggerlivyconf"></a>**How to trigger Livy configuration**
@@ -265,7 +265,7 @@ Method 1
 3. Select **Edit in settings.json** for the relevant search result.

 Method 2
-Submit a file, and notice that the .vscode folder is automatically added to the work folder. You can see the Livy configuration by selecting **.vscode\settings.json**.
+Submit a file, and notice that the `.vscode` folder is automatically added to the work folder. You can see the Livy configuration by selecting **.vscode\settings.json**.

 + The project settings:
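The hunk above points at a Livy configuration kept in the workspace's **.vscode\settings.json**. As a hedged sketch of what that project-level configuration might contain — the top-level setting key here is an assumption for illustration (check the `settings.json` that the extension generates for the exact key), while `driverMemory`, `executorMemory`, and `numExecutors` are standard Livy batch parameters:

```json
{
    "hdinsightJobSubmission.livyConf": {
        "driverMemory": "2g",
        "executorMemory": "2g",
        "numExecutors": 4
    }
}
```

All values are illustrative; VS Code treats `settings.json` as JSON with comments (JSONC), so annotations can be added inline when experimenting.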
@@ -280,7 +280,7 @@ Submit a file, and notice that the .vscode folder is automatically added to the
 Request body

 | name | description |type|
-|:-|:-|:-|
+|---|---|---|
 |file| File containing the application to execute | Path (required) |
 | proxyUser | User to impersonate when running the job | String |
 | className | Application Java/Spark main class| String |
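The request-body fields excerpted above correspond to a Livy batch submission payload. A minimal sketch using only the fields shown — the path, user, and class name are hypothetical placeholders, not values from this article:

```json
{
    "file": "/example/jars/spark-app.jar",
    "proxyUser": "sampleuser",
    "className": "com.example.SparkApp"
}
```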
@@ -302,9 +302,9 @@ Submit a file, and notice that the .vscode folder is automatically added to the
 The created Batch object.

 | name | description |type|
-|:-|:-| :-|
-|id| Session id| Int |
-| appId | Application id of this session | String |
+|---|---|---|
+|ID| Session ID| Int |
+| appId | Application ID of this session | String |
 | appInfo | Detailed application info | Map of key=val |
 | log | Log lines | List of strings |
 | state |Batch state | String |
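Putting the Batch object fields from this table together, a returned object might look like the following sketch. Every value is illustrative (not captured from a real cluster), and the `appInfo` key names are assumptions for the sake of the example:

```json
{
    "id": 1,
    "appId": "application_1586246000000_0001",
    "appInfo": {
        "driverLogUrl": null,
        "sparkUiUrl": null
    },
    "log": ["stdout: ", "stderr: "],
    "state": "starting"
}
```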
@@ -338,8 +338,8 @@ You can preview Hive Table in your clusters directly through the **Azure HDInsig
-MESSAGES panel
 1. When the number of rows in the table is greater than 100, you see the following message: "The first 100 rows are displayed for Hive table."
-2. When the number of rows in the table is less than or equal to 100, you see a message like the following: "60 rows are displayed for Hive table."
-3. When there's no content in the table, you see the following message: "0 rows are displayed for Hive table."
+2. When the number of rows in the table is less than or equal to 100, you see the following message: "60 rows are displayed for Hive table."
+3. When there's no content in the table, you see the following message: "`0 rows are displayed for Hive table.`"

 >[!NOTE]
 >
@@ -362,7 +362,7 @@ Spark & Hive for Visual Studio Code also supports the following features:
 ## Reader-only role

-Users who are assigned the reader-only role for the cluster can no longer submit jobs to the HDInsight cluster, nor can they view the Hive database. Contact the cluster administrator to upgrade your role to [**HDInsight Cluster Operator**](https://docs.microsoft.com/azure/hdinsight/hdinsight-migrate-granular-access-cluster-configurations#add-the-hdinsight-cluster-operator-role-assignment-to-a-user) in the [Azure portal](https://ms.portal.azure.com/). If you have valid Ambari credentials, you can manually link the cluster by using the following guidance.
+Users who are assigned the reader-only role for the cluster can't submit jobs to the HDInsight cluster, nor view the Hive database. Contact the cluster administrator to upgrade your role to [**HDInsight Cluster Operator**](https://docs.microsoft.com/azure/hdinsight/hdinsight-migrate-granular-access-cluster-configurations#add-the-hdinsight-cluster-operator-role-assignment-to-a-user) in the [Azure portal](https://ms.portal.azure.com/). If you have valid Ambari credentials, you can manually link the cluster by using the following guidance.

 ### Browse the HDInsight cluster
@@ -391,11 +391,11 @@ When submitting job to an HDInsight cluster, you're prompted to link the cluster
 ### Browse a Data Lake Storage Gen2 account

-When you select the Azure HDInsight explorer to expand a Data Lake Storage Gen2 account, you're prompted to enter the storage access key if your Azure account has no access to Gen2 storage. After the access key is validated, the Data Lake Storage Gen2 account is auto-expanded.
+Select the Azure HDInsight explorer to expand a Data Lake Storage Gen2 account. You're prompted to enter the storage access key if your Azure account has no access to Gen2 storage. After the access key is validated, the Data Lake Storage Gen2 account is auto-expanded.

 ### Submit jobs to an HDInsight cluster with Data Lake Storage Gen2

-When you submit a job to an HDInsight cluster by using Data Lake Storage Gen2, you're prompted to enter the storage access key if your Azure account has no write access to Gen2 storage. After the access key is validated, the job will be successfully submitted.
+Submit a job to an HDInsight cluster using Data Lake Storage Gen2. You're prompted to enter the storage access key if your Azure account has no write access to Gen2 storage. After the access key is validated, the job will be successfully submitted.