Commit 26ede45

freshness41
1 parent 2afe35d commit 26ede45

File tree: 1 file changed (+18, -18 lines)


articles/hdinsight/hdinsight-for-vscode.md

Lines changed: 18 additions & 18 deletions
@@ -1,19 +1,19 @@
 ---
 title: Azure HDInsight for Visual Studio Code
-description: Learn how to use the Spark & Hive Tools (Azure HDInsight) for Visual Studio Code to create and submit queries and scripts.
+description: Learn how to use the Spark & Hive Tools (Azure HDInsight) for Visual Studio Code. Use the tools to create and submit queries and scripts.
 author: hrasheed-msft
 ms.author: hrasheed
 ms.reviewer: jasonh
 ms.service: hdinsight
 ms.topic: conceptual
-ms.date: 10/11/2019
+ms.date: 04/07/2020
 ---
 
 # Use Spark & Hive Tools for Visual Studio Code
 
-Learn how to use Spark & Hive Tools for Visual Studio Code to create and submit Apache Hive batch jobs, interactive Hive queries, and PySpark scripts for Apache Spark. First we'll describe how to install Spark & Hive Tools in Visual Studio Code, and then we'll walk through how to submit jobs to Spark & Hive Tools.
+Learn how to use Apache Spark & Hive Tools for Visual Studio Code. Use the tools to create and submit Apache Hive batch jobs, interactive Hive queries, and PySpark scripts for Apache Spark. First we'll describe how to install Spark & Hive Tools in Visual Studio Code. Then we'll walk through how to submit jobs to Spark & Hive Tools.
 
-Spark & Hive Tools can be installed on platforms that are supported by Visual Studio Code, which include Windows, Linux, and macOS. Note the following prerequisites for different platforms.
+Spark & Hive Tools can be installed on platforms that are supported by Visual Studio Code. Note the following prerequisites for different platforms.
 
 ## Prerequisites
 
@@ -45,7 +45,7 @@ After you meet the prerequisites, you can install Spark & Hive Tools for Visual
 
 To open a work folder and to create a file in Visual Studio Code, follow these steps:
 
-1. From the menu bar, navigate to to **File** > **Open Folder...** > **C:\HD\HDexample**, and then select the **Select Folder** button. The folder appears in the **Explorer** view on the left.
+1. From the menu bar, navigate to **File** > **Open Folder...** > **C:\HD\HDexample**, and then select the **Select Folder** button. The folder appears in the **Explorer** view on the left.
 
 2. In **Explorer** view, select the **HDexample** folder, and then select the **New File** icon next to the work folder:
 
@@ -65,13 +65,13 @@ For a national cloud user, follow these steps to set the Azure environment first
 
 ## Connect to an Azure account
 
-Before you can submit scripts to your clusters from Visual Studio Code, you must either connect to your Azure account or link a cluster (using Apache Ambari username and password credentials or a domain-joined account). Follow these steps to connect to Azure:
+Before you can submit scripts to your clusters from Visual Studio Code, you must either connect to your Azure account or link a cluster. Use Apache Ambari username and password credentials or a domain-joined account. Follow these steps to connect to Azure:
 
 1. From the menu bar, navigate to **View** > **Command Palette...**, and enter **Azure: Sign In**:
 
 ![Spark & Hive Tools for Visual Studio Code login](./media/hdinsight-for-vscode/hdinsight-for-vscode-extension-login.png)
 
-2. Follow the sign-in instructions to sign in to Azure. After you're connected, your Azure account name is shown on the status bar at the bottom of the Visual Studio Code window.
+2. Follow the sign-in instructions to sign in to Azure. After you're connected, your Azure account name shows on the status bar at the bottom of the Visual Studio Code window.
 
 ## Link a cluster
 
@@ -255,7 +255,7 @@ After you submit a Python job, submission logs appear in the **OUTPUT** window i
 
 ## Apache Livy configuration
 
-[Apache Livy](https://livy.incubator.apache.org/) configuration is supported. You can configure it in the **.VSCode\settings.json** file in the workspace folder. Currently, Livy configuration only supports Python script. For more details, see [Livy README](https://github.com/cloudera/livy/blob/master/README.rst ).
+[Apache Livy](https://livy.incubator.apache.org/) configuration is supported. You can configure it in the **.VSCode\settings.json** file in the workspace folder. Currently, Livy configuration only supports Python script. For more information, see [Livy README](https://github.com/cloudera/livy/blob/master/README.rst ).
 
 <a id="triggerlivyconf"></a>**How to trigger Livy configuration**
 
@@ -265,7 +265,7 @@ Method 1
 3. Select **Edit in settings.json** for the relevant search result.
 
 Method 2
-Submit a file, and notice that the .vscode folder is automatically added to the work folder. You can see the Livy configuration by selecting **.vscode\settings.json**.
+Submit a file, and notice that the `.vscode` folder is automatically added to the work folder. You can see the Livy configuration by selecting **.vscode\settings.json**.
 
 + The project settings:
 
@@ -280,7 +280,7 @@ Submit a file, and notice that the .vscode folder is automatically added to the
 Request body
 
 | name | description | type |
-| :- | :- | :- |
+| --- | --- | --- |
 | file | File containing the application to execute | Path (required) |
 | proxyUser | User to impersonate when running the job | String |
 | className | Application Java/Spark main class | String |
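The request-body fields in the table above map directly onto the JSON payload of a Livy `POST /batches` call. A minimal sketch of assembling that payload, assuming a hypothetical application path (the helper name and sample values are illustrative, not from this article):

```python
import json


def build_livy_batch_payload(file, class_name=None, proxy_user=None, args=None):
    """Assemble a Livy POST /batches request body.

    Only `file` is required; the other fields are added when provided,
    matching the request-body table above.
    """
    payload = {"file": file}  # Path to the application to execute (required)
    if class_name:
        payload["className"] = class_name  # Application Java/Spark main class
    if proxy_user:
        payload["proxyUser"] = proxy_user  # User to impersonate when running the job
    if args:
        payload["args"] = args  # Command-line arguments for the application
    return payload


# Hypothetical storage path, for illustration only.
payload = build_livy_batch_payload(
    "wasbs://container@account.blob.core.windows.net/app.py",
    proxy_user="admin",
)
print(json.dumps(payload))
```

The tools send a payload of this shape on your behalf; building it by hand is only needed when calling the Livy REST endpoint directly.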
@@ -302,9 +302,9 @@ Submit a file, and notice that the .vscode folder is automatically added to the
 The created Batch object.
 
 | name | description | type |
-| :- | :- | :- |
-| id | Session id | Int |
-| appId | Application id of this session | String |
+| --- | ---| --- |
+| ID | Session ID | Int |
+| appId | Application ID of this session | String |
 | appInfo | Detailed application info | Map of key=val |
 | log | Log lines | List of strings |
 | state |Batch state | String |
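A small sketch of reading the created Batch object described above; the sample response values here are invented for illustration, not taken from a real cluster:

```python
# Illustrative Batch object following the response table above.
sample_response = {
    "id": 0,                             # Session ID (Int)
    "appId": "application_1_0001",       # Application ID of this session
    "appInfo": {"driverLogUrl": None},   # Detailed application info (map of key=val)
    "log": ["stdout: ", "stderr: "],     # Log lines (list of strings)
    "state": "starting",                 # Batch state
}


def batch_summary(batch):
    """Return a one-line summary of a Livy batch response."""
    return f"batch {batch['id']} ({batch.get('appId')}): {batch['state']}"


print(batch_summary(sample_response))
```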
@@ -338,8 +338,8 @@ You can preview Hive Table in your clusters directly through the **Azure HDInsig
 
 - MESSAGES panel
 1. When the number of rows in the table is greater than 100, you see the following message: "The first 100 rows are displayed for Hive table."
-2. When the number of rows in the table is less than or equal to 100, you see a message like the following: "60 rows are displayed for Hive table."
-3. When there's no content in the table, you see the following message: "0 rows are displayed for Hive table."
+2. When the number of rows in the table is less than or equal to 100, you see the following message: "60 rows are displayed for Hive table."
+3. When there's no content in the table, you see the following message: "`0 rows are displayed for Hive table.`"
 
 >[!NOTE]
 >
@@ -362,7 +362,7 @@ Spark & Hive for Visual Studio Code also supports the following features:
 
 ## Reader-only role
 
-Users who are assigned the reader-only role for the cluster can no longer submit jobs to the HDInsight cluster, nor can they view the Hive database. Contact the cluster administrator to upgrade your role to [**HDInsight Cluster Operator**](https://docs.microsoft.com/azure/hdinsight/hdinsight-migrate-granular-access-cluster-configurations#add-the-hdinsight-cluster-operator-role-assignment-to-a-user) in the [Azure portal](https://ms.portal.azure.com/). If you have valid Ambari credentials, you can manually link the cluster by using the following guidance.
+Users who are assigned the reader-only role for the cluster can't submit jobs to the HDInsight cluster, nor view the Hive database. Contact the cluster administrator to upgrade your role to [**HDInsight Cluster Operator**](https://docs.microsoft.com/azure/hdinsight/hdinsight-migrate-granular-access-cluster-configurations#add-the-hdinsight-cluster-operator-role-assignment-to-a-user) in the [Azure portal](https://ms.portal.azure.com/). If you have valid Ambari credentials, you can manually link the cluster by using the following guidance.
 
 ### Browse the HDInsight cluster
 
@@ -391,11 +391,11 @@ When submitting job to an HDInsight cluster, you're prompted to link the cluster
 
 ### Browse a Data Lake Storage Gen2 account
 
-When you select the Azure HDInsight explorer to expand a Data Lake Storage Gen2 account, you're prompted to enter the storage access key if your Azure account has no access to Gen2 storage. After the access key is validated, the Data Lake Storage Gen2 account is auto-expanded.
+Select the Azure HDInsight explorer to expand a Data Lake Storage Gen2 account. You're prompted to enter the storage access key if your Azure account has no access to Gen2 storage. After the access key is validated, the Data Lake Storage Gen2 account is auto-expanded.
 
 ### Submit jobs to an HDInsight cluster with Data Lake Storage Gen2
 
-When you submit a job to an HDInsight cluster by using Data Lake Storage Gen2, you're prompted to enter the storage access key if your Azure account has no write access to Gen2 storage. After the access key is validated, the job will be successfully submitted.
+Submit a job to an HDInsight cluster using Data Lake Storage Gen2. You're prompted to enter the storage access key if your Azure account has no write access to Gen2 storage. After the access key is validated, the job will be successfully submitted.
 
 ![Spark & Hive Tools for Visual Studio Code AccessKey](./media/hdinsight-for-vscode/hdi-azure-hdinsight-azure-accesskey.png)
 