
1. Enter the **Cluster Name**, **User Name**, and **Password**, and then select **OK** to link the cluster. Optionally, enter the storage account and storage key, and then select the storage container so that the storage explorer works in the left tree view.

> [!NOTE]
> If the cluster is both signed in to the Azure subscription and linked, the linked storage key, username, and password are used.
1. If the input information is correct, the linked cluster appears under the **HDInsight** node after you select **OK**. You can now submit an application to this linked cluster.
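The storage account, key, and container entered when linking are what let the tools resolve `wasbs://` paths on the cluster's default storage. As a minimal sketch, a WASB URI is composed as follows (the container, account, and path names below are placeholders, not values from this article):

```python
def wasbs_uri(container: str, account: str, path: str) -> str:
    """Compose an Azure Blob storage (WASB) URI of the form Spark on HDInsight uses."""
    return f"wasbs://{container}@{account}.blob.core.windows.net/{path.lstrip('/')}"

# Placeholder names -- not a real storage account.
print(wasbs_uri("mycontainer", "mystorageaccount", "/HdiSamples/sample.csv"))
```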

## Set up a Spark Scala project for an HDInsight Spark cluster
1. The Scala project creation wizard automatically detects whether you installed the Scala plug-in. Select **OK** to continue downloading the Scala plug-in, and then follow the instructions to restart Eclipse.
1. In the **New HDInsight Scala Project** dialog box, provide the following values, and then select **Next**:
* Enter a name for the project.
1. In the **Select a wizard** dialog box, expand **Scala Wizards**, select **Scala Object**, and then select **Next**.
1. In the **Create New File** dialog box, enter a name for the object, and then select **Finish**.
1. Paste the following code in the text editor:
    ```scala
    import org.apache.spark.SparkConf
    import org.apache.spark.SparkContext

    object MyClusterApp {
      def main(arg: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("MyClusterApp")
        val sc = new SparkContext(conf)

        // Read the HVAC sample data from the cluster's default storage.
        val rdd = sc.textFile("wasbs:///HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv")

        // Find the rows that have only one digit in the seventh column of the CSV.
        val rdd1 = rdd.filter(s => s.split(",")(6).length() == 1)

        rdd1.saveAsTextFile("wasbs:///HVACOut")
      }
    }
    ```
* In the **Main class name** drop-down list, the submission wizard displays all object names from your project. Select or enter the one that you want to run. If you selected an artifact from a hard drive, you must enter the main class name manually.
* Because the application code in this example does not require any command-line arguments or reference JARs or files, you can leave the remaining text boxes empty.
1. The **Spark Submission** tab should start displaying the progress. You can stop the application by selecting the red button in the **Spark Submission** window. You can also view the logs for this specific application run by selecting the globe icon (denoted by the blue box in the image).
1. Select the **Jobs** node. If the Java version is lower than **1.8**, HDInsight Tools automatically reminds you to install the **E(fx)clipse** plug-in. Select **OK** to continue, and then follow the wizard to install it from the Eclipse Marketplace and restart Eclipse.
1. Open the Job View from the **Jobs** node. In the right pane, the **Spark Job View** tab displays all the applications that were run on the cluster. Select the name of the application for which you want to see more details.
* Open the Spark history UI and the Apache Hadoop YARN UI (at the application level) by selecting the hyperlinks at the top of the window.
1. The template adds sample code (**LogQuery**) under the **src** folder that you can run locally on your computer.
1. Right-click the **LogQuery** application, point to **Run As**, and then select **1 Scala Application**. Output like this appears on the **Console** tab:
When you link a cluster, we suggest that you provide the storage credentials.

There are two modes for submitting jobs. If a storage credential is provided, batch mode is used to submit the job. Otherwise, interactive mode is used. If the cluster is busy, you might get the error below.
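As a sketch of the difference between the two modes: HDInsight job submission is commonly done through Apache Livy's REST API, where batch mode corresponds to the `/batches` endpoint (a packaged JAR referenced by a storage path, which is why storage credentials are needed) and interactive mode to a `/sessions` session. The cluster URL, JAR path, and class name below are placeholder assumptions, not values from this article:

```python
import json

# Placeholder Livy endpoint -- not a real cluster.
livy_url = "https://mycluster.azurehdinsight.net/livy"

# Batch mode: the application JAR must be reachable at a storage path,
# so storage credentials are required. (File and class names are assumed.)
batch_payload = {
    "file": "wasbs:///example/jars/myapp.jar",
    "className": "com.example.MyClusterApp",
}

# Interactive mode: open a Spark session first, then send code as statements.
session_payload = {"kind": "spark"}

print(json.dumps(batch_payload, sort_keys=True))
```

A busy cluster can reject an interactive session because each open session holds cluster resources for its lifetime, whereas a batch runs and releases them.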