author: hrasheed-msft
ms.author: hrasheed
ms.reviewer: jasonh
ms.service: hdinsight
ms.topic: conceptual
ms.custom: hdinsightactive,hdiseo17may2017
ms.date: 02/28/2020
---

# Use Apache Spark REST API to submit remote jobs to an HDInsight Spark cluster
Learn how to use [Apache Livy](https://livy.incubator.apache.org/), the Apache Spark REST API, to submit remote jobs to an Azure HDInsight Spark cluster. For detailed documentation, see [Apache Livy](https://livy.incubator.apache.org/docs/latest/rest-api.html).

You can use Livy to run interactive Spark shells or submit batch jobs to be run on Spark. This article discusses using Livy to submit batch jobs. The snippets in this article use cURL to make REST API calls to the Livy Spark endpoint.

## Prerequisites

An Apache Spark cluster on HDInsight. For instructions, see [Create Apache Spark clusters in Azure HDInsight](apache-spark-jupyter-spark-sql.md).

## Submit an Apache Livy Spark batch job

Before you submit a batch job, you must upload the application jar to the cluster storage associated with the cluster. You can use [AzCopy](../../storage/common/storage-use-azcopy.md), a command-line utility, to do so. There are various other clients you can use to upload data. You can find more about them at [Upload data for Apache Hadoop jobs in HDInsight](../hdinsight-upload-data.md).
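
For example, you could use AzCopy to copy the jar to the storage account. The following is a minimal sketch; the local path, storage account, container, and SAS token are placeholders for your own values:

```cmd
rem Sketch only: the local path, storage account, container, and SAS token are placeholders.
azcopy copy "C:\apps\SparkSimpleApp.jar" "https://mystorageaccount.blob.core.windows.net/mycontainer/example/jars/SparkSimpleApp.jar?<SAS token>"
```

Once the jar is in cluster storage, submit the batch job by making a POST call to the Livy `/batches` endpoint: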
```cmd
curl -k --user "admin:password" -v -H "Content-Type: application/json" -X POST -d '{ "file":"<path to application jar>", "className":"<classname in jar>" }' 'https://<spark_cluster_name>.azurehdinsight.net/livy/batches' -H "X-Requested-By: admin"
```
### Examples
* If the jar file is on the cluster storage (WASBS)
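
For example, the request body can reference the jar through its `wasbs://` URI. The following is a sketch; the storage container, account, jar path, and class name are placeholders:

```cmd
rem Sketch only: the wasbs:// URI and class name are placeholders for your own application.
curl -k --user "admin:password" -v -H "Content-Type: application/json" -X POST -d '{ "file":"wasbs://mycontainer@mystorageaccount.blob.core.windows.net/example/jars/SparkSimpleApp.jar", "className":"com.example.SparkSimpleApp" }' 'https://<spark_cluster_name>.azurehdinsight.net/livy/batches' -H "X-Requested-By: admin"
```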
Livy provides high availability for Spark jobs running on the cluster. Here are a couple of examples.

* If the Livy service goes down after you've submitted a job remotely to a Spark cluster, the job continues to run in the background. When Livy is back up, it restores the status of the job and reports it back.
* Jupyter notebooks for HDInsight are powered by Livy in the backend. If a notebook is running a Spark job and the Livy service gets restarted, the notebook continues to run the code cells.
## Show me an example

In this section, we look at examples that use Livy Spark to submit a batch job, monitor the progress of the job, and then delete it. The application we use in this example is the one developed in the article [Create a standalone Scala application to run on HDInsight Spark cluster](apache-spark-create-standalone-application.md). The steps here assume:

* You've already copied over the application jar to the storage account associated with the cluster.
* You have cURL installed on the computer where you're trying these steps.

Perform the following steps:

1. For ease of use, set environment variables. This example is based on a Windows environment; revise the variables as needed for your environment. Replace `CLUSTERNAME` and `PASSWORD` with the appropriate values.

```cmd
set clustername=CLUSTERNAME
set password=PASSWORD
```
1. Verify that Livy Spark is running on the cluster. We can do so by getting a list of running batches. If you're running a job using Livy for the first time, the output should return zero.
```cmd
curl -k --user "admin:%password%" -v -X GET "https://%clustername%.azurehdinsight.net/livy/batches"
```
You should get an output similar to the following snippet:
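```
{"from":0,"total":0,"sessions":[]}* Connection #0 to host mysparkcluster.azurehdinsight.net left intact
```
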
Notice how the last line in the output says **total:0**, which suggests no running batches.
1. Let us now submit a batch job. The following snippet uses an input file (input.txt) to pass the jar name and the class name as parameters. If you're running these steps from a Windows computer, using an input file is the recommended approach.
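
As a sketch, assume `input.txt` is saved at `C:\Temp\input.txt` and contains the same JSON payload shown earlier:

```cmd
rem Sketch only: C:\Temp\input.txt is a placeholder location; the file holds the JSON payload, for example:
rem { "file":"<path to application jar>", "className":"<classname in jar>" }
curl -k --user "admin:%password%" -v -H "Content-Type: application/json" -X POST --data @C:\Temp\input.txt "https://%clustername%.azurehdinsight.net/livy/batches" -H "X-Requested-By: admin"
```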
You should see an output similar to the following snippet:
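
```
{"id":0,"state":"starting","appId":null,"log":[]}* Connection #0 to host mysparkcluster.azurehdinsight.net left intact
```
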
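1. Delete the batch job when you no longer need it. The following is a sketch; replace `0` with the batch ID returned when you submitted the job:

```cmd
rem Sketch only: replace 0 with the batch ID returned when the job was submitted.
curl -k --user "admin:%password%" -v -X DELETE "https://%clustername%.azurehdinsight.net/livy/batches/0" -H "X-Requested-By: admin"
```

You should see an output similar to the following snippet:
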
```
{"msg":"deleted"}* Connection #0 to host mysparkcluster.azurehdinsight.net left intact
```

The last line of the output shows that the batch was successfully deleted. Deleting a job while it's running also kills the job. If you delete a job that has completed, successfully or otherwise, it deletes the job information completely.

## Updates to Livy configuration starting with HDInsight 3.5 version
HDInsight 3.5 clusters and above, by default, disable use of local file paths to access sample data files or jars. We encourage you to use the `wasbs://` path instead to access jars or sample data files from the cluster.
## Submitting Livy jobs for a cluster within an Azure virtual network

If you connect to an HDInsight Spark cluster from within an Azure Virtual Network, you can directly connect to Livy on the cluster.

## Next steps

* [Apache Livy REST API documentation](https://livy.incubator.apache.org/docs/latest/rest-api.html)
* [Manage resources for the Apache Spark cluster in Azure HDInsight](apache-spark-resource-manager.md)
* [Track and debug jobs running on an Apache Spark cluster in HDInsight](apache-spark-job-debugging.md)