articles/hdinsight/spark/apache-spark-job-debugging.md
author: hrasheed-msft
ms.author: hrasheed
ms.reviewer: jasonh
ms.service: hdinsight
ms.topic: conceptual
ms.custom: hdinsightactive
ms.date: 11/29/2019
---

# Debug Apache Spark jobs running on Azure HDInsight
In this article, you learn how to track and debug [Apache Spark](https://spark.apache.org/) jobs running on HDInsight clusters using the [Apache Hadoop YARN](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html) UI, the Spark UI, and the Spark History Server. You start a Spark job using a notebook available with the Spark cluster, **Machine learning: Predictive analysis on food inspection data using MLLib**. You can also use these steps to track an application submitted in any other way, for example with **spark-submit**.
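
If you do submit a job with **spark-submit** instead of a notebook, the same tracking steps apply; the application simply appears in the YARN UI under its own name rather than **remotesparkmagics**. The following is a minimal sketch only, not part of the walkthrough: the script name, the sample data path, and the submit command are illustrative assumptions you should adjust for your cluster.

```python
# wordcount.py - a hypothetical, minimal PySpark application used only to
# illustrate tracking a job in the YARN UI.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-example").getOrCreate()

# Print the YARN application ID so this run is easy to find in the YARN UI.
print("Application ID:", spark.sparkContext.applicationId)

counts = (spark.read.text("/example/data/gutenberg/davinci.txt")
          .rdd.flatMap(lambda row: row.value.split())
          .map(lambda word: (word, 1))
          .reduceByKey(lambda a, b: a + b))
print(counts.take(10))

spark.stop()

# Submitted from an SSH session on the cluster head node with, for example:
#   spark-submit --master yarn wordcount.py
```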
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.

## Prerequisites

* An Apache Spark cluster on HDInsight. For instructions, see [Create Apache Spark clusters in Azure HDInsight](apache-spark-jupyter-spark-sql.md).

* You should have started running the notebook, **[Machine learning: Predictive analysis on food inspection data using MLLib](apache-spark-machine-learning-mllib-ipython.md)**. For instructions on how to run this notebook, follow the link.
## Track an application in the YARN UI
1. Launch the YARN UI. Select **Yarn** under **Cluster dashboards**.

   > Alternatively, you can launch the YARN UI from the Ambari UI. To launch the Ambari UI, select **Ambari home** under **Cluster dashboards**. From the Ambari UI, navigate to **YARN** > **Quick Links** > the active Resource Manager > **Resource Manager UI**.
2. Because you started the Spark job using Jupyter notebooks, the application has the name **remotesparkmagics** (the name for all applications started from the notebooks). Select the application ID next to the application name to get more information about the job. This launches the application view. (You can also print the application ID from the notebook itself; see the sketch after these steps.)

For applications launched from Jupyter notebooks, the status is always **RUNNING** until you exit the notebook.
3. From the application view, you can drill down further to find the containers associated with the application and the logs (stdout/stderr). You can also launch the Spark UI by selecting the link corresponding to the **Tracking URL**, as shown below.

## Track an application in the Spark UI

In the Spark UI, you can drill down into the Spark jobs that are spawned by the application you started earlier.
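
The descriptions shown for jobs in the Spark UI come from your code, so labeling work before you run it can make the drill-down easier. A minimal sketch using the standard `setJobGroup` call; the group name and the sample computation are arbitrary, not part of the food inspection notebook:

```python
# Label subsequent jobs so they are easy to spot on the Spark UI's Jobs page.
spark.sparkContext.setJobGroup("food-inspections-eda",
                               "Exploratory counts over inspection data")

# Any action run now is attributed to the job group and description above.
df = spark.range(0, 1000000)
print(df.count())
```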
1. To launch the Spark UI, from the application view, select the link next to the **Tracking URL**, as shown in the screen capture above. You can see all the Spark jobs that are launched by the application running in the Jupyter notebook.

2. Select the **Executors** tab to see processing and storage information for each executor. You can also retrieve the call stack by selecting the **Thread Dump** link.

3. Select the **Stages** tab to see the stages associated with the application.

6. Other tabs in the Spark UI provide useful information about the Spark instance as well.
    * Storage tab - If your application creates RDDs, you can find information about them in the Storage tab (a sketch of caching data so that an entry appears here follows this list).
    * Environment tab - This tab provides useful information about your Spark instance, such as the:
        * Scala version
        * Event log directory associated with the cluster
        * Number of executor cores for the application
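
As referenced above, the Storage tab only has entries to show if your application actually persists data. A small sketch of caching a DataFrame so that it appears there, and of reading a couple of the values the Environment tab reports; the dataset is arbitrary and the configuration values depend on your cluster:

```python
# Cache a DataFrame; once an action materializes it, the Storage tab lists it.
df = spark.range(0, 5000000)
df.cache()
df.count()  # action that materializes the cached data

# A couple of the values the Environment tab reports, read programmatically.
print(spark.sparkContext.version)
print(spark.conf.get("spark.executor.cores", "not set"))
```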
## Track an application in the Spark History Server

Once a job is completed, the information about the job is persisted in the Spark History Server.
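
What the Spark History Server can display is governed by Spark's event log settings, which HDInsight sets up for you when the cluster is created. A hedged sketch of checking (not changing) those values from a notebook or application; the exact values shown depend on your cluster:

```python
# Inspect the event-logging configuration that feeds the Spark History Server.
# These are standard Spark settings; some may be unset in the running application.
for key in ("spark.eventLog.enabled", "spark.eventLog.dir",
            "spark.history.fs.logDirectory"):
    print(key, "=", spark.conf.get(key, "not set"))
```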
1. To launch the Spark History Server, from the **Overview** page, select **Spark history server** under **Cluster dashboards**.

> [!TIP]
> Alternatively, you can launch the Spark History Server UI from the Ambari UI. To launch the Ambari UI, from the **Overview** page, select **Ambari home** under **Cluster dashboards**. From the Ambari UI, navigate to **Spark2** > **Quick Links** > **Spark2 History Server UI**.
2. You see all the completed applications listed. Select an application ID to drill down into an application for more information. (To retrieve the same list programmatically, see the sketch after these steps.)
