articles/hdinsight/spark/apache-spark-job-debugging.md (+6 −25: 6 additions, 25 deletions)
@@ -7,12 +7,12 @@ ms.reviewer: jasonh
 ms.service: hdinsight
 ms.topic: conceptual
 ms.custom: hdinsightactive
-ms.date: 11/29/2019
+ms.date: 04/23/2020
 ---

 # Debug Apache Spark jobs running on Azure HDInsight

-In this article, you learn how to track and debug [Apache Spark](https://spark.apache.org/) jobs running on HDInsight clusters using the [Apache Hadoop YARN](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html) UI, Spark UI, and the Spark History Server. You start a Spark job using a notebook available with the Spark cluster, **Machine learning: Predictive analysis on food inspection data using MLLib**. You can use the following steps to track an application that you submitted using any other approach as well, for example, **spark-submit**.
+In this article, you learn how to track and debug Apache Spark jobs running on HDInsight clusters. Debug using the Apache Hadoop YARN UI, Spark UI, and the Spark History Server. You start a Spark job using a notebook available with the Spark cluster, **Machine learning: Predictive analysis on food inspection data using MLLib**. Use the following steps to track an application that you submitted through any other approach as well, such as **spark-submit**.

 If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
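The revised intro still promises that these steps track applications submitted by any route, not just the notebook. As one concrete second route, a job can be posted to the cluster's Apache Livy endpoint and then followed in the same UIs. A minimal sketch, assuming a hypothetical cluster name, placeholder cluster-login credentials, and a job script already uploaded to cluster-attached storage:

```python
import requests

CLUSTER = "mycluster"                      # hypothetical cluster name
LIVY = f"https://{CLUSTER}.azurehdinsight.net/livy/batches"
AUTH = ("admin", "password")               # cluster login; placeholder only

# Submit a batch job; the script must already sit in cluster-reachable storage.
resp = requests.post(
    LIVY,
    auth=AUTH,
    headers={"X-Requested-By": "admin"},   # Livy expects this header on POST
    json={"file": "wasbs:///example/jobs/my_job.py"},  # hypothetical path
)
batch = resp.json()
print(batch["id"], batch["state"])         # e.g. 0 starting

# Poll the batch until Livy reports a terminal state (success, dead, killed).
status = requests.get(f"{LIVY}/{batch['id']}", auth=AUTH).json()
print(status["state"])
```

Once the batch starts, it appears in the YARN UI described in the next hunk, just like a notebook-submitted application.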
@@ -31,7 +31,7 @@ If you don't have an Azure subscription, create a [free account](https://azure.m
 > [!TIP]
 > Alternatively, you can also launch the YARN UI from the Ambari UI. To launch the Ambari UI, select **Ambari home** under **Cluster dashboards**. From the Ambari UI, navigate to **YARN** > **Quick Links** > the active Resource Manager > **Resource Manager UI**.

-2. Because you started the Spark job using Jupyter notebooks, the application has the name **remotesparkmagics** (this is the name for all applications that are started from the notebooks). Select the application ID against the application name to get more information about the job. This launches the application view.
+2. Because you started the Spark job using Jupyter notebooks, the application has the name **remotesparkmagics** (the name for all applications started from the notebooks). Select the application ID against the application name to get more information about the job. This action launches the application view.

    
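Because every notebook job carries the **remotesparkmagics** name, the application ID from step 2 can also be picked out programmatically instead of by eye. A hedged sketch, assuming the YARN ResourceManager REST API is reachable through the cluster's `/yarnui` proxy path (the path, cluster name, and credentials are all placeholders to verify against your cluster):

```python
import requests

CLUSTER = "mycluster"                      # hypothetical cluster name
URL = f"https://{CLUSTER}.azurehdinsight.net/yarnui/ws/v1/cluster/apps"
AUTH = ("admin", "password")               # cluster login; placeholder only

# Ask the ResourceManager for running applications only.
apps = requests.get(URL, auth=AUTH, params={"states": "RUNNING"}).json()

for app in (apps.get("apps") or {}).get("app", []):
    # Notebook-submitted jobs all carry the name "remotesparkmagics".
    if app["name"] == "remotesparkmagics":
        print(app["id"], app["state"], app["trackingUrl"])
```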
@@ -71,19 +71,18 @@ In the Spark UI, you can drill down into the Spark jobs that are spawned by the
-This displays the Spark events in the form of a timeline. The timeline view is available at three levels, across jobs, within a job, and within a stage. The image above captures the timeline view for a given stage.
+This image displays the Spark events in the form of a timeline. The timeline view is available at three levels: across jobs, within a job, and within a stage. The image above captures the timeline view for a given stage.

 > [!TIP]
 > If you select the **Enable zooming** check box, you can scroll left and right across the timeline view.

 6. Other tabs in the Spark UI provide useful information about the Spark instance as well.

-   * Storage tab - If your application creates an RDD, you can find information about those in the Storage tab.
+   * Storage tab - If your application creates an RDD, you can find information about it in the Storage tab.
    * Environment tab - This tab provides useful information about your Spark instance, such as the:
      * Scala version
      * Event log directory associated with the cluster
      * Number of executor cores for the application
-     * Etc.
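The settings the Environment tab lists can also be read from a notebook cell on the cluster. A small sketch, assuming a PySpark session like the one the cluster's Jupyter kernels provide (there, `getOrCreate()` simply returns the live session):

```python
from pyspark.sql import SparkSession

# In a Jupyter notebook on the cluster, a session already exists and
# getOrCreate() returns it rather than building a new one.
spark = SparkSession.builder.getOrCreate()

# getAll() yields (key, value) pairs for the same settings the
# Environment tab displays, e.g. spark.executor.cores, spark.eventLog.dir.
for key, value in sorted(spark.sparkContext.getConf().getAll()):
    print(f"{key} = {value}")
```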

 ## Find information about completed jobs using the Spark History Server
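The next hunk's context line notes that a completed job's information is persisted in the Spark History Server. That server also serves Spark's monitoring REST API, so completed applications can be listed without the UI. A sketch, assuming HDInsight exposes the API under the cluster's `/sparkhistory` path (path, cluster name, and credentials are placeholders):

```python
import requests

CLUSTER = "mycluster"                      # hypothetical cluster name
HISTORY = f"https://{CLUSTER}.azurehdinsight.net/sparkhistory/api/v1"
AUTH = ("admin", "password")               # cluster login; placeholder only

# Each application lists one entry per attempt, flagged completed or not.
for app in requests.get(f"{HISTORY}/applications", auth=AUTH).json():
    for attempt in app["attempts"]:
        state = "completed" if attempt["completed"] else "in progress"
        print(app["id"], app["name"], state)
```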
@@ -104,22 +103,4 @@ Once a job is completed, the information about the job is persisted in the Spark
 * [Manage resources for the Apache Spark cluster in Azure HDInsight](apache-spark-resource-manager.md)
 * [Debug Apache Spark Jobs using extended Spark History Server](apache-azure-spark-history-server.md)
-
-### For data analysts
-
-* [Apache Spark with Machine Learning: Use Spark in HDInsight for analyzing building temperature using HVAC data](apache-spark-ipython-notebook-machine-learning.md)
-* [Apache Spark with Machine Learning: Use Spark in HDInsight to predict food inspection results](apache-spark-machine-learning-mllib-ipython.md)
-* [Website log analysis using Apache Spark in HDInsight](apache-spark-custom-library-website-log-analysis.md)
-* [Application Insight telemetry data analysis using Apache Spark in HDInsight](apache-spark-analyze-application-insight-logs.md)
-
-### For Spark developers
-
-* [Create a standalone application using Scala](apache-spark-create-standalone-application.md)
-* [Run jobs remotely on an Apache Spark cluster using Apache Livy](apache-spark-livy-rest-interface.md)
-* [Use HDInsight Tools Plugin for IntelliJ IDEA to create and submit Spark Scala applications](apache-spark-intellij-tool-plugin.md)
-* [Use HDInsight Tools Plugin for IntelliJ IDEA to debug Apache Spark applications remotely](apache-spark-intellij-tool-plugin-debug-jobs-remotely.md)
-* [Use Apache Zeppelin notebooks with an Apache Spark cluster on HDInsight](apache-spark-zeppelin-notebook.md)
-* [Kernels available for Jupyter notebook in Apache Spark cluster for HDInsight](apache-spark-jupyter-notebook-kernels.md)
-* [Use external packages with Jupyter notebooks](apache-spark-jupyter-notebook-use-external-packages.md)
-* [Install Jupyter on your computer and connect to an HDInsight Spark cluster](apache-spark-jupyter-notebook-install-locally.md)
+* [Debug Apache Spark applications with Azure Toolkit for IntelliJ through SSH](apache-spark-intellij-tool-debug-remotely-through-ssh.md)