author: hrasheed-msft
ms.author: hrasheed
ms.reviewer: jasonh
ms.service: hdinsight
ms.topic: conceptual
ms.custom: hdinsightactive,hdiseo17may2017
ms.date: 11/25/2019
---
# Use extended Apache Spark History Server to debug and diagnose Apache Spark applications

This article provides guidance on how to use the extended Apache Spark History Server to debug and diagnose completed and running Spark applications. The extension includes a **Data** tab, a **Graph** tab, and a **Diagnosis** tab. On the **Data** tab, you can check the input and output data of the Spark job. On the **Graph** tab, you can check the data flow and replay the job graph. On the **Diagnosis** tab, you can refer to **Data Skew**, **Time Skew**, and **Executor Usage Analysis**.
## Get access to Apache Spark History Server
Apache Spark History Server is the web UI for completed and running Spark applications.
### Open the Apache Spark History Server Web UI from Azure portal
1. From the [Azure portal](https://portal.azure.com/), open the Spark cluster. For more information, see [List and show clusters](../hdinsight-administer-use-portal-linux.md#showClusters).
2. From **Cluster dashboards**, select **Spark history server**. When prompted, enter the admin credentials for the Spark cluster.

### Open the Spark History Server Web UI by URL
Open the Spark History Server by browsing to `https://CLUSTERNAME.azurehdinsight.net/sparkhistory`, where CLUSTERNAME is the name of your Spark cluster.

The Spark History Server web UI may look similar to:

## Data tab in Spark History Server

Select a job ID, then select **Data** on the tool menu to get the data view.

Review the **Inputs**, **Outputs**, and **Table Operations** by selecting the individual tabs.


Copy all rows by selecting the **Copy** button.


Save all data as a CSV file by selecting the **csv** button.


Search by entering keywords in the **Search** field; the search results display immediately.


Select the column header to sort the table, select the plus sign to expand a row to show more details, or select the minus sign to collapse a row.


Download a single file by selecting the **Partial Download** button on the right. The selected file is downloaded locally. If the file no longer exists, a new tab opens to show the error messages.


Copy the full path or relative path by selecting **Copy Full Path** or **Copy Relative Path** from the expanded download menu. For Azure Data Lake Storage files, **Open in Azure Storage Explorer** launches Azure Storage Explorer and locates the folder after sign-in.


Select the number below the table to navigate pages when there are too many rows to display in one page.


Hover over the question mark beside **Data** to show the tooltip, or select the question mark to get more information.


## Graph tab in Apache Spark History Server
Select a job ID, then select **Graph** on the tool menu to get the job graph view.

Review the overview of your job in the generated job graph.

By default, the graph shows all jobs. You can filter it by **Job ID**.


Play back the job by selecting the **Playback** button and stop anytime by selecting the stop button. The tasks display in color to show different status during playback:

|Color|Description|
|---|---|
|Green|The job has completed successfully.|
|Orange|Instances of tasks that failed but don't affect the final result of the job. These tasks had duplicate or retry instances that may succeed later.|
|Blue|The task is running.|
|White|The task is waiting to run, or the stage has skipped.|
|Red|The task has failed.|


> [!NOTE]
> For the data size of read and write, we use 1 MB = 1000 KB = 1000 * 1000 bytes.

Send feedback with issues by selecting **Provide us feedback**.

## Diagnosis tab in Apache Spark History Server

Select a job ID, then select **Diagnosis** on the tool menu to get the job diagnosis view. The **Diagnosis** tab includes **Data Skew**, **Time Skew**, and **Executor Usage Analysis**.

Review **Data Skew**, **Time Skew**, and **Executor Usage Analysis** by selecting the respective tabs.

### Data Skew

Select the **Data Skew** tab; the corresponding skewed tasks are displayed based on the specified parameters.

**Specify Parameters** - The first section displays the parameters used to detect data skew. The built-in rule is: task data read is greater than three times the average task data read, and the task data read is more than 10 MB. If you want to define your own rule for skewed tasks, you can choose your parameters; the **Skewed Stage** and **Skew Chart** sections refresh accordingly.

**Skewed Stage** - The second section displays the stages that have skewed tasks meeting the criteria above. If there's more than one skewed task in a stage, the skewed stage table displays only the most skewed task (for example, the largest data read for data skew).


### Time Skew
The **Time Skew** tab displays skewed tasks based on task execution time.

**Specify Parameters** - The first section displays the parameters used to detect time skew. The default criteria are: task execution time greater than three times the average execution time, and task execution time greater than 30 seconds. You can change the parameters based on your needs. The **Skewed Stage** and **Skew Chart** display the corresponding stage and task information, just like the **Data Skew** tab above.

Select **Time Skew**; the filtered result is then displayed in the **Skewed Stage** section according to the parameters set in the **Specify Parameters** section. Select one item in the **Skewed Stage** section; the corresponding chart is drafted in section 3, and the task details are displayed in the right-bottom panel.

### Executor Usage Analysis
The Executor Usage Graph visualizes the Spark job's actual executor allocation and running status.

Select **Executor Usage Analysis**; four curves about executor usage are then drafted: **Allocated Executors**, **Running Executors**, **Idle Executors**, and **Max Executor Instances**. For allocated executors, each "Executor added" or "Executor removed" event increases or decreases the allocated executors. You can check the **Event Timeline** in the **Jobs** tab for more comparison.
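The allocated-executor curve can be reconstructed by folding the add/remove events into a running count. A minimal sketch, assuming a simplified list of `(timestamp, event)` pairs rather than the actual Spark event-log schema:

```python
def allocated_executor_curve(events):
    """Fold "Executor added"/"Executor removed" events into a running
    count of allocated executors, returned as (timestamp, count) points.
    `events` uses a simplified illustrative shape, not the real
    Spark event-log format."""
    count, curve = 0, []
    for ts, kind in sorted(events):
        if kind == "Executor added":
            count += 1
        elif kind == "Executor removed":
            count -= 1
        curve.append((ts, count))
    return curve

events = [(1, "Executor added"), (2, "Executor added"),
          (5, "Executor removed"), (7, "Executor added")]
print(allocated_executor_curve(events))
# → [(1, 1), (2, 2), (5, 1), (7, 2)]
```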

If you have any feedback, or come across any issues when using this tool, send an email at ([[email protected]](mailto:[email protected])).