---
title: Troubleshoot Spark application issues with Spark Advisor
description: Learn how to troubleshoot Spark application issues with Spark Advisor. The advisor automatically analyzes queries and commands, and offers advice.
services: synapse-analytics
author: jejiang
ms.author: jejiang
ms.subservice: spark
ms.date: 06/23/2022
---
# Troubleshoot Spark application issues with Spark Advisor
Spark Advisor is a system that automatically analyzes your code, queries, and commands, and advises you about them. By following this advice, you can improve your execution performance, fix execution failures, and decrease costs. This article helps you solve common problems with Spark Advisor.
## Advice on query hints
### Unable to recognize a hint

The selected query contains a hint that isn't recognized. Verify that the hint is spelled correctly.
```scala
spark.sql("SELECT /*+ unknownHint */ * FROM t1")
```
### Unable to find specified relation names
The relations specified in the hint can't be found. Verify that the relations are spelled correctly and are accessible within the scope of the hint.
```scala
spark.sql("SELECT /*+ BROADCAST(unknownTable) */ * FROM t1 INNER JOIN t2 ON t1.str = t2.str")
```
### A hint in the query prevents another hint from being applied
The selected query contains a hint that prevents another hint from being applied.
```scala
spark.sql("SELECT /*+ BROADCAST(t1), MERGE(t1, t2) */ * FROM t1 INNER JOIN t2 ON t1.str = t2.str")
```
### Reduce rounding error propagation caused by division

This query contains an expression with the `double` type. We recommend that you enable the configuration `spark.advise.divisionExprConvertRule.enable`, which can help reduce the number of division expressions and the resulting rounding error propagation.
```text
"t.a/t.b/t.c" convert into "t.a/(t.b * t.c)"
```
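The following sketch assumes the setting is accepted as a session-level Spark configuration that you can set through the standard `spark.conf.set` API; the table `t` and its columns `a`, `b`, and `c` are the same placeholder names used in the example above.

```scala
// Assumption: the advisor setting can be applied at the session level.
spark.conf.set("spark.advise.divisionExprConvertRule.enable", "true")

// A chained division over double columns. With the rule enabled, the advisor
// suggests rewriting t.a / t.b / t.c as t.a / (t.b * t.c), so fewer division
// steps contribute rounding error to the final result.
spark.sql("SELECT t.a / t.b / t.c AS ratio FROM t")
```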
### Improve query performance for non-equal join

This query contains a time-consuming join because of an `Or` condition within the query. We recommend that you enable the configuration `spark.advise.nonEqJoinConvertRule.enable`. It can help convert the join triggered by the `Or` condition to a shuffle sort merge join (SMJ) or broadcast hash join (BHJ) to accelerate the query.
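As a sketch under the same assumption that the setting can be applied at the session level, the snippet below enables the rule before running a join whose condition combines equality predicates with `Or`; the `t1.str` and `t2.str` columns come from the earlier examples, and the `id` columns are illustrative.

```scala
// Assumption: the advisor setting can be applied at the session level.
spark.conf.set("spark.advise.nonEqJoinConvertRule.enable", "true")

// A join keyed on an OR of equality predicates. Without the rule, this kind of
// condition can fall back to a slow join strategy; with it, Spark can plan the
// query as a shuffle sort merge join (SMJ) or broadcast hash join (BHJ).
spark.sql("SELECT * FROM t1 INNER JOIN t2 ON t1.str = t2.str OR t1.id = t2.id")
```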
### The use of the randomSplit method might return inconsistent results
Spark might return inconsistent or inaccurate results when you work with the results of the `randomSplit` method. Use Apache Spark resilient distributed dataset (RDD) caching before you use the `randomSplit` method.
The `randomSplit()` method is equivalent to performing `sample()` on your DataFrame multiple times, with each sample refetching, partitioning, and sorting your DataFrame within partitions. The data distribution across partitions and the sort order are important for both `randomSplit()` and `sample()`. If either changes upon data refetch, there might be duplicates or missing values across splits, and the same sample that uses the same seed might produce different results.
These inconsistencies might not happen on every run. To eliminate them completely, cache your DataFrame, repartition on columns, or apply aggregate functions such as `groupBy`.
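As a minimal sketch of this advice, the snippet below builds a synthetic DataFrame (standing in for your real data), caches and materializes it, and only then calls `randomSplit`, so every split reads the same cached partitions.

```scala
// A synthetic DataFrame standing in for your real data.
val df = spark.range(0, 100000).toDF("id")

// Cache and materialize the DataFrame so every split reads the same cached
// partitions instead of refetching and repartitioning the source per sample.
df.cache()
df.count()

// With the cached input, the same seed produces stable, non-overlapping splits.
val Array(trainDf, testDf) = df.randomSplit(Array(0.8, 0.2), seed = 42)
```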
### A table or view name might already be in use
58
+
A view already exists with the same name as the created table, or a table already exists with the same name as the created view. When you use this name in queries or applications, only the view is returned, regardless of which object was created first. To avoid conflicts, rename either the table or the view.
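For example, a temporary view that shares a name with an existing table shadows that table, so queries on that name resolve to the view. In the sketch below, the name `sales` and the DataFrame are purely illustrative; registering the view under a distinct name and dropping the conflicting one makes the name resolve to the table again.

```scala
// A table named "sales" already exists (the name is illustrative).
spark.sql("CREATE TABLE IF NOT EXISTS sales (id INT, amount DOUBLE) USING parquet")

// Registering a temporary view with the same name shadows the table:
// queries on "sales" now return the view, whichever object was created first.
val recentDf = spark.range(0, 10).selectExpr("id", "CAST(id AS double) AS amount")
recentDf.createOrReplaceTempView("sales")

// Rename on the view side: register it under a distinct name and drop the
// conflicting temporary view, so "sales" refers to the table again.
recentDf.createOrReplaceTempView("sales_recent")
spark.catalog.dropTempView("sales")
```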
## Next steps
For more information on monitoring pipeline runs, see [Monitor pipeline runs using Synapse Studio](how-to-monitor-pipeline-runs.md).