`articles/synapse-analytics/spark/apache-spark-troubleshoot-library-errors.md`

To view these logs:
4. Within the results, you'll see the logs related to the installation of your packages.
## Track installation failures
In certain cases, users can also inspect the full installation logs available in the Spark History Server to identify complicated dependency conflicts. The logs available through the Spark UI can be truncated, so accessing the full installation logs through the Spark History Server is useful in complex library installation scenarios.
To view the full installation logs:
1. Navigate to the Spark applications list in the **Monitor** tab.
2. Select the system Spark application job that corresponds to the failed pool update. These system jobs run under the *SystemReservedJob-LibraryManagement* title.
3. Select the highlighted **Spark history server** option, which opens the Spark history server details page in a new tab.
4. On this page, you'll see two attempts. Select **Attempt 1**.
5. On the top navigation bar in the Spark history server page, switch to the **Executors** tab.
6. Download the **stdout** and **stderr** log files to access the full library management output and error logs.
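
If you prefer to script this retrieval, note that open-source Apache Spark exposes a monitoring REST API on the history server. The sketch below is illustrative only: the endpoint path is the standard Spark API, but whether the Synapse-hosted history server accepts direct REST calls, and how it authenticates, are assumptions that may not hold in your workspace.

```bash
# Sketch using the standard Spark monitoring REST API. HISTORY_SERVER,
# APP_ID, and the bearer token are placeholders; availability of this
# endpoint on the Synapse-hosted history server is an assumption.
HISTORY_SERVER="https://<your-history-server-endpoint>"
APP_ID="<application-id-of-the-library-management-job>"

# Each executor entry includes an executorLogs map that links to its
# stdout and stderr files.
curl -s -H "Authorization: Bearer <token>" \
  "$HISTORY_SERVER/api/v1/applications/$APP_ID/executors"
```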
## Validate your permissions
To install and update libraries, you must have the **Storage Blob Data Contributor** or **Storage Blob Data Owner** permissions on the primary Azure Data Lake Storage Gen2 storage account that is linked to the Azure Synapse Analytics workspace.
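
For illustration, this kind of role assignment can be made with the Azure CLI. The following is a minimal sketch, assuming you already know the assignee's object ID and the storage account's resource ID; every bracketed value is a placeholder.

```bash
# Sketch: grant Storage Blob Data Contributor on the workspace's primary
# ADLS Gen2 account. All <bracketed> values are placeholders.
az role assignment create \
  --assignee "<user-or-service-principal-object-id>" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```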

To recreate the environment and validate your updates:

1. Create a fresh virtual environment (for example, with conda).

2. Activate the new environment:

   ```
   conda activate myenv
   ```
3. Use ``pip install -r <provide your req.txt file>`` to update the virtual environment with your specified packages. If the installation results in an error, then there may be a conflict between what is pre-installed in the Synapse base runtime and what is specified in the provided requirements file. These dependency conflicts must be resolved in order to get the updated libraries on your serverless Apache Spark pool.
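
Taken together, the steps above amount to a short interactive shell session. Here's a minimal sketch, assuming conda is installed locally and your requirements file is named requirements.txt; the environment name and Python version are placeholders, so match the Python version to your Spark pool's runtime.

```bash
# Sketch: recreate a clean environment and test the requirements file.
# "myenv", python=3.8, and requirements.txt are placeholders.
conda create -n myenv python=3.8 -y
conda activate myenv

# If this command fails, its error output names the conflicting packages.
pip install -r requirements.txt
```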
>[!IMPORTANT]
>Issues may arise when using pip and conda together. When combining pip and conda, it's best to follow these [recommended best practices](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#activating-an-environment).
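
As a hedged illustration of that guidance (package names below are placeholders), install conda-managed packages first and add pip-only packages last, so pip doesn't overwrite files that conda tracks:

```bash
# Sketch of the "conda first, pip last" pattern from conda's guidance.
# Package names are placeholders.
conda install -n myenv numpy pandas
conda activate myenv

# Use pip only for packages that aren't available through conda.
pip install <your-pip-only-package>
```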
## Next steps
- View the default libraries: [Apache Spark version support](apache-spark-version-support.md)