description: Learn how to add and manage libraries used by Apache Spark in Azure Synapse Analytics.

# Manage libraries for Apache Spark in Azure Synapse Analytics

Libraries provide reusable code that you might want to include in your programs or projects for Apache Spark in Azure Synapse Analytics (Azure Synapse Spark).

You might need to update your serverless Apache Spark pool environment for various reasons. For example, you might find that:

- One of your core dependencies released a new version.
- You need an extra package for training your machine learning model or preparing your data.
- A better package is available, and you no longer need the older package.
- Your team has built a custom package that you need available in your Apache Spark pool.

To make third-party or locally built code available to your applications, install a library onto one of your serverless Apache Spark pools or a notebook session.
## Overview of package levels

There are three levels of packages installed on Azure Synapse Analytics:

- **Default**: Default packages include a full Anaconda installation, plus extra commonly used libraries. For a full list of libraries, see [Apache Spark version support](apache-spark-version-support.md).

  When a Spark instance starts, these libraries are included automatically. You can add more packages at the other levels.
- **Spark pool**: All running artifacts can use packages at the Spark pool level. For example, you can attach notebook and Spark job definitions to corresponding Spark pools.

  You can upload custom libraries and a specific version of an open-source library that you want to use in your Azure Synapse Analytics workspace. The workspace packages can be installed in your Spark pools.
- **Session**: A session-level installation creates an environment for a specific notebook session. The change of session-level libraries isn't persisted between sessions.

> [!NOTE]
> Pool-level library management can take time, depending on the size of the packages and the complexity of required dependencies. We recommend the session-level installation for experimental and quick iterative scenarios.
40
+
41
+
## Manage workspace packages
40
42
41
43
When your team develops custom applications or models, you might develop various code artifacts like *.whl*, *.jar*, or *tar.gz* files to package your code.
42
44
43
-
In Synapse, workspace packages can be custom or private *.whl* or *.jar* files. You can upload these packages to your workspace and later assign them to a specific serverless Apache Spark pool. Once assigned, these workspace packages are installed automatically on all Spark pool sessions.
45
+
In Azure Synapse, workspace packages can be custom or private *.whl* or *.jar* files. You can upload these packages to your workspace and later assign them to a specific serverless Apache Spark pool. After you assign these workspace packages, they're installed automatically on all Spark pool sessions.
44
46
45
-
To learn more about how to manage workspace libraries, see the following article:
47
+
To learn more about how to manage workspace libraries, see [Manage workspace packages](./apache-spark-manage-workspace-packages.md).
## Manage pool packages

In some cases, you might want to standardize the packages that are used on an Apache Spark pool. This standardization can be useful if multiple people on your team commonly install the same packages.

By using the pool management capabilities of Azure Synapse Analytics, you can configure the default set of libraries to install on a serverless Apache Spark pool. These libraries are installed on top of the [base runtime](./apache-spark-version-support.md).

Currently, pool management is supported only for Python. For Python, Azure Synapse Spark pools use Conda to install and manage Python package dependencies.

When you're specifying pool-level libraries, you can now provide a *requirements.txt* or *environment.yml* file. This environment configuration file is used every time a Spark instance is created from that Spark pool.
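For illustration, a minimal pool-level *requirements.txt* might look like the following sketch. The package names and versions are only placeholders; list the PyPI packages your workloads actually need, in standard pip format.

```
# Example requirements.txt (illustrative packages and version pins)
beautifulsoup4==4.12.2
xgboost==1.7.6
```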
To learn more about these capabilities, see [Manage Spark pool packages](./apache-spark-manage-pool-packages.md).

> [!IMPORTANT]
> - If the package that you're installing is large or takes a long time to install, it might affect the Spark instance's startup time.
> - Altering the PySpark, Python, Scala/Java, .NET, or Spark version is not supported.

## Manage dependencies for DEP-enabled Azure Synapse Spark pools
> [!NOTE]
> Installing packages from a public repo is not supported within [DEP-enabled workspaces](../security/workspace-data-exfiltration-protection.md). Instead, upload all your dependencies as workspace libraries and install them to your Spark pool.

If you're having trouble identifying required dependencies, follow these steps:

1. Run the following script to set up a local Python environment that's the same as the Azure Synapse Spark environment. The script requires [Synapse-Python38-CPU.yml](https://github.com/Azure-Samples/Synapse/blob/main/Spark/Python/Synapse-Python38-CPU.yml), which is the list of libraries shipped in the default Python environment in Azure Synapse Spark.
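   As a minimal sketch (the exact setup script may differ), and assuming Conda is already installed on your local machine, the setup amounts to creating and activating a Conda environment from that file. The environment name here is only an example:

   ```bash
   # Sketch only: build a local Conda environment that mirrors the default
   # Azure Synapse Spark Python environment. Assumes Conda is installed locally;
   # the environment name "synapse-local-env" is illustrative.
   conda env create -n synapse-local-env -f Synapse-Python38-CPU.yml
   conda activate synapse-local-env
   ```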
1. Run the following script to identify the required dependencies.

   The script can be used to pass your *requirements.txt* file, which has all the packages and versions that you intend to install in the Spark 3.1 or Spark 3.2 pool. It will print the names of the *new* wheel files/dependencies for your input library requirements.

   ```python
   # Command to list wheels needed for your input libraries.
   # This command will list only new dependencies that are
   # not already part of the built-in Azure Synapse environment.
   # Run it in the local environment from the previous step; <input-user-req.txt> is a placeholder for your requirements file.
   pip install -r <input-user-req.txt> > pip_output.txt
   cat pip_output.txt | grep "Using cached *"
   ```

   > This script will list only the dependencies that are not already present in the Spark pool by default.
## Manage session-scoped packages

When you're doing interactive data analysis or machine learning, you might try newer packages, or you might need packages that are currently unavailable on your Apache Spark pool. Instead of updating the pool configuration, you can use session-scoped packages to add, manage, and update session dependencies.

Session-scoped packages allow users to define package dependencies at the start of their session. When you install a session-scoped package, only the current session has access to the specified packages. As a result, these session-scoped packages don't affect other sessions or jobs that use the same Apache Spark pool. In addition, these libraries are installed on top of the base runtime and pool-level packages.
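One quick way to confirm which version of a library is active in your current session (for example, after a session-scoped install) is to check it from a notebook cell. The package name below is only an example:

```python
# Check the version of a package as seen by the current notebook session.
# "numpy" is an illustrative package name; substitute the package you installed.
from importlib.metadata import version

print(version("numpy"))
```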
To learn more about how to manage session-scoped packages, see the following articles:

- [Python session packages](./apache-spark-manage-session-packages.md#session-scoped-python-packages): At the start of a session, provide a Conda *environment.yml* file to install more Python packages from popular repositories.
- [Scala/Java session packages](./apache-spark-manage-session-packages.md#session-scoped-java-or-scala-packages): At the start of your session, provide a list of *.jar* files to install by using `%%configure`.
- [R session packages](./apache-spark-manage-session-packages.md#session-scoped-r-packages-preview): Within your session, you can install packages across all nodes within your Spark pool by using `install.packages` or `devtools`.

## Manage your packages outside the Azure Synapse Analytics UI
If your team wants to manage libraries without visiting the package management UIs, you have the option to manage the workspace packages and pool-level package updates through Azure PowerShell cmdlets or REST APIs for Azure Synapse Analytics.

For more information, see the following articles:

- [Manage your Spark pool libraries through REST APIs](apache-spark-manage-packages-outside-ui.md#manage-packages-through-rest-apis)
- [Manage your Spark pool libraries through Azure PowerShell cmdlets](apache-spark-manage-packages-outside-ui.md#manage-packages-through-azure-powershell-cmdlets)

## Next steps

- View the default libraries: [Apache Spark version support](apache-spark-version-support.md)