Changed file: articles/synapse-analytics/spark/apache-spark-azure-portal-add-libraries.md (3 additions, 3 deletions)
@@ -54,7 +54,7 @@ By using the pool management capabilities of Azure Synapse Analytics, you can co
Currently, pool management is supported only for Python. For Python, Azure Synapse Spark pools use Conda to install and manage Python package dependencies.
-When you're specifying pool-level libraries, you can now provide a *requirements.txt* or an *environment.yml* file. This environment configuration file is used every time a Spark instance is created from that Spark pool.
+When you're specifying pool-level libraries, you can now provide a *requirements.txt* or *environment.yml* file. This environment configuration file is used every time a Spark instance is created from that Spark pool.
To learn more about these capabilities, see [Manage Spark pool packages](./apache-spark-manage-pool-packages.md).
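For illustration (this example is not part of the diff), a minimal conda *environment.yml* of the kind a pool accepts might look like the following; the package names and versions are placeholders only:

```yml
name: example-pool-env
channels:
  - conda-forge
dependencies:
  - scikit-learn=0.24.1
  - pip:
    - azure-storage-blob==12.8.0
```

A *requirements.txt* file serves the same purpose in plain pip format, one `package==version` line per dependency.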
@@ -72,7 +72,7 @@ If you're having trouble identifying required dependencies, follow these steps:
1. Run the following script to set up a local Python environment that's the same as the Azure Synapse Spark environment. The script requires [Synapse-Python38-CPU.yml](https://github.com/Azure-Samples/Synapse/blob/main/Spark/Python/Synapse-Python38-CPU.yml), which is the list of libraries shipped in the default Python environment in Azure Synapse Spark.
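The dependency-identification step described below amounts to diffing the packages you request against what the base environment already provides. A rough, self-contained sketch of that idea (this is not the actual Synapse script; `parse_requirements`, `new_dependencies`, and the sample package names are hypothetical):

```python
def parse_requirements(text):
    """Parse requirements.txt-style content into {name: version} pairs."""
    pkgs = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, version = line.partition("==")
        pkgs[name.lower()] = version or None
    return pkgs

def new_dependencies(base_env, requirements):
    # Packages requested that the base environment does not already provide.
    return {n: v for n, v in requirements.items() if n not in base_env}

# Example: diff a hypothetical base environment against requested packages.
base = parse_requirements("numpy==1.19.4\npandas==1.2.3")
wanted = parse_requirements("pandas==1.2.3\nscikit-learn==0.24.1")
print(new_dependencies(base, wanted))  # → {'scikit-learn': '0.24.1'}
```

The real script additionally resolves transitive dependencies and reports the wheel files to fetch; this sketch only shows the set-difference core.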
@@ -82,7 +82,7 @@ If you're having trouble identifying required dependencies, follow these steps:
```
1. Run the following script to identify the required dependencies.
-The script can be used to pass your *requirement.txt* file, which has all the packages and versions that you intend to install in the Spark 3.1 or Spark 3.2 pool. It will print the names of the *new* wheel files/dependencies for your input library requirements.
+The script can be used to pass your *requirements.txt* file, which has all the packages and versions that you intend to install in the Spark 3.1 or Spark 3.2 pool. It will print the names of the *new* wheel files/dependencies for your input library requirements.
```python
# Command to list wheels needed for your input libraries.