learn-pr/wwl/use-apache-spark-work-files-lakehouse/includes/2-spark.md
8 additions & 8 deletions
@@ -22,9 +22,9 @@ Microsoft Fabric provides a *starter pool* in each workspace, enabling Spark job
Additionally, you can create custom Spark pools with specific node configurations that support your particular data processing needs.
> [!NOTE]
-> The ability to customize Spark pool settings can be disabled by Fabric administrators at the Fabric Capacity level. For more information, see **[Capacity administration settings for Data Engineering and Data Science](/fabric/data-engineering/capacity-settings-overview)** in the Fabric documentation.
+> The ability to customize Spark pool settings can be disabled by Fabric administrators at the Fabric Capacity level. For more information, see **[Capacity administration settings for Data Engineering and Data Science](/fabric/data-engineering/capacity-settings-overview?azure-portal=true)** in the Fabric documentation.
-You can manage settings for the starter pool and create new Spark pools in the **Data Engineering/Science** section of the workspace settings.
+You can manage settings for the starter pool and create new Spark pools in the **Admin portal** section of the workspace settings, under **Capacity settings**, then **Data Engineering/Science Settings**.
@@ -37,7 +37,7 @@ Specific configuration settings for Spark pools include:
If you create one or more custom Spark pools in a workspace, you can set one of them (or the starter pool) as the default pool to be used if a specific pool is not specified for a given Spark job.
> [!TIP]
-> For more information about managing Spark pools in Microsoft Fabric, see **[Configuring starter pools in Microsoft Fabric](/fabric/data-engineering/configure-starter-pools)** and **[How to create custom Spark pools in Microsoft Fabric](/fabric/data-engineering/create-custom-spark-pools)** in the Microsoft Fabric documentation.
+> For more information about managing Spark pools in Microsoft Fabric, see **[Configuring starter pools in Microsoft Fabric](/fabric/data-engineering/configure-starter-pools?azure-portal=true)** and **[How to create custom Spark pools in Microsoft Fabric](/fabric/data-engineering/create-custom-spark-pools?azure-portal=true)** in the Microsoft Fabric documentation.
## Runtimes and environments
@@ -50,7 +50,7 @@ In some cases, organizations may need to define multiple *environments* to suppo
Microsoft Fabric supports multiple Spark runtimes, and will continue to add support for new runtimes as they are released. You can use the workspace settings interface to specify the Spark runtime that is used by the default environment when a Spark pool is started.
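(Context, not part of this change: if you want to confirm which Spark version a given runtime selection provides, a minimal PySpark check like the following works in a notebook cell; the `getOrCreate()` call simply reuses the session a Fabric notebook already provides.)

```python
from pyspark.sql import SparkSession

# Reuse the notebook's existing Spark session (or create a local one elsewhere).
spark = SparkSession.builder.getOrCreate()

# The runtime chosen in the workspace settings determines the Spark version reported here.
print(spark.version)
```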
> [!TIP]
-> For more information about Spark runtimes in Microsoft Fabric, see **[Apache Spark Runtimes in Fabric](/fabric/data-engineering/runtime)** in the Microsoft Fabric documentation.
+> For more information about Spark runtimes in Microsoft Fabric, see **[Apache Spark Runtimes in Fabric](/fabric/data-engineering/runtime?azure-portal=true)** in the Microsoft Fabric documentation.
### Environments in Microsoft Fabric
@@ -71,7 +71,7 @@ When creating an environment, you can:
After creating at least one custom environment, you can specify it as the default environment in the workspace settings.
> [!TIP]
-> For more information about using custom environments in Microsoft Fabric, see **[Create, configure, and use an environment in Microsoft Fabric](/fabric/data-engineering/create-and-use-environment)** in the Microsoft Fabric documentation.
+> For more information about using custom environments in Microsoft Fabric, see **[Create, configure, and use an environment in Microsoft Fabric](/fabric/data-engineering/create-and-use-environment?azure-portal=true)** in the Microsoft Fabric documentation.
## Additional Spark configuration options
@@ -99,7 +99,7 @@ To enable the native execution engine for a specific script or notebook, you can
```
> [!TIP]
-> For more information about the native execution engine, see **[Native execution engine for Fabric Spark](/fabric/data-engineering/native-execution-engine-overview)** in the Microsoft Fabric documentation.
+> For more information about the native execution engine, see **[Native execution engine for Fabric Spark](/fabric/data-engineering/native-execution-engine-overview?azure-portal=true)** in the Microsoft Fabric documentation.
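(For context: the module's own enablement example ends at the closing code fence shown above. As a rough sketch based on the linked documentation, enabling the engine for a single notebook session is typically done with a `%%configure` cell that sets a Spark property; treat the property name `spark.native.enabled` as an assumption rather than the exact block from this module.)

```
%%configure
{
    "conf": {
        "spark.native.enabled": "true"
    }
}
```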
### High concurrency mode
@@ -108,7 +108,7 @@ When you run Spark code in Microsoft Fabric, a Spark session is initiated. You c
To enable high concurrency mode, use the **Data Engineering/Science** section of the workspace settings interface.
> [!TIP]
-> For more information about high concurrency mode, see **[High concurrency mode in Apache Spark for Fabric](/fabric/data-engineering/high-concurrency-overview)** in the Microsoft Fabric documentation.
+> For more information about high concurrency mode, see **[High concurrency mode in Apache Spark for Fabric](/fabric/data-engineering/high-concurrency-overview?azure-portal=true)** in the Microsoft Fabric documentation.
### Automatic MLFlow logging
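(Context for this section, which the diff below does not change: automatic logging in Fabric notebooks builds on the standard MLflow autologging API. A minimal sketch of controlling it from a notebook cell, assuming autologging is enabled by default in the session:)

```python
import mlflow

# Turn automatic experiment/run logging off for this session...
mlflow.autolog(disable=True)

# ...or re-enable it before running training code.
mlflow.autolog()
```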
@@ -119,5 +119,5 @@ MLFlow is an open source library that is used in data science workloads to manag
Administrators can manage Spark settings at a Fabric capacity level, enabling them to restrict and override Spark settings in workspaces within an organization.
> [!TIP]
-> For more information about managing Spark configuration at the Fabric capacity level, see **[Configure and manage data engineering and data science settings for Fabric capacities](/fabric/data-engineering/capacity-settings-management)** in the Microsoft Fabric documentation.
+> For more information about managing Spark configuration at the Fabric capacity level, see **[Configure and manage data engineering and data science settings for Fabric capacities](/fabric/data-engineering/capacity-settings-management?azure-portal=true)** in the Microsoft Fabric documentation.