Commit cf747de

Updates

1 parent 4596ae0 commit cf747de

3 files changed (+2 −6 lines changed)

learn-pr/wwl/use-apache-spark-work-files-lakehouse/includes/2-spark.md

Lines changed: 1 addition & 1 deletion
@@ -103,7 +103,7 @@ To enable the native execution engine for a specific script or notebook, you can

### High concurrency mode

- When you run Spark code in Microsoft Fabric, a Spark session is initiated. You can optimize the efficiency of Spark resource usage by using *high concurrency mode* to share Spark sessions across multiple concurrent users or processes. When high concurrency mode is enabled for Notebooks, multiple users can run code in notebooks that use the same Spark session, while ensuring isolation of code to avoid variables in one notebook being affected by code in another notebook. You can also enable high concurrency mode for Spark jobs, enabling similar efficiencies for concurrent non-interactive Spark script execution.
+ When you run Spark code in Microsoft Fabric, a Spark session is initiated. You can optimize the efficiency of Spark resource usage by using *high concurrency mode* to share Spark sessions across multiple concurrent users or processes. A notebook uses a Spark session for its execution. When high concurrency mode is enabled, multiple users can, for example, run code in notebooks that use the same Spark session, while ensuring isolation of code to avoid variables in one notebook being affected by code in another notebook. You can also enable high concurrency mode for Spark jobs, enabling similar efficiencies for concurrent non-interactive Spark script execution.

To enable high concurrency mode, use the **Data Engineering/Science** section of the workspace settings interface.

learn-pr/wwl/use-apache-spark-work-files-lakehouse/includes/3-spark-code.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ To edit and run Spark code in Microsoft Fabric, you can use *notebooks*, or you

## Notebooks

- When you want to use Spark to explore and analyze data interactively, use a notebook. Notebooks enable you to combine text, images, and code written in multiple languages to create an interactive item that you can share with others and collaborate.
+ When you want to use Spark to explore and analyze data interactively, use a notebook. Notebooks enable you to combine text, images, and code written in multiple languages to create an interactive item that you can share with others and collaborate on.

![Screenshot of a notebook in Microsoft Fabric.](../media/notebook.png)

learn-pr/wwl/use-apache-spark-work-files-lakehouse/includes/4-dataframe.md

Lines changed: 0 additions & 4 deletions
@@ -1,7 +1,3 @@

- ---
- ms.custom:
-   - build-2023
- ---

Natively, Spark uses a data structure called a *resilient distributed dataset* (RDD); but while you *can* write code that works directly with RDDs, the most commonly used data structure for working with structured data in Spark is the *dataframe*, which is provided as part of the *Spark SQL* library. Dataframes in Spark are similar to those in the ubiquitous *Pandas* Python library, but optimized to work in Spark's distributed processing environment.

> [!NOTE]
