The Azure Synapse Spark job definition activity in a [pipeline](concepts-pipelines-activities.md) runs a Synapse Spark job definition in your Azure Synapse Analytics workspace. This article builds on the [data transformation activities](transform-data.md) article, which presents a general overview of data transformation and the supported transformation activities.
## Set Apache Spark job definition canvas
To use a Spark job definition activity for Synapse in a pipeline, configure the following settings.

| Property | Description |
| --- | --- |
|Min executors| Minimum number of executors to be allocated in the specified Spark pool for the job.|
|Max executors| Maximum number of executors to be allocated in the specified Spark pool for the job.|
|Driver size| Number of cores and amount of memory to be used for the driver in the specified Apache Spark pool for the job.|
|Spark configuration| Specify values for the Spark configuration properties listed in Spark Configuration - Application properties. You can use the default configuration or a customized configuration. |
|Authentication| Both user-assigned managed identities and system-assigned managed identities are supported in Spark job definition activities. |
:::image type="content" source="./media/transform-data-synapse-spark-job-definition/spark-job-definition-activity-settings.png" alt-text="Screenshot that shows the UI for the spark job definition activity.":::
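These settings also surface in the pipeline's JSON definition. The following is a minimal sketch of a Spark job definition activity payload, assuming the `SparkJob` activity type; the activity name, the `sparkjob1` reference name, and the property values are placeholders, and the exact schema should be confirmed against the service's JSON reference.

```json
{
  "name": "SparkJobDefinitionActivity",
  "type": "SparkJob",
  "typeProperties": {
    "sparkJob": {
      "referenceName": "sparkjob1",
      "type": "SparkJobDefinitionReference"
    },
    "executorSize": "Small",
    "driverSize": "Small",
    "numExecutors": 2,
    "conf": {
      "spark.dynamicAllocation.enabled": "true",
      "spark.dynamicAllocation.minExecutors": "1",
      "spark.dynamicAllocation.maxExecutors": "4"
    }
  }
}
```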
The Azure Synapse notebook activity in a [Synapse pipeline](../data-factory/concepts-pipelines-activities.md) runs a Synapse notebook. This article builds on the [data transformation activities](../data-factory/transform-data.md) article, which presents a general overview of data transformation and the supported transformation activities.
## Create a Synapse notebook activity
Drag and drop **Synapse notebook** under **Activities** onto the Synapse pipeline canvas.
If you select an existing notebook from the current workspace, you can select the **Open** button to open the notebook's page directly.
(Optional) You can also reconfigure the Spark pool, executor size, dynamic executor allocation, min executors, max executors, driver size, and authentication in the settings. Settings reconfigured here replace the settings of the configure session in the notebook. If nothing is set in the settings of the current notebook activity, the notebook runs with the settings of the configure session in that notebook.

| Property | Description | Required |
| --- | --- | --- |
|Spark pool| Reference to the Spark pool. You can select an Apache Spark pool from the list. If this setting is empty, the notebook runs in the Spark pool of the notebook itself.| No |
|Executor size| Number of cores and memory to be used for executors allocated in the specified Apache Spark pool for the session.| No |
|Dynamically allocate executors| This setting maps to the dynamic allocation property in Spark configuration for Spark Application executors allocation.| No |
|Min executors| Minimum number of executors to be allocated in the specified Spark pool for the job.| No |
|Max executors| Maximum number of executors to be allocated in the specified Spark pool for the job.| No |
|Driver size| Number of cores and amount of memory to be used for the driver in the specified Apache Spark pool for the job.| No |
|Authentication| You can authenticate by using either a system-assigned managed identity or a user-assigned managed identity.| No |
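For reference, the settings above map onto the notebook activity's JSON definition. The following is a minimal sketch, assuming the `SynapseNotebook` activity type; the activity name, the `notebook1` and `sparkpool1` reference names, and the property values are placeholders, and the exact schema should be confirmed against the service's JSON reference.

```json
{
  "name": "NotebookActivity",
  "type": "SynapseNotebook",
  "typeProperties": {
    "notebook": {
      "referenceName": "notebook1",
      "type": "NotebookReference"
    },
    "sparkPool": {
      "referenceName": "sparkpool1",
      "type": "BigDataPoolReference"
    },
    "executorSize": "Small",
    "driverSize": "Small",
    "numExecutors": 2
  }
}
```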
> [!NOTE]
> Parallel Spark notebook runs in Azure Synapse pipelines are queued and executed in a FIFO manner. Jobs are ordered in the queue by submission time, and a job expires from the queue after 3 days. Note that the notebook queue works only in Synapse pipelines.
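The queueing behavior described in the note can be sketched as follows. This is an illustrative model only, not service code; the `NotebookQueue` class and its methods are hypothetical, assuming FIFO ordering by submission time and a 3-day expiry.

```python
from collections import deque
from datetime import datetime, timedelta

EXPIRY = timedelta(days=3)  # a queued job expires after 3 days


class NotebookQueue:
    """Hypothetical model of the FIFO notebook queue in a Synapse pipeline."""

    def __init__(self):
        self._jobs = deque()  # (name, submitted_at), oldest first

    def submit(self, name, submitted_at):
        # Jobs are appended in submission-time order (FIFO).
        self._jobs.append((name, submitted_at))

    def next_job(self, now):
        """Pop the oldest non-expired job, silently discarding expired ones."""
        while self._jobs:
            name, submitted_at = self._jobs.popleft()
            if now - submitted_at <= EXPIRY:
                return name
        return None


q = NotebookQueue()
t0 = datetime(2024, 1, 1)
q.submit("nb-old", t0)                      # will have expired by t0 + 4 days
q.submit("nb-new", t0 + timedelta(days=2))  # still within the 3-day window
print(q.next_job(t0 + timedelta(days=4)))   # prints "nb-new"
```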
The following section is from `articles/synapse-analytics/synapse-service-identity.md`.
You can create, delete, and manage user-assigned managed identities in Microsoft Entra ID.
In order to use a user-assigned managed identity, you must first [create credentials](../data-factory/credentials.md) in your service instance for the UAMI.
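As an illustration of that step, a user-assigned managed identity credential is typically defined as a JSON resource similar to the following sketch; the credential name and the resource ID segments are placeholders, and the exact schema should be confirmed against the credentials documentation linked above.

```json
{
  "name": "uami-credential",
  "properties": {
    "type": "ManagedIdentity",
    "typeProperties": {
      "resourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"
    }
  }
}
```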
> [!NOTE]
> User-assigned managed identity is now supported in Synapse notebook activities and Spark job definition activities in pipelines.