
Commit 22ed1f6

Merge pull request #298099 from v-lanjunli/adduami
update uami

2 parents d5a4f7c + df6c002

5 files changed: +10 additions, −7 deletions
articles/data-factory/transform-data-synapse-spark-job-definition.md

Lines changed: 3 additions & 2 deletions

```diff
@@ -13,7 +13,7 @@ ms.subservice: orchestration
 # Transform data by running a Synapse Spark job definition
 [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
 
-The Azure Synapse Spark job definition Activity in a [pipeline](concepts-pipelines-activities.md) runs a Synapse Spark job definition in your Azure Synapse Analytics workspace. This article builds on the [data transformation activities](transform-data.md) article, which presents a general overview of data transformation and the supported transformation activities.
+The Azure Synapse Spark job definition activity in a [pipeline](concepts-pipelines-activities.md) runs a Synapse Spark job definition in your Azure Synapse Analytics workspace. This article builds on the [data transformation activities](transform-data.md) article, which presents a general overview of data transformation and the supported transformation activities.
 
 ## Set Apache Spark job definition canvas
```

```diff
@@ -72,7 +72,8 @@ To use a Spark job definition activity for Synapse in a pipeline, complete the f
 |Min executors| Min number of executors to be allocated in the specified Spark pool for the job.|
 |Max executors| Max number of executors to be allocated in the specified Spark pool for the job.|
 |Driver size| Number of cores and memory to be used for the driver in the specified Apache Spark pool for the job.|
-|Spark configuration| Specify values for Spark configuration properties listed in the topic: Spark Configuration - Application properties. Users can use default configuration and customized configuration. |
+|Spark configuration| Specify values for Spark configuration properties listed in the topic: Spark Configuration - Application properties. Users can use the default configuration or a customized configuration. |
+|Authentication| User-assigned managed identities and system-assigned managed identities are supported in Spark job definitions. |
 
 :::image type="content" source="./media/transform-data-synapse-spark-job-definition/spark-job-definition-activity-settings.png" alt-text="Screenshot that shows the UI for the Spark job definition activity.":::
```
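The Spark configuration row above distinguishes the default configuration from a customized one. A minimal sketch of the implied layering, where customized properties replace pool defaults and anything unset falls through; the property values and the merge helper are illustrative, not part of any Azure SDK:

```python
# Hypothetical sketch: layering a customized Spark configuration over
# the pool's defaults for a Spark job definition activity. The keys are
# ordinary Spark application properties; the helper is illustrative only.

DEFAULT_SPARK_CONF = {
    "spark.dynamicAllocation.enabled": "true",
    "spark.executor.memory": "28g",
    "spark.executor.cores": "4",
}

def effective_spark_conf(custom: dict) -> dict:
    """Customized properties replace defaults; unset ones fall through."""
    merged = dict(DEFAULT_SPARK_CONF)
    merged.update(custom)
    return merged

conf = effective_spark_conf({"spark.executor.memory": "56g"})
print(conf["spark.executor.memory"])  # the customized value wins
print(conf["spark.executor.cores"])   # the default is preserved
```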


articles/synapse-analytics/synapse-notebook-activity.md

Lines changed: 4 additions & 3 deletions

```diff
@@ -15,7 +15,7 @@ ms.author: ruxu
 
 [!INCLUDE [appliesto-adf-asa-md](../data-factory/includes/appliesto-adf-asa-md.md)]
 
-The Azure Synapse notebook activity in a [Synapse pipeline](../data-factory/concepts-pipelines-activities.md) runs a Synapse notebook. This article builds on the [data transformation activities](../data-factory/transform-data.md) article, which presents a general overview of data transformation and the supported transformation activities. 
+The Azure Synapse notebook activity in a [Synapse pipeline](../data-factory/concepts-pipelines-activities.md) runs a Synapse notebook. This article builds on the [data transformation activities](../data-factory/transform-data.md) article, which presents a general overview of data transformation and the supported transformation activities.
 
 ## Create a Synapse notebook activity
 
@@ -27,20 +27,21 @@ Drag and drop **Synapse notebook** under **Activities** onto the Synapse pipelin
 
 If you select an existing notebook from the current workspace, you can click the **Open** button to directly open the notebook's page.
 
-(Optional) You can also reconfigure Spark pool\Executor size\Dynamically allocate executors\Min executors\Max executors\Driver size in settings. It should be noted that the settings reconfigured here will replace the settings of the configure session in Notebook. If nothing is set in the settings of the current notebook activity, it will run with the settings of the configure session in that notebook.
+(Optional) You can also reconfigure the Spark pool, Executor size, Dynamically allocate executors, Min executors, Max executors, Driver size, and Authentication in settings. Settings reconfigured here replace the settings of the configured session in the notebook. If nothing is set in the current notebook activity's settings, it runs with the settings of the configured session in that notebook.
 
 > [!div class="mx-imgBorder"]
 > ![screenshot-showing-create-notebook-activity](./media/synapse-notebook-activity/create-synapse-notebook-activity.png)
 
 | Property | Description | Required |
-| ----- | ----- | ----- |
+| ----- | ----- | ----- |
 |Spark pool| Reference to the Spark pool. You can select an Apache Spark pool from the list. If this setting is empty, the notebook runs in its own Spark pool.| No |
 |Executor size| Number of cores and memory to be used for executors allocated in the specified Apache Spark pool for the session.| No |
 |Dynamically allocate executors| This setting maps to the dynamic allocation property in the Spark configuration for Spark application executor allocation.| No |
 |Min executors| Min number of executors to be allocated in the specified Spark pool for the job.| No |
 |Max executors| Max number of executors to be allocated in the specified Spark pool for the job.| No |
 |Driver size| Number of cores and memory to be used for the driver in the specified Apache Spark pool for the job.| No |
+|Authentication| You can authenticate by using either a system-assigned managed identity or a user-assigned managed identity.| No |
 
 > [!NOTE]
 > Parallel Spark notebook runs in Azure Synapse pipelines are queued and executed in FIFO order: jobs are ordered in the queue by submission time, a queued job expires after 3 days, and the notebook queue works only in Synapse pipelines.
```
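The precedence this file documents, where activity-level settings replace the notebook's own "configure session" values and anything left unset falls back to the session, can be sketched as follows; the setting names mirror the property table, but the helper itself is illustrative:

```python
# Hypothetical sketch of the documented precedence: values set on the
# notebook activity replace the notebook session's configuration, and
# anything left unset falls back to the session. Illustrative only.

NOTEBOOK_SESSION = {
    "spark_pool": "notebookpool",
    "executor_size": "Medium",
    "min_executors": 3,
    "max_executors": 10,
}

def resolve_settings(activity_overrides: dict) -> dict:
    """Activity-level settings win; unset (None) ones use the session's."""
    resolved = dict(NOTEBOOK_SESSION)
    resolved.update({k: v for k, v in activity_overrides.items() if v is not None})
    return resolved

settings = resolve_settings({"spark_pool": "bigpool", "max_executors": None})
print(settings["spark_pool"])     # activity override: bigpool
print(settings["max_executors"])  # falls back to the session value: 10
```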

articles/synapse-analytics/synapse-service-identity.md

Lines changed: 3 additions & 2 deletions

```diff
@@ -322,8 +322,9 @@ You can create, delete, manage user-assigned managed identities in Microsoft Ent
 
 In order to use a user-assigned managed identity, you must first [create credentials](../data-factory/credentials.md) in your service instance for the UAMI.
 
->[!NOTE]
-> User-assigned Managed Identity is not currently supported in Synapse notebooks and Spark job definitions.
+> [!NOTE]
+>
+> User-assigned managed identities are now supported in Synapse Notebook activities and Spark job definition activities in pipelines.
 
 ## Next steps
```
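The note above offers two choices: a system-assigned managed identity (no extra setup) or a user-assigned one, which first requires a credential created for the UAMI. A minimal sketch of that decision; the dict shapes are hypothetical and are not the actual Synapse pipeline JSON schema:

```python
# Hypothetical sketch of the two authentication choices described in the
# note. The returned dict shapes are illustrative only; they do not
# reproduce the real Synapse pipeline activity schema.
from typing import Optional

def authentication_setting(credential_name: Optional[str] = None) -> dict:
    """No credential name: the system-assigned identity (the default).
    A credential name: a user-assigned identity, which requires that a
    credential has already been created for the UAMI."""
    if credential_name is None:
        return {"identity": "SystemAssigned"}
    return {"identity": "UserAssigned", "credential": credential_name}

print(authentication_setting())           # {'identity': 'SystemAssigned'}
print(authentication_setting("my-uami"))  # user-assigned, via its credential
```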

0 commit comments