articles/synapse-analytics/synapse-notebook-activity.md
You can create a Synapse notebook activity directly from the Synapse pipeline canvas.
Drag and drop **Synapse notebook** under **Activities** onto the Synapse pipeline canvas. Select the Synapse notebook activity box and configure the notebook content for the current activity in the **Settings**. You can select an existing notebook from the current workspace or add a new one.
(Optional) You can also reconfigure the Spark pool, Executor size, Dynamically allocate executors, Min executors, Max executors, and Driver size in the settings. Note that the settings reconfigured here replace the settings of the Configure session in the notebook. If nothing is set in the settings of the current activity's notebook, it runs with the settings of the Configure session in that notebook.
| Property | Description | Required |
| ----- | ----- | ----- |
|Spark pool| Reference to the Spark pool. You can select an Apache Spark pool from the list. If this setting is empty, the activity runs in the Spark pool of the notebook itself.| No |
|Executor size| Number of cores and amount of memory to be used for executors allocated in the specified Apache Spark pool for the session.| No |
|Dynamically allocate executors| This setting maps to the dynamic allocation property in the Spark configuration for Spark application executor allocation.| No |
|Min executors| Minimum number of executors to be allocated in the specified Spark pool for the job.| No |
|Max executors| Maximum number of executors to be allocated in the specified Spark pool for the job.| No |
|Driver size| Number of cores and amount of memory to be used for the driver in the specified Apache Spark pool for the job.| No |
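As an illustrative sketch of how these settings surface in a pipeline definition (the property names such as `sparkPool`, `executorSize`, `numExecutors`, and `driverSize`, and the reference names, are assumptions for this example and not taken from this article), a notebook activity's JSON might look like:

```json
{
    "name": "MyNotebookActivity",
    "type": "SynapseNotebook",
    "typeProperties": {
        "notebook": {
            "referenceName": "MyNotebook",
            "type": "NotebookReference"
        },
        "sparkPool": {
            "referenceName": "MySparkPool",
            "type": "BigDataPoolReference"
        },
        "executorSize": "Small",
        "driverSize": "Small",
        "numExecutors": 2
    }
}
```

If `sparkPool` and the sizing properties are omitted, the activity falls back to the session configuration of the notebook itself, as described in the table above.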
> [!NOTE]
> The execution of parallel Spark notebooks in Azure Synapse pipelines is queued and executed in a FIFO manner. Jobs are ordered in the queue by submission time, and a job in the queue expires after 3 days. Note that the queue for notebooks works only in Synapse pipelines.