articles/synapse-analytics/quickstart-transform-data-using-spark-job-definition.md
4 additions & 4 deletions
@@ -76,7 +76,7 @@ On this panel, you can reference the Spark job definition to run.
* Expand the Spark job definition list; you can choose an existing Apache Spark job definition, or create a new Apache Spark job definition by selecting the **New** button to reference the Spark job definition to run.
- * (Optional) You can fill in information for Apache Spark job definition. If the following settings are empty, the settings of the spark job definition itself will be used to run; if the following settings are not empty, these settings will replace the settings of the spark job definition itself.
+ * (Optional) You can fill in information for Apache Spark job definition. If the following settings are empty, the settings of the spark job definition itself will be used to run; if the following settings aren't empty, these settings will replace the settings of the spark job definition itself.
| Property | Description |
| ----- | ----- |
@@ -85,13 +85,13 @@ On this panel, you can reference the Spark job definition to run.
|Main class name| The fully qualified identifier of the main class that is in the main definition file. <br> Sample: `WordCount`|
|Command-line arguments| You can add command-line arguments by selecting the **New** button. Note that adding command-line arguments here overrides the command-line arguments defined by the Spark job definition. <br> *Sample: `abfss://…/path/to/shakespeare.txt` `abfss://…/path/to/result`* <br> |
|Apache Spark pool| You can select an Apache Spark pool from the list.|
- |Python code reference|Additional Python code files used for reference in the main definition file. <br> It supports passing files (.py, .py3, .zip) to the "pyFiles" property. It will override the "pyFiles" property defined in Spark job definition. <br>|
- |Reference files |Additional files used for reference in the main definition file. |
+ |Python code reference|Other Python code files used for reference in the main definition file. <br> It supports passing files (.py, .py3, .zip) to the "pyFiles" property. It will override the "pyFiles" property defined in Spark job definition. <br>|
+ |Reference files |Other files used for reference in the main definition file. |
|Dynamically allocate executors| This setting maps to the dynamic allocation property in the Spark configuration for Spark application executor allocation.|
|Min executors| Min number of executors to be allocated in the specified Spark pool for the job.|
|Max executors| Max number of executors to be allocated in the specified Spark pool for the job.|
|Driver size| Number of cores and memory to be used for the driver in the specified Apache Spark pool for the job.|
- |Spark configuration| Specify values for Spark configuration properties listed in the topic: Spark Configuration - Application properties. Users can use default configuration and customized configuration. |
+ |Spark configuration| Specify values for Spark configuration properties listed in the article: Spark Configuration - Application properties. Users can use default configuration and customized configuration. |
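
For context on how these settings come together at run time, here is a minimal sketch (not part of the changed file) of what the main definition file might look like, assuming a PySpark job rather than a compiled jar. The file name `wordcount.py` and the argument handling are illustrative; the two positional arguments correspond to the input text path and the output folder shown in the Command-line arguments sample above.

```python
# wordcount.py -- hypothetical PySpark main definition file for the Spark job definition.
# The two positional arguments map to the sample command-line arguments above:
#   argv[1] = input text file (for example, an abfss:// path to shakespeare.txt)
#   argv[2] = output folder for the word counts
import sys
from operator import add

from pyspark.sql import SparkSession

if __name__ == "__main__":
    input_path, output_path = sys.argv[1], sys.argv[2]

    spark = SparkSession.builder.appName("WordCount").getOrCreate()

    # Read the input file, split each line into words, and count occurrences.
    lines = spark.read.text(input_path).rdd.map(lambda row: row[0])
    counts = (
        lines.flatMap(lambda line: line.split(" "))
             .map(lambda word: (word, 1))
             .reduceByKey(add)
    )

    # Write the (word, count) pairs to the output folder.
    counts.saveAsTextFile(output_path)
    spark.stop()
```

Because command-line arguments set on the pipeline activity override those defined on the Spark job definition itself, the paths supplied in the activity's Command-line arguments field are the ones a script like this would receive when the pipeline runs.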