
Commit 5199118

Acrolinx
1 parent 24a0bff commit 5199118

1 file changed: +4 -4 lines changed

articles/synapse-analytics/quickstart-transform-data-using-spark-job-definition.md

Lines changed: 4 additions & 4 deletions
@@ -76,7 +76,7 @@ On this panel, you can reference to the Spark job definition to run.
 
 * Expand the Spark job definition list, you can choose an existing Apache Spark job definition. You can also create a new Apache Spark job definition by selecting the **New** button to reference the Spark job definition to be run.
 
-* (Optional) You can fill in information for Apache Spark job definition. If the following settings are empty, the settings of the spark job definition itself will be used to run; if the following settings are not empty, these settings will replace the settings of the spark job definition itself.
+* (Optional) You can fill in information for Apache Spark job definition. If the following settings are empty, the settings of the spark job definition itself will be used to run; if the following settings aren't empty, these settings will replace the settings of the spark job definition itself.
 
 | Property | Description |
 | ----- | ----- |
@@ -85,13 +85,13 @@ On this panel, you can reference to the Spark job definition to run.
 |Main class name| The fully qualified identifier or the main class that is in the main definition file. <br> Sample: `WordCount`|
 |Command-line arguments| You can add command-line arguments by clicking the **New** button. It should be noted that adding command-line arguments will override the command-line arguments defined by the Spark job definition. <br> *Sample: `abfss://…/path/to/shakespeare.txt` `abfss://…/path/to/result`* <br> |
 |Apache Spark pool| You can select Apache Spark pool from the list.|
-|Python code reference| Additional Python code files used for reference in the main definition file. <br> It supports passing files (.py, .py3, .zip) to the "pyFiles" property. It will override the "pyFiles" property defined in Spark job definition. <br>|
-|Reference files | Additional files used for reference in the main definition file. |
+|Python code reference| Other Python code files used for reference in the main definition file. <br> It supports passing files (.py, .py3, .zip) to the "pyFiles" property. It will override the "pyFiles" property defined in Spark job definition. <br>|
+|Reference files | Other files used for reference in the main definition file. |
 |Dynamically allocate executors| This setting maps to the dynamic allocation property in Spark configuration for Spark Application executors allocation.|
 |Min executors| Min number of executors to be allocated in the specified Spark pool for the job.|
 |Max executors| Max number of executors to be allocated in the specified Spark pool for the job.|
 |Driver size| Number of cores and memory to be used for driver given in the specified Apache Spark pool for the job.|
-|Spark configuration| Specify values for Spark configuration properties listed in the topic: Spark Configuration - Application properties. Users can use default configuration and customized configuration. |
+|Spark configuration| Specify values for Spark configuration properties listed in the article: Spark Configuration - Application properties. Users can use default configuration and customized configuration. |
 
 ![spark job definition pipline settings](media/quickstart-transform-data-using-spark-job-definition/spark-job-definition-pipline-settings.png)
 
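For context on the table in the diff above: the `WordCount` sample is easier to follow next to code. The following is a minimal PySpark sketch of what such a main definition file might look like; the file name `wordcount.py` and the argument handling are illustrative assumptions, not part of this commit or of the article it touches.

```python
# wordcount.py -- hypothetical main definition file for the table's sample.
# The two command-line arguments correspond to the activity's
# "Command-line arguments" setting (input path first, then output path).
import sys

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode, split

if __name__ == "__main__":
    input_path, output_path = sys.argv[1], sys.argv[2]

    spark = SparkSession.builder.appName("WordCount").getOrCreate()

    # Read the text file, split each line into words, and count occurrences.
    lines = spark.read.text(input_path)
    words = lines.select(explode(split(col("value"), r"\s+")).alias("word"))
    counts = words.where(col("word") != "").groupBy("word").count()

    counts.write.mode("overwrite").csv(output_path)
    spark.stop()
```

If the activity's **Command-line arguments** setting is filled in, those values replace the arguments stored on the Spark job definition itself, so `sys.argv` here would receive the activity's paths (the `abfss://…` samples in the table).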
