Commit 5fd8f66

Pattern compliance
1 parent 8e82604 commit 5fd8f66

File tree

1 file changed (+4, -5)


articles/synapse-analytics/quickstart-transform-data-using-spark-job-definition.md

Lines changed: 4 additions & 5 deletions
@@ -1,12 +1,12 @@
 ---
-title: "Quickstart: Transform data using Apache Spark job definition"
+title: 'Quickstart: Transform data using Apache Spark job definition'
 description: This tutorial provides step-by-step instructions for using Azure Synapse Analytics to transform data with Apache Spark job definition.
 author: juluczni
 ms.author: juluczni
 ms.reviewer: makromer
 ms.service: azure-synapse-analytics
 ms.subservice: pipeline
-ms.topic: conceptual
+ms.topic: quickstart
 ms.date: 02/15/2022
 ---
 
@@ -20,7 +20,6 @@ In this quickstart, you'll use Azure Synapse Analytics to create a pipeline usin
 * **Azure Synapse workspace**: Create a Synapse workspace using the Azure portal following the instructions in [Quickstart: Create a Synapse workspace](quickstart-create-workspace.md).
 * **Apache Spark job definition**: Create an Apache Spark job definition in the Synapse workspace following the instructions in [Tutorial: Create Apache Spark job definition in Synapse Studio](spark/apache-spark-job-definitions.md).
 
-
 ### Navigate to the Synapse Studio
 
 After your Azure Synapse workspace is created, you have two ways to open Synapse Studio:
@@ -93,7 +92,7 @@ On this panel, you can reference to the Spark job definition to run.
 |Max executors| Max number of executors to be allocated in the specified Spark pool for the job.|
 |Driver size| Number of cores and memory to be used for driver given in the specified Apache Spark pool for the job.|
 |Spark configuration| Specify values for Spark configuration properties listed in the topic: Spark Configuration - Application properties. Users can use default configuration and customized configuration. |
-
+
 ![spark job definition pipline settings](media/quickstart-transform-data-using-spark-job-definition/spark-job-definition-pipline-settings.png)
 
 * You can add dynamic content by clicking the **Add Dynamic Content** button or by pressing the shortcut key <kbd>Alt</kbd>+<kbd>Shift</kbd>+<kbd>D</kbd>. In the **Add Dynamic Content** page, you can use any combination of expressions, functions, and system variables to add to dynamic content.
@@ -106,7 +105,7 @@ You can add properties for Apache Spark job definition activity in this panel.
 
 ![user properties](media/quickstart-transform-data-using-spark-job-definition/user-properties.png)
 
-## Next steps
+## Related content
 
 Advance to the following articles to learn about Azure Synapse Analytics support:
 
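The settings table carried as context in the hunk above (Max executors, Driver size, Spark configuration) corresponds to fields in the pipeline activity's JSON. A minimal sketch, assuming the Synapse `SparkJob` activity shape; the activity name, reference name, and the sample `conf` key are hypothetical illustrations, not taken from this commit:

```json
{
  "name": "TransformWithSparkJobDefinition",
  "type": "SparkJob",
  "typeProperties": {
    "sparkJob": {
      "referenceName": "mySparkJobDefinition",
      "type": "SparkJobDefinitionReference"
    },
    "numExecutors": 2,
    "executorSize": "Small",
    "driverSize": "Small",
    "conf": {
      "spark.dynamicAllocation.enabled": "false"
    }
  }
}
```

Properties left unset would presumably fall back to the values saved on the Spark job definition itself, which matches the table's note that users can keep the default configuration or supply a customized one.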

0 commit comments
