
Commit edafec7

Acrolinx improvements
1 parent 0c6710e commit edafec7

4 files changed: +16 -16 lines changed


articles/synapse-analytics/concepts-data-flow-overview.md

Lines changed: 2 additions & 2 deletions
@@ -19,7 +19,7 @@ Data flows provide an entirely visual experience with no coding required. Your d

## Get started

- Data flows are created from the Develop pane in Synapse studio. To create a data flow, select the plus sign next to **Develop**, and then select **Data Flow**.
+ Data flows are created from the **Develop** pane in Synapse Studio. To create a data flow, select the plus sign next to **Develop**, and then select **Data Flow**.

![New data flow](media/data-flow/new-data-flow.png)

@@ -61,7 +61,7 @@ The **Inspect** tab provides a view into the metadata of the data stream that yo

![Inspect tab](media/data-flow/inspect.png)

- As you change the shape of your data through transformations, you'll see the metadata changes flow in the **Inspect** pane. If there isn't a defined schema in your source transformation, then metadata won't be visible in the **Inspect** pane. Lack of metadata is common in schema drift scenarios.
+ As you change the shape of your data through transformations, you see the metadata changes flow in the **Inspect** pane. If there isn't a defined schema in your source transformation, then metadata isn't visible in the **Inspect** pane. Lack of metadata is common in schema drift scenarios.

#### Data preview

articles/synapse-analytics/data-integration/concepts-data-factory-differences.md

Lines changed: 1 addition & 1 deletion
@@ -24,6 +24,6 @@ Check below table for features availability:
| | Support for global parameters |||
| **Template Gallery and Knowledge center** | Solution Templates |*Azure Data Factory Template Gallery* |*Synapse Workspace Knowledge center* |
| **GIT Repository Integration** | GIT Integration |||
- | **Monitoring** | Monitoring of Spark Jobs for Data Flow ||*Leverage the Synapse Spark pools* |
+ | **Monitoring** | Monitoring of Spark Jobs for Data Flow ||*Use the Synapse Spark pools* |

Get started with data integration in your Synapse workspace by learning how to [ingest data into an Azure Data Lake Storage gen2 account](data-integration-data-lake.md).

articles/synapse-analytics/get-started-pipelines.md

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
---
title: 'Tutorial: Get started integrate with pipelines'
- description: In this tutorial, you'll learn how to integrate pipelines and activities using Synapse Studio.
+ description: In this tutorial, you learn how to integrate pipelines and activities using Synapse Studio.
author: whhender
ms.author: whhender
ms.reviewer: whhender
@@ -12,7 +12,7 @@ ms.date: 12/11/2024

# Tutorial: Integrate with pipelines

- In this tutorial, you'll learn how to integrate pipelines and activities using Synapse Studio.
+ In this tutorial, you learn how to integrate pipelines and activities using Synapse Studio.

## Create a pipeline and add a notebook activity

@@ -40,7 +40,7 @@ Once the pipeline is published, you might want to run it immediately without wai

1. Go to the **Monitor** hub.
1. Select **Pipeline runs** to monitor pipeline execution progress.
- 1. In this view you can switch between tabular **List** display a graphical **Gantt** chart.
+ 1. In this view, you can switch between a tabular **List** display and a graphical **Gantt** chart.
1. Select a pipeline name to see the status of activities in that pipeline.

## Next step

articles/synapse-analytics/quickstart-transform-data-using-spark-job-definition.md

Lines changed: 10 additions & 10 deletions
@@ -12,7 +12,7 @@ ms.date: 12/11/2024

# Quickstart: Transform data using Apache Spark job definition

- In this quickstart, you'll use Azure Synapse Analytics to create a pipeline using Apache Spark job definition.
+ In this quickstart, you use Azure Synapse Analytics to create a pipeline using an Apache Spark job definition.

## Prerequisites

@@ -27,13 +27,13 @@ After your Azure Synapse workspace is created, you have two ways to open Synapse
* Open your Synapse workspace in the [Azure portal](https://portal.azure.com). Select **Open** on the Open Synapse Studio card under **Getting started**.
* Open [Azure Synapse Analytics](https://web.azuresynapse.net/) and sign in to your workspace.

- In this quickstart, we use the workspace named "sampletest" as an example. It will automatically navigate you to the Synapse Studio home page.
+ In this quickstart, we use the workspace named "sampletest" as an example.

![synapse studio home page](media/quickstart-transform-data-using-spark-job-definition/synapse-studio-home.png)

## Create a pipeline with an Apache Spark job definition

- A pipeline contains the logical flow for an execution of a set of activities. In this section, you'll create a pipeline that contains an Apache Spark job definition activity.
+ A pipeline contains the logical flow for an execution of a set of activities. In this section, you create a pipeline that contains an Apache Spark job definition activity.

1. Go to the **Integrate** tab. Select the plus icon next to the pipelines header and select **Pipeline**.

@@ -48,7 +48,7 @@ A pipeline contains the logical flow for an execution of a set of activities. In

## Set Apache Spark job definition canvas

- Once you create your Apache Spark job definition, you'll be automatically sent to the Spark job definition canvas.
+ Once you create your Apache Spark job definition, you're automatically sent to the Spark job definition canvas.

### General settings

@@ -64,9 +64,9 @@ Once you create your Apache Spark job definition, you'll be automatically sent t

6. Retry interval: The number of seconds between each retry attempt.

- 7. Secure output: When checked, output from the activity won't be captured in logging.
+ 7. Secure output: When checked, output from the activity isn't captured in logging.

- 8. Secure input: When checked, input from the activity won't be captured in logging.
+ 8. Secure input: When checked, input from the activity isn't captured in logging.

![spark job definition general](media/quickstart-transform-data-using-spark-job-definition/spark-job-definition-general.png)

@@ -76,16 +76,16 @@ On this panel, you can reference to the Spark job definition to run.

* Expand the Spark job definition list, you can choose an existing Apache Spark job definition. You can also create a new Apache Spark job definition by selecting the **New** button to reference the Spark job definition to be run.

- * (Optional) You can fill in information for Apache Spark job definition. If the following settings are empty, the settings of the spark job definition itself will be used to run; if the following settings aren't empty, these settings will replace the settings of the spark job definition itself.
+ * (Optional) You can fill in information for the Apache Spark job definition. If the following settings are empty, the settings of the Spark job definition itself are used for the run; if the following settings aren't empty, these settings replace the settings of the Spark job definition itself.

| Property | Description |
| ----- | ----- |
|Main definition file| The main file used for the job. Select a PY/JAR/ZIP file from your storage. You can select **Upload file** to upload the file to a storage account. <br> Sample: `abfss://…/path/to/wordcount.jar`|
- | References from subfolders | Scanning subfolders from the root folder of the main definition file, these files will be added as reference files. The folders named "jars", "pyFiles", "files" or "archives" will be scanned, and the folders name are case sensitive. |
+ | References from subfolders | Subfolders of the main definition file's root folder are scanned, and the files they contain are added as reference files. The folders named "jars", "pyFiles", "files", or "archives" are scanned, and the folder names are case-sensitive. |
|Main class name| The fully qualified identifier or the main class that is in the main definition file. <br> Sample: `WordCount`|
- |Command-line arguments| You can add command-line arguments by clicking the **New** button. It should be noted that adding command-line arguments will override the command-line arguments defined by the Spark job definition. <br> *Sample: `abfss://…/path/to/shakespeare.txt` `abfss://…/path/to/result`* <br> |
+ |Command-line arguments| You can add command-line arguments by selecting the **New** button. Adding command-line arguments here overrides the command-line arguments defined by the Spark job definition. <br> *Sample: `abfss://…/path/to/shakespeare.txt` `abfss://…/path/to/result`* <br> |
|Apache Spark pool| You can select Apache Spark pool from the list.|
- |Python code reference| Other Python code files used for reference in the main definition file. <br> It supports passing files (.py, .py3, .zip) to the "pyFiles" property. It will override the "pyFiles" property defined in Spark job definition. <br>|
+ |Python code reference| Other Python code files used for reference in the main definition file. <br> It supports passing files (.py, .py3, .zip) to the "pyFiles" property. It overrides the "pyFiles" property defined in the Spark job definition. <br>|
|Reference files | Other files used for reference in the main definition file. |
|Dynamically allocate executors| This setting maps to the dynamic allocation property in Spark configuration for Spark Application executors allocation.|
|Min executors| Min number of executors to be allocated in the specified Spark pool for the job.|
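For context on the samples in the table above (`wordcount.jar`, the `WordCount` main class, and the `shakespeare.txt`/`result` command-line arguments), a minimal main definition file might look like the following Scala sketch. This is an illustrative assumption, not code shipped with the quickstart: it simply reads the input path from the first argument and writes word counts to the second.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical word-count job matching the samples in the table above:
//   args(0) = input text file (for example, .../shakespeare.txt)
//   args(1) = output folder   (for example, .../result)
object WordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("WordCount").getOrCreate()
    import spark.implicits._

    // Read the input file, split it into words, and count each word.
    val counts = spark.read.textFile(args(0))
      .flatMap(_.split("\\s+"))
      .filter(_.nonEmpty)
      .groupByKey(identity)
      .count()

    // Write the (word, count) pairs to the output path.
    counts.write.csv(args(1))

    spark.stop()
  }
}
```

Because command-line arguments supplied on the activity override the ones saved with the Spark job definition, the same packaged class can be pointed at different input and output paths from each pipeline run.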
