
Commit 506dd12

more acrolinx issues
1 parent bcd8d27 commit 506dd12

7 files changed, +64 -44 lines changed

articles/synapse-analytics/spark/apache-spark-azure-portal-add-libraries.md

Lines changed: 6 additions & 3 deletions
@@ -70,9 +70,11 @@ Session-scoped packages allow users to define package dependencies at the start

To learn more about how to manage session-scoped packages, see the following articles:

-- [Python session packages: ](./apache-spark-manage-session-packages.md#session-scoped-python-packages) At the start of a session, provide a Conda *environment.yml* to install more Python packages from popular repositories.
-- [Scala/Java session packages: ](./apache-spark-manage-session-packages.md#session-scoped-java-or-scala-packages) At the start of your session, provide a list of *.jar* files to install using `%%configure`.
-- [R session packages: ] (./apache-spark-manage-session-packages.md##session-scoped-r-packages-preview) Within your session, you can install packages across all nodes within your Spark pool using `install.packages` or `devtools`.
+- [Python session packages:](./apache-spark-manage-session-packages.md#session-scoped-python-packages) At the start of a session, provide a Conda *environment.yml* to install more Python packages from popular repositories.
+
+- [Scala/Java session packages:](./apache-spark-manage-session-packages.md#session-scoped-java-or-scala-packages) At the start of your session, provide a list of *.jar* files to install using `%%configure`.
+
+- [R session packages:](./apache-spark-manage-session-packages.md##session-scoped-r-packages-preview) Within your session, you can install packages across all nodes within your Spark pool using `install.packages` or `devtools`.

## Manage your packages outside Synapse Analytics UI

@@ -84,5 +86,6 @@ To learn more about Azure PowerShell cmdlets and package management REST APIs, s
- Package management REST APIs: [Manage your Spark pool libraries through REST APIs](apache-spark-manage-packages-outside-ui.md#manage-packages-through-rest-apis)

## Next steps
+
- View the default libraries: [Apache Spark version support](apache-spark-version-support.md)
- Troubleshoot library installation errors: [Troubleshoot library errors](apache-spark-troubleshoot-library-errors.md)
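As a quick sanity check once a session starts with a Conda *environment.yml* applied, a sketch like the following (plain Python; not part of the commit above) lists the packages visible to the session:

```python
# List the Python packages visible to the current session to confirm that the
# packages requested in environment.yml were installed.
import pkg_resources

for pkg in sorted(pkg_resources.working_set, key=lambda p: p.project_name.lower()):
    print(pkg.project_name, pkg.version)
```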

articles/synapse-analytics/spark/apache-spark-data-visualization.md

Lines changed: 28 additions & 25 deletions
@@ -11,43 +11,44 @@ ms.subservice: spark
ms.date: 09/13/2020
---
# Visualize data
-Azure Synapse is an integrated analytics service that accelerates time to insight, across data warehouses and big data analytics systems. Data visualization is a key component in being able to gain insight into your data. It helps make big and small data easier for humans to understand. It also makes it easier to detect patterns, trends, and outliers in groups of data.
+
+Azure Synapse is an integrated analytics service that accelerates time to insight, across data warehouses and big data analytics systems. Data visualization is a key component in being able to gain insight into your data. It helps make big and small data easier for humans to understand. It also makes it easier to detect patterns, trends, and outliers in groups of data.

When using Apache Spark in Azure Synapse Analytics, there are various built-in options to help you visualize your data, including Synapse notebook chart options, access to popular open-source libraries, and integration with Synapse SQL and Power BI.

## Notebook chart options
-When using an Azure Synapse notebook, you can turn your tabular results view into a customized chart using chart options. Here, you can visualize your data without having to write any code.
+
+When using an Azure Synapse notebook, you can turn your tabular results view into a customized chart using chart options. Here, you can visualize your data without having to write any code.

### display(df) function
-The ```display``` function allows you to turn SQL queries and Apache Spark dataframes and RDDs into rich data visualizations.The ```display``` function can be used on dataframes or RDDs created in PySpark, Scala, Java, R, and .NET.
+
+The ```display``` function allows you to turn SQL queries and Apache Spark dataframes and RDDs into rich data visualizations. The ```display``` function can be used on dataframes or RDDs created in PySpark, Scala, Java, R, and .NET.

To access the chart options:
-1. The output of ```%%sql``` magic commands appear in the rendered table view by default. You can also call ```display(df)``` on Spark DataFrames or Resilient Distributed Datasets (RDD) function to produce the rendered table view.
-
+
+1. The output of ```%%sql``` magic commands appears in the rendered table view by default. You can also call ```display(df)``` on Spark DataFrames or Resilient Distributed Datasets (RDDs) to produce the rendered table view.
2. Once you have a rendered table view, switch to the Chart View.
![built-in-charts](./media/apache-spark-development-using-notebooks/synapse-built-in-charts.png#lightbox)
-
3. You can now customize your visualization by specifying the following values:

| Configuration | Description |
-|--|--|
+|--|--|
| Chart Type | The ```display``` function supports a wide range of chart types, including bar charts, scatter plots, line graphs, and more |
| Key | Specify the range of values for the x-axis|
| Value | Specify the range of values for the y-axis values |
-| Series Group | Used to determine the groups for the aggregation |
-| Aggregation | Method to aggregate data in your visualization|
-
-
+| Series Group | Used to determine the groups for the aggregation |
+| Aggregation | Method to aggregate data in your visualization|
> [!NOTE]
> By default the ```display(df)``` function will only take the first 1000 rows of the data to render the charts. Check the **Aggregation over all results** and click the **Apply** button, you will apply the chart generation from the whole dataset. A Spark job will be triggered when the chart setting changes. Please note that it may take several minutes to complete the calculation and render the chart.
-
4. Once done, you can view and interact with your final visualization!
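For context, a minimal sketch of the flow described above, assuming the `spark` session and `display` function that a Synapse notebook provides (the column names and values are illustrative):

```python
# Build a small Spark DataFrame and render it with the notebook's built-in
# display() function; the chart options appear once you switch to Chart View.
data = [("2020-01", 10), ("2020-02", 25), ("2020-03", 18)]
df = spark.createDataFrame(data, ["month", "sales"])

display(df)
```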

### display(df) statistic details
+
You can use <code>display(df, summary = true)</code> to check the statistics summary of a given Apache Spark DataFrame that include the column name, column type, unique values, and missing values for each column. You can also select on specific column to see its minimum value, maximum value, mean value and standard deviation.
![built-in-charts-summary](./media/apache-spark-development-using-notebooks/synapse-built-in-charts-summary.png#lightbox)
-
+
### displayHTML() option
+
Azure Synapse Analytics notebooks support HTML graphics using the ```displayHTML``` function.

The following image is an example of creating visualizations using [D3.js](https://d3js.org/).
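A minimal sketch of the `displayHTML` call itself (the HTML fragment is illustrative; the article's full D3.js example continues in the next hunk):

```python
# Render a hand-written HTML fragment in the notebook output cell.
html = """
<h3>Monthly sales</h3>
<ul>
  <li>2020-01: 10</li>
  <li>2020-02: 25</li>
</ul>
"""
displayHTML(html)
```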
@@ -141,9 +142,11 @@ svg
```

## Python Libraries
-When it comes to data visualization, Python offers multiple graphing libraries that come packed with many different features. By default, every Apache Spark Pool in Azure Synapse Analytics contains a set of curated and popular open-source libraries. You can also add or manage additional libraries & versions by using the Azure Synapse Analytics library management capabilities.
+
+When it comes to data visualization, Python offers multiple graphing libraries that come packed with many different features. By default, every Apache Spark Pool in Azure Synapse Analytics contains a set of curated and popular open-source libraries. You can also add or manage additional libraries & versions by using the Azure Synapse Analytics library management capabilities.

### Matplotlib
+
You can render standard plotting libraries, like Matplotlib, using the built-in rendering functions for each library.

The following image is an example of creating a bar chart using **Matplotlib**.
@@ -173,9 +176,9 @@ plt.legend()
plt.show()
```

-
### Bokeh
-You can render HTML or interactive libraries, like **bokeh**, using the ```displayHTML(df)```.
+
+You can render HTML or interactive libraries, like **bokeh**, using the ```displayHTML(df)```.

The following image is an example of plotting glyphs over a map using **bokeh**.

@@ -212,15 +215,14 @@ html = file_html(p, CDN, "my plot1")
displayHTML(html)
```

-
### Plotly
+
You can render HTML or interactive libraries like **Plotly**, using the **displayHTML()**.

Run the following sample code to draw the image below.

![plotly-example](./media/apache-spark-development-using-notebooks/synapse-plotly-image.png#lightbox)

-
```python
from urllib.request import urlopen
import json
@@ -248,9 +250,10 @@ h = plotly.offline.plot(fig, output_type='div')
# display this html
displayHTML(h)
```
+
### Pandas

-You can view html output of pandas dataframe as the default output, notebook will automatically show the styled html content.
+You can view the HTML output of a pandas DataFrame as the default output; the notebook automatically shows the styled HTML content.

![Panda graph example.](./media/apache-spark-data-viz/support-panda.png#lightbox)

@@ -267,11 +270,11 @@ df = pd.DataFrame([[38.0, 2.0, 18.0, 22.0, 21, np.nan],[19, 439, 6, 452, 226,232
df
```

+### Additional libraries

-### Additional libraries
Beyond these libraries, the Azure Synapse Analytics Runtime also includes the following set of libraries that are often used for data visualization:

-- [Seaborn](https://seaborn.pydata.org/)
+- [Seaborn](https://seaborn.pydata.org/)

You can visit the Azure Synapse Analytics Runtime [documentation](./spark/../apache-spark-version-support.md) for the most up to date information about the available libraries and versions.

@@ -281,7 +284,7 @@ The R ecosystem offers multiple graphing libraries that come packed with many di

### ggplot2

-The [ggplot2](https://ggplot2.tidyverse.org/) library is popular for data visualization and exploratory data analysis.
+The [ggplot2](https://ggplot2.tidyverse.org/) library is popular for data visualization and exploratory data analysis.

![Screenshot of a ggplot2 graph example.](./media/apache-spark-data-viz/ggplot2.png#lightbox)

@@ -314,7 +317,7 @@ install.packages("rbokeh")

Once installed, you can leverage rBokeh to create interactive visualizations.

-![Screenshot of a rBokeh graph example.](./media/apache-spark-data-viz/bokeh_plot.png#lightbox)
+![Screenshot of a rBokeh graph example.](./media/apache-spark-data-viz/bokeh-plot.png#lightbox)

```r
library(rbokeh)
@@ -350,7 +353,7 @@ fig

### Highcharter

-[Highcharter](https://jkunst.com/highcharter/) is a R wrapper for Highcharts javascript library and its modules.
+[Highcharter](https://jkunst.com/highcharter/) is an R wrapper for the Highcharts JavaScript library and its modules.

To install Highcharter, you can use the following command:

@@ -379,4 +382,4 @@ Azure Synapse Analytics allows the different workspace computational engines to
## Next steps

- For more information on how to set up the Spark SQL DW Connector: [Synapse SQL connector](./spark/../synapse-spark-sql-pool-import-export.md)
-- View the default libraries: [Azure Synapse Analytics runtime](../spark/apache-spark-version-support.md)
+- View the default libraries: [Azure Synapse Analytics runtime](../spark/apache-spark-version-support.md)

articles/synapse-analytics/spark/apache-spark-delta-lake-overview.md

Lines changed: 10 additions & 10 deletions
@@ -160,7 +160,7 @@ The order of the results is different from above as there was no order explicitl

## Update table data

-Delta Lake supports several operations to modify tables using standard DataFrame APIs, this is one of the big enhancements that delta format adds. The following example runs a batch job to overwrite the data in the table.
+Delta Lake supports several operations to modify tables using standard DataFrame APIs. These operations are among the enhancements that the delta format adds. The following example runs a batch job to overwrite the data in the table.

:::zone pivot = "programming-language-python"
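The Python cell itself isn't part of this diff; a minimal sketch of such a batch overwrite, with an illustrative table path, might look like:

```python
# Overwrite the contents of an existing Delta table with a new batch of data.
new_data = spark.range(5, 10)

(new_data.write
    .format("delta")
    .mode("overwrite")
    .save("/tmp/delta-table"))   # illustrative path
```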

@@ -344,7 +344,7 @@ Results in:

## Conditional update without overwrite

-Delta Lake provides programmatic APIs to conditional update, delete, and merge (this is commonly referred to as an upsert) data into tables.
+Delta Lake provides programmatic APIs to conditionally update, delete, and merge (commonly referred to as an upsert) data into tables.

:::zone pivot = "programming-language-python"
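A minimal sketch of a conditional update through the Delta Lake Python API (the path and expressions are illustrative, not taken from the article's cells):

```python
from delta.tables import DeltaTable

# Update only the rows that match a condition, without rewriting the whole table.
delta_table = DeltaTable.forPath(spark, "/tmp/delta-table")   # illustrative path

delta_table.update(
    condition="id % 2 == 0",     # which rows to change
    set={"id": "id + 100"})      # SQL expression for the new value
```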

@@ -530,7 +530,7 @@ Here you have a combination of the existing data. The existing data has been ass

### History

-Delta Lake's has the ability to allow looking into history of a table. That is, the changes that were made to the underlying Delta Table. The cell below shows how simple it is to inspect the history.
+Delta Lake can look into the history of a table; that is, it records the changes that were made to the underlying Delta Table. The cell below shows how simple it is to inspect the history.

:::zone pivot = "programming-language-python"
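A minimal sketch of inspecting that history from Python (the path is illustrative):

```python
from delta.tables import DeltaTable

# List the commit history of a Delta table, one row per table version.
delta_table = DeltaTable.forPath(spark, "/tmp/delta-table")
delta_table.history().show(truncate=False)
```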

@@ -572,7 +572,7 @@ Here you can see all of the modifications made over the above code snippets.

It's possible to query previous snapshots of your Delta Lake table by using a feature called Time Travel. If you want to access the data that you overwrote, you can query a snapshot of the table before you overwrote the first set of data using the versionAsOf option.

-Once you run the cell below, you should see the first set of data from before you overwrote it. Time Travel is an extremely powerful feature that takes advantage of the power of the Delta Lake transaction log to access data that is no longer in the table. Removing the version 0 option (or specifying version 1) would let you see the newer data again. For more information, see [Query an older snapshot of a table](https://docs.delta.io/latest/delta-batch.html#deltatimetravel).
+Once you run the cell below, you should see the first set of data from before you overwrote it. Time Travel is a powerful feature that takes advantage of the Delta Lake transaction log to access data that is no longer in the table. Removing the version 0 option (or specifying version 1) would let you see the newer data again. For more information, see [Query an older snapshot of a table](https://docs.delta.io/latest/delta-batch.html#deltatimetravel).

:::zone pivot = "programming-language-python"
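A minimal sketch of the `versionAsOf` read described above (the path is illustrative):

```python
# Read the table as of version 0 to see the data from before the overwrite.
df_v0 = (spark.read
    .format("delta")
    .option("versionAsOf", 0)
    .load("/tmp/delta-table"))

df_v0.show()
```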

@@ -611,7 +611,7 @@ Results in:
| 3|
| 2|

-Here you can see you have gone back to the earliest version of the data.
+Here you can see you've gone back to the earliest version of the data.

## Write a stream of data to a table

@@ -626,7 +626,7 @@ In the cells below, here's what we are doing:
* Cell 32 Stop the structured streaming job
* Cell 33 Inspect history <--You'll notice appends have stopped

-First you are going to set up a simple Spark Streaming job to generate a sequence and make the job write to your Delta Table.
+First you're going to set up a simple Spark Streaming job to generate a sequence and make the job write to your Delta Table.

:::zone pivot = "programming-language-python"
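A minimal sketch of such a streaming append (the `rate` source and the paths are illustrative, not the article's exact cells):

```python
# Generate a running sequence with the built-in "rate" source and append it
# to a Delta table as a structured streaming job.
stream_df = (spark.readStream
    .format("rate")                  # emits timestamp/value rows
    .load()
    .selectExpr("value AS id"))

query = (stream_df.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/delta-events")
    .start("/tmp/delta-events"))

# query.stop()  # stop the streaming job when you are done, as in cell 32
```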

@@ -748,7 +748,7 @@ Results in:
| 1|2020-04-25 00:35:05| WRITE| [mode -> Overwrite, partitionBy -> []]| 0|
| 0|2020-04-25 00:34:34| WRITE| [mode -> ErrorIfExists, partitionBy -> []]| null|

-Here you are dropping some of the less interesting columns to simplify the viewing experience of the history view.
+Here you're dropping some of the less interesting columns to simplify the viewing experience of the history view.

:::zone pivot = "programming-language-python"

@@ -792,7 +792,7 @@ Results in:

You can do an in-place conversion from the Parquet format to Delta.

-Here you are going to test if the existing table is in delta format or not.
+Here you're going to test if the existing table is in delta format or not.
:::zone pivot = "programming-language-python"

```python
@@ -830,7 +830,7 @@ Results in:

False

-Now you are going to convert the data to delta format and verify it worked.
+Now you're going to convert the data to delta format and verify it worked.

:::zone pivot = "programming-language-python"

@@ -936,7 +936,7 @@ Results in:
|--------------------|
|abfss://data@arca...|

-Now you are going to verify that a table is not a delta format table, then convert it to delta format using Spark SQL and confirm it was converted correctly.
+Now, you're going to verify that a table is not a delta format table. Then, you will convert the table to delta format using Spark SQL and confirm that it was converted correctly.

:::zone pivot = "programming-language-python"
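A minimal sketch of the check-and-convert flow through the Delta Lake Python API (the article's cells use Spark SQL as well; the path is illustrative):

```python
from delta.tables import DeltaTable

path = "/tmp/parquet-table"   # illustrative location holding plain Parquet files

# Check whether the location already holds a Delta table, and convert the
# Parquet files in place if it does not.
if not DeltaTable.isDeltaTable(spark, path):
    DeltaTable.convertToDelta(spark, f"parquet.`{path}`")

print(DeltaTable.isDeltaTable(spark, path))   # expect: True
```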

articles/synapse-analytics/spark/apache-spark-performance-hyperspace.md

Lines changed: 1 addition & 1 deletion
@@ -238,7 +238,7 @@ deptLocation: String = /your-path/departments.parquet

Let's verify the contents of the Parquet files we created to make sure they contain expected records in the correct format. Later, we'll use these data files to create Hyperspace indexes and run sample queries.

-Running the following cell produces and output that displays the rows in employee and department dataFrames in a tabular form. There should be 14 employees and 4 departments, each matching with one of triplets you created in the previous cell.
+Running the following cell produces an output that displays the rows in the employee and department DataFrames in tabular form. There should be 14 employees and 4 departments, each matching one of the triplets you created in the previous cell.

:::zone pivot = "programming-language-scala"
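A rough Python equivalent of that verification step (the article's cell is Scala; the employees path mirrors the departments path shown in the hunk header and is assumed):

```python
# Read the Parquet files back and display them to confirm the expected
# 14 employee rows and 4 department rows.
emp_df = spark.read.parquet("/your-path/employees.parquet")    # assumed path
dept_df = spark.read.parquet("/your-path/departments.parquet")

display(emp_df)
display(dept_df)
```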
