
Commit 3d0e4a1

one more Acrolinx fix
1 parent cb6ade3 commit 3d0e4a1

File tree

1 file changed: +3 -3 lines changed

articles/synapse-analytics/synapse-link/how-to-query-analytical-store-spark.md

Lines changed: 3 additions & 3 deletions
@@ -44,7 +44,7 @@ val df_olap = spark.read.format("cosmos.olap").
 
 ## Create Spark table
 
-In this gesture, you'll create a Spark table pointing to the container you selected. That operation doesn't incur any data movement. If you decide to delete that table, the underlying container (and corresponding analytical store) won't be impacted.
+In this gesture, you'll create a Spark table pointing to the container you selected. That operation doesn't incur any data movement. If you decide to delete that table, the underlying container (and corresponding analytical store) won't be affected.
 
 This scenario is convenient to reuse tables through third-party tools and provide accessibility to the data for the run-time.
 
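For context, the full gesture around this hunk's `create table call_center using cosmos.olap options (...)` context line can be sketched as follows. This is a minimal sketch, not the committed file's code: the linked-service name `CosmosDBLinkedService` is a placeholder, and the option keys follow the `cosmos.olap` convention used elsewhere in this article.

```python
# Sketch: assemble the Spark SQL DDL that the "Create Spark table" gesture runs.
# The linked service and container names below are placeholders, not from this commit.
def build_create_table_ddl(table_name, linked_service, container):
    """Build a CREATE TABLE statement backed by the Cosmos DB analytical store.

    The table only points at the container; creating or dropping it moves no data.
    """
    return (
        f"create table {table_name} using cosmos.olap options ("
        f"spark.synapse.linkedService '{linked_service}', "
        f"spark.cosmos.container '{container}')"
    )

ddl = build_create_table_ddl("call_center", "CosmosDBLinkedService", "call_center")
# In a Synapse notebook this DDL would then be executed with: spark.sql(ddl)
print(ddl)
```

Because the table is only a pointer to the analytical store, dropping it later leaves the container untouched, which is why the commit's wording change ("impacted" to "affected") is purely editorial.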

@@ -60,7 +60,7 @@ create table call_center using cosmos.olap options (
 
 ## Write DataFrame to container
 
-In this gesture, you will write a dataframe into a container. This operation will impact the transactional performance and consume Request Units. Using Azure Cosmos DB transactional performance is ideal for write transactions. Make sure that you replace **YOURDATAFRAME** by the dataframe that you want to write back to.
+In this gesture, you'll write a dataframe into a container. This operation will impact the transactional performance and consume Request Units. Using Azure Cosmos DB transactional performance is ideal for write transactions. Make sure that you replace **YOURDATAFRAME** by the dataframe that you want to write back to.
 
 ```python
 # Write a Spark DataFrame into a Cosmos DB container
@@ -90,7 +90,7 @@ df.write.format("cosmos.oltp").
 ```
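The write gesture whose tail (`df.write.format("cosmos.oltp").`) appears as this hunk's context line can be sketched roughly as below. This is an illustrative sketch, assuming the connector's `spark.synapse.linkedService` / `spark.cosmos.container` option keys used elsewhere in the article; the linked-service and container names are placeholders, and `df` stands in for **YOURDATAFRAME**.

```python
# Sketch of the "Write DataFrame to container" gesture (transactional-store path).
# Names below are placeholders, not values from this commit.
write_options = {
    "spark.synapse.linkedService": "CosmosDBLinkedService",  # placeholder
    "spark.cosmos.container": "call_center",                 # placeholder
}

def write_dataframe(df, options):
    """Append a Spark DataFrame into the Cosmos DB transactional store.

    This path goes through cosmos.oltp, so it consumes Request Units and
    affects transactional performance, as the text above warns.
    """
    (df.write.format("cosmos.oltp")
       .options(**options)
       .mode("append")
       .save())
```

The `append` mode is a conservative choice here: overwriting a transactional container from Spark would be a much more RU-expensive operation.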
 
 ## Load streaming DataFrame from container
-In this gesture, you'll use Spark Streaming capability to load data from a container into a dataframe. The data will be stored into the primary data lake account (and file system) that you connected to the workspace.
+In this gesture, you'll use Spark Streaming capability to load data from a container into a dataframe. The data will be stored in the primary data lake account (and file system) that you connected to the workspace.
 
 If the folder */localReadCheckpointFolder* isn't created, it will be automatically created. This operation will impact the transactional performance of Cosmos DB.
 
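The streaming gesture described in this hunk can be sketched as follows. This is a hedged sketch, not the committed file's code: the option keys (`spark.cosmos.changeFeedStartFromTheBeginning`, `spark.cosmos.changeFeedCheckpointLocation`) are assumptions based on the Synapse Link connector convention, the linked-service and container names are placeholders, and only the checkpoint folder */localReadCheckpointFolder* comes from the text above.

```python
# Sketch of the "Load streaming DataFrame from container" gesture.
# Option keys are assumed from the Synapse Link connector convention;
# linked-service and container names are placeholders.
stream_options = {
    "spark.synapse.linkedService": "CosmosDBLinkedService",        # placeholder
    "spark.cosmos.container": "call_center",                       # placeholder
    "spark.cosmos.changeFeedStartFromTheBeginning": "true",        # assumed key
}

def load_streaming_df(spark, options, checkpoint="/localReadCheckpointFolder"):
    """Build a streaming DataFrame from the container's change feed.

    Checkpoints land in the workspace's primary data lake account; if the
    checkpoint folder doesn't exist, it is created automatically.
    """
    return (spark.readStream.format("cosmos.oltp")
            .options(**options)
            .option("spark.cosmos.changeFeedCheckpointLocation", checkpoint)  # assumed key
            .load())
```

As with the write gesture, this reads through the transactional store (`cosmos.oltp`), which is why the text warns it affects Cosmos DB's transactional performance.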

0 commit comments
