Commit 1004e29

Update analytical-store-introduction.md
1 parent b146ce1 commit 1004e29


articles/cosmos-db/analytical-store-introduction.md

Lines changed: 11 additions & 11 deletions
@@ -207,8 +207,8 @@ df = spark.read\
 * MinKey/MaxKey

 * When using DateTime strings that follow the ISO 8601 UTC standard, expect the following behavior:
-    * Spark pools in Azure Synapse represents these columns as `string`.
-    * SQL serverless pools in Azure Synapse represents these columns as `varchar(8000)`.
+    * Spark pools in Azure Synapse represent these columns as `string`.
+    * SQL serverless pools in Azure Synapse represent these columns as `varchar(8000)`.

 * Properties with `UNIQUEIDENTIFIER (guid)` types are represented as `string` in analytical store and should be converted to `VARCHAR` in **SQL** or to `string` in **Spark** for correct visualization.

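A minimal SQL serverless sketch of the conversions described in the hunk above, assuming a hypothetical container with an ISO 8601 `orderDate` string and a `deviceId` GUID property; the account, database, key, container, and column names are placeholders:

```SQL
-- Sketch only: the account, database, key, container, and column names are hypothetical.
-- Per the notes above, ISO 8601 strings surface as varchar(8000) and GUIDs surface as strings.
SELECT
    CAST(orderDate AS datetime2) AS orderDate,  -- convert the ISO 8601 string to datetime2
    deviceId                                    -- GUID read as VARCHAR via the WITH clause
FROM OPENROWSET(
    'CosmosDB',
    'Account=myaccount;Database=mydatabase;Key=<account-key>',
    MyContainer
) WITH (
    orderDate varchar(8000),
    deviceId varchar(100)
) AS [Converted]
```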

@@ -230,8 +230,8 @@ The well-defined schema representation creates a simple tabular representation o

 * The first document defines the base schema and properties must always have the same type across all documents. The only exceptions are:
     * From `NULL` to any other data type. The first non-null occurrence defines the column data type. Any document not following the first non-null datatype won't be represented in analytical store.
-    * From `float` to `integer`. All documents will be represented in analytical store.
-    * From `integer` to `float`. All documents will be represented in analytical store. However, to read this data with Azure Synapse SQL serverless pools, you must use a WITH clause to convert the column to `varchar`. And after this initial conversion, it's possible to convert it again to a number. Please check the example below, where **num** initial value was an integer and the second one was a float.
+    * From `float` to `integer`. All documents are represented in analytical store.
+    * From `integer` to `float`. All documents are represented in analytical store. However, to read this data with Azure Synapse SQL serverless pools, you must use a WITH clause to convert the column to `varchar`. And after this initial conversion, it's possible to convert it again to a number. Please check the example below, where **num** initial value was an integer and the second one was a float.

 ```SQL
 SELECT CAST (num as float) as num
@@ -260,16 +260,16 @@ WITH (num varchar(100)) AS [IntToFloat]
 > If the Azure Cosmos DB analytical store follows the well-defined schema representation and the specification above is violated by certain items, those items won't be included in the analytical store.

 * Expect different behavior in regard to different types in well-defined schema:
-    * Spark pools in Azure Synapse represents these values as `undefined`.
-    * SQL serverless pools in Azure Synapse represents these values as `NULL`.
+    * Spark pools in Azure Synapse represent these values as `undefined`.
+    * SQL serverless pools in Azure Synapse represent these values as `NULL`.

 * Expect different behavior in regard to explicit `NULL` values:
-    * Spark pools in Azure Synapse reads these values as `0` (zero), and as `undefined` as soon as the column has a non-null value.
-    * SQL serverless pools in Azure Synapse reads these values as `NULL`.
+    * Spark pools in Azure Synapse read these values as `0` (zero), and as `undefined` as soon as the column has a non-null value.
+    * SQL serverless pools in Azure Synapse read these values as `NULL`.

 * Expect different behavior in regard to missing columns:
-    * Spark pools in Azure Synapse represents these columns as `undefined`.
-    * SQL serverless pools in Azure Synapse represents these columns as `NULL`.
+    * Spark pools in Azure Synapse represent these columns as `undefined`.
+    * SQL serverless pools in Azure Synapse represent these columns as `NULL`.

 ##### Representation challenges workarounds

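Since SQL serverless pools surface mismatched types, explicit `NULL` values, and missing columns alike as `NULL`, a hedged sketch of one such workaround on read, assuming a hypothetical `num` column and placeholder connection values:

```SQL
-- Sketch only: account, database, key, container, and column names are hypothetical.
-- Mismatched types, explicit NULLs, and missing columns all arrive as NULL in SQL serverless,
-- so a single ISNULL covers the three cases listed above.
SELECT ISNULL(num, 0) AS num
FROM OPENROWSET(
    'CosmosDB',
    'Account=myaccount;Database=mydatabase;Key=<account-key>',
    MyContainer
) WITH (num float) AS [Normalized]
```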

@@ -480,7 +480,7 @@ It's possible to use full fidelity Schema for API for NoSQL accounts, instead of

 * Currently, if you enable Synapse Link in your NoSQL API account using the Azure Portal, it will be enabled as well-defined schema.
 * Currently, if you want to use full fidelity schema with NoSQL or Gremlin API accounts, you have to set it at account level in the same CLI or PowerShell command that will enable Synapse Link at account level.
-* Currently Azure Cosmso DB for MongoDB isn't compatible with this possibility of changing the schema representation. All MongoDB accounts will always have full fidelity schema representation type.
+* Currently Azure Cosmos DB for MongoDB isn't compatible with this possibility of changing the schema representation. All MongoDB accounts have full fidelity schema representation type.
 * It's not possible to reset the schema representation type, from well-defined to full fidelity or vice-versa.
 * Currently, containers schema in analytical store are defined when the container is created, even if Synapse Link has not been enabled in the database account.
 * Containers or graphs created before Synapse Link was enabled with full fidelity schema at account level will have well-defined schema.
