Commit ed48782
Merge pull request #247721 from Rodrigossz/main
Update analytical-store-introduction.md
2 parents 2bc6e92 + 02426e2 commit ed48782

1 file changed

articles/cosmos-db/analytical-store-introduction.md

Lines changed: 4 additions & 1 deletion
@@ -336,7 +336,10 @@ Here's a map of MongoDB data types and their representations in the analytical store
 
 * Expect different behavior in regard to `timestamp` values:
   * Spark pools in Azure Synapse will read these values as `TimestampType`, `DateType`, or `Float`. It depends on the range and how the timestamp was generated.
-  * SQL Serverless pools in Azure Synapse will read these values as `DATETIME2`. Data will be truncated if the timestamp is beyond the DATETIME2 range in Synapse SQL Serverless supported data types. That's because MongoDB range is bigger than SQL range.
+  * SQL Serverless pools in Azure Synapse will read these values as `DATETIME2`, ranging from `0001-01-01` through `9999-12-31`. Values beyond this range aren't supported and will cause your queries to fail. If this affects your data, you can:
+    * Remove the column from the query. To keep the representation, create a new property that mirrors the column but stays within the supported range, and use that property in your queries.
+    * Use [Change Data Capture from analytical store](analytical-store-change-data-capture.md), at no RU cost, to transform and load the data into a new format in one of the supported sinks.
+
 
 ##### Using full fidelity schema with Spark
 
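For context on the first workaround added above, here's a minimal sketch of a serverless SQL pool query that reads a mirrored, range-clamped property instead of the out-of-range column. It assumes the full fidelity schema used by the API for MongoDB (type-suffixed paths such as `.date` and `.objectId`); the account, database, container (`Orders`), key placeholder, and property names (`orderDate`, `orderDateSafe`) are hypothetical illustrations, not part of the changed article.

```sql
-- Hypothetical example: the transactional app writes orderDateSafe, a copy of
-- orderDate clamped to the DATETIME2 range (0001-01-01 through 9999-12-31).
-- Because the WITH clause lists columns explicitly, the out-of-range
-- orderDate column is simply left out of the query.
SELECT TOP 100 orderId, orderDateSafe
FROM OPENROWSET(
        'CosmosDB',
        'Account=myaccount;Database=mydb;Key=<account-key>',
        Orders
     )
WITH (
    orderId       VARCHAR(100) '$._id.objectId',
    orderDateSafe DATETIME2    '$.orderDateSafe.date'
) AS orders;
```

With an explicit `WITH` clause the serverless pool never materializes the unsupported values, so the query succeeds even though the original `orderDate` property still holds out-of-range timestamps.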