# sdk/cosmos/azure-cosmos-spark_3/docs/configuration-reference.md

| Config Property Name | Default | Description |
| :--- | :--- | :--- |
| `spark.cosmos.write.point.maxConcurrency` | None | Cosmos DB item write maximum concurrency. If not specified, it will be determined based on the Spark executor VM size. |
| `spark.cosmos.write.bulk.maxPendingOperations` | None | Cosmos DB item write bulk-mode maximum pending operations. Defines a limit on the number of bulk operations being processed concurrently. If not specified, it will be determined based on the Spark executor VM size. If the volume of data is large relative to the provisioned throughput on the destination container, this setting can be adjusted by following the estimate of `1000 x Cores`. |
| `spark.cosmos.write.bulk.transactional` | `false` | Enables transactional batch mode for bulk writes. When enabled, all operations for the same partition key are executed atomically (all succeed or all fail). Requires ordering and clustering by the partition key columns. Only upsert operations are supported. A transaction cannot exceed 100 operations or 2 MB per partition key. See the [Transactional Batch documentation](https://learn.microsoft.com/azure/cosmos-db/nosql/transactional-batch) for details. |
| `spark.cosmos.write.bulk.targetedPayloadSizeInBytes` | `220201` | When the targeted payload size is reached for buffered documents, the request is sent to the backend. The default value is optimized for small documents (<= 10 KB); when documents often exceed 110 KB, it can help to increase this value up to about `1500000` (it should still stay below 2 MB). |
| `spark.cosmos.write.bulk.initialBatchSize` | `100` | Cosmos DB initial bulk micro-batch size. A micro batch is flushed to the backend when the number of enqueued documents exceeds this size, or when the targeted payload size is reached. The micro-batch size is automatically tuned based on the throttling rate. By default the initial micro-batch size is 100. Reduce this value to avoid having the first few requests consume too many RUs. |
| `spark.cosmos.write.bulk.maxBatchSize` | `100` | Cosmos DB maximum bulk micro-batch size. A micro batch is flushed to the backend when the number of enqueued documents exceeds this size, or when the targeted payload size is reached. The micro-batch size is automatically tuned based on the throttling rate. By default the maximum micro-batch size is 100. Use this setting only when migrating Spark 2.4 workloads; for other scenarios, relying on auto-tuning combined with throughput control results in a better experience. |
| `spark.cosmos.write.flush.noProgress.maxIntervalInSeconds` | `180` | The time interval in seconds that write operations will wait, when no progress can be made for bulk writes, before forcing a retry. The retry reinitializes the bulk write process, so any delays that persist after the retry point to actual service issues. The default value of 3 minutes should be sufficient to prevent false negatives during short service-side write unavailability, such as partition splits or merges. Increase it only if you regularly see these transient errors exceed a period of 180 seconds. |
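
For illustration, the following PySpark sketch shows how several of the bulk-write settings above can be supplied as write options. The account endpoint, key, database, and container names are placeholders, and the concrete tuning values are assumptions you would derive for your own workload (e.g., `1000 x Cores` for `maxPendingOperations`):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cosmos-bulk-write").getOrCreate()

# Toy DataFrame standing in for the data to ingest.
df = spark.createDataFrame(
    [("id-1", "pk-1", "payload-1"), ("id-2", "pk-2", "payload-2")],
    ["id", "partitionKey", "data"],
)

(
    df.write.format("cosmos.oltp")
    .option("spark.cosmos.accountEndpoint", "https://<account>.documents.azure.com:443/")
    .option("spark.cosmos.accountKey", "<account-key>")
    .option("spark.cosmos.database", "<database>")
    .option("spark.cosmos.container", "<container>")
    .option("spark.cosmos.write.bulk.enabled", "true")
    # Start with a small micro batch so the first requests don't consume too many RUs.
    .option("spark.cosmos.write.bulk.initialBatchSize", "10")
    # Rough estimate per the table above: 1000 x cores (8 cores assumed here).
    .option("spark.cosmos.write.bulk.maxPendingOperations", str(1000 * 8))
    .mode("Append")
    .save()
)
```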

# sdk/cosmos/azure-cosmos-spark_3/docs/scenarios/Ingestion.md
When your container has a "unique key constraint policy", any 409 "Conflict" (indicating a unique key constraint violation) is handled based on the write strategy:

- For `ItemOverwrite`, a 409 Conflict due to a unique key violation will result in an error, and the Spark job will fail. *NOTE: Conflicts due to pk+id being identical to another document won't even result in a 409, because with upsert the existing document would simply be updated.*
- For `ItemAppend`, as with conflicts on pk+id, any unique key constraint policy violation will be ignored (see the configuration sketch below).
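
The write strategy is selected via the `spark.cosmos.write.strategy` option. Below is a minimal sketch, assuming a DataFrame `df` with the documents to write and placeholder account, database, and container names:

```python
(
    df.write.format("cosmos.oltp")
    .option("spark.cosmos.accountEndpoint", "https://<account>.documents.azure.com:443/")
    .option("spark.cosmos.accountKey", "<account-key>")
    .option("spark.cosmos.database", "<database>")
    .option("spark.cosmos.container", "<container>")
    # ItemOverwrite upserts and fails the job on unique key 409s;
    # ItemAppend inserts and ignores conflicts on pk+id as well as unique key violations.
    .option("spark.cosmos.write.strategy", "ItemAppend")
    .mode("Append")
    .save()
)
```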

### Transactional batch writes

For scenarios requiring atomic all-or-nothing semantics within a partition, you can enable transactional batch writes using the `spark.cosmos.write.bulk.transactional` configuration. When enabled, all operations for a single partition key value either succeed or fail together:

- **Atomic semantics**: All operations for the same partition key succeed, or all fail (rollback).
- **Operation type**: Only upsert operations are supported (equivalent to the `ItemOverwrite` write strategy).
- **Partition grouping**: Spark automatically partitions and orders the data by the partition key columns.
- **Size limits**: Maximum of 100 operations per transaction; maximum of 2 MB total payload per transaction.
- **Partition key requirement**: All operations in a transaction must share the same partition key value.
- **Bulk mode required**: `spark.cosmos.write.bulk.enabled` must be `true` (enabled by default); see the configuration sketch after this list.
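
A minimal sketch of enabling transactional batch writes, again with placeholder account, database, and container names and an assumed DataFrame `df`:

```python
(
    df.write.format("cosmos.oltp")
    .option("spark.cosmos.accountEndpoint", "https://<account>.documents.azure.com:443/")
    .option("spark.cosmos.accountKey", "<account-key>")
    .option("spark.cosmos.database", "<database>")
    .option("spark.cosmos.container", "<container>")
    .option("spark.cosmos.write.bulk.enabled", "true")        # required; enabled by default
    .option("spark.cosmos.write.bulk.transactional", "true")  # all-or-nothing per partition key value
    .option("spark.cosmos.write.strategy", "ItemOverwrite")   # only upserts are supported
    .mode("Append")
    .save()
)
```

The connector takes care of partitioning and ordering the data by the partition key columns, so no explicit `repartition`/`sortWithinPartitions` call is shown here.
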
#### Use cases
Transactional batch writes are ideal for:

- Financial transactions requiring consistency across multiple documents
- Order processing where order header and line items must be committed together
- Multi-document updates that must be atomic (e.g., inventory adjustments)
- Any scenario where partial success would leave data in an inconsistent state

#### Error handling
If any operation in a transaction fails (e.g., insufficient RUs, document too large, transaction exceeds 100 operations), the entire transaction is rolled back and no documents are modified. The Spark task will fail and retry according to Spark's retry policy.
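
As a hedged illustration of this failure behavior, a driver-side guard might look like the sketch below. Note that only the failing partition-key transaction is rolled back; transactions for other partition key values may already have committed:

```python
try:
    (
        df.write.format("cosmos.oltp")
        .option("spark.cosmos.write.bulk.transactional", "true")
        # ... account, database, and container options as in the earlier sketches ...
        .mode("Append")
        .save()
    )
except Exception as err:
    # Raised once Spark's task retries are exhausted; the failing transaction
    # was rolled back, so none of its documents were modified.
    print(f"Transactional ingestion failed: {err}")
    raise
```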
## Preparation
Below are a couple of tips and best practices that can help you prepare for a data migration into a Cosmos DB container.