articles/cosmos-db/burst-capacity.md (4 additions, 4 deletions)
@@ -28,18 +28,18 @@ After the 10 seconds is over, the burst capacity has been used up. If the worklo
 
 ## Getting started
 
-To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** blade](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
+To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
 
 - Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
 - The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
 
 ## Limitations
 
 ### Preview eligibility criteria
 
 To enroll in the preview, your Cosmos account must meet all the following criteria:
 
-- Your Cosmos account is using provisioned throughput (manual or autoscale). Burst capacity does not apply to serverless accounts.
+- Your Cosmos account is using provisioned throughput (manual or autoscale). Burst capacity doesn't apply to serverless accounts.
 - If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When burst capacity is enabled on your account, all requests sent from non .NET SDKs, or older .NET SDK versions won't be accepted.
   - There are no SDK or driver requirements to use the feature with Cassandra API, Gremlin API, Table API, or API for MongoDB.
-- Your Cosmos account is not using any unsupported connectors
+- Your Cosmos account isn't using any unsupported connectors
   - Azure Data Factory
   - Azure Stream Analytics
   - Logic Apps
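The hunk header above notes that "after the 10 seconds is over, the burst capacity has been used up." That figure follows from simple arithmetic under the commonly documented burst model: idle capacity accrues for up to 300 seconds of provisioned RU/s and is spent at up to 3,000 RU/s per physical partition. Both figures, and the 100 RU/s example, are assumptions not stated in this diff:

```python
def burst_duration_seconds(provisioned_rus: float,
                           accrual_window_s: float = 300.0,
                           burst_rate_rus: float = 3000.0) -> float:
    """Seconds a fully idle partition can sustain the burst rate.

    Accumulated budget: accrual_window_s * provisioned_rus request units.
    While bursting, the partition drains the budget at
    (burst_rate_rus - provisioned_rus) RU/s, because provisioned
    throughput keeps refilling it.
    """
    budget = accrual_window_s * provisioned_rus
    net_drain = burst_rate_rus - provisioned_rus
    if net_drain <= 0:  # provisioned at or above the burst rate: no drain
        return float("inf")
    return budget / net_drain

# A partition provisioned at 100 RU/s can burst at 3000 RU/s for ~10 s:
# 300 s * 100 RU/s = 30,000 RU budget, drained at 2,900 RU/s.
print(round(burst_duration_seconds(100)))
```

This is a sketch of the accounting only; the service enforces the actual limits per physical partition.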
@@ -62,7 +62,7 @@ Support for other SQL API SDKs is planned for the future.
 
 > You should ensure that your application has been updated to use a compatible SDK version prior to enrolling in the preview. If you're using the legacy .NET V2 SDK, follow the [.NET SDK v3 migration guide](sql/migrate-dotnet-v3.md).
 
 #### Table API
 
-For Table API accounts, burst capacity is supported only when using the latest version of the Tables SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. The legacy SDK with namespace `Microsoft.Azure.CosmosDB.Table` is not supported. Follow the [migration guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/tables/Azure.Data.Tables/MigrationGuide.md) to upgrade to the latest SDK.
+For Table API accounts, burst capacity is supported only when using the latest version of the Tables SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. The legacy SDK with namespace `Microsoft.Azure.CosmosDB.Table` isn't supported. Follow the [migration guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/tables/Azure.Data.Tables/MigrationGuide.md) to upgrade to the latest SDK.
 
 | SDK | Supported versions | Package manager link |
articles/cosmos-db/merge.md (12 additions, 12 deletions)
@@ -17,13 +17,13 @@ Merging partitions in Azure Cosmos DB (preview) allows you to reduce the number
 
 ## Getting started
 
-To get started using merge, enroll in the preview by submitting a request for the **Azure Cosmos DB Partition Merge** feature via the [**Preview Features** blade](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
+To get started using merge, enroll in the preview by submitting a request for the **Azure Cosmos DB Partition Merge** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
 
 - Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
 - The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
 
 ### Merging physical partitions
 
-In PowerShell, when the flag `-WhatIf` is passed in, Azure Cosmos DB will run a simulation and return the expected result of the merge, but won't run the merge itself. When the flag is not passed in, the merge will execute against the resource. When finished, the command will output the current amount of storage in KB per physical partition post-merge.
+In PowerShell, when the flag `-WhatIf` is passed in, Azure Cosmos DB will run a simulation and return the expected result of the merge, but won't run the merge itself. When the flag isn't passed in, the merge will execute against the resource. When finished, the command will output the current amount of storage in KB per physical partition post-merge.
 
 > [!TIP]
 > Before running a merge, it's recommended to set your provisioned RU/s (either manual RU/s or autoscale max RU/s) as close as possible to your desired steady state RU/s post-merge, to help ensure the system calculates an efficient partition layout.
@@ -74,9 +74,9 @@ az cosmosdb mongodb collection merge \
 
 ---
 
 ### Monitor merge operations
 
-Partition merge is a long-running operation and there is no SLA on how long it takes to complete. The time depends on the amount of data in the container as well as the number of physical partitions. It's recommended to allow at least 5-6 hours for merge to complete.
+Partition merge is a long-running operation and there's no SLA on how long it takes to complete. The time depends on the amount of data in the container and the number of physical partitions. It's recommended to allow at least 5-6 hours for merge to complete.
 
-While partition merge is running on your container, it is not possible to change the throughput or any container settings (TTL, indexing policy, unique keys, etc). Wait until the merge operation completes before changing your container settings.
+While partition merge is running on your container, it isn't possible to change the throughput or any container settings (TTL, indexing policy, unique keys, etc.). Wait until the merge operation completes before changing your container settings.
 
 You can track whether merge is still in progress by checking the **Activity Log** and filtering for the events **Merge the physical partitions of a MongoDB collection** or **Merge the physical partitions of a SQL container**.
@@ -85,18 +85,18 @@ You can track whether merge is still in progress by checking the **Activity Log*
 
 ### Preview eligibility criteria
 
 To enroll in the preview, your Cosmos account must meet all the following criteria:
 
 * Your Cosmos account uses SQL API or API for MongoDB with version >=3.6.
-* Your Cosmos account is using provisioned throughput (manual or autoscale). Merge does not apply to serverless accounts.
-* Currently, merge is not supported for shared throughput databases. You may enroll an account that has both shared throughput databases and containers with dedicated throughput (manual or autoscale).
+* Your Cosmos account is using provisioned throughput (manual or autoscale). Merge doesn't apply to serverless accounts.
+* Currently, merge isn't supported for shared throughput databases. You may enroll an account that has both shared throughput databases and containers with dedicated throughput (manual or autoscale).
   * However, only the containers with dedicated throughput will be able to be merged.
-* Your Cosmos account is a single-write region account (merge is not currently supported for multi-region write accounts).
-* Your Cosmos account does not use any of the following features:
+* Your Cosmos account is a single-write region account (merge isn't currently supported for multi-region write accounts).
+* Your Cosmos account doesn't use any of the following features:
-* Your Cosmos account uses bounded staleness, session, consistent prefix, or eventual consistency (merge is not currently supported for strong consistency).
+* Your Cosmos account uses bounded staleness, session, consistent prefix, or eventual consistency (merge isn't currently supported for strong consistency).
 * If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When the merge preview is enabled on your account, all requests sent from non .NET SDKs or older .NET SDK versions won't be accepted.
 * There are no SDK or driver requirements to use the feature with API for MongoDB.
-* Your Cosmos account does not use any currently unsupported connectors:
+* Your Cosmos account doesn't use any currently unsupported connectors:
   * Azure Data Factory
   * Azure Stream Analytics
   * Logic Apps
@@ -111,8 +111,8 @@ To enroll in the preview, your Cosmos account must meet all the following criter
 
 * Containers using merge functionality must have their throughput provisioned at the container level. Database-shared throughput support isn't available.
-* Merge is only available for accounts using bounded staleness, session, consistent prefix, or eventual consistency. It is not currently supported for strong consistency.
-* After a container has been merged, it is not possible to read the change feed with start time. Support for this feature is planned for the future.
+* Merge is only available for accounts using bounded staleness, session, consistent prefix, or eventual consistency. It isn't currently supported for strong consistency.
+* After a container has been merged, it isn't possible to read the change feed with start time. Support for this feature is planned for the future.
articles/cosmos-db/sql/distribute-throughput-across-partitions-faq.yml (4 additions, 4 deletions)
@@ -21,7 +21,7 @@ sections:
 
   - question: |
       What resources can I use this feature on?
    answer: |
-      The feature is only supported for SQL and API for MongoDB accounts and on collections with dedicated throughput (either manual or autoscale). Shared throughput databases aren't supported in the preview. The feature does not apply to serverless accounts.
+      The feature is only supported for SQL and API for MongoDB accounts and on collections with dedicated throughput (either manual or autoscale). Shared throughput databases aren't supported in the preview. The feature doesn't apply to serverless accounts.
 
   - question: |
      Which version of the Azure Cosmos DB functionality in Azure PowerShell supports this feature?
    answer: |
@@ -59,11 +59,11 @@ sections:
 
   - question: |
      Why am I seeing a discrepancy between the overall RU/s on my container and the sum of the RU/s across all physical partitions?
    answer: |
-      - This can happen when you scale up your overall RU/s such that for any single partition, `(current RU/s per partition * new container RU/s)/(old container RU/s)` is greater than 10,000 RU/s. This can happen when you trigger a partition split by increasing RU/s beyond `currentNumberOfPartitions * 10,000 RU/s` or increase RU/s without triggering a partition split.
-      - It is recommended to redistribute your throughput equally after the scale-up. Otherwise, it is possible that you will not be able to use all the RU/s you've provisioned (and are being billed for).
+      - This discrepancy can happen when you scale up your overall RU/s such that, for any single partition, `(current RU/s per partition * new container RU/s)/(old container RU/s)` is greater than 10,000 RU/s. This discrepancy occurs when you trigger a partition split by increasing RU/s beyond `currentNumberOfPartitions * 10,000 RU/s` or increase RU/s without triggering a partition split.
+      - It's recommended to redistribute your throughput equally after the scale-up. Otherwise, it's possible that you won't be able to use all the RU/s you've provisioned (and are being billed for).
      - To check if this scenario applies to your resource, use Azure Monitor metrics. Compare the value of the **ProvisionedThroughput** (when using manual throughput) or **AutoscaleMaxThroughput** (when using autoscale) metric to the value of the **PhysicalPartitionThroughput** metric. If the value of **PhysicalPartitionThroughput** is less than the respective **ProvisionedThroughput** or **AutoscaleMaxThroughput**, then reset your RU/s to an even distribution before redistributing, or lower your resource's throughput to the value of **PhysicalPartitionThroughput**.
 
-      For example, suppose you have a collection with 6000 RU/s and 3 physical partitions. You scale it up to 24,000 RU/s. After the scale-up, the total throughput across all partitions is only 18,000 RU/s. This means that while we are being billed for 24,000 RU/s, we are only able to get 18,000 RU/s of effective throughput. By redistributing our RU/s equally, each partition will get 8000 RU/s, and we can redistribute RU/s again as needed. We could also choose to lower our overall RU/s to 18,000 RU/s.
+      For example, suppose you have a collection with 6000 RU/s and 3 physical partitions. You scale it up to 24,000 RU/s. After the scale-up, the total throughput across all partitions is only 18,000 RU/s. This distribution means that while we're being billed for 24,000 RU/s, we're only able to get 18,000 RU/s of effective throughput. By redistributing our RU/s equally, each partition gets 8000 RU/s, and we can redistribute RU/s again as needed. We could also choose to lower our overall RU/s to 18,000 RU/s.
 
      |Before scale-up (6000 RU/s) |After scale up to 24,000 RU/s (effective RU/s = 18,000 RU/s) |Fraction of total RU/s |
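The projection formula and the 6000 → 24,000 RU/s example above can be checked numerically. The three-way starting split below is a hypothetical uneven layout (the FAQ doesn't give per-partition values); with it, the capped projection reproduces the 18,000 RU/s of effective throughput:

```python
def projected_partition_rus(current_partition_rus, old_container_rus,
                            new_container_rus, cap=10_000.0):
    """Project each physical partition's RU/s after a container scale-up.

    Each partition's share scales proportionally:
        p * new_container_rus / old_container_rus
    but no physical partition can exceed the 10,000 RU/s cap; anything
    above the cap is lost as effective throughput until redistributed.
    """
    scaled = [p * new_container_rus / old_container_rus
              for p in current_partition_rus]
    return [min(s, cap) for s in scaled]

# Hypothetical uneven layout summing to 6000 RU/s, scaled to 24,000 RU/s:
before = [4000.0, 1000.0, 1000.0]
after = projected_partition_rus(before, 6000, 24000)
print(after, sum(after))  # the hot partition hits the cap; total is 18,000
```

If the starting layout were even (2000 RU/s each), every partition would project to 8000 RU/s and no throughput would be lost, which is why the FAQ recommends redistributing equally.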
articles/cosmos-db/sql/distribute-throughput-across-partitions.md (5 additions, 5 deletions)
@@ -28,7 +28,7 @@ If you aren't seeing 429 responses and your end to end latency is acceptable, th
 
 ## Getting started
 
-To get started using distributed throughput across partitions, enroll in the preview by submitting a request for the **Azure Cosmos DB Throughput Redistribution Across Partitions** feature via the [**Preview Features** blade](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
+To get started using distributed throughput across partitions, enroll in the preview by submitting a request for the **Azure Cosmos DB Throughput Redistribution Across Partitions** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
 
 - Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
 - The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
 
-Next, let's decide how many RU/s we want to give to our hottest physical partition(s). Let's call this set our target partition(s). The most RU/s any physical partition can have is 10,000 RU/s.
+Next, let's decide how many RU/s we want to give to our hottest physical partition(s). Let's call this set our target partition(s). The most RU/s any physical partition can contain is 10,000 RU/s.
 
 The right approach depends on your workload requirements. General approaches include:
 
 - Increasing the RU/s by a percentage, measuring the rate of 429 responses, and repeating until the desired throughput is achieved.
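The first approach in the list above — increase RU/s by a percentage, measure 429 responses, repeat — can be sketched as a feedback loop. `measure_429_rate` is a hypothetical stand-in for your own monitoring query, and the 10% step and 1% target rate are illustrative defaults, not values from the article:

```python
def scale_until_healthy(current_rus, measure_429_rate, step=0.10,
                        target_rate=0.01, cap_per_partition=10_000,
                        partitions=1):
    """Raise RU/s by `step` until the 429 rate drops below `target_rate`.

    Stops early at the physical ceiling: no partition can exceed
    10,000 RU/s, so `partitions * cap_per_partition` bounds the loop.
    """
    max_rus = cap_per_partition * partitions
    while measure_429_rate(current_rus) > target_rate and current_rus < max_rus:
        current_rus = min(round(current_rus * (1 + step)), max_rus)
    return current_rus

# Fake monitor: the workload stops throttling once it has 8000 RU/s.
fake_rate = lambda rus: 0.0 if rus >= 8000 else 0.05
print(scale_until_healthy(6000, fake_rate))
```

In practice each iteration would be separated by enough time to observe a representative 429 rate in Azure Monitor.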
@@ -135,7 +135,7 @@ The right approach depends on your workload requirements. General approaches inc
 
 Finally, let's decide how many RU/s we want to keep on our other physical partitions. This selection will determine the partitions that the target physical partition takes throughput from.
 
-In the PowerShell APIs, we must specify at least one source partition to redistribute RU/s from. We can also specify a custom minimum throughput each physical partition should have after the redistribution. If not specified, by default, Azure Cosmos DB will ensure that each physical partition has at least 100 RU/s after the redistribution. It is recommended to explicitly specify the minimum throughput.
+In the PowerShell APIs, we must specify at least one source partition to redistribute RU/s from. We can also specify a custom minimum throughput each physical partition should have after the redistribution. If not specified, by default, Azure Cosmos DB will ensure that each physical partition has at least 100 RU/s after the redistribution. It's recommended to explicitly specify the minimum throughput.
 
 The right approach depends on your workload requirements. General approaches include:
 
 - Taking RU/s equally from all source partitions (works best when there are <= 10 partitions)
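The "take RU/s equally from all source partitions" approach, together with the 100 RU/s default floor described above, can be sketched as plain arithmetic. This illustrates the bookkeeping only and is not the PowerShell API itself:

```python
def take_equally(partitions, target, amount_needed, min_rus=100.0):
    """Move up to `amount_needed` RU/s onto `target`, taking equal
    shares from every other (source) partition, never dropping any
    source below `min_rus`."""
    parts = dict(partitions)
    sources = [k for k in parts if k != target]
    share = amount_needed / len(sources)
    taken = 0.0
    for k in sources:
        give = min(share, parts[k] - min_rus)  # respect the per-partition floor
        parts[k] -= give
        taken += give
    parts[target] += taken  # total container RU/s is unchanged
    return parts

layout = {"p0": 2000.0, "p1": 2000.0, "p2": 2000.0}
print(take_equally(layout, "p0", 1000))
```

When the floor binds, the target gets less than requested: with sources near 100 RU/s, only the headroom above the floor can be moved, which is why the article recommends specifying the minimum explicitly.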
@@ -221,9 +221,9 @@ After the changes, assuming your overall workload hasn't changed, you'll likely
 
 To enroll in the preview, your Cosmos account must meet all the following criteria:
 
 - Your Cosmos account is using SQL API or API for MongoDB.
   - If you're using API for MongoDB, the version must be >= 3.6.
-- Your Cosmos account is using provisioned throughput (manual or autoscale). Distribution of throughput across partitions does not apply to serverless accounts.
+- Your Cosmos account is using provisioned throughput (manual or autoscale). Distribution of throughput across partitions doesn't apply to serverless accounts.
 - If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When the ability to redistribute throughput across partitions is enabled on your account, all requests sent from non .NET SDKs or older .NET SDK versions won't be accepted.
-- Your Cosmos account is not using any unsupported connectors:
+- Your Cosmos account isn't using any unsupported connectors: