**articles/cosmos-db/burst-capacity.md** (+29 −5)
@@ -28,25 +28,49 @@ After the 10 seconds is over, the burst capacity has been used up. If the worklo

## Getting started

-To get started using burst capacity, enroll in the preview by filing a support ticket in the [Azure portal](https://portal.azure.com).
+To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
+- Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+- The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.

## Limitations

-### SDK requirements (SQL API only)
+### Preview eligibility criteria
+To enroll in the preview, your Cosmos account must meet all the following criteria:
+- Your Cosmos account is using provisioned throughput (manual or autoscale). Burst capacity doesn't apply to serverless accounts.
+- If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When burst capacity is enabled on your account, all requests sent from non-.NET SDKs or older .NET SDK versions won't be accepted.
+- There are no SDK or driver requirements to use the feature with Cassandra API, Gremlin API, Table API, or API for MongoDB.
+- Your Cosmos account isn't using any unsupported connectors:
+  - Azure Data Factory
+  - Azure Stream Analytics
+  - Logic Apps
+  - Azure Functions
+  - Azure Search

-Burst capacity is supported only in the latest preview version of the .NET v3 SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. There are no driver or SDK requirements to use burst capacity with other APIs.
+### SDK requirements (SQL and Table API only)
+
+#### SQL API
+For SQL API accounts, burst capacity is supported only in the latest version of the .NET v3 SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. There are no driver or SDK requirements to use burst capacity with Gremlin API, Cassandra API, or API for MongoDB.

-Find the latest preview version the supported SDK:
+Find the latest version of the supported SDK:

| SDK | Supported versions | Package manager link |
...
Support for other SQL API SDKs is planned for the future.

> [!TIP]
> You should ensure that your application has been updated to use a compatible SDK version prior to enrolling in the preview. If you're using the legacy .NET V2 SDK, follow the [.NET SDK v3 migration guide](sql/migrate-dotnet-v3.md).

+#### Table API
+For Table API accounts, burst capacity is supported only when using the latest version of the Tables SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. The legacy SDK with namespace `Microsoft.Azure.CosmosDB.Table` isn't supported. Follow the [migration guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/tables/Azure.Data.Tables/MigrationGuide.md) to upgrade to the latest SDK.
+
+| SDK | Supported versions | Package manager link |
+| --- | --- | --- |
+|**Azure Tables client library for .NET**|*>= 12.0.0*|<https://www.nuget.org/packages/Azure.Data.Tables/>|
+|**Azure Tables client library for Java**|*>= 12.0.0*|<https://mvnrepository.com/artifact/com.azure/azure-data-tables>|
+|**Azure Tables client library for JavaScript**|*>= 12.0.0*|<https://www.npmjs.com/package/@azure/data-tables>|
+|**Azure Tables client library for Python**|*>= 12.0.0*|<https://pypi.org/project/azure-data-tables/>|
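As a quick illustration of the Table API requirement above, here's a minimal sketch using the Azure Tables client library for Python (`azure-data-tables` >= 12.0.0, from the table above). The connection string and table name are placeholders, not values from the doc:

```python
# pip install "azure-data-tables>=12.0.0"  (the supported package from the table above)
from azure.data.tables import TableServiceClient

# Placeholder -- substitute your Cosmos DB Table API connection string.
CONNECTION_STRING = (
    "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;"
    "TableEndpoint=https://<account>.table.cosmos.azure.com:443/;"
)

# Requests issued through this client come from a supported SDK version,
# so they'd be accepted on an account with burst capacity enabled.
service = TableServiceClient.from_connection_string(CONNECTION_STRING)
table = service.create_table_if_not_exists(table_name="BurstDemo")

# Entities require a PartitionKey and RowKey; upsert_entity inserts or replaces.
table.upsert_entity({"PartitionKey": "pk1", "RowKey": "rk1", "value": 42})
print(table.get_entity(partition_key="pk1", row_key="rk1"))
```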

### Unsupported connectors

If you enroll in the preview, the following connectors will fail.
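The "10 seconds" in the hunk header above follows from the burst accumulation rule described in the full article; as an assumption for this sketch, each physical partition can bank up to 5 minutes of its provisioned RU/s and spend them at up to 3000 RU/s. A quick illustrative check (Python):

```python
PROVISIONED_RUS = 100        # RU/s provisioned in the article's example
ACCUMULATION_WINDOW_S = 300  # assumption: up to 5 minutes of idle capacity accumulates
MAX_BURST_RATE = 3_000       # assumption: burst consumption capped at 3000 RU/s

accumulated = PROVISIONED_RUS * ACCUMULATION_WINDOW_S  # 30,000 RUs banked
burst_seconds = accumulated / MAX_BURST_RATE
print(burst_seconds)  # 10.0 -- matching "After the 10 seconds is over..."
```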
**articles/cosmos-db/merge.md** (+51 −15)
@@ -17,72 +17,108 @@ Merging partitions in Azure Cosmos DB (preview) allows you to reduce the number

## Getting started

-To get started using merge, enroll in the preview by filing a support ticket in the [Azure portal](https://portal.azure.com).
+To get started using merge, enroll in the preview by submitting a request for the **Azure Cosmos DB Partition Merge** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
+- Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+- The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.

### Merging physical partitions

-When the parameter `IsDryRun` is set to true, Azure Cosmos DB will run a simulation and return the expected result of the merge, but won't run the merge itself. When set to false, the merge will execute against the resource.
+In PowerShell, when the flag `-WhatIf` is passed in, Azure Cosmos DB will run a simulation and return the expected result of the merge, but won't run the merge itself. When the flag isn't passed in, the merge will execute against the resource. When finished, the command will output the current amount of storage in KB per physical partition post-merge.

> [!TIP]
> Before running a merge, it's recommended to set your provisioned RU/s (either manual RU/s or autoscale max RU/s) as close as possible to your desired steady state RU/s post-merge, to help ensure the system calculates an efficient partition layout.

...

Partition merge is a long-running operation and there's no SLA on how long it takes to complete. The time depends on the amount of data in the container and the number of physical partitions. It's recommended to allow at least 5-6 hours for merge to complete.
+
+While partition merge is running on your container, it isn't possible to change the throughput or any container settings (TTL, indexing policy, unique keys, etc.). Wait until the merge operation completes before changing your container settings.
+
+You can track whether merge is still in progress by checking the **Activity Log** and filtering for the events **Merge the physical partitions of a MongoDB collection** or **Merge the physical partitions of a SQL container**.

## Limitations

-### Account resources and configuration
+### Preview eligibility criteria
+To enroll in the preview, your Cosmos account must meet all the following criteria (a programmatic check for several of them is sketched after this list):
+* Your Cosmos account uses SQL API or API for MongoDB with version >=3.6.
+* Your Cosmos account is using provisioned throughput (manual or autoscale). Merge doesn't apply to serverless accounts.
+* Currently, merge isn't supported for shared throughput databases. You may enroll an account that has both shared throughput databases and containers with dedicated throughput (manual or autoscale).
+  * However, only the containers with dedicated throughput will be able to be merged.
+* Your Cosmos account is a single-write region account (merge isn't currently supported for multi-region write accounts).
+* Your Cosmos account doesn't use any of the following features:
...
+* Your Cosmos account uses bounded staleness, session, consistent prefix, or eventual consistency (merge isn't currently supported for strong consistency).
+* If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When the merge preview is enabled on your account, all requests sent from non-.NET SDKs or older .NET SDK versions won't be accepted.
+* There are no SDK or driver requirements to use the feature with API for MongoDB.
+* Your Cosmos account doesn't use any currently unsupported connectors:
+  * Azure Data Factory
+  * Azure Stream Analytics
+  * Logic Apps
+  * Azure Functions
+  * Azure Search
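A few of the account-level criteria above can be verified before filing the enrollment request. A minimal sketch, assuming the `azure-identity` and `azure-mgmt-cosmosdb` packages; the subscription, resource group, and account names are placeholders, and SDK-version and connector usage still have to be checked separately:

```python
# pip install azure-identity azure-mgmt-cosmosdb
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient

# Placeholder identifiers -- replace with your own values.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
ACCOUNT_NAME = "<cosmos-account>"

client = CosmosDBManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
account = client.database_accounts.get(RESOURCE_GROUP, ACCOUNT_NAME)

capabilities = [c.name for c in (account.capabilities or [])]
checks = {
    # Merge requires a single-write-region account.
    "single write region": not account.enable_multiple_write_locations,
    # Strong consistency isn't supported for merge.
    "non-strong default consistency":
        account.consistency_policy.default_consistency_level != "Strong",
    # Serverless accounts aren't eligible; provisioned throughput is required.
    "not serverless": "EnableServerless" not in capabilities,
}
for name, ok in checks.items():
    print(f"{name}: {'OK' if ok else 'NOT eligible'}")
```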

+### Account resources and configuration
+* Merge is only available for SQL API and API for MongoDB accounts. For API for MongoDB accounts, the MongoDB account version must be 3.6 or greater.
* Merge is only available for single-region write accounts. Multi-region write account support isn't available.
-* Accounts using merge functionality can't also use these features:
+* Accounts using merge functionality can't also use these features (if these features are added to a merge-enabled account, resources in the account will no longer be able to be merged):
...
* Containers using merge functionality must have their throughput provisioned at the container level. Database-shared throughput support isn't available.
-* For API for MongoDB accounts, the MongoDB account version must be 3.6 or greater.
+* Merge is only available for accounts using bounded staleness, session, consistent prefix, or eventual consistency. It isn't currently supported for strong consistency.
+* After a container has been merged, it isn't possible to read the change feed with start time. Support for this feature is planned for the future.

### SDK requirements (SQL API only)

-Accounts with the merge feature enabled are supported only in the latest preview version of the .NET v3 SDK. When the feature is enabled on your account (regardless of whether you run the merge), you must only use the supported SDK using the account. Requests sent from other SDKs or earlier versions won't be accepted. As long as you're using the supported SDK, your application can continue to run while a merge is ongoing.
+Accounts with the merge feature enabled are supported only when you use the latest version of the .NET v3 SDK. When the feature is enabled on your account (regardless of whether you run the merge), you must only use the supported SDK with the account. Requests sent from other SDKs or earlier versions won't be accepted. As long as you're using the supported SDK, your application can continue to run while a merge is ongoing.

-Find the latest preview version the supported SDK:
+Find the latest version of the supported SDK:

| SDK | Supported versions | Package manager link |
**articles/cosmos-db/sql/distribute-throughput-across-partitions-faq.yml** (+17 −5)
@@ -21,11 +21,11 @@ sections:
  - question: |
      What resources can I use this feature on?
    answer: |
-      The feature is only supported for SQL and API for MongoDB accounts and on collections with dedicated throughput (either manual or autoscale). Shared throughput databases aren't supported in the preview.
+      The feature is only supported for SQL and API for MongoDB accounts and on collections with dedicated throughput (either manual or autoscale). Shared throughput databases aren't supported in the preview. The feature doesn't apply to serverless accounts.
  - question: |
-      Which version of the Azure Cosmos DB functionality in Azure PowerShell and Azure CLI supports this feature?
+      Which version of the Azure Cosmos DB functionality in Azure PowerShell supports this feature?
    answer: |
-      The ability to redistribute RU/s across physical partitions is only supported in the latest preview version of Azure PowerShell and Azure CLI.
+      The ability to redistribute RU/s across physical partitions is only supported in the latest preview version of Azure PowerShell.
  - question: |
      What is the maximum number of physical partitions I can change in one request?
    answer: |
@@ -47,7 +47,7 @@ sections:
      |P1: 4000 RU/s | P1: 1000 RU/s | 2/3 |
      |P2: 1000 RU/s | P2: 500 RU/s | 1/6 |

-      - If you increase your RU/s without triggering a split - that is, you scale to a total RU/s <= current partition count * 10,000 RU/s - each physical partition will have RU/s = `MIN(current throughput fraction * new RU/s, 10,000 RU/s)`. Consider an example where the resulting sum of all RU/s across all partitions is less than the total new RU/s of the resource. It's recommended to reset your RU/s to an even distribution and redistribute to ensure that all available RU/s are allocated to a partition. To check if this scenario applies to your resource use Azure Monitor metrics. Compare the value of the **ProvisionedThroughput** (when using manual throughput) or **AutoscaleMaxThroughput** (when using autoscale) metric to the value of the **PhysicalPartitionThroughput** metric. If the value of **PhysicalPartitionThroughput** is less than the respective **ProvisionedThroughput** or **AutoscaleMaxThroughput**, then reset your RU/s to an even distribution before redistributing, or lower your resource's throughput to the value of **PhysicalPartitionThroughput**.
+      - If you increase your RU/s, each physical partition will have RU/s = `MIN(current throughput fraction * new RU/s, 10,000 RU/s)`. The RU/s on a physical partition can never exceed 10,000 RU/s.

      For example, suppose you have a collection with 6000 RU/s and 3 physical partitions. You scale it up to 12,000 RU/s:
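The capping rule in the added line can be checked in a few lines. A minimal sketch (Python; the function name is illustrative) reproducing the 6000 → 12,000 RU/s example that follows:

```python
def per_partition_rus(current_rus: list[float], new_total: float) -> list[float]:
    """Apply RU/s = MIN(current throughput fraction * new RU/s, 10,000 RU/s) per partition."""
    old_total = sum(current_rus)
    return [min(rus / old_total * new_total, 10_000) for rus in current_rus]

# 6000 RU/s over 3 partitions (fractions 1/6, 2/3, 1/6), scaled up to 12,000 RU/s.
# No partition hits the 10,000 RU/s cap, so the fractions are preserved exactly:
print(per_partition_rus([1000, 4000, 1000], 12_000))  # [2000.0, 8000.0, 2000.0]
```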
@@ -56,5 +56,17 @@ sections:
      |P0: 1000 RU/s | P0: 2000 RU/s | 1/6 |
      |P1: 4000 RU/s | P1: 8000 RU/s | 2/3 |
      |P2: 1000 RU/s | P2: 2000 RU/s | 1/6 |
+  - question: |
+      Why am I seeing a discrepancy between the overall RU/s on my container and the sum of the RU/s across all physical partitions?
+    answer: |
+      - This discrepancy can happen when you scale up your overall RU/s and, for any single partition, `(current RU/s per partition * new container RU/s)/(old container RU/s)` is greater than 10,000 RU/s. It can occur whether you trigger a partition split by increasing RU/s beyond `currentNumberOfPartitions * 10,000 RU/s`, or you increase RU/s without triggering a split.
+      - It's recommended to redistribute your throughput equally after the scale-up. Otherwise, it's possible that you won't be able to use all the RU/s you've provisioned (and are being billed for).
+      - To check if this scenario applies to your resource, use Azure Monitor metrics. Compare the value of the **ProvisionedThroughput** (when using manual throughput) or **AutoscaleMaxThroughput** (when using autoscale) metric to the value of the **PhysicalPartitionThroughput** metric. If the value of **PhysicalPartitionThroughput** is less than the respective **ProvisionedThroughput** or **AutoscaleMaxThroughput**, then reset your RU/s to an even distribution before redistributing, or lower your resource's throughput to the value of **PhysicalPartitionThroughput**.

-      - If you increase your RU/s [beyond what the current partition layout can serve](../scaling-provisioned-throughput-best-practices.md), you trigger a split. By design, all physical partitions will default to having the same number of RU/s. After partitions split, the logical partitions that contributed to a hot partition may be on a different physical partition. If necessary, you can redistribute your RU/s on the new layout.
+      For example, suppose you have a collection with 6000 RU/s and 3 physical partitions. You scale it up to 24,000 RU/s. After the scale-up, the total throughput across all partitions is only 18,000 RU/s. This distribution means that while we're being billed for 24,000 RU/s, we're only able to get 18,000 RU/s of effective throughput. Resetting to an even distribution gives each partition 8000 RU/s, after which we can redistribute RU/s again as needed. We could also choose to lower our overall RU/s to 18,000 RU/s.
+
+      |Before scale-up (6000 RU/s) |After scale up to 24,000 RU/s (effective RU/s = 18,000 RU/s) |Fraction of total RU/s |
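A short illustrative sketch (Python) showing where the 18,000 RU/s effective figure in this last example comes from, using the same per-partition capping rule:

```python
def per_partition_rus(current_rus, new_total):
    # RU/s = MIN(current throughput fraction * new RU/s, 10,000 RU/s) per partition.
    old_total = sum(current_rus)
    return [min(rus / old_total * new_total, 10_000) for rus in current_rus]

layout = per_partition_rus([1000, 4000, 1000], 24_000)
print(layout)       # [4000.0, 10000.0, 4000.0] -- the 2/3 partition is capped at 10,000 RU/s
print(sum(layout))  # 18000.0 effective RU/s, although 24,000 RU/s is provisioned
# Resetting to an even distribution instead allocates 24,000 / 3 = 8000 RU/s per
# partition, after which RU/s can be redistributed again as needed.
```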