Commit 987f139

Merge pull request #199070 from seesharprun/may24-elasticity-updates-cosmos
Elasticity feature updates Azure Cosmos DB
2 parents 2c843fb + 7195bfe

4 files changed: +180 -39 lines changed

articles/cosmos-db/burst-capacity.md

Lines changed: 29 additions & 5 deletions
@@ -28,25 +28,49 @@ After the 10 seconds is over, the burst capacity has been used up. If the worklo
 
 ## Getting started
 
-To get started using burst capacity, enroll in the preview by filing a support ticket in the [Azure portal](https://portal.azure.com).
+To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure subscription's overview page.
+- Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+- The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
 
 ## Limitations
 
-### SDK requirements (SQL API only)
+### Preview eligibility criteria
+To enroll in the preview, your Cosmos account must meet all the following criteria:
+- Your Cosmos account is using provisioned throughput (manual or autoscale). Burst capacity doesn't apply to serverless accounts.
+- If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When burst capacity is enabled on your account, all requests sent from non-.NET SDKs or older .NET SDK versions won't be accepted.
+  - There are no SDK or driver requirements to use the feature with Cassandra API, Gremlin API, Table API, or API for MongoDB.
+- Your Cosmos account isn't using any unsupported connectors:
+  - Azure Data Factory
+  - Azure Stream Analytics
+  - Logic Apps
+  - Azure Functions
+  - Azure Search
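
Editor's aside (a hedged sketch, not part of the diff; resource names are placeholders): one way to pre-check the provisioned-versus-serverless criterion is to list the account's capabilities with the Azure CLI. Serverless accounts report an `EnableServerless` capability; its absence suggests a provisioned-throughput account.

```azurecli
# List the account's capabilities; serverless accounts include EnableServerless
az cosmosdb show \
    --resource-group '<resource-group-name>' \
    --name '<cosmos-account-name>' \
    --query "capabilities[].name"
```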
 
-Burst capacity is supported only in the latest preview version of the .NET v3 SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. There are no driver or SDK requirements to use burst capacity with other APIs.
+### SDK requirements (SQL and Table API only)
+#### SQL API
+For SQL API accounts, burst capacity is supported only in the latest version of the .NET v3 SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. There are no driver or SDK requirements to use burst capacity with Gremlin API, Cassandra API, or API for MongoDB.
 
-Find the latest preview version the supported SDK:
+Find the latest version of the supported SDK:
 
 | SDK | Supported versions | Package manager link |
 | --- | --- | --- |
 | **.NET SDK v3** | *>= 3.27.0* | <https://www.nuget.org/packages/Microsoft.Azure.Cosmos/> |
 
-Support for other SDKs is planned for the future.
+Support for other SQL API SDKs is planned for the future.
 
 > [!TIP]
 > You should ensure that your application has been updated to use a compatible SDK version prior to enrolling in the preview. If you're using the legacy .NET V2 SDK, follow the [.NET SDK v3 migration guide](sql/migrate-dotnet-v3.md).
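
Editor's aside (illustrative, not part of the diff): moving a project onto a supported package version can be done from the dotnet CLI, for example:

```bash
# Pin Microsoft.Azure.Cosmos to a burst-capacity-compatible version (>= 3.27.0)
dotnet add package Microsoft.Azure.Cosmos --version 3.27.0
```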
 
+#### Table API
+For Table API accounts, burst capacity is supported only when using the latest version of the Tables SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. The legacy SDK with namespace `Microsoft.Azure.CosmosDB.Table` isn't supported. Follow the [migration guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/tables/Azure.Data.Tables/MigrationGuide.md) to upgrade to the latest SDK.
+
+| SDK | Supported versions | Package manager link |
+| --- | --- | --- |
+| **Azure Tables client library for .NET** | *>= 12.0.0* | <https://www.nuget.org/packages/Azure.Data.Tables/> |
+| **Azure Tables client library for Java** | *>= 12.0.0* | <https://mvnrepository.com/artifact/com.azure/azure-data-tables> |
+| **Azure Tables client library for JavaScript** | *>= 12.0.0* | <https://www.npmjs.com/package/@azure/data-tables> |
+| **Azure Tables client library for Python** | *>= 12.0.0* | <https://pypi.org/project/azure-data-tables/> |
+
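Editor's aside (illustrative, not part of the diff): the corresponding installs for three of the Tables SDKs listed above are:

```bash
dotnet add package Azure.Data.Tables        # .NET
pip install "azure-data-tables>=12.0.0"     # Python
npm install @azure/data-tables              # JavaScript
```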
 ### Unsupported connectors
 
 If you enroll in the preview, the following connectors will fail.

articles/cosmos-db/merge.md

Lines changed: 51 additions & 15 deletions
@@ -17,72 +17,108 @@ Merging partitions in Azure Cosmos DB (preview) allows you to reduce the number
 
 ## Getting started
 
-To get started using merge, enroll in the preview by filing a support ticket in the [Azure portal](https://portal.azure.com).
+To get started using merge, enroll in the preview by submitting a request for the **Azure Cosmos DB Partition Merge** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure subscription's overview page.
+- Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+- The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
 
 ### Merging physical partitions
-When the parameter `IsDryRun` is set to true, Azure Cosmos DB will run a simulation and return the expected result of the merge, but won't run the merge itself. When set to false, the merge will execute against the resource.
+
+In PowerShell, when the flag `-WhatIf` is passed in, Azure Cosmos DB will run a simulation and return the expected result of the merge, but won't run the merge itself. When the flag isn't passed in, the merge will execute against the resource. When finished, the command will output the current amount of storage in KB per physical partition post-merge.
 > [!TIP]
 > Before running a merge, it's recommended to set your provisioned RU/s (either manual RU/s or autoscale max RU/s) as close as possible to your desired steady state RU/s post-merge, to help ensure the system calculates an efficient partition layout.
 
 #### [PowerShell](#tab/azure-powershell)
 
 ```azurepowershell
+# Add the preview extension
+Install-Module -Name Az.CosmosDB -AllowPrerelease -Force
+
 # SQL API
-Invoke-AzCosmosDbSqlContainerPartitionMerge `
+Invoke-AzCosmosDBSqlContainerMerge `
     -ResourceGroupName "<resource-group-name>" `
     -AccountName "<cosmos-account-name>" `
     -DatabaseName "<cosmos-database-name>" `
-    -Name "<cosmos-container-name>"
-    -IsDryRun "<True|False>"
+    -Name "<cosmos-container-name>" `
+    -WhatIf
 
 # API for MongoDB
-Invoke-AzCosmosDBMongoDBCollectionPartitionMerge `
+Invoke-AzCosmosDBMongoDBCollectionMerge `
     -ResourceGroupName "<resource-group-name>" `
     -AccountName "<cosmos-account-name>" `
     -DatabaseName "<cosmos-database-name>" `
-    -Name "<cosmos-collection-name>"
-    -IsDryRun "<True|False>"
+    -Name "<cosmos-collection-name>" `
+    -WhatIf
 ```
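
Editor's aside (a hedged sketch, not part of the diff): before invoking a merge, you can confirm that the prerelease module is the one installed:

```azurepowershell
# Verify the installed Az.CosmosDB module, including prerelease versions
Get-InstalledModule -Name Az.CosmosDB -AllowPrerelease |
    Select-Object Name, Version
```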
 
 #### [Azure CLI](#tab/azure-cli)
 
 ```azurecli
+# Add the preview extension
+az extension add --name cosmosdb-preview
+
 # SQL API
 az cosmosdb sql container merge \
     --resource-group '<resource-group-name>' \
     --account-name '<cosmos-account-name>' \
     --database-name '<cosmos-database-name>' \
     --name '<cosmos-container-name>'
-    --is-dry-run '<true|false>'
 
 # API for MongoDB
 az cosmosdb mongodb collection merge \
     --resource-group '<resource-group-name>' \
     --account-name '<cosmos-account-name>' \
     --database-name '<cosmos-database-name>' \
     --name '<cosmos-collection-name>'
-    --is-dry-run '<true|false>'
 ```
 
 ---
 
+### Monitor merge operations
+Partition merge is a long-running operation and there's no SLA on how long it takes to complete. The time depends on the amount of data in the container and the number of physical partitions. It's recommended to allow at least 5-6 hours for merge to complete.
+
+While partition merge is running on your container, it isn't possible to change the throughput or any container settings (TTL, indexing policy, unique keys, etc.). Wait until the merge operation completes before changing your container settings.
+
+You can track whether merge is still in progress by checking the **Activity Log** and filtering for the events **Merge the physical partitions of a MongoDB collection** or **Merge the physical partitions of a SQL container**.
 
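Editor's aside (a hedged sketch, not part of the diff; names are placeholders): one way to scan the Activity Log for those merge events from the CLI:

```azurecli
# List recent Activity Log operations for the account's resource group
az monitor activity-log list \
    --resource-group '<resource-group-name>' \
    --offset 6h \
    --query "[].{operation:operationName.localizedValue, status:status.value}"
```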
 ## Limitations
 
-### Account resources and configuration
+### Preview eligibility criteria
+To enroll in the preview, your Cosmos account must meet all the following criteria:
+* Your Cosmos account uses SQL API or API for MongoDB with version >= 3.6.
+* Your Cosmos account is using provisioned throughput (manual or autoscale). Merge doesn't apply to serverless accounts.
+* Currently, merge isn't supported for shared throughput databases. You may enroll an account that has both shared throughput databases and containers with dedicated throughput (manual or autoscale).
+  * However, only the containers with dedicated throughput will be able to be merged.
+* Your Cosmos account is a single-write region account (merge isn't currently supported for multi-region write accounts).
+* Your Cosmos account doesn't use any of the following features:
+  * [Point-in-time restore](continuous-backup-restore-introduction.md)
+  * [Customer-managed keys](how-to-setup-cmk.md)
+  * [Analytical store](analytical-store-introduction.md)
+* Your Cosmos account uses bounded staleness, session, consistent prefix, or eventual consistency (merge isn't currently supported for strong consistency).
+* If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When the merge preview is enabled on your account, all requests sent from non-.NET SDKs or older .NET SDK versions won't be accepted.
+  * There are no SDK or driver requirements to use the feature with API for MongoDB.
+* Your Cosmos account doesn't use any currently unsupported connectors:
+  * Azure Data Factory
+  * Azure Stream Analytics
+  * Logic Apps
+  * Azure Functions
+  * Azure Search
 
+### Account resources and configuration
+* Merge is only available for SQL API and API for MongoDB accounts. For API for MongoDB accounts, the MongoDB account version must be 3.6 or greater.
 * Merge is only available for single-region write accounts. Multi-region write account support isn't available.
-* Accounts using merge functionality can't also use these features:
+* Accounts using merge functionality can't also use these features (if these features are added to a merge-enabled account, resources in the account will no longer be able to be merged):
   * [Point-in-time restore](continuous-backup-restore-introduction.md)
   * [Customer-managed keys](how-to-setup-cmk.md)
   * [Analytical store](analytical-store-introduction.md)
 * Containers using merge functionality must have their throughput provisioned at the container level. Database-shared throughput support isn't available.
-* For API for MongoDB accounts, the MongoDB account version must be 3.6 or greater.
+* Merge is only available for accounts using bounded staleness, session, consistent prefix, or eventual consistency. It isn't currently supported for strong consistency.
+* After a container has been merged, it isn't possible to read the change feed with a start time. Support for this feature is planned for the future.
 
 ### SDK requirements (SQL API only)
 
-Accounts with the merge feature enabled are supported only in the latest preview version of the .NET v3 SDK. When the feature is enabled on your account (regardless of whether you run the merge), you must only use the supported SDK using the account. Requests sent from other SDKs or earlier versions won't be accepted. As long as you're using the supported SDK, your application can continue to run while a merge is ongoing.
+Accounts with the merge feature enabled are supported only when you use the latest version of the .NET v3 SDK. When the feature is enabled on your account (regardless of whether you run the merge), you must only use the supported SDK with the account. Requests sent from other SDKs or earlier versions won't be accepted. As long as you're using the supported SDK, your application can continue to run while a merge is ongoing.
 
-Find the latest preview version the supported SDK:
+Find the latest version of the supported SDK:
 
 | SDK | Supported versions | Package manager link |
 | --- | --- | --- |

articles/cosmos-db/sql/distribute-throughput-across-partitions-faq.yml

Lines changed: 17 additions & 5 deletions
@@ -21,11 +21,11 @@ sections:
   - question: |
       What resources can I use this feature on?
     answer: |
-      The feature is only supported for SQL and API for MongoDB accounts and on collections with dedicated throughput (either manual or autoscale). Shared throughput databases aren't supported in the preview.
+      The feature is only supported for SQL and API for MongoDB accounts and on collections with dedicated throughput (either manual or autoscale). Shared throughput databases aren't supported in the preview. The feature doesn't apply to serverless accounts.
   - question: |
-      Which version of the Azure Cosmos DB functionality in Azure PowerShell and Azure CLI supports this feature?
+      Which version of the Azure Cosmos DB functionality in Azure PowerShell supports this feature?
     answer: |
-      The ability to redistribute RU/s across physical partitions is only supported in the latest preview version of Azure PowerShell and Azure CLI.
+      The ability to redistribute RU/s across physical partitions is only supported in the latest preview version of Azure PowerShell.
   - question: |
       What is the maximum number of physical partitions I can change in one request?
     answer: |
@@ -47,7 +47,7 @@ sections:
       |P1: 4000 RU/s | P1: 1000 RU/s | 2/3 |
       |P2: 1000 RU/s | P2: 500 RU/s | 1/6 |
 
-      - If you increase your RU/s without triggering a split - that is, you scale to a total RU/s <= current partition count * 10,000 RU/s - each physical partition will have RU/s = `MIN(current throughput fraction * new RU/s, 10,000 RU/s)`. Consider an example where the resulting sum of all RU/s across all partitions is less than the total new RU/s of the resource. It's recommended to reset your RU/s to an even distribution and redistribute to ensure that all available RU/s are allocated to a partition. To check if this scenario applies to your resource use Azure Monitor metrics. Compare the value of the **ProvisionedThroughput** (when using manual throughput) or **AutoscaleMaxThroughput** (when using autoscale) metric to the value of the **PhysicalPartitionThroughput** metric. If the value of **PhysicalPartitionThroughput** is less than the respective **ProvisionedThroughput** or **AutoscaleMaxThroughput**, then reset your RU/s to an even distribution before redistributing, or lower your resource's throughput to the value of **PhysicalPartitionThroughput**.
+      - If you increase your RU/s, each physical partition will have RU/s = `MIN(current throughput fraction * new RU/s, 10,000 RU/s)`. The RU/s on a physical partition can never exceed 10,000 RU/s.
 
       For example, suppose you have a collection with 6000 RU/s and 3 physical partitions. You scale it up to 12,000 RU/s:
@@ -56,5 +56,17 @@ sections:
       |P0: 1000 RU/s | P0: 2000 RU/s | 1/6 |
       |P1: 4000 RU/s | P1: 8000 RU/s | 2/3 |
       |P2: 1000 RU/s | P2: 2000 RU/s | 1/6 |
+  - question: |
+      Why am I seeing a discrepancy between the overall RU/s on my container and the sum of the RU/s across all physical partitions?
+    answer: |
+      - This discrepancy can happen when, after you scale up your overall RU/s, the value of `(current RU/s per partition * new container RU/s) / (old container RU/s)` is greater than 10,000 RU/s for any single partition. It can occur whether you trigger a partition split by increasing RU/s beyond `currentNumberOfPartitions * 10,000 RU/s`, or increase RU/s without triggering a partition split.
+      - It's recommended to redistribute your throughput equally after the scale-up. Otherwise, it's possible that you won't be able to use all the RU/s you've provisioned (and are being billed for).
+      - To check if this scenario applies to your resource, use Azure Monitor metrics. Compare the value of the **ProvisionedThroughput** (when using manual throughput) or **AutoscaleMaxThroughput** (when using autoscale) metric to the value of the **PhysicalPartitionThroughput** metric. If the value of **PhysicalPartitionThroughput** is less than the respective **ProvisionedThroughput** or **AutoscaleMaxThroughput**, then reset your RU/s to an even distribution before redistributing, or lower your resource's throughput to the value of **PhysicalPartitionThroughput**.
 
-      - If you increase your RU/s [beyond what the current partition layout can serve](../scaling-provisioned-throughput-best-practices.md), you trigger a split. By design, all physical partitions will default to having the same number of RU/s. After partitions split, the logical partitions that contributed to a hot partition may be on a different physical partition. If necessary, you can redistribute your RU/s on the new layout.
+      For example, suppose you have a collection with 6000 RU/s and 3 physical partitions. You scale it up to 24,000 RU/s. After the scale-up, the total throughput across all partitions is only 18,000 RU/s. This means that while you're being billed for 24,000 RU/s, you can only get 18,000 RU/s of effective throughput. If you reset the RU/s to an even distribution, each partition gets 8000 RU/s, and you can redistribute RU/s again as needed; alternatively, you could lower the overall RU/s to 18,000 RU/s.
+
+      |Before scale-up (6000 RU/s) |After scale-up to 24,000 RU/s (effective RU/s = 18,000 RU/s) |Fraction of total RU/s |
+      |---------|---------|---------|
+      |P0: 1000 RU/s | P0: 4000 RU/s | 1/6 |
+      |P1: 4000 RU/s | P1: 10,000 RU/s (a partition can't exceed 10,000 RU/s) | 2/3 |
+      |P2: 1000 RU/s | P2: 4000 RU/s | 1/6 |
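
Editor's aside (an illustrative sketch of the arithmetic above, not part of the diff): applying the per-partition formula `MIN(current throughput fraction * new RU/s, 10,000 RU/s)` reproduces the 4000 / 10,000 / 4000 split and the 18,000 RU/s effective total:

```azurepowershell
# Worked example: 6000 RU/s split P0=1000, P1=4000, P2=1000, scaled to 24,000 RU/s
$oldTotal = 6000; $newTotal = 24000
$partitions = [ordered]@{ P0 = 1000; P1 = 4000; P2 = 1000 }
$effective = 0
foreach ($p in $partitions.GetEnumerator()) {
    # newRUs = MIN(fraction * newTotal, 10000)
    $newRUs = [Math]::Min(($p.Value / $oldTotal) * $newTotal, 10000)
    $effective += $newRUs
    "{0}: {1} RU/s" -f $p.Key, $newRUs
}
"Effective total: $effective RU/s"   # 18000, although 24000 is provisioned
```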
