
Commit d78a35f

Merge pull request #203001 from deborahc/dech-elasticity-docs
Adding eligibility checker content for elasticity features
2 parents ade03fa + 8397cd7 commit d78a35f

9 files changed: +79 −32 lines

articles/cosmos-db/burst-capacity.md

Lines changed: 27 additions & 11 deletions
@@ -20,31 +20,44 @@ Burst capacity applies only to Azure Cosmos DB accounts using provisioned throug
## How burst capacity works

> [!NOTE]
- > The current implementation of burst capacity is subject to change in the future. Usage of burst capacity is subject to system resource availability and is not guaranteed. Azure Cosmos DB may also use burst capacity for background maintenance tasks. If your workload requires consistent throughput beyond what you have provisioned, it's recommended to provision your RU/s accordingly without relying on burst capacity.
+ > The current implementation of burst capacity is subject to change in the future. Usage of burst capacity is subject to system resource availability and is not guaranteed. Azure Cosmos DB may also use burst capacity for background maintenance tasks. If your workload requires consistent throughput beyond what you have provisioned, it's recommended to provision your RU/s accordingly without relying on burst capacity. Before enabling burst capacity, it's also recommended to evaluate whether your partition layout can be [merged](merge.md) to permanently give each physical partition more RU/s.
Let's take an example of a physical partition that has 100 RU/s of provisioned throughput and is idle for 5 minutes. With burst capacity, it can accumulate a maximum of 100 RU/s * 300 seconds = 30,000 RU of burst capacity. The capacity can be consumed at a maximum rate of 3000 RU/s, so if there's a sudden spike in request volume, the partition can burst up to 3000 RU/s for up to 30,000 RU / 3000 RU/s = 10 seconds. Without burst capacity, any requests consumed beyond the provisioned 100 RU/s would have been rate limited (429).

After the 10 seconds are over, the burst capacity has been used up. If the workload continues to exceed the provisioned 100 RU/s, any requests consumed beyond the provisioned 100 RU/s would now be rate limited (429). The maximum amount of burst capacity a physical partition can accumulate at any point in time is equal to 300 seconds * the provisioned RU/s of the physical partition.
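To make the arithmetic above easier to reuse, here's a minimal sketch in plain Python (not from the article or any Azure SDK) of the accumulation and drain math, using the 300-second window and 3000 RU/s maximum burst rate described above.

```python
# Minimal sketch of the burst capacity arithmetic described above (illustrative only).

ACCUMULATION_WINDOW_SECONDS = 300  # idle capacity accrues for at most 300 seconds
MAX_BURST_RATE_RU_PER_SEC = 3000   # maximum rate at which banked capacity can be consumed


def accumulated_burst_ru(provisioned_ru_per_sec: float, idle_seconds: float) -> float:
    """RU a physical partition can bank while idle, capped at 300 seconds' worth."""
    return provisioned_ru_per_sec * min(idle_seconds, ACCUMULATION_WINDOW_SECONDS)


def max_burst_duration_seconds(banked_ru: float) -> float:
    """How long the banked RU lasts when drained at the maximum burst rate."""
    return banked_ru / MAX_BURST_RATE_RU_PER_SEC


# The example from the article: a 100 RU/s partition idle for 5 minutes.
banked = accumulated_burst_ru(provisioned_ru_per_sec=100, idle_seconds=5 * 60)
print(banked)                              # 30000.0 RU
print(max_burst_duration_seconds(banked))  # 10.0 seconds at 3000 RU/s
```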

## Getting started

- To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
- - Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
- - The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+ To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button on the eligibility check page to open the **Preview Features** page.
+
+ Before submitting your request:
+ - Ensure that you have at least one Azure Cosmos DB account in the subscription. This can be an existing account or a new one you've created to try out the preview feature. If there are no accounts in the subscription when the Azure Cosmos DB team receives your request, the request will be declined, because there are no accounts to apply the feature to.
+ - Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+
+ The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+
+ To check whether an Azure Cosmos DB account is eligible for the preview, use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page, navigate to **Diagnose and solve problems** > **Throughput and Scaling** > **Burst Capacity**, and run the **Check eligibility for burst capacity preview** diagnostic.
+
+ :::image type="content" source="media/burst-capacity/throughput-and-scaling-category.png" alt-text="Screenshot of the Throughput and Scaling topic in the Diagnose and solve problems page.":::
+
+ :::image type="content" source="media/burst-capacity/burst-capacity-eligibility-check.png" alt-text="Screenshot of the burst capacity eligibility check with a table of all preview eligibility criteria.":::

## Limitations

### Preview eligibility criteria
To enroll in the preview, your Cosmos account must meet all the following criteria (a rough self-check sketch follows the list):
- Your Cosmos account is using provisioned throughput (manual or autoscale). Burst capacity doesn't apply to serverless accounts.
- If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When burst capacity is enabled on your account, all requests sent from non-.NET SDKs or older .NET SDK versions won't be accepted.
- - There are no SDK or driver requirements to use the feature with Cassandra API, Gremlin API, Table API, or API for MongoDB.
+ - There are no SDK or driver requirements to use the feature with Cassandra API, Gremlin API, or API for MongoDB.
- Your Cosmos account isn't using any unsupported connectors:
- Azure Data Factory
- Azure Stream Analytics
- Logic Apps
- Azure Functions
- Azure Search
+ - Azure Cosmos DB Spark connector
+ - Azure Cosmos DB data migration tool
+ - Any third-party library or tool that has a dependency on an Azure Cosmos DB SDK other than the .NET V3 SDK, version 3.27.0 or higher
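The sketch below (plain Python, illustrative only) encodes the criteria above as a rough self-check; the function name and inputs are assumptions made for illustration, and the portal's eligibility diagnostic remains the authoritative check.

```python
# Rough self-check mirroring the burst capacity preview eligibility criteria above.
# Illustrative only; use the portal's "Check eligibility for burst capacity preview"
# diagnostic for the authoritative answer.

UNSUPPORTED_CONNECTORS = {
    "Azure Data Factory",
    "Azure Stream Analytics",
    "Logic Apps",
    "Azure Functions",
    "Azure Search",
    "Azure Cosmos DB Spark connector",
    "Azure Cosmos DB data migration tool",
}


def looks_eligible_for_burst_capacity(uses_provisioned_throughput, api, dotnet_v3_sdk_version, connectors_in_use):
    """api: 'SQL', 'MongoDB', 'Cassandra', 'Gremlin', or 'Table'; dotnet_v3_sdk_version: e.g. '3.27.0' or None."""
    if not uses_provisioned_throughput:  # serverless accounts aren't eligible
        return False
    if set(connectors_in_use) & UNSUPPORTED_CONNECTORS:
        return False
    if api == "SQL":  # SQL API requires the .NET V3 SDK, version 3.27.0 or higher
        if not dotnet_v3_sdk_version:
            return False
        return tuple(int(x) for x in dotnet_v3_sdk_version.split(".")) >= (3, 27, 0)
    if api == "Table":  # Table API has its own SDK requirement (see the SDK requirements section); not checked here
        return False
    return True  # Cassandra, Gremlin, and MongoDB have no SDK or driver requirement


print(looks_eligible_for_burst_capacity(True, "SQL", "3.27.0", []))                   # True
print(looks_eligible_for_burst_capacity(True, "SQL", "3.26.1", []))                   # False
print(looks_eligible_for_burst_capacity(True, "MongoDB", None, ["Azure Functions"]))  # False
```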

### SDK requirements (SQL and Table API only)
#### SQL API
@@ -75,13 +88,16 @@ For Table API accounts, burst capacity is supported only when using the latest v

If you enroll in the preview, the following connectors will fail.

- * Azure Data Factory
- * Azure Stream Analytics
- * Logic Apps
- * Azure Functions
- * Azure Search
+ * Azure Data Factory<sup>1</sup>
+ * Azure Stream Analytics<sup>1</sup>
+ * Logic Apps<sup>1</sup>
+ * Azure Functions<sup>1</sup>
+ * Azure Search<sup>1</sup>
+ * Azure Cosmos DB Spark connector<sup>1</sup>
+ * Azure Cosmos DB data migration tool
+ * Any third-party library or tool that has a dependency on an Azure Cosmos DB SDK other than the .NET V3 SDK, version 3.27.0 or higher

- Support for these connectors is planned for the future.
+ <sup>1</sup> Support for these connectors is planned for the future.

## Next steps

Six image files (screenshots for the new eligibility checks) are also part of this commit; previews omitted.

articles/cosmos-db/merge.md

Lines changed: 26 additions & 10 deletions
@@ -13,13 +13,23 @@ ms.date: 05/09/2022
# Merge partitions in Azure Cosmos DB (preview)
[!INCLUDE[appliesto-sql-mongodb-api](includes/appliesto-sql-mongodb-api.md)]

- Merging partitions in Azure Cosmos DB (preview) allows you to reduce the number of physical partitions used for your container. With merge, containers that are fragmented in throughput (have low RU/s per partition) or storage (have low storage per partition) can have their physical partitions reworked. If a container's throughput has been scaled up and needs to be scaled back down, merge can help resolve throughput fragmentation issues. For the same amount of provisioned RU/s, having fewer physical partitions means each physical partition gets more of the overall RU/s. Minimizing partitions reduces the chance of rate limiting if a large quantity of data is removed from a container. Merge can help clear out unused or empty partitions, effectively resolving storage fragmentation problems.
+ Merging partitions in Azure Cosmos DB (preview) allows you to reduce, in place, the number of physical partitions used for your container. With merge, containers that are fragmented in throughput (have low RU/s per partition) or storage (have low storage per partition) can have their physical partitions reworked. If a container's throughput has been scaled up and needs to be scaled back down, merge can help resolve throughput fragmentation issues. For the same amount of provisioned RU/s, having fewer physical partitions means each physical partition gets more of the overall RU/s. Minimizing partitions reduces the chance of rate limiting if a large quantity of data is removed from a container and RU/s per partition is low. Merge can help clear out unused or empty partitions, effectively resolving storage fragmentation problems.
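To make the "fewer partitions means more RU/s per partition" point concrete, here's a small sketch with purely hypothetical numbers (not from this PR):

```python
# Illustrative only: RU/s per physical partition for the same total provisioned throughput.
total_provisioned_ru = 6000     # hypothetical container-level RU/s

partitions_before_merge = 6     # fragmented layout: low RU/s per partition
partitions_after_merge = 2      # after merge: same total RU/s spread over fewer partitions

print(total_provisioned_ru / partitions_before_merge)  # 1000.0 RU/s per physical partition
print(total_provisioned_ru / partitions_after_merge)   # 3000.0 RU/s per physical partition
```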

## Getting started

- To get started using merge, enroll in the preview by submitting a request for the **Azure Cosmos DB Partition Merge** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
- - Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
- - The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+ To get started using partition merge, enroll in the preview by submitting a request for the **Azure Cosmos DB Partition Merge** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button on the eligibility check page to open the **Preview Features** page.
+
+ Before submitting your request:
+ - Ensure that you have at least one Azure Cosmos DB account in the subscription. This can be an existing account or a new one you've created to try out the preview feature. If there are no accounts in the subscription when the Azure Cosmos DB team receives your request, the request will be declined, because there are no accounts to apply the feature to.
+ - Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+
+ The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+
+ To check whether an Azure Cosmos DB account is eligible for the preview, use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page, navigate to **Diagnose and solve problems** > **Throughput and Scaling** > **Partition Merge**, and run the **Check eligibility for partition merge preview** diagnostic.
+
+ :::image type="content" source="media/merge/throughput-and-scaling-category.png" alt-text="Screenshot of the Throughput and Scaling topic in the Diagnose and solve problems page.":::
+
+ :::image type="content" source="media/merge/merge-eligibility-check.png" alt-text="Screenshot of the partition merge eligibility check with a table of all preview eligibility criteria.":::

### Merging physical partitions

@@ -102,6 +112,9 @@ To enroll in the preview, your Cosmos account must meet all the following criter
* Logic Apps
* Azure Functions
* Azure Search
+ * Azure Cosmos DB Spark connector
+ * Azure Cosmos DB data migration tool
+ * Any third-party library or tool that has a dependency on an Azure Cosmos DB SDK other than the .NET V3 SDK, version 3.27.0 or higher

### Account resources and configuration
* Merge is only available for SQL API and API for MongoDB accounts. For API for MongoDB accounts, the MongoDB account version must be 3.6 or greater.
@@ -133,13 +146,16 @@ Support for other SDKs is planned for the future.

If you enroll in the preview, the following connectors will fail.

- * Azure Data Factory
- * Azure Stream Analytics
- * Logic Apps
- * Azure Functions
- * Azure Search
+ * Azure Data Factory<sup>1</sup>
+ * Azure Stream Analytics<sup>1</sup>
+ * Logic Apps<sup>1</sup>
+ * Azure Functions<sup>1</sup>
+ * Azure Search<sup>1</sup>
+ * Azure Cosmos DB Spark connector<sup>1</sup>
+ * Azure Cosmos DB data migration tool
+ * Any third-party library or tool that has a dependency on an Azure Cosmos DB SDK other than the .NET V3 SDK, version 3.27.0 or higher

- Support for these connectors is planned for the future.
+ <sup>1</sup> Support for these connectors is planned for the future.

## Next steps

articles/cosmos-db/sql/distribute-throughput-across-partitions.md

Lines changed: 26 additions & 11 deletions
@@ -24,14 +24,23 @@ In general, usage of this feature is recommended for scenarios when both the fol
- You're consistently seeing greater than 1-5% overall rate of 429 responses
- You have a consistent, predictable hot partition

- If you aren't seeing 429 responses and your end to end latency is acceptable, then no action to reconfigure RU/s per partition is required. If you have a workload that has consistent traffic with occasional unpredictable spikes across *all your partitions*, it's recommended to use [autoscale](../provision-throughput-autoscale.md) and [burst capacity (preview)](../burst-capacity.md). Autoscale and burst capacity will ensure you can meet your throughput requirements.
+ If you aren't seeing 429 responses and your end-to-end latency is acceptable, then no action to reconfigure RU/s per partition is required. If you have a workload that has consistent traffic with occasional unpredictable spikes across *all your partitions*, it's recommended to use [autoscale](../provision-throughput-autoscale.md) and [burst capacity (preview)](../burst-capacity.md). Autoscale and burst capacity will ensure you can meet your throughput requirements. If you have a small amount of RU/s per partition, you can also use [partition merge (preview)](../merge.md) to reduce the number of partitions and ensure more RU/s per partition for the same total provisioned throughput.
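As a rough way to evaluate the first bullet above (a sustained overall 429 rate above roughly 1-5%), here's a small sketch; the numbers and the helper name are illustrative assumptions, not from this PR.

```python
# Illustrative check of the "greater than 1-5% overall rate of 429 responses" guideline.

def rate_limited_fraction(total_requests, throttled_429_responses):
    """Fraction of requests that were rate limited (429)."""
    return throttled_429_responses / total_requests if total_requests else 0.0


# Hypothetical totals taken from your own monitoring (for example, Azure Monitor metrics).
fraction = rate_limited_fraction(total_requests=1_000_000, throttled_429_responses=35_000)
print(f"{fraction:.1%}")  # 3.5%
print(fraction > 0.01)    # True -> inside the 1-5% band where redistributing RU/s may be worth evaluating
```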

## Getting started

- To get started using distributed throughput across partitions, enroll in the preview by submitting a request for the **Azure Cosmos DB Throughput Redistribution Across Partitions** feature via the [**Preview Features** page](../../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
- - Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
- - The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+ To get started using throughput redistribution across partitions, enroll in the preview by submitting a request for the **Azure Cosmos DB Throughput Redistribution Across Partitions** feature via the [**Preview Features** page](../../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button on the eligibility check page to open the **Preview Features** page.

+ Before submitting your request:
+ - Ensure that you have at least one Azure Cosmos DB account in the subscription. This can be an existing account or a new one you've created to try out the preview feature. If there are no accounts in the subscription when the Azure Cosmos DB team receives your request, the request will be declined, because there are no accounts to apply the feature to.
+ - Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+
+ The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+
+ To check whether an Azure Cosmos DB account is eligible for the preview, use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page, navigate to **Diagnose and solve problems** > **Throughput and Scaling** > **Throughput redistribution across partition**, and run the **Check eligibility for throughput redistribution across partitions preview** diagnostic.
+
+ :::image type="content" source="../media/distribute-throughput-across-partitions/throughput-and-scaling-category.png" alt-text="Screenshot of the Throughput and Scaling topic in the Diagnose and solve problems page.":::
+
+ :::image type="content" source="../media/distribute-throughput-across-partitions/throughput-redistribution-across-partitions-eligibility-check.png" alt-text="Screenshot of the throughput redistribution across partitions eligibility check with a table of all preview eligibility criteria.":::

## Example scenario

@@ -229,7 +238,10 @@ To enroll in the preview, your Cosmos account must meet all the following criter
- Logic Apps
- Azure Functions
- Azure Search
+ - Azure Cosmos DB Spark connector
+ - Azure Cosmos DB data migration tool
+ - Any third-party library or tool that has a dependency on an Azure Cosmos DB SDK other than the .NET V3 SDK, version 3.27.0 or higher
+
### SDK requirements (SQL API only)

Throughput redistribution across partitions is supported only with the latest version of the .NET v3 SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. There are no driver or SDK requirements to use this feature for API for MongoDB accounts.
@@ -249,13 +261,16 @@ Support for other SDKs is planned for the future.

If you enroll in the preview, the following connectors will fail.

- * Azure Data Factory
- * Azure Stream Analytics
- * Logic Apps
- * Azure Functions
- * Azure Search
+ * Azure Data Factory<sup>1</sup>
+ * Azure Stream Analytics<sup>1</sup>
+ * Logic Apps<sup>1</sup>
+ * Azure Functions<sup>1</sup>
+ * Azure Search<sup>1</sup>
+ * Azure Cosmos DB Spark connector<sup>1</sup>
+ * Azure Cosmos DB data migration tool
+ * Any third-party library or tool that has a dependency on an Azure Cosmos DB SDK other than the .NET V3 SDK, version 3.27.0 or higher

- Support for these connectors is planned for the future.
+ <sup>1</sup> Support for these connectors is planned for the future.

## Next steps
