Commit 6a0b3dc

elasticity feature updates

1 parent 4844339

File tree: 5 files changed (+13, -34 lines)


articles/cosmos-db/autoscale-faq.yml

Lines changed: 2 additions & 2 deletions
@@ -94,11 +94,11 @@ sections:
 **Migration from standard (manual) provisioned throughput to autoscale**

-For a container, use the following formula to estimate the initial autoscale max RU/s: ``MAX(1000, current manual provisioned RU/s, maximum RU/s ever provisioned / 10, storage in GB * 100)``, rounded to the nearest 1000 RU/s. The actual initial autoscale max RU/s may vary depending on your account configuration.
+For a container, use the following formula to estimate the initial autoscale max RU/s: ``MAX(1000, current manual provisioned RU/s, maximum RU/s ever provisioned / 10, storage in GB * 10)``, rounded to the nearest 1000 RU/s. The actual initial autoscale max RU/s may vary depending on your account configuration.

 Example #1: Suppose you have a container with 10,000 RU/s manual provisioned throughput, and 25 GB of storage. When you enable autoscale, the initial autoscale max RU/s will be: 10,000 RU/s, which will scale between 1000 - 10,000 RU/s.

-Example #2: Suppose you have a container with 50,000 RU/s manual provisioned throughput, and 2500 GB of storage. When you enable autoscale, the initial autoscale max RU/s will be: 250,000 RU/s, which will scale between 25,000 - 250,000 RU/s.
+Example #2: Suppose you have a container with 50,000 RU/s manual provisioned throughput, and 25000 GB of storage. When you enable autoscale, the initial autoscale max RU/s will be: 250,000 RU/s, which will scale between 25,000 - 250,000 RU/s.

 **Migration from autoscale to standard (manual) provisioned throughput**
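As a sanity check on the updated formula and both examples, the migration estimate can be sketched in Python (a hypothetical helper; the function name and rounding choice are ours for illustration, not part of any Azure SDK):

```python
def estimate_initial_autoscale_max_rus(manual_rus: int,
                                       highest_rus_ever: int,
                                       storage_gb: float) -> int:
    """Estimate the initial autoscale max RU/s when migrating a container
    from standard (manual) provisioned throughput to autoscale."""
    raw = max(1000, manual_rus, highest_rus_ever / 10, storage_gb * 10)
    # Round to the nearest 1000 RU/s, as the FAQ describes.
    return round(raw / 1000) * 1000

# Example #1: 10,000 RU/s manual, 25 GB of storage
print(estimate_initial_autoscale_max_rus(10_000, 10_000, 25))      # 10000
# Example #2: 50,000 RU/s manual, 25000 GB of storage
print(estimate_initial_autoscale_max_rus(50_000, 50_000, 25_000))  # 250000
```

As in the FAQ text, the actual initial value may differ depending on account configuration; this only reproduces the documented estimate.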

articles/cosmos-db/burst-capacity.md

Lines changed: 3 additions & 8 deletions
@@ -29,16 +29,11 @@ After the 10 seconds is over, the burst capacity has been used up. If the worklo

 ## Getting started

-To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page.
+To get started using burst capacity, navigate to the **Features** page in your Azure Cosmos DB account. Select and enable the **Burst Capacity (preview)** feature.

-:::image type="content" source="media/burst-capacity/burst-capacity-enable-feature.png" alt-text="Screenshot of Burst Capacity feature in Preview Features page in Subscriptions overview in Azure Portal.":::
-
-Before submitting your request:
+Before enabling the feature, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#limitations-preview-eligibility-criteria). Once you've enabled the feature, it will take 15-20 minutes to take effect.

-- Ensure that you have at least one Azure Cosmos DB account in the subscription. This account may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.
-- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#limitations-preview-eligibility-criteria).
-
-The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+:::image type="content" source="media/burst-capacity/burst-capacity-enable-feature.png" alt-text="Screenshot of Burst Capacity feature in Preview Features page in Subscriptions overview in Azure Portal.":::

 To check whether an Azure Cosmos DB account is eligible for the preview, you can use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page in the Azure portal, navigate to **Diagnose and solve problems** -> **Throughput and Scaling** -> **Burst Capacity**. Run the **Check eligibility for burst capacity preview** diagnostic.

articles/cosmos-db/concepts-limits.md

Lines changed: 7 additions & 15 deletions
@@ -33,14 +33,12 @@ You can provision throughput at a container-level or a database-level in terms o
 | Maximum number of distinct (logical) partition keys | Unlimited |
 | Maximum storage per container | Unlimited |
 | Maximum attachment size per Account (Attachment feature is being deprecated) | 2 GB |
-| Minimum RU/s required per 1 GB | 10 RU/s <sup>3</sup> |
+| Minimum RU/s required per 1 GB | 1 RU/s |

 <sup>1</sup> You can increase Maximum RUs per container or database by [filing an Azure support ticket](create-support-request-quota-increase.md).

 <sup>2</sup> To learn about best practices for managing workloads that have partition keys requiring higher limits for storage or throughput, see [Create a synthetic partition key](synthetic-partition-keys.md). If your workload has already reached the logical partition limit of 20 GB in production, it's recommended to rearchitect your application with a different partition key as a long-term solution. To help give time to rearchitect your application, you can request a temporary increase in the logical partition key limit for your existing application. [File an Azure support ticket](create-support-request-quota-increase.md) and select quota type **Temporary increase in container's logical partition key size**. Requesting a temporary increase is intended as a temporary mitigation and not recommended as a long-term solution, as **SLA guarantees are not honored when the limit is increased**. To remove the configuration, file a support ticket and select quota type **Restore container’s logical partition key size to default (20 GB)**. Filing this support ticket can be done after you have either deleted data to fit the 20-GB logical partition limit or have rearchitected your application with a different partition key.

-<sup>3</sup> Minimum can be lowered if your account is eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program)
-
 ### Minimum throughput limits

 An Azure Cosmos DB container (or shared throughput database) using manual throughput must have a minimum throughput of 400 RU/s. As the container grows, Azure Cosmos DB requires a minimum throughput to ensure the database or container has sufficient resource for its operations.
@@ -54,26 +52,20 @@ The actual minimum RU/s may vary depending on your account configuration. You ca
 To estimate the minimum throughput required of a container with manual throughput, find the maximum of:

 * 400 RU/s
-* Current storage in GB * 10 RU/s
+* Current storage in GB * 1 RU/s
 * Highest RU/s ever provisioned on the container / 100

-For example, you have a container provisioned with 400 RU/s and 0-GB storage. You increase the throughput to 50,000 RU/s and import 20 GB of data. The minimum RU/s is now `MAX(400, 20 * 10 RU/s per GB, 50,000 RU/s / 100)` = 500 RU/s. Over time, the storage grows to 200 GB. The minimum RU/s is now `MAX(400, 200 * 10 RU/s per GB, 50,000 / 100)` = 2000 RU/s.
-
-> [!NOTE]
-> The minimum throughput of 10 RU/s per GB of storage can be lowered if your account is eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program).
+For example, you have a container provisioned with 400 RU/s and 0-GB storage. You increase the throughput to 50,000 RU/s and import 20 GB of data. The minimum RU/s is now `MAX(400, 20 * 1 RU/s per GB, 50,000 RU/s / 100)` = 500 RU/s. Over time, the storage grows to 2000 GB. The minimum RU/s is now `MAX(400, 2000 * 1 RU/s per GB, 50,000 / 100)` = 2000 RU/s.
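The updated per-container formula in this example can be sketched as a small Python helper (illustrative only; the function name is ours, not an SDK API):

```python
def min_manual_rus_container(storage_gb: float, highest_rus_ever: int) -> float:
    """Estimate the minimum manual RU/s for a container,
    using the post-change 1 RU/s per GB constraint."""
    return max(400, storage_gb * 1, highest_rus_ever / 100)

# 20 GB of data after raising throughput to 50,000 RU/s:
print(min_manual_rus_container(20, 50_000))    # 500.0
# Storage later grows to 2000 GB:
print(min_manual_rus_container(2000, 50_000))  # 2000
```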

 #### Minimum throughput on shared throughput database
 To estimate the minimum throughput required of a shared throughput database with manual throughput, find the maximum of:

 * 400 RU/s
-* Current storage in GB * 10 RU/s
+* Current storage in GB * 1 RU/s
 * Highest RU/s ever provisioned on the database / 100
 * 400 + MAX(Container count - 25, 0) * 100 RU/s

-For example, you have a database provisioned with 400 RU/s, 15 GB of storage, and 10 containers. The minimum RU/s is `MAX(400, 15 * 10 RU/s per GB, 400 / 100, 400 + 0 )` = 400 RU/s. If there were 30 containers in the database, the minimum RU/s would be `400 + MAX(30 - 25, 0) * 100 RU/s` = 900 RU/s.
-
-> [!NOTE]
-> The minimum throughput of 10 RU/s per GB of storage can be lowered if your account is eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program).
+For example, you have a database provisioned with 400 RU/s, 15 GB of storage, and 10 containers. The minimum RU/s is `MAX(400, 15 * 1 RU/s per GB, 400 / 100, 400 + 0 )` = 400 RU/s. If there were 30 containers in the database, the minimum RU/s would be `400 + MAX(30 - 25, 0) * 100 RU/s` = 900 RU/s.
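The shared-database variant adds the container-count term. A minimal sketch, assuming the same 1 RU/s per GB constraint (function name illustrative):

```python
def min_manual_rus_shared_db(storage_gb: float,
                             highest_rus_ever: int,
                             container_count: int) -> float:
    """Estimate the minimum manual RU/s for a shared throughput database."""
    return max(400,
               storage_gb * 1,
               highest_rus_ever / 100,
               400 + max(container_count - 25, 0) * 100)

# 400 RU/s provisioned, 15 GB, 10 containers:
print(min_manual_rus_shared_db(15, 400, 10))  # 400
# Same database with 30 containers (5 over the 25-container allowance):
print(min_manual_rus_shared_db(15, 400, 30))  # 900
```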
 In summary, here are the minimum provisioned RU limits when using manual throughput.

@@ -219,8 +211,8 @@ See the [Autoscale](provision-throughput-autoscale.md#autoscale-limits) article
 | Minimum RU/s the system can scale to | `0.1 * Tmax`|
 | Current RU/s the system is scaled to | `0.1*Tmax <= T <= Tmax`, based on usage|
 | Minimum billable RU/s per hour| `0.1 * Tmax` <br></br>Billing is done on a per-hour basis, where you're billed for the highest RU/s the system scaled to in the hour, or `0.1*Tmax`, whichever is higher. |
-| Minimum autoscale max RU/s for a container | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100)` rounded to nearest 1000 RU/s |
-| Minimum autoscale max RU/s for a database | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100, 1000 + (MAX(Container count - 25, 0) * 1000))`, rounded to nearest 1000 RU/s. <br></br>Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per extra container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 6000 RU/s (scales between 600 - 6000 RU/s).
+| Minimum autoscale max RU/s for a container | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 10)` rounded to nearest 1000 RU/s |
+| Minimum autoscale max RU/s for a database | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 10, 1000 + (MAX(Container count - 25, 0) * 1000))`, rounded to nearest 1000 RU/s. <br></br>Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per extra container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 6000 RU/s (scales between 600 - 6000 RU/s).
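The two updated table formulas can be sketched in Python (hypothetical helpers; names and the rounding choice are ours for illustration):

```python
def min_autoscale_max_rus_container(highest_max_rus_ever: int,
                                    storage_gb: float) -> int:
    """Lowest settable autoscale max RU/s for a container (post-change: GB * 10)."""
    raw = max(1000, highest_max_rus_ever / 10, storage_gb * 10)
    return round(raw / 1000) * 1000  # rounded to nearest 1000 RU/s

def min_autoscale_max_rus_database(highest_max_rus_ever: int,
                                   storage_gb: float,
                                   container_count: int) -> int:
    """Lowest settable autoscale max RU/s for a shared throughput database."""
    raw = max(1000,
              highest_max_rus_ever / 10,
              storage_gb * 10,
              1000 + max(container_count - 25, 0) * 1000)
    return round(raw / 1000) * 1000

# 20,000 RU/s ever provisioned, 260 GB: storage term (2600) dominates
print(min_autoscale_max_rus_container(20_000, 260))  # 3000
# Small database with 30 containers: container term (6000) dominates
print(min_autoscale_max_rus_database(1000, 1, 30))   # 6000
```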

 ## SQL query limits

(binary image file changed, 66.7 KB)

articles/cosmos-db/set-throughput.md

Lines changed: 1 addition & 9 deletions
@@ -106,7 +106,7 @@ The response of those methods also contains the [minimum provisioned throughput]
 The actual minimum RU/s may vary depending on your account configuration. But generally it's the maximum of:

 * 400 RU/s
-* Current storage in GB * 10 RU/s (this constraint can be relaxed in some cases, see our [high storage / low throughput program](#high-storage-low-throughput-program))
+* Current storage in GB * 1 RU/s
 * Highest RU/s ever provisioned on the database or container / 100

 ### Changing the provisioned throughput
@@ -132,14 +132,6 @@ You can programmatically check the scaling progress by reading the [current prov

 You can use [Azure Monitor metrics](monitor.md#view-operation-level-metrics-for-azure-cosmos-db) to view the history of provisioned throughput (RU/s) and storage on a resource.

-## <a id="high-storage-low-throughput-program"></a> High storage / low throughput program
-
-As described in the [Current provisioned throughput](#current-provisioned-throughput) section above, the minimum throughput you can provision on a container or database depends on a number of factors. One of them is the amount of data currently stored, as Azure Cosmos DB enforces a minimum throughput of 10 RU/s per GB of storage.
-
-This can be a concern in situations where you need to store large amounts of data, but have low throughput requirements in comparison. To better accommodate these scenarios, Azure Cosmos DB has introduced a **"high storage / low throughput" program** that decreases the RU/s per GB constraint on eligible accounts.
-
-To join this program and assess your full eligibility, all you have to do is to fill [this survey](https://aka.ms/cosmosdb-high-storage-low-throughput-program). The Azure Cosmos DB team will then follow up and proceed with your onboarding.
-
 ## Comparison of models
 This table shows a comparison between provisioning standard (manual) throughput on a database vs. on a container.
