
Commit 3a03457

Merge pull request #227554 from richagaur/richagaur/elasticity-feature-updates
Richagaur/elasticity feature updates
2 parents: 04f041c + 3927712

6 files changed: 49 additions & 72 deletions

articles/cosmos-db/autoscale-faq.yml

Lines changed: 30 additions & 30 deletions
Large diffs are not rendered by default.

articles/cosmos-db/burst-capacity.md

Lines changed: 3 additions & 14 deletions
@@ -29,22 +29,11 @@ After the 10 seconds is over, the burst capacity has been used up. If the worklo
 
 ## Getting started
 
-To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page.
+To get started using burst capacity, navigate to the **Features** page in your Azure Cosmos DB account. Select and enable the **Burst Capacity (preview)** feature.
 
-:::image type="content" source="media/burst-capacity/burst-capacity-enable-feature.png" alt-text="Screenshot of Burst Capacity feature in Preview Features page in Subscriptions overview in Azure Portal.":::
+Before enabling the feature, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#limitations-preview-eligibility-criteria). Once you've enabled the feature, it will take 15-20 minutes to take effect.
 
-Before submitting your request:
-
-- Ensure that you have at least one Azure Cosmos DB account in the subscription. This account may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.
-- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#limitations-preview-eligibility-criteria).
-
-The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
-
-To check whether an Azure Cosmos DB account is eligible for the preview, you can use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page in the Azure portal, navigate to **Diagnose and solve problems** -> **Throughput and Scaling** -> **Burst Capacity**. Run the **Check eligibility for burst capacity preview** diagnostic.
-
-:::image type="content" source="media/burst-capacity/throughput-and-scaling-category.png" alt-text="Throughput and Scaling topic in Diagnose and solve issues page":::
-
-:::image type="content" source="media/burst-capacity/burst-capacity-eligibility-check.png" alt-text="Burst capacity eligibility check with table of all preview eligibility criteria":::
+:::image type="content" source="media/burst-capacity/burst-capacity-enable-feature.png" alt-text="Screenshot of Burst Capacity feature in Preview Features page in Subscriptions overview in Azure portal.":::
 
 ## Limitations (preview eligibility criteria)

articles/cosmos-db/concepts-limits.md

Lines changed: 10 additions & 18 deletions
@@ -34,14 +34,12 @@ You can allocate throughput at a container-level or a database-level in terms of
 | Maximum number of distinct (logical) partition keys | Unlimited |
 | Maximum storage per container | Unlimited |
 | Maximum attachment size per Account (Attachment feature is being deprecated) | 2 GB |
-| Minimum RU/s required per 1 GB | 10 RU/s ³ |
+| Minimum RU/s required per 1 GB | 1 RU/s |
 
 ¹ You can increase Maximum RUs per container or database by [filing an Azure support ticket](create-support-request-quota-increase.md).
 
 ² To learn about best practices for managing workloads that have partition keys requiring higher limits for storage or throughput, see [Create a synthetic partition key](synthetic-partition-keys.md). If your workload has already reached the logical partition limit of 20 GB in production, it's recommended to rearchitect your application with a different partition key as a long-term solution. To help give time to rearchitect your application, you can request a temporary increase in the logical partition key limit for your existing application. [File an Azure support ticket](create-support-request-quota-increase.md) and select quota type **Temporary increase in container's logical partition key size**. Requesting a temporary increase is intended as a temporary mitigation and not recommended as a long-term solution, as **SLA guarantees are not honored when the limit is increased**. To remove the configuration, file a support ticket and select quota type **Restore container’s logical partition key size to default (20 GB)**. Filing this support ticket can be done after you have either deleted data to fit the 20-GB logical partition limit or have rearchitected your application with a different partition key.
 
-³ Minimum can be lowered if your account is eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program)
-
 ### Minimum throughput limits
 
 An Azure Cosmos DB container (or shared throughput database) using manual throughput must have a minimum throughput of 400 RU/s. As the container grows, Azure Cosmos DB requires a minimum throughput to ensure the resource (database or container) has sufficient resource for its operations.
@@ -54,28 +52,22 @@ The actual minimum RU/s may vary depending on your account configuration. You ca
 
 To estimate the minimum throughput required of a container with manual throughput, find the maximum of:
 
-* 400 RU/s
-* Current storage in GB * 10 RU/s
+* 400 RU/s
+* Current storage in GB * 1 RU/s
 * Highest RU/s ever provisioned on the container / 100
 
-For example, you have a container provisioned with 400 RU/s and 0-GB storage. You increase the throughput to 50,000 RU/s and import 20 GB of data. The minimum RU/s is now `MAX(400, 20 * 10 RU/s per GB, 50,000 RU/s / 100)` = 500 RU/s. Over time, the storage grows to 200 GB. The minimum RU/s is now `MAX(400, 200 * 10 RU/s per GB, 50,000 / 100)` = 2000 RU/s.
-
-> [!NOTE]
-> The minimum throughput of 10 RU/s per GB of storage can be lowered if your account is eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program).
+For example, you have a container provisioned with 400 RU/s and 0-GB storage. You increase the throughput to 50,000 RU/s and import 20 GB of data. The minimum RU/s is now `MAX(400, 20 * 1 RU/s per GB, 50,000 RU/s / 100)` = 500 RU/s. Over time, the storage grows to 2000 GB. The minimum RU/s is now `MAX(400, 2000 * 1 RU/s per GB, 50,000 / 100)` = 2000 RU/s.
 
 #### Minimum throughput on shared throughput database
 
 To estimate the minimum throughput required of a shared throughput database with manual throughput, find the maximum of:
 
-* 400 RU/s
-* Current storage in GB * 10 RU/s
+* 400 RU/s
+* Current storage in GB * 1 RU/s
 * Highest RU/s ever provisioned on the database / 100
 * 400 + MAX(Container count - 25, 0) * 100 RU/s
 
-For example, you have a database provisioned with 400 RU/s, 15 GB of storage, and 10 containers. The minimum RU/s is `MAX(400, 15 * 10 RU/s per GB, 400 / 100, 400 + 0 )` = 400 RU/s. If there were 30 containers in the database, the minimum RU/s would be `400 + MAX(30 - 25, 0) * 100 RU/s` = 900 RU/s.
-
-> [!NOTE]
-> The minimum throughput of 10 RU/s per GB of storage can be lowered if your account is eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program).
+For example, you have a database provisioned with 400 RU/s, 15 GB of storage, and 10 containers. The minimum RU/s is `MAX(400, 15 * 1 RU/s per GB, 400 / 100, 400 + 0 )` = 400 RU/s. If there were 30 containers in the database, the minimum RU/s would be `400 + MAX(30 - 25, 0) * 100 RU/s` = 900 RU/s.
 
 In summary, here are the minimum provisioned RU limits when using manual throughput.
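To make the updated 1 RU/s-per-GB rule concrete, here is a small illustrative Python sketch (mine, not part of the commit) that mirrors the two bullet lists above and reproduces the article's worked examples:

```python
def min_manual_rus_container(storage_gb: float, highest_rus_ever: float) -> float:
    """Estimate the minimum manual RU/s for a container (1 RU/s per GB rule)."""
    return max(400, storage_gb * 1, highest_rus_ever / 100)

def min_manual_rus_shared_db(storage_gb: float, highest_rus_ever: float,
                             container_count: int) -> float:
    """Estimate the minimum manual RU/s for a shared throughput database."""
    return max(400,
               storage_gb * 1,
               highest_rus_ever / 100,
               400 + max(container_count - 25, 0) * 100)

# Worked examples from the updated text:
assert min_manual_rus_container(20, 50_000) == 500      # driven by 50,000 / 100
assert min_manual_rus_container(2000, 50_000) == 2000   # driven by 2000 GB * 1 RU/s
assert min_manual_rus_shared_db(15, 400, 10) == 400
assert min_manual_rus_shared_db(15, 400, 30) == 900     # 400 + 5 * 100
```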

@@ -161,7 +153,7 @@ An Azure Cosmos DB item can represent either a document in a collection, a row i
 | Maximum size of an item | 2 MB (UTF-8 length of JSON representation) ¹ |
 | Maximum length of partition key value | 2048 bytes (101 bytes if large partition-key isn't enabled) |
 | Maximum length of ID value | 1023 bytes |
-| Allowed characters for ID value | Service-side all Unicode characters except for '/' and '\\' are allowed. <br/>**WARNING: But for best interoperability we STRONGLY RECOMMEND to only use alpha-numerical ASCII characters in the ID value only**. <br/>There are known limitations in some versions of the Cosmos DB SDK, connectors (ADF, Spark, Kafka etc.), and http-drivers/libraries etc. These limitations can prevent successful processing when the ID value contains non-alphanumerical ASCII characters. So, to increase interoperability, encode the ID value - [for example via Base64 + custom encoding of special characters allowed in Base64](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/78fc16c35c521b4f9a7aeef11db4df79c2545dee/Microsoft.Azure.Cosmos.Encryption/src/EncryptionProcessor.cs#L475-L489). - if you have to support non-alphanumerical ASCII characters in your service/application. |
+| Allowed characters for ID value | Service-side all Unicode characters except for '/' and '\\' are allowed. <br/>**WARNING: But for best interoperability we STRONGLY RECOMMEND to only use alpha-numerical ASCII characters in the ID value only**. <br/>There are several known limitations in some versions of the Cosmos DB SDK, as well as connectors (ADF, Spark, Kafka etc.) and http-drivers/libraries etc. that can prevent successful processing when the ID value contains non-alphanumerical ASCII characters. So, to increase interoperability, please encode the ID value - [for example via Base64 + custom encoding of special characters allowed in Base64](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/78fc16c35c521b4f9a7aeef11db4df79c2545dee/Microsoft.Azure.Cosmos.Encryption/src/EncryptionProcessor.cs#L475-L489). - if you have to support non-alphanumerical ASCII characters in your service/application. |
 | Maximum number of properties per item | No practical limit |
 | Maximum length of property name | No practical limit |
 | Maximum length of property value | No practical limit |
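The recommendation itself is unchanged: encode IDs that may contain non-alphanumeric ASCII characters. The linked .NET sample uses Base64 plus custom escaping of Base64's special characters; as an illustration only (not the linked code), URL-safe Base64 in Python sidesteps the disallowed '/' directly:

```python
import base64

def encode_cosmos_id(raw_id: str) -> str:
    """Illustrative only: turn an arbitrary string into an ID made of ASCII
    letters, digits, '-' and '_' (URL-safe Base64 never emits '/')."""
    return base64.urlsafe_b64encode(raw_id.encode("utf-8")).decode("ascii").rstrip("=")

def decode_cosmos_id(encoded: str) -> str:
    """Reverse the encoding; re-add the '=' padding Base64 expects."""
    padded = encoded + "=" * (-len(encoded) % 4)
    return base64.urlsafe_b64decode(padded).decode("utf-8")

assert decode_cosmos_id(encode_cosmos_id("orders/2023/#42")) == "orders/2023/#42"
```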
@@ -222,8 +214,8 @@ See the [Autoscale](provision-throughput-autoscale.md#autoscale-limits) article
 | Minimum RU/s the system can scale to | `0.1 * Tmax`|
 | Current RU/s the system is scaled to | `0.1*Tmax <= T <= Tmax`, based on usage|
 | Minimum billable RU/s per hour| `0.1 * Tmax` <br></br>Billing is done on a per-hour basis, where you're billed for the highest RU/s the system scaled to in the hour, or `0.1*Tmax`, whichever is higher. |
-| Minimum autoscale max RU/s for a container | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100)` rounded to nearest 1000 RU/s |
-| Minimum autoscale max RU/s for a database | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100, 1000 + (MAX(Container count - 25, 0) * 1000))`, rounded to nearest 1000 RU/s. <br></br>Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per extra container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 6000 RU/s (scales between 600 - 6000 RU/s). |
+| Minimum autoscale max RU/s for a container | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 10)` rounded to nearest 1000 RU/s |
+| Minimum autoscale max RU/s for a database | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 10, 1000 + (MAX(Container count - 25, 0) * 1000))`, rounded to nearest 1000 RU/s. <br></br>Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per extra container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 6000 RU/s (scales between 600 - 6000 RU/s). |
 
 ## SQL query limits
 
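These two rows drop the autoscale storage multiplier from 100 to 10 RU/s per GB. A quick illustrative sketch of the database row (mine, not the commit's; it assumes "rounded to nearest 1000 RU/s" means rounding up, a conservative reading):

```python
import math

def min_autoscale_max_rus_db(highest_max_rus_ever: float, storage_gb: float,
                             container_count: int) -> int:
    """Lowest allowed autoscale max RU/s for a database under the updated rule."""
    raw = max(1000,
              highest_max_rus_ever / 10,
              storage_gb * 10,                      # was storage_gb * 100 before this commit
              1000 + max(container_count - 25, 0) * 1000)
    return math.ceil(raw / 1000) * 1000             # assumption: round up to 1000s

# Example from the table: 30 containers => 6000 RU/s (scales between 600 - 6000 RU/s)
assert min_autoscale_max_rus_db(0, 0, 30) == 6000
```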

Binary image file (66.7 KB), preview not rendered.

articles/cosmos-db/nosql/TOC.yml

Lines changed: 5 additions & 1 deletion
@@ -188,7 +188,11 @@
       - name: Autoscale FAQ
         href: ../autoscale-faq.yml
       - name: Serverless
-        href: ../serverless.md
+        items:
+          - name: Serverless overview
+            href: ../serverless.md
+          - name: Serverless 1 TB
+            href: ../serverless-1TB.md
       - name: Choose between autoscale and standard (manual) throughput
         href: ../how-to-choose-offer.md
       - name: Choose between provisioned throughput and serverless

articles/cosmos-db/set-throughput.md

Lines changed: 1 addition & 9 deletions
@@ -106,7 +106,7 @@ The response of those methods also contains the [minimum provisioned throughput]
 The actual minimum RU/s may vary depending on your account configuration. But generally it's the maximum of:
 
 * 400 RU/s
-* Current storage in GB * 10 RU/s (this constraint can be relaxed in some cases, see our [high storage / low throughput program](#high-storage-low-throughput-program))
+* Current storage in GB * 1 RU/s
 * Highest RU/s ever provisioned on the database or container / 100
 
 ### Changing the provisioned throughput
@@ -132,14 +132,6 @@ You can programmatically check the scaling progress by reading the [current prov
 
 You can use [Azure Monitor metrics](monitor.md#view-operation-level-metrics-for-azure-cosmos-db) to view the history of provisioned throughput (RU/s) and storage on a resource.
 
-## <a id="high-storage-low-throughput-program"></a> High storage / low throughput program
-
-As described in the [Current provisioned throughput](#current-provisioned-throughput) section above, the minimum throughput you can provision on a container or database depends on a number of factors. One of them is the amount of data currently stored, as Azure Cosmos DB enforces a minimum throughput of 10 RU/s per GB of storage.
-
-This can be a concern in situations where you need to store large amounts of data, but have low throughput requirements in comparison. To better accommodate these scenarios, Azure Cosmos DB has introduced a **"high storage / low throughput" program** that decreases the RU/s per GB constraint on eligible accounts.
-
-To join this program and assess your full eligibility, all you have to do is to fill [this survey](https://aka.ms/cosmosdb-high-storage-low-throughput-program). The Azure Cosmos DB team will then follow up and proceed with your onboarding.
-
 ## Comparison of models
 This table shows a comparison between provisioning standard (manual) throughput on a database vs. on a container.
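For the surrounding "Changing the provisioned throughput" section, here is a minimal sketch using the azure-cosmos Python SDK (v4). The endpoint, key, and resource names are placeholders, and this is an illustration rather than the article's own sample:

```python
from azure.cosmos import CosmosClient

# Placeholders: substitute your own account endpoint, key, and names.
client = CosmosClient("https://<account>.documents.azure.com:443/", "<key>")
container = client.get_database_client("mydb").get_container_client("mycoll")

current = container.get_throughput()          # returns ThroughputProperties
print("Provisioned RU/s:", current.offer_throughput)

# Scale the container; the service rejects values below its current minimum,
# which (per this commit) is now MAX(400, storage_gb * 1, highest_ever / 100).
container.replace_throughput(1000)
```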
