articles/cosmos-db/burst-capacity.md (3 additions, 14 deletions)
@@ -29,22 +29,11 @@ After the 10 seconds is over, the burst capacity has been used up. If the worklo
## Getting started
-To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure subscription overview page. You can also select the **Register for preview** button on the eligibility check page to open the **Preview Features** page.
+To get started using burst capacity, navigate to the **Features** page in your Azure Cosmos DB account. Select and enable the **Burst Capacity (preview)** feature.
-:::image type="content" source="media/burst-capacity/burst-capacity-enable-feature.png" alt-text="Screenshot of Burst Capacity feature in Preview Features page in Subscriptions overview in Azure Portal.":::
+Before enabling the feature, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#limitations-preview-eligibility-criteria). Once you've enabled the feature, allow 15-20 minutes for it to take effect.
-Before submitting your request:
-
-- Ensure that you have at least one Azure Cosmos DB account in the subscription. This account may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.
-- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#limitations-preview-eligibility-criteria).
-
-The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
-
-To check whether an Azure Cosmos DB account is eligible for the preview, you can use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page, navigate to **Diagnose and solve problems** -> **Throughput and Scaling** -> **Burst Capacity**. Run the **Check eligibility for burst capacity preview** diagnostic.
-
-:::image type="content" source="media/burst-capacity/throughput-and-scaling-category.png" alt-text="Throughput and Scaling topic in Diagnose and solve problems page":::
-
-:::image type="content" source="media/burst-capacity/burst-capacity-eligibility-check.png" alt-text="Burst capacity eligibility check with table of all preview eligibility criteria":::
+:::image type="content" source="media/burst-capacity/burst-capacity-enable-feature.png" alt-text="Screenshot of Burst Capacity feature in Preview Features page in Subscriptions overview in Azure portal.":::
articles/cosmos-db/concepts-limits.md (10 additions, 18 deletions)
@@ -34,14 +34,12 @@ You can allocate throughput at a container-level or a database-level in terms of
| Maximum number of distinct (logical) partition keys | Unlimited |
| Maximum storage per container | Unlimited |
| Maximum attachment size per Account (Attachment feature is being deprecated) | 2 GB |
-| Minimum RU/s required per 1 GB | 10 RU/s ³ |
+| Minimum RU/s required per 1 GB | 1 RU/s |
¹ You can increase Maximum RUs per container or database by [filing an Azure support ticket](create-support-request-quota-increase.md).
² To learn about best practices for managing workloads that have partition keys requiring higher limits for storage or throughput, see [Create a synthetic partition key](synthetic-partition-keys.md). If your workload has already reached the logical partition limit of 20 GB in production, it's recommended to rearchitect your application with a different partition key as a long-term solution. To help give time to rearchitect your application, you can request a temporary increase in the logical partition key limit for your existing application. [File an Azure support ticket](create-support-request-quota-increase.md) and select quota type **Temporary increase in container's logical partition key size**. Requesting a temporary increase is intended as a temporary mitigation and not recommended as a long-term solution, as **SLA guarantees are not honored when the limit is increased**. To remove the configuration, file a support ticket and select quota type **Restore container’s logical partition key size to default (20 GB)**. Filing this support ticket can be done after you have either deleted data to fit the 20-GB logical partition limit or have rearchitected your application with a different partition key.
-³ Minimum can be lowered if your account is eligible for our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program).
-
### Minimum throughput limits
An Azure Cosmos DB container (or shared throughput database) using manual throughput must have a minimum throughput of 400 RU/s. As the container grows, Azure Cosmos DB requires a minimum throughput to ensure the resource (database or container) has sufficient resources for its operations.
@@ -54,28 +52,22 @@ The actual minimum RU/s may vary depending on your account configuration. You ca
To estimate the minimum throughput required of a container with manual throughput, find the maximum of:
-* 400 RU/s
-* Current storage in GB * 10 RU/s
+* 400 RU/s
+* Current storage in GB * 1 RU/s
* Highest RU/s ever provisioned on the container / 100
-For example, you have a container provisioned with 400 RU/s and 0-GB storage. You increase the throughput to 50,000 RU/s and import 20 GB of data. The minimum RU/s is now `MAX(400, 20 * 10 RU/s per GB, 50,000 RU/s / 100)` = 500 RU/s. Over time, the storage grows to 200 GB. The minimum RU/s is now `MAX(400, 200 * 10 RU/s per GB, 50,000 / 100)` = 2000 RU/s.
-
-> [!NOTE]
-> The minimum throughput of 10 RU/s per GB of storage can be lowered if your account is eligible for our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program).
+For example, you have a container provisioned with 400 RU/s and 0-GB storage. You increase the throughput to 50,000 RU/s and import 20 GB of data. The minimum RU/s is now `MAX(400, 20 * 1 RU/s per GB, 50,000 RU/s / 100)` = 500 RU/s. Over time, the storage grows to 2000 GB. The minimum RU/s is now `MAX(400, 2000 * 1 RU/s per GB, 50,000 / 100)` = 2000 RU/s.
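
To sanity-check the updated container formula, here's a minimal Python sketch; the function name and inputs are illustrative, not part of any Azure SDK:

```python
def min_manual_throughput_container(storage_gb: float, highest_rus_ever: float) -> float:
    """Minimum RU/s for a container using manual throughput, per the formula above."""
    return max(
        400,                     # absolute floor of 400 RU/s
        storage_gb * 1,          # 1 RU/s per GB of current storage
        highest_rus_ever / 100,  # 1% of the highest RU/s ever provisioned
    )

# Worked example from the text above:
assert min_manual_throughput_container(20, 50_000) == 500     # 20 GB imported
assert min_manual_throughput_container(2000, 50_000) == 2000  # storage grew to 2000 GB
```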
#### Minimum throughput on shared throughput database
To estimate the minimum throughput required of a shared throughput database with manual throughput, find the maximum of:
-* 400 RU/s
-* Current storage in GB * 10 RU/s
+* 400 RU/s
+* Current storage in GB * 1 RU/s
* Highest RU/s ever provisioned on the database / 100
* 400 + MAX(Container count - 25, 0) * 100 RU/s
-For example, you have a database provisioned with 400 RU/s, 15 GB of storage, and 10 containers. The minimum RU/s is `MAX(400, 15 * 10 RU/s per GB, 400 / 100, 400 + 0)` = 400 RU/s. If there were 30 containers in the database, the minimum RU/s would be `400 + MAX(30 - 25, 0) * 100 RU/s` = 900 RU/s.
-
-> [!NOTE]
-> The minimum throughput of 10 RU/s per GB of storage can be lowered if your account is eligible for our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program).
+For example, you have a database provisioned with 400 RU/s, 15 GB of storage, and 10 containers. The minimum RU/s is `MAX(400, 15 * 1 RU/s per GB, 400 / 100, 400 + 0)` = 400 RU/s. If there were 30 containers in the database, the minimum RU/s would be `400 + MAX(30 - 25, 0) * 100 RU/s` = 900 RU/s.
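
The shared-database formula is the same, with one extra term for container count. A companion sketch under the same caveat (illustrative names only):

```python
def min_manual_throughput_shared_db(storage_gb: float, highest_rus_ever: float,
                                    container_count: int) -> float:
    """Minimum RU/s for a shared throughput database, per the formula above."""
    return max(
        400,
        storage_gb * 1,                            # 1 RU/s per GB of storage
        highest_rus_ever / 100,                    # 1% of highest RU/s ever provisioned
        400 + max(container_count - 25, 0) * 100,  # +100 RU/s per container beyond 25
    )

# Worked examples from the text above:
assert min_manual_throughput_shared_db(15, 400, 10) == 400
assert min_manual_throughput_shared_db(15, 400, 30) == 900
```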
In summary, here are the minimum provisioned RU limits when using manual throughput.
@@ -161,7 +153,7 @@ An Azure Cosmos DB item can represent either a document in a collection, a row i
| Maximum size of an item | 2 MB (UTF-8 length of JSON representation) ¹ |
| Maximum length of partition key value | 2048 bytes (101 bytes if large partition-key isn't enabled) |
| Maximum length of ID value | 1023 bytes |
-| Allowed characters for ID value | Service-side, all Unicode characters except '/' and '\\' are allowed. <br/>**WARNING: For best interoperability, we STRONGLY RECOMMEND using only alphanumeric ASCII characters in the ID value.** <br/>There are known limitations in some versions of the Cosmos DB SDK, connectors (ADF, Spark, Kafka, etc.), and HTTP drivers/libraries. These limitations can prevent successful processing when the ID value contains non-alphanumeric ASCII characters. So, to increase interoperability, encode the ID value - [for example via Base64 + custom encoding of special characters allowed in Base64](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/78fc16c35c521b4f9a7aeef11db4df79c2545dee/Microsoft.Azure.Cosmos.Encryption/src/EncryptionProcessor.cs#L475-L489) - if you have to support non-alphanumeric ASCII characters in your service/application. |
+| Allowed characters for ID value | Service-side, all Unicode characters except '/' and '\\' are allowed. <br/>**WARNING: For best interoperability, we STRONGLY RECOMMEND using only alphanumeric ASCII characters in the ID value.** <br/>There are several known limitations in some versions of the Cosmos DB SDK, as well as connectors (ADF, Spark, Kafka, etc.) and HTTP drivers/libraries, that can prevent successful processing when the ID value contains non-alphanumeric ASCII characters. So, to increase interoperability, encode the ID value - [for example via Base64 + custom encoding of special characters allowed in Base64](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/78fc16c35c521b4f9a7aeef11db4df79c2545dee/Microsoft.Azure.Cosmos.Encryption/src/EncryptionProcessor.cs#L475-L489) - if you have to support non-alphanumeric ASCII characters in your service/application (see the sketch after this table). |
| Maximum number of properties per item | No practical limit |
| Maximum length of property name | No practical limit |
| Maximum length of property value | No practical limit |
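
As suggested in the "Allowed characters for ID value" row above, one simple way to keep IDs interoperable is to Base64-encode them. A minimal Python sketch of that idea (this is not the linked .NET helper; URL-safe Base64 already avoids the forbidden '/' character, so no custom character mapping is needed here):

```python
import base64

def encode_id(raw_id: str) -> str:
    """Encode an arbitrary string into a Cosmos DB-safe ID.

    urlsafe_b64encode emits only A-Z, a-z, 0-9, '-', '_' and '=' padding,
    so the result never contains the forbidden '/' or '\\' characters.
    """
    return base64.urlsafe_b64encode(raw_id.encode("utf-8")).decode("ascii")

def decode_id(encoded_id: str) -> str:
    """Recover the original string from an encoded ID."""
    return base64.urlsafe_b64decode(encoded_id.encode("ascii")).decode("utf-8")

original = "order/2024-06-01#123"
assert decode_id(encode_id(original)) == original
```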
@@ -222,8 +214,8 @@ See the [Autoscale](provision-throughput-autoscale.md#autoscale-limits) article
| Minimum RU/s the system can scale to |`0.1 * Tmax`|
| Current RU/s the system is scaled to |`0.1*Tmax <= T <= Tmax`, based on usage|
| Minimum billable RU/s per hour|`0.1 * Tmax` <br></br>Billing is done on a per-hour basis, where you're billed for the highest RU/s the system scaled to in the hour, or `0.1*Tmax`, whichever is higher. |
-| Minimum autoscale max RU/s for a container |`MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100)` rounded to nearest 1000 RU/s |
-| Minimum autoscale max RU/s for a database |`MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100, 1000 + (MAX(Container count - 25, 0) * 1000))`, rounded to nearest 1000 RU/s. <br></br>Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per extra container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 6000 RU/s (scales between 600 - 6000 RU/s).|
+| Minimum autoscale max RU/s for a container |`MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 10)` rounded to nearest 1000 RU/s |
+| Minimum autoscale max RU/s for a database |`MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 10, 1000 + (MAX(Container count - 25, 0) * 1000))`, rounded to nearest 1000 RU/s. <br></br>Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per extra container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 6000 RU/s (scales between 600 - 6000 RU/s).|
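
A hedged Python sketch of these two autoscale floors, assuming "rounded to nearest 1000 RU/s" rounds up to the next multiple of 1000 (the table doesn't state the rounding direction), with the hourly billing floor from the row above included for completeness:

```python
import math

def min_autoscale_max_rus(storage_gb: float, highest_max_rus_ever: float,
                          container_count: int = 0) -> float:
    """Minimum allowed autoscale max RU/s, per the table above.

    Pass container_count for a shared throughput database; leave it 0
    for a container (the extra term then stays at the 1000 RU/s floor).
    """
    floor = max(
        1000,
        highest_max_rus_ever / 10,                   # 10% of highest max RU/s ever set
        storage_gb * 10,                             # 10 RU/s per GB of storage
        1000 + max(container_count - 25, 0) * 1000,  # +1000 RU/s per container beyond 25
    )
    return math.ceil(floor / 1000) * 1000            # assumed round-up to nearest 1000

def min_billable_rus(tmax: float) -> float:
    """Hourly billing floor: 0.1 * Tmax, per the billing row above."""
    return 0.1 * tmax

# Example from the table: a database with 30 containers -> 6000 RU/s floor
assert min_autoscale_max_rus(0, 0, container_count=30) == 6000
```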
articles/cosmos-db/set-throughput.md (1 addition, 9 deletions)
@@ -106,7 +106,7 @@ The response of those methods also contains the [minimum provisioned throughput]
The actual minimum RU/s may vary depending on your account configuration. But generally it's the maximum of:
* 400 RU/s
-* Current storage in GB * 10 RU/s (this constraint can be relaxed in some cases, see our [high storage / low throughput program](#high-storage-low-throughput-program))
+* Current storage in GB * 1 RU/s
* Highest RU/s ever provisioned on the database or container / 100
### Changing the provisioned throughput
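
As an illustration of reading and then changing provisioned throughput, here's a short sketch using the azure-cosmos Python package; the endpoint, key, and resource names below are placeholders:

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("<database>").get_container_client("<container>")

# Read the current manual throughput (RU/s) on the container.
current = container.get_throughput()
print("Current RU/s:", current.offer_throughput)

# Raise the provisioned throughput. The service rejects values below the
# minimum described above: MAX(400, storage GB * 1, highest RU/s ever / 100).
container.replace_throughput(1000)
```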
@@ -132,14 +132,6 @@ You can programmatically check the scaling progress by reading the [current prov
You can use [Azure Monitor metrics](monitor.md#view-operation-level-metrics-for-azure-cosmos-db) to view the history of provisioned throughput (RU/s) and storage on a resource.
-## <a id="high-storage-low-throughput-program"></a> High storage / low throughput program
-
-As described in the [Current provisioned throughput](#current-provisioned-throughput) section above, the minimum throughput you can provision on a container or database depends on a number of factors. One of them is the amount of data currently stored, as Azure Cosmos DB enforces a minimum throughput of 10 RU/s per GB of storage.
-
-This can be a concern in situations where you need to store large amounts of data but have comparatively low throughput requirements. To better accommodate these scenarios, Azure Cosmos DB has introduced a **"high storage / low throughput" program** that decreases the RU/s per GB constraint on eligible accounts.
-
-To join this program and assess your full eligibility, all you have to do is fill out [this survey](https://aka.ms/cosmosdb-high-storage-low-throughput-program). The Azure Cosmos DB team will then follow up and proceed with your onboarding.
-
## Comparison of models
This table shows a comparison between provisioning standard (manual) throughput on a database vs. on a container.