articles/cosmos-db/burst-capacity-faq.yml (10 additions, 2 deletions)
@@ -18,20 +18,28 @@ summary: |
 sections:
   - name: General
     questions:
+      - question: |
+          How much does it cost to use burst capacity?
+        answer: |
+          There's no charge to use burst capacity.
       - question: |
           How does burst capacity work with autoscale?
         answer: |
           Autoscale and burst capacity are compatible. Autoscale gives you a guaranteed instant 10 times scale range. Burst capacity allows you to take advantage of unused, idle capacity to handle temporary spikes, potentially beyond your autoscale max RU/s. For example, suppose we have an autoscale container with one physical partition that scales between 100 - 1000 RU/s. Without burst capacity, any requests that consume beyond 1000 RU/s would be rate limited. With burst capacity however, the partition can accumulate a maximum of 1000 RU/s of idle capacity each second. Burst capacity allows the partition to burst at a maximum rate of 3000 RU/s for a limited amount of time.

-          The autoscale max RU/s per physical partition must be less than 3000 RU/s for burst capacity to be applicable.
+          Accumulation of burst is based on the maximum autoscale RU/s.
+
+          The autoscale maximum RU/s per physical partition must be less than 3000 RU/s for burst capacity to be applicable.
+
+          When burst capacity is used with autoscale, autoscale will use up to the maximum RU/s before using burst capacity. You may see autoscale scale up to max RU/s during spikes of traffic.
       - question: |
           What resources can use burst capacity?
         answer: |
           When your account is enrolled in the preview, any shared throughput databases or containers with dedicated throughput that have less than 3000 RU/s per physical partition can use burst capacity. The resource can use either manual or autoscale throughput.
       - question: |
           How can I monitor burst capacity?
         answer: |
-          [Azure Monitor metrics](monitor-cosmos-db.md#analyzing-metrics), built-in to Azure Cosmos DB, can filter by the dimension **CapacityType** on the **TotalRequests** and **TotalRequestUnits** metrics. Requests served with burst capacity will have **CapacityType** equal to **BurstCapacity**.
+          [Azure Monitor metrics](monitor-cosmos-db.md#analyzing-metrics), built-in to Azure Cosmos DB, can filter by the dimension **CapacityType** on the **TotalRequests** and **TotalRequestUnits (preview)** metrics. Requests served with burst capacity will have **CapacityType** equal to **BurstCapacity**.
       - question: |
           How can I see which resources have less than 3000 RU/s per physical partition?
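The arithmetic in the autoscale answer above (a single physical partition scaling between 100 - 1000 RU/s, accumulating up to 1000 RU/s of idle capacity each second, and bursting at up to 3000 RU/s) can be pictured as a token bucket. The following is a minimal, illustrative sketch of that model only; the bucket cap, the function names, and the simulation itself are assumptions for illustration, not documented Cosmos DB behavior.

```python
# Illustrative token-bucket model of the burst arithmetic described above.
MAX_AUTOSCALE_RUS = 1000              # autoscale max RU/s of the single physical partition
BURST_RATE_CAP = 3000                 # maximum RU/s the partition can burst to
BUCKET_CAP = 300 * MAX_AUTOSCALE_RUS  # assumed ceiling on accumulated idle capacity


def simulate(per_second_demand):
    """Return (served_rus, rate_limited_rus) for a list of per-second RU demands."""
    bucket = 0.0                      # accumulated idle capacity, in RUs
    served = rate_limited = 0.0
    for demand in per_second_demand:
        budget = MAX_AUTOSCALE_RUS
        if demand > budget:
            # Spend accumulated capacity, but never exceed the 3000 RU/s burst rate.
            burst = min(demand - budget, bucket, BURST_RATE_CAP - MAX_AUTOSCALE_RUS)
            bucket -= burst
            budget += burst
        else:
            # Unused provisioned throughput accumulates as burst capacity.
            bucket = min(bucket + (budget - demand), BUCKET_CAP)
        served += min(demand, budget)
        rate_limited += max(0.0, demand - budget)
    return served, rate_limited


# 60 quiet seconds build up capacity; a 10-second spike of 2,500 RU/s is then
# absorbed without rate limiting, even though it exceeds the 1,000 RU/s max.
print(simulate([100] * 60 + [2500] * 10))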
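For the monitoring answer, the FAQ only says that Azure Monitor metrics can be filtered by the **CapacityType** dimension on **TotalRequests**. A sketch of one way that could be queried, assuming the azure-monitor-query Python package; the resource ID is a placeholder and the exact query shape is this example's assumption, not part of the FAQ.

```python
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Placeholder resource ID; substitute your Cosmos DB account.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())
response = client.query_resource(
    resource_id,
    metric_names=["TotalRequests"],
    aggregations=[MetricAggregationType.COUNT],
    filter="CapacityType eq '*'",  # split the results by the CapacityType dimension
)

# Each time series carries its dimension values; requests served from burst
# capacity appear under CapacityType = BurstCapacity.
for metric in response.metrics:
    for series in metric.timeseries:
        total = sum(point.count or 0 for point in series.data)
        print(series.metadata_values, total)
```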
articles/cosmos-db/concepts-limits.md (5 additions, 4 deletions)
@@ -93,9 +93,10 @@ Depending on the current RU/s provisioned and resource settings, each resource c…
| Maximum RU/s per container | 5,000 |
| Maximum storage across all items per (logical) partition | 20 GB |
| Maximum number of distinct (logical) partition keys | Unlimited |
-| Maximum storage per container (SQL API, Mongo API, Table API, Gremlin API)| 50 GB |
+| Maximum storage per container (SQL API, Mongo API, Table API, Gremlin API)| 50 GB<sup>1</sup>|
| Maximum storage per container (Cassandra API)| 30 GB |

+<sup>1</sup> Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](../azure-resource-manager/management/preview-features.md).

## Control plane operations

@@ -163,7 +164,7 @@ An Azure Cosmos item can represent either a document in a collection, a row in a…
| Maximum level of nesting for embedded objects / arrays | 128 |
| Maximum TTL value |2147483647 |

-<sup>1</sup> Large document sizes up to 16 Mb are currently in preview with Azure Cosmos DB API for MongoDB only. Sign-up for the feature “Azure Cosmos DB API For MongoDB 16MB Document Support” from [Preview Features the Azure portal](./access-previews.md), to try the new feature.
+<sup>1</sup> Large document sizes up to 16 MB are currently in preview with Azure Cosmos DB API for MongoDB only. Sign up for the feature “Azure Cosmos DB API For MongoDB 16 MB Document Support” from [Preview Features in the Azure portal](./access-previews.md) to try the new feature.

There are no restrictions on the item payloads (like number of properties and nesting depth), except for the length restrictions on partition key and ID values, and the overall size restriction of 2 MB. You may have to configure indexing policy for containers with large or complex item structures to reduce RU consumption. See [Modeling items in Cosmos DB](how-to-model-partition-example.md) for a real-world example, and patterns to manage large items.

@@ -215,7 +216,7 @@ See the [Autoscale](provision-throughput-autoscale.md#autoscale-limits) article
| Current RU/s the system is scaled to |`0.1*Tmax <= T <= Tmax`, based on usage|
| Minimum billable RU/s per hour|`0.1 * Tmax` <br></br>Billing is done on a per-hour basis, where you're billed for the highest RU/s the system scaled to in the hour, or `0.1*Tmax`, whichever is higher. |
| Minimum autoscale max RU/s for a container |`MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100)` rounded to nearest 1000 RU/s |
-| Minimum autoscale max RU/s for a database | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100, 1000 + (MAX(Container count - 25, 0) * 1000))`, rounded to nearest 1000 RU/s. <br></br>Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per additional container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 6000 RU/s (scales between 600 - 6000 RU/s).
+| Minimum autoscale max RU/s for a database | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100, 1000 + (MAX(Container count - 25, 0) * 1000))`, rounded to nearest 1000 RU/s. <br></br>Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per extra container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 6000 RU/s (scales between 600 - 6000 RU/s).

## SQL query limits
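The autoscale rows in the hunk above are plain formulas, so they translate directly into code. A small sketch restating them; the function names are this example's own, and rounding *up* to the next 1000 RU/s is an assumption (the table only says "rounded to nearest 1000 RU/s").

```python
import math


def min_autoscale_max_rus_for_database(highest_max_rus_ever, storage_gb, container_count):
    """Lowest autoscale max RU/s that can be set on a shared throughput database,
    restating the formula in the table row above. Rounding up to the next
    1000 RU/s is assumed."""
    raw = max(
        1000,
        highest_max_rus_ever / 10,
        storage_gb * 100,
        1000 + max(container_count - 25, 0) * 1000,
    )
    return math.ceil(raw / 1000) * 1000


def billable_rus_for_hour(tmax, highest_scaled_rus_in_hour):
    """Billable RU/s for one hour: the higher of the highest RU/s the system
    scaled to during that hour and 0.1 * Tmax (the minimum billable value)."""
    return max(0.1 * tmax, highest_scaled_rus_in_hour)


# The worked example from the table: 30 containers => 6000 RU/s
# (the database then scales between 600 - 6000 RU/s).
print(min_autoscale_max_rus_for_database(highest_max_rus_ever=0, storage_gb=0, container_count=30))
```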
@@ -291,7 +292,7 @@ Get started with Azure Cosmos DB with one of our quickstarts:
* [Get started with Azure Cosmos DB Gremlin API](create-graph-dotnet.md)
* [Get started with Azure Cosmos DB Table API](table/create-table-dotnet.md)
* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)