Commit 1076afb

Acrolinx tidy up
1 parent 7fc07f2 commit 1076afb

4 files changed: +25 −25 lines

articles/cosmos-db/burst-capacity.md

Lines changed: 4 additions & 4 deletions

@@ -28,18 +28,18 @@ After the 10 seconds is over, the burst capacity has been used up. If the worklo
 
 ## Getting started
 
-To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** blade](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
+To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
 - Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
 - The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
 
 ## Limitations
 
 ### Preview eligibility criteria
 To enroll in the preview, your Cosmos account must meet all the following criteria:
-- Your Cosmos account is using provisioned throughput (manual or autoscale). Burst capacity does not apply to serverless accounts.
+- Your Cosmos account is using provisioned throughput (manual or autoscale). Burst capacity doesn't apply to serverless accounts.
 - If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When burst capacity is enabled on your account, all requests sent from non .NET SDKs, or older .NET SDK versions won't be accepted.
   - There are no SDK or driver requirements to use the feature with Cassandra API, Gremlin API, Table API, or API for MongoDB.
-- Your Cosmos account is not using any unsupported connectors
+- Your Cosmos account isn't using any unsupported connectors
   - Azure Data Factory
   - Azure Stream Analytics
   - Logic Apps
@@ -62,7 +62,7 @@ Support for other SQL API SDKs is planned for the future.
 > You should ensure that your application has been updated to use a compatible SDK version prior to enrolling in the preview. If you're using the legacy .NET V2 SDK, follow the [.NET SDK v3 migration guide](sql/migrate-dotnet-v3.md).
 
 #### Table API
-For Table API accounts, burst capacity is supported only when using the latest version of the Tables SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. The legacy SDK with namespace `Microsoft.Azure.CosmosDB.Table` is not supported. Follow the [migration guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/tables/Azure.Data.Tables/MigrationGuide.md) to upgrade to the latest SDK.
+For Table API accounts, burst capacity is supported only when using the latest version of the Tables SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. The legacy SDK with namespace `Microsoft.Azure.CosmosDB.Table` isn't supported. Follow the [migration guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/tables/Azure.Data.Tables/MigrationGuide.md) to upgrade to the latest SDK.
 
 | SDK | Supported versions | Package manager link |
 | --- | --- | --- |
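The hunk header above preserves the article's example: a partition's accumulated burst capacity is spent in roughly 10 seconds of sustained load. As a rough illustration of that arithmetic, the token-bucket sketch below replays it in Python. The 100 RU/s provisioned rate, 300-second accrual cap, and 3,000 RU/s burst ceiling are assumed figures for illustration, not values stated in this commit.

```python
# Token-bucket sketch of per-partition burst capacity (illustrative only).
PROVISIONED_RUS = 100        # assumed provisioned RU/s for one partition
ACCRUAL_CAP_SECONDS = 300    # assumed cap: at most 5 minutes of idle capacity
BURST_RATE_CAP = 3000        # assumed max RU/s served while bursting

def simulate(demand_per_second):
    """Return the RU/s actually served for each second of demand."""
    bucket = 0.0  # banked, unspent RU
    served = []
    for demand in demand_per_second:
        # Each second the provisioned RU/s flow into the bucket, capped.
        bucket = min(bucket + PROVISIONED_RUS,
                     PROVISIONED_RUS * ACCRUAL_CAP_SECONDS)
        # Serve the demand from the bucket, up to the burst ceiling.
        spent = min(demand, BURST_RATE_CAP, bucket)
        bucket -= spent
        served.append(spent)
    return served

# 300 idle seconds bank 30,000 RU; a 3,000 RU/s spike is then served
# for ~10 seconds before burst capacity runs out.
out = simulate([0] * 300 + [3000] * 15)
```

With these assumptions, the spike is served at 3,000 RU/s for about 10 seconds, after which the workload falls back toward the provisioned 100 RU/s, matching the behavior the article describes.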

articles/cosmos-db/merge.md

Lines changed: 12 additions & 12 deletions

@@ -17,13 +17,13 @@ Merging partitions in Azure Cosmos DB (preview) allows you to reduce the number
 
 ## Getting started
 
-To get started using merge, enroll in the preview by submitting a request for the **Azure Cosmos DB Partition Merge** feature via the [**Preview Features** blade](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
+To get started using merge, enroll in the preview by submitting a request for the **Azure Cosmos DB Partition Merge** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
 - Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
 - The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
 
 ### Merging physical partitions
 
-In PowerShell, when the flag `-WhatIf` is passed in, Azure Cosmos DB will run a simulation and return the expected result of the merge, but won't run the merge itself. When the flag is not passed in, the merge will execute against the resource. When finished, the command will output the current amount of storage in KB per physical partition post-merge.
+In PowerShell, when the flag `-WhatIf` is passed in, Azure Cosmos DB will run a simulation and return the expected result of the merge, but won't run the merge itself. When the flag isn't passed in, the merge will execute against the resource. When finished, the command will output the current amount of storage in KB per physical partition post-merge.
 > [!TIP]
 > Before running a merge, it's recommended to set your provisioned RU/s (either manual RU/s or autoscale max RU/s) as close as possible to your desired steady state RU/s post-merge, to help ensure the system calculates an efficient partition layout.
@@ -74,9 +74,9 @@ az cosmosdb mongodb collection merge \
 ---
 
 ### Monitor merge operations
-Partition merge is a long-running operation and there is no SLA on how long it takes to complete. The time depends on the amount of data in the container as well as the number of physical partitions. It's recommended to allow at least 5-6 hours for merge to complete.
+Partition merge is a long-running operation and there's no SLA on how long it takes to complete. The time depends on the amount of data in the container and the number of physical partitions. It's recommended to allow at least 5-6 hours for merge to complete.
 
-While partition merge is running on your container, it is not possible to change the throughput or any container settings (TTL, indexing policy, unique keys, etc). Wait until the merge operation completes before changing your container settings.
+While partition merge is running on your container, it isn't possible to change the throughput or any container settings (TTL, indexing policy, unique keys, etc.). Wait until the merge operation completes before changing your container settings.
 
 You can track whether merge is still in progress by checking the **Activity Log** and filtering for the events **Merge the physical partitions of a MongoDB collection** or **Merge the physical partitions of a SQL container**.
 
@@ -85,18 +85,18 @@ You can track whether merge is still in progress by checking the **Activity Log*
 ### Preview eligibility criteria
 To enroll in the preview, your Cosmos account must meet all the following criteria:
 * Your Cosmos account uses SQL API or API for MongoDB with version >=3.6.
-* Your Cosmos account is using provisioned throughput (manual or autoscale). Merge does not apply to serverless accounts.
-* Currently, merge is not supported for shared throughput databases. You may enroll an account that has both shared throughput databases and containers with dedicated throughput (manual or autoscale).
+* Your Cosmos account is using provisioned throughput (manual or autoscale). Merge doesn't apply to serverless accounts.
+* Currently, merge isn't supported for shared throughput databases. You may enroll an account that has both shared throughput databases and containers with dedicated throughput (manual or autoscale).
   * However, only the containers with dedicated throughput will be able to be merged.
-* Your Cosmos account is a single-write region account (merge is not currently supported for multi-region write accounts).
-* Your Cosmos account does not use any of the following features:
+* Your Cosmos account is a single-write region account (merge isn't currently supported for multi-region write accounts).
+* Your Cosmos account doesn't use any of the following features:
   * [Point-in-time restore](continuous-backup-restore-introduction.md)
   * [Customer-managed keys](how-to-setup-cmk.md)
   * [Analytical store](analytical-store-introduction.md)
-* Your Cosmos account uses bounded staleness, session, consistent prefix, or eventual consistency (merge is not currently supported for strong consistency).
+* Your Cosmos account uses bounded staleness, session, consistent prefix, or eventual consistency (merge isn't currently supported for strong consistency).
 * If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When merge preview enabled on your account, all requests sent from non .NET SDKs or older .NET SDK versions won't be accepted.
 * There are no SDK or driver requirements to use the feature with API for MongoDB.
-* Your Cosmos account does not use any currently unsupported connectors:
+* Your Cosmos account doesn't use any currently unsupported connectors:
   * Azure Data Factory
   * Azure Stream Analytics
   * Logic Apps
@@ -111,8 +111,8 @@ To enroll in the preview, your Cosmos account must meet all the following criter
 * [Customer-managed keys](how-to-setup-cmk.md)
 * [Analytical store](analytical-store-introduction.md)
 * Containers using merge functionality must have their throughput provisioned at the container level. Database-shared throughput support isn't available.
-* Merge is only available for accounts using bounded staleness, session, consistent prefix, or eventual consistency. It is not currently supported for strong consistency.
-* After a container has been merged, it is not possible to read the change feed with start time. Support for this feature is planned for the future.
+* Merge is only available for accounts using bounded staleness, session, consistent prefix, or eventual consistency. It isn't currently supported for strong consistency.
+* After a container has been merged, it isn't possible to read the change feed with start time. Support for this feature is planned for the future.
 
 ### SDK requirements (SQL API only)
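A quick way to reason about the tip in this file (set RU/s close to the desired post-merge steady state before merging) is that merge can only shrink the partition count to what per-partition limits allow. The sketch below assumes the commonly documented maxima of 10,000 RU/s and 50 GB per physical partition; both figures and the function name are assumptions for illustration, not taken from this commit.

```python
import math

# Lower bound on physical partitions after a merge (illustrative sketch).
# Assumed per-partition limits, not values stated in this commit:
MAX_RUS_PER_PARTITION = 10_000
MAX_STORAGE_GB_PER_PARTITION = 50

def min_partitions_after_merge(provisioned_rus: int, storage_gb: float) -> int:
    """Fewest physical partitions that can hold the given throughput and storage."""
    by_throughput = math.ceil(provisioned_rus / MAX_RUS_PER_PARTITION)
    by_storage = math.ceil(storage_gb / MAX_STORAGE_GB_PER_PARTITION)
    # Both constraints must hold, and at least one partition always exists.
    return max(by_throughput, by_storage, 1)
```

Under these assumptions, a container scaled down to 6,000 RU/s holding 30 GB could merge to a single partition, while 120 GB of data would still need at least three partitions regardless of how low the RU/s are set.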

articles/cosmos-db/sql/distribute-throughput-across-partitions-faq.yml

Lines changed: 4 additions & 4 deletions

@@ -21,7 +21,7 @@ sections:
   - question: |
      What resources can I use this feature on?
    answer: |
-      The feature is only supported for SQL and API for MongoDB accounts and on collections with dedicated throughput (either manual or autoscale). Shared throughput databases aren't supported in the preview. The feature does not apply to serverless accounts.
+      The feature is only supported for SQL and API for MongoDB accounts and on collections with dedicated throughput (either manual or autoscale). Shared throughput databases aren't supported in the preview. The feature doesn't apply to serverless accounts.
   - question: |
      Which version of the Azure Cosmos DB functionality in Azure PowerShell supports this feature?
    answer: |
@@ -59,11 +59,11 @@ sections:
   - question: |
      Why am I seeing a discrepancy between the overall RU/s on my container and the sum of the RU/s across all physical partitions?
    answer: |
-      This can happen when you scale up your overall RU/s such that for any single partition, `(current RU/s per partition * new container RU/s)/(old container RU/s)` is greater than 10,000 RU/s. This can happen when you trigger a partition split by increasing RU/s beyond `currentNumberOfPartitions * 10,000 RU/s` or increase RU/s without triggering a partition split.
-      It is recommended to redistribute your throughput equally after the scale-up. Otherwise, it is possible that you will not be able to use all the RU/s you've provisioned (and are being billed for).
+      This discrepancy can happen when you scale up your overall RU/s such that, for any single partition, `(current RU/s per partition * new container RU/s)/(old container RU/s)` is greater than 10,000 RU/s. This discrepancy occurs when you trigger a partition split by increasing RU/s beyond `currentNumberOfPartitions * 10,000 RU/s` or when you increase RU/s without triggering a partition split.
+      It's recommended to redistribute your throughput equally after the scale-up. Otherwise, it's possible that you won't be able to use all the RU/s you've provisioned (and are being billed for).
       To check if this scenario applies to your resource use Azure Monitor metrics. Compare the value of the **ProvisionedThroughput** (when using manual throughput) or **AutoscaleMaxThroughput** (when using autoscale) metric to the value of the **PhysicalPartitionThroughput** metric. If the value of **PhysicalPartitionThroughput** is less than the respective **ProvisionedThroughput** or **AutoscaleMaxThroughput**, then reset your RU/s to an even distribution before redistributing, or lower your resource's throughput to the value of **PhysicalPartitionThroughput**.
 
-      For example, suppose you have a collection with 6000 RU/s and 3 physical partitions. You scale it up to 24,000 RU/s. After the scale-up, the total throughput across all partitions is only 18,000 RU/s. This means that while we are being billed for 24,000 RU/s, we are only able to get 18,000 RU/s of effective throughput. By redistributing our RU/s equally, each partition will get 8000 RU/s, and we can redistribute RU/s again as needed. We could also choose to lower our overall RU/s to 18,000 RU/s.
+      For example, suppose you have a collection with 6000 RU/s and 3 physical partitions. You scale it up to 24,000 RU/s. After the scale-up, the total throughput across all partitions is only 18,000 RU/s. This distribution means that while we're being billed for 24,000 RU/s, we're only able to get 18,000 RU/s of effective throughput. By redistributing our RU/s equally, each partition gets 8000 RU/s, and we can redistribute RU/s again as needed. We could also choose to lower our overall RU/s to 18,000 RU/s.
 
      |Before scale-up (6000 RU/s) |After scale up to 24,000 RU/s (effective RU/s = 18,000 RU/s) |Fraction of total RU/s |
      |---------|---------|---------|
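The FAQ's scale-up rule, each partition's new RU/s is `(current RU/s per partition * new container RU/s)/(old container RU/s)` capped at 10,000 RU/s, can be checked numerically. The 1,000/1,000/4,000 starting layout below is a hypothetical skewed distribution chosen to reproduce the article's 6,000 → 24,000 RU/s example; it isn't stated in the source.

```python
def effective_rus_after_scaleup(per_partition_rus, old_total, new_total, cap=10_000):
    """Apply the FAQ's scale-up formula to each partition, capping at 10,000 RU/s."""
    return [min(r * new_total / old_total, cap) for r in per_partition_rus]

# Hypothetical skewed layout summing to the example's 6,000 RU/s.
# Scaling to 24,000 RU/s multiplies each partition by 4; the hot
# partition (4,000 -> 16,000) hits the 10,000 RU/s per-partition cap.
scaled = effective_rus_after_scaleup([1000, 1000, 4000], 6000, 24000)
```

The sum of `scaled` is 18,000 RU/s even though 24,000 RU/s are billed, which is exactly the discrepancy the answer describes; redistributing equally (8,000 RU/s per partition) would recover the full 24,000.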

articles/cosmos-db/sql/distribute-throughput-across-partitions.md

Lines changed: 5 additions & 5 deletions

@@ -28,7 +28,7 @@ If you aren't seeing 429 responses and your end to end latency is acceptable, th
 
 ## Getting started
 
-To get started using distributed throughput across partitions, enroll in the preview by submitting a request for the **Azure Cosmos DB Throughput Redistribution Across Partitions** feature via the [**Preview Features** blade](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
+To get started using distributed throughput across partitions, enroll in the preview by submitting a request for the **Azure Cosmos DB Throughput Redistribution Across Partitions** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
 - Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
 - The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
 
@@ -122,7 +122,7 @@ $allPartitions = Get-AzCosmosDBMongoDBCollectionPerPartitionThroughput `
 ```
 ### Determine RU/s for target partition
 
-Next, let's decide how many RU/s we want to give to our hottest physical partition(s). Let's call this set our target partition(s). The most RU/s any physical partition can have is 10,000 RU/s.
+Next, let's decide how many RU/s we want to give to our hottest physical partition(s). Let's call this set our target partition(s). The most RU/s any physical partition can contain is 10,000 RU/s.
 
 The right approach depends on your workload requirements. General approaches include:
 - Increasing the RU/s by a percentage, measure the rate of 429 responses, and repeat until desired throughput is achieved.
@@ -135,7 +135,7 @@ The right approach depends on your workload requirements. General approaches inc
 
 Finally, let's decide how many RU/s we want to keep on our other physical partitions. This selection will determine the partitions that the target physical partition takes throughput from.
 
-In the PowerShell APIs, we must specify at least one source partition to redistribute RU/s from. We can also specify a custom minimum throughput each physical partition should have after the redistribution. If not specified, by default, Azure Cosmos DB will ensure that each physical partition has at least 100 RU/s after the redistribution. It is recommended to explicitly specify the minimum throughput.
+In the PowerShell APIs, we must specify at least one source partition to redistribute RU/s from. We can also specify a custom minimum throughput each physical partition should have after the redistribution. If not specified, by default, Azure Cosmos DB will ensure that each physical partition has at least 100 RU/s after the redistribution. It's recommended to explicitly specify the minimum throughput.
 
 The right approach depends on your workload requirements. General approaches include:
 - Taking RU/s equally from all source partitions (works best when there are <= 10 partitions)
@@ -221,9 +221,9 @@ After the changes, assuming your overall workload hasn't changed, you'll likely
 To enroll in the preview, your Cosmos account must meet all the following criteria:
 - Your Cosmos account is using SQL API or API for MongoDB.
   - If you're using API for MongoDB, the version must be >= 3.6.
-- Your Cosmos account is using provisioned throughput (manual or autoscale). Distribution of throughput across partitions does not apply to serverless accounts.
+- Your Cosmos account is using provisioned throughput (manual or autoscale). Distribution of throughput across partitions doesn't apply to serverless accounts.
 - If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When the ability to redistribute throughput across partitions is enabled on your account, all requests sent from non .NET SDKs or older .NET SDK versions won't be accepted.
-- Your Cosmos account is not using any unsupported connectors:
+- Your Cosmos account isn't using any unsupported connectors:
  - Azure Data Factory
  - Azure Stream Analytics
  - Logic Apps
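The source/target bookkeeping this file describes can be sketched as a small allocator. The function below uses hypothetical names and a simplified equal-draw policy, but mirrors the documented defaults: each source partition keeps a floor (100 RU/s by default) and no partition exceeds 10,000 RU/s. It's an illustration of the arithmetic, not the PowerShell API.

```python
def redistribute_to_target(partitions, target, sources, needed,
                           min_rus=100, cap=10_000):
    """Move up to `needed` RU/s onto `target`, drawing equally from `sources`.

    `partitions` maps partition id -> current RU/s. Each source keeps at
    least `min_rus` (the documented default floor of 100 RU/s), and the
    target never exceeds the 10,000 RU/s per-partition cap.
    """
    result = dict(partitions)
    needed = min(needed, cap - result[target])
    while needed > 0:
        # Sources that can still give without dropping below the floor.
        donors = [s for s in sources if result[s] > min_rus]
        if not donors:
            break
        share = max(1, needed // len(donors))
        for s in donors:
            give = min(share, result[s] - min_rus, needed)
            result[s] -= give
            result[target] += give
            needed -= give
            if needed == 0:
                break
    return result
```

For example, with a hypothetical layout of 1,000/1,000/4,000 RU/s, asking for 6,000 more RU/s on the hot partition drains both sources to their 100 RU/s floor and stops, leaving the total provisioned RU/s unchanged.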
