
Commit 17de6e6

Author: Jake Willey
Commit message: More grammar fixes
1 parent: e74300f

File tree: 2 files changed (+6, −8 lines)

articles/cosmos-db/secure-access-to-data.md (1 addition, 1 deletion)

@@ -21,7 +21,7 @@ Azure Cosmos DB uses two types of keys to authenticate users and provide access
 
 <a id="master-keys"></a>
 
-## Master keys
+## Master keys
 
 Master keys provide access to all the administrative resources for the database account. Master keys:

articles/cosmos-db/troubleshoot-dot-net-sdk.md (5 additions, 7 deletions)

@@ -98,15 +98,15 @@ The [query metrics](sql-api-query-metrics.md) will help determine where the quer
 ### The MAC signature found in the HTTP request is not the same as the computed signature
 If you receive the following error message, "The MAC signature found in the HTTP request is not the same as the computed signature," it can be caused by the following scenarios.
 
-1. The key was rotated and did not follow the [best practices](secure-access-to-data.md#master-keys). This is usually the case. Cosmos DB account key rotation can take anywhere from a few seconds to possibly days depending on the Cosmos DB account size.
+1. The key was rotated and did not follow the [best practices](secure-access-to-data.md#key-rotation). This is usually the case. Cosmos DB account key rotation can take anywhere from a few seconds to possibly days depending on the Cosmos DB account size.
     1. The 401 MAC signature error is seen shortly after a key rotation and eventually stops without any changes.
 2. The key is misconfigured on the application, so the key does not match the account. For instance, cases where the key is read from a file and localization is not taken into consideration (see the sketch after this list).
     1. The 401 MAC signature issue will be consistent and happens for all calls.
 3. There is a race condition with container creation. An application instance is trying to access the container before container creation is complete. The most common scenario for this is if the container is deleted and recreated with the same name while the application is running. The SDK will attempt to use the new container, but the container creation is still in progress, so it does not have the keys.
     1. The 401 MAC signature issue is seen shortly after a container creation, and only occurs until the container creation is completed.
 
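Scenarios 1 and 2 above both come down to the client signing requests with a key that no longer matches the account. The following is a minimal sketch of a client factory that guards against both, assuming the v3 .NET SDK (Microsoft.Azure.Cosmos); the environment variable names are hypothetical placeholders for your own configuration source.

```csharp
using System;
using Microsoft.Azure.Cosmos;

public static class CosmosClientFactory
{
    public static CosmosClient Create()
    {
        // Hypothetical configuration names; substitute your own source.
        string endpoint = Environment.GetEnvironmentVariable("COSMOS_ENDPOINT");
        string key = Environment.GetEnvironmentVariable("COSMOS_KEY");

        // Scenario 2: a key read from a file can pick up stray whitespace
        // or a byte-order mark, which makes the computed signature differ
        // from the one the service expects. Trim it before use.
        key = key?.Trim();

        // Scenario 1: an existing CosmosClient keeps signing with the key
        // it was constructed with. After a key rotation, dispose the old
        // client and create a new one with the rotated key.
        return new CosmosClient(endpoint, key);
    }
}
```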
 ### HTTP Error 400. The size of the request headers is too long.
-The size of the header has grown too large and is exceeding the maximum allowed size. It's always recommended to use the latest SDK. Make sure to use at least version [2.9.3](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md#-293) or [3.5.1](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/changelog.md#-351---2019-12-11) which adds header size tracing to the exception message.
+The size of the header has grown too large and is exceeding the maximum allowed size. It's always recommended to use the latest SDK. Make sure to use at least version [2.9.3](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md#-293) or [3.5.1](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/changelog.md#-351---2019-12-11), which adds header size tracing to the exception message.
 
 Causes:
 1. The session token has grown too large. The session token grows as the number of partitions increases in the container.
@@ -115,15 +115,13 @@ Causes:
 
 Solution:
 1. Follow the [performance tips](performance-tips.md) and convert the application to Direct + TCP connection mode. Direct + TCP does not have the header size restriction that HTTP does, which avoids this issue.
-2. If the session token is the cause, then a temporary mitigation is to restart the application. Restarting the application instance will reset the session token. If the exceptions stop after the restart then it confirms the session token is the cause. It will eventually grow back to the size that will cause the exception.
-3. If the application cannot be converted to Direct + TCP and the continuation token is the cause then try setting the ResponseContinuationTokenLimitInKb option. The option can be found in the FeedOptions for v2 or the QueryRequestOptions in v3.
+2. If the session token is the cause, then a temporary mitigation is to restart the application. Restarting the application instance will reset the session token. If the exceptions stop after the restart, then it confirms the session token is the cause. It will eventually grow back to the size that will cause the exception.
+3. If the application cannot be converted to Direct + TCP and the continuation token is the cause, then try setting the ResponseContinuationTokenLimitInKb option. The option can be found in the FeedOptions for v2 or the QueryRequestOptions in v3 (see the sketch after this list).
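A minimal sketch of solutions 1 and 3 together, assuming the v3 .NET SDK (Microsoft.Azure.Cosmos); the endpoint, key, and database/container names are hypothetical placeholders:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class HeaderSizeWorkaround
{
    public static async Task QueryAsync(string endpoint, string key)
    {
        // Solution 1: Direct + TCP mode is not subject to the HTTP
        // header size limit.
        CosmosClient client = new CosmosClient(endpoint, key,
            new CosmosClientOptions { ConnectionMode = ConnectionMode.Direct });

        Container container = client.GetContainer("MyDatabase", "MyContainer");

        // Solution 3: cap how large a continuation token (in KB) the
        // service may return, so it stays within header limits.
        QueryRequestOptions options = new QueryRequestOptions
        {
            ResponseContinuationTokenLimitInKb = 1
        };

        FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
            "SELECT * FROM c",
            requestOptions: options);

        while (iterator.HasMoreResults)
        {
            FeedResponse<dynamic> page = await iterator.ReadNextAsync();
            // Process the page of results here.
        }
    }
}
```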
 
 <!--Anchors-->
 [Common issues and workarounds]: #common-issues-workarounds
 [Enable client SDK logging]: #logging
 [Request rate too large]: #request-rate-too-large
 [Request Timeouts]: #request-timeouts
 [Azure SNAT (PAT) port exhaustion]: #snat
-[Production check list]: #production-check-list
-
-
+[Production check list]: #production-check-list
