Commit dea610c

Merge pull request #274166 from gahl-levy/patch-77
Update error-codes-solutions.md
2 parents 3303fda + 22e91b1

File tree

1 file changed: +1 -0 lines changed


articles/cosmos-db/mongodb/error-codes-solutions.md

Lines changed: 1 addition & 0 deletions
@@ -23,6 +23,7 @@ The following article describes common errors and solutions for deployments using
 |------------|----------------------|--------------|-----------|
 | 2 | BadValue | The query requests a sort on a field that isn't indexed. One common cause is that the index path for the specified order-by item is excluded, or the order-by query has no corresponding composite index it can be served from. | Create a matching index (or composite index) for the sort being attempted; see the index sketch after the diff. |
 | 2 | Transaction isn't active | The multi-document transaction surpassed the fixed 5-second time limit. | Retry the multi-document transaction, or limit the scope of operations within it so that it completes within the 5-second time limit (see the retry sketch after the diff). |
+| 9 | FailedToParse | The Azure Cosmos DB server couldn't interpret or process a parameter because the provided input didn't conform to the expected or supported format. | Ensure only valid and supported parameters are included in your queries. |
 | 13 | Unauthorized | The request lacks the permissions to complete. | Ensure you're using the correct keys. |
 | 26 | NamespaceNotFound | The database or collection being referenced in the query can't be found. | Ensure your database/collection name precisely matches the name in your query. |
 | 50 | ExceededTimeLimit | The request exceeded the 60-second execution timeout. | There can be many causes for this error. One cause is that the currently allocated request units aren't sufficient to complete the request; this can be solved by increasing the request units of that collection or database. In other cases, the error can be worked around by splitting a large request into smaller ones. Retrying a write operation that received this error may result in a duplicate write. <br><br>If you're trying to delete large amounts of data without impacting RUs: <br>- Consider using TTL (based on timestamp): [Expire data with Azure Cosmos DB's API for MongoDB](time-to-live.md) <br>- Use cursor/batch size to perform the delete. You can fetch a small batch of documents at a time and delete them through a loop (see the batch-delete sketch after the diff). This helps you slowly delete data without impacting your production application. |
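
For the BadValue (code 2) sort error above, here is a minimal pymongo sketch of creating an index that can serve a sort. The connection string, database, collection (`orders`), and field names (`status`, `createdAt`) are illustrative assumptions, not part of the article.

```python
# A minimal sketch, assuming a pymongo client against the Azure Cosmos DB
# API for MongoDB; all names below are illustrative.
from pymongo import MongoClient, ASCENDING

client = MongoClient("<your-cosmos-connection-string>")  # placeholder
orders = client["mydb"]["orders"]                        # hypothetical collection

# A single-field sort needs an index on the sort field.
orders.create_index([("createdAt", ASCENDING)])

# A multi-field sort needs a composite index with the same keys, in order.
orders.create_index([("status", ASCENDING), ("createdAt", ASCENDING)])

# This sort can now be served from the composite index instead of failing
# with BadValue (code 2).
for doc in orders.find({"status": "open"}).sort(
    [("status", ASCENDING), ("createdAt", ASCENDING)]
):
    print(doc["_id"])
```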
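For the 5-second transaction limit ("Transaction isn't active"), here is a hedged sketch of a bounded retry loop using pymongo sessions. The `accounts` collection and the transfer operations are hypothetical.

```python
# A minimal retry sketch, assuming pymongo's session/transaction support;
# the "accounts" collection and the balance updates are illustrative.
from pymongo import MongoClient
from pymongo.errors import OperationFailure

client = MongoClient("<your-cosmos-connection-string>")  # placeholder
accounts = client["mydb"]["accounts"]

MAX_ATTEMPTS = 3

for attempt in range(1, MAX_ATTEMPTS + 1):
    try:
        with client.start_session() as session:
            with session.start_transaction():
                # Keep the transaction's scope small so it completes
                # within the fixed 5-second limit.
                accounts.update_one(
                    {"_id": "a"}, {"$inc": {"balance": -100}}, session=session
                )
                accounts.update_one(
                    {"_id": "b"}, {"$inc": {"balance": 100}}, session=session
                )
        break  # committed successfully
    except OperationFailure:
        # The timeout surfaces as an OperationFailure; retry, or re-raise
        # once the bounded attempts are exhausted.
        if attempt == MAX_ATTEMPTS:
            raise
```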
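For the ExceededTimeLimit (code 50) bulk-delete advice, here is a sketch of the cursor/batch delete loop. The `events` collection, `ts` field, cutoff value, and batch size are illustrative assumptions; the TTL index at the end follows the approach of the linked time-to-live article.

```python
# A minimal sketch of deleting in small batches so no single request hits
# the 60-second execution limit or exhausts provisioned request units.
# All names and values are illustrative.
from pymongo import MongoClient, ASCENDING

client = MongoClient("<your-cosmos-connection-string>")  # placeholder
events = client["mydb"]["events"]

stale = {"ts": {"$lt": 1700000000}}  # hypothetical filter for old documents

while True:
    # Fetch only the _ids of a small batch of matching documents.
    ids = [d["_id"] for d in events.find(stale, {"_id": 1}).limit(100)]
    if not ids:
        break
    events.delete_many({"_id": {"$in": ids}})

# Alternatively, a TTL index on the _ts system timestamp lets the service
# expire documents in the background without impacting your RUs; see the
# linked time-to-live article for details.
events.create_index([("_ts", ASCENDING)], expireAfterSeconds=604800)  # 7 days
```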
