Commit fb60c90

Adding language specific links
1 parent ab888cc commit fb60c90

1 file changed: +24 -17 lines changed


articles/cosmos-db/sql/conceptual-resilient-sdk-applications.md

Lines changed: 24 additions & 17 deletions
```diff
@@ -3,7 +3,7 @@ title: Designing resilient applications with Azure Cosmos DB SDKs
 description: Learn how to build resilient applications using the Azure Cosmos DB SDKs and what all are the expected error status codes to retry on.
 author: ealsur
 ms.service: cosmos-db
-ms.date: 03/25/2022
+ms.date: 05/05/2022
 ms.author: maquaran
 ms.subservice: cosmosdb-sql
 ms.topic: conceptual
```
```diff
@@ -50,23 +50,23 @@ Your application should be resilient to a [certain degree](#when-to-contact-cust
 
 The short answer is **yes**. But not all errors make sense to retry on, some of the error or status codes aren't transient. The table below describes them in detail:
 
-| Status Code | Should add retry | Description |
+| Status Code | Should add retry | SDKs retry | Description |
 |----------|-------------|-------------|
-| 400 | No | [Bad request](troubleshoot-bad-request.md) |
-| 401 | No | [Not authorized](troubleshoot-unauthorized.md) |
-| 403 | Optional | [Forbidden](troubleshoot-forbidden.md) |
-| 404 | No | [Resource is not found](troubleshoot-not-found.md) |
-| 408 | Yes | [Request timed out](troubleshoot-dot-net-sdk-request-timeout.md) |
-| 409 | No | Conflict failure is when the identity (ID and partition key) provided for a resource on a write operation has been taken by an existing resource or when a [unique key constraint](../unique-keys.md) has been violated. |
-| 410 | Yes | Gone exceptions (transient failure that shouldn't violate SLA) |
-| 412 | No | Precondition failure is where the operation specified an eTag that is different from the version available at the server. It's an [optimistic concurrency](database-transactions-optimistic-concurrency.md#optimistic-concurrency-control) error. Retry the request after reading the latest version of the resource and updating the eTag on the request.
-| 413 | No | [Request Entity Too Large](../concepts-limits.md#per-item-limits) |
-| 429 | Yes | It's safe to retry on a 429. Review the [guide to troubleshoot HTTP 429](troubleshoot-request-rate-too-large.md).|
-| 449 | Yes | Transient error that only occurs on write operations, and is safe to retry. This can point to a design issue where too many concurrent operations are trying to update the same object in Cosmos DB. |
-| 500 | No | The operation failed due to an unexpected service error. Contact support by filing an [Azure support issue](https://aka.ms/azure-support). |
-| 503 | Yes | [Service unavailable](troubleshoot-service-unavailable.md) |
-
-In the table above, all the status codes marked with **Yes** should have some degree of retry coverage in your application.
+| 400 | No | No | [Bad request](troubleshoot-bad-request.md) |
+| 401 | No | No | [Not authorized](troubleshoot-unauthorized.md) |
+| 403 | Optional | No | [Forbidden](troubleshoot-forbidden.md) |
+| 404 | No | No | [Resource is not found](troubleshoot-not-found.md) |
+| 408 | Yes | Yes | [Request timed out](troubleshoot-dot-net-sdk-request-timeout.md) |
+| 409 | No | No | Conflict failure is when the identity (ID and partition key) provided for a resource on a write operation has been taken by an existing resource or when a [unique key constraint](../unique-keys.md) has been violated. |
+| 410 | Yes | Yes | Gone exceptions (transient failure that shouldn't violate SLA) |
+| 412 | No | No | Precondition failure is where the operation specified an eTag that is different from the version available at the server. It's an [optimistic concurrency](database-transactions-optimistic-concurrency.md#optimistic-concurrency-control) error. Retry the request after reading the latest version of the resource and updating the eTag on the request.
+| 413 | No | No | [Request Entity Too Large](../concepts-limits.md#per-item-limits) |
+| 429 | Yes | Yes | It's safe to retry on a 429. Review the [guide to troubleshoot HTTP 429](troubleshoot-request-rate-too-large.md).|
+| 449 | Yes | Yes | Transient error that only occurs on write operations, and is safe to retry. This can point to a design issue where too many concurrent operations are trying to update the same object in Cosmos DB. |
+| 500 | No | No | The operation failed due to an unexpected service error. Contact support by filing an [Azure support issue](https://aka.ms/azure-support). |
+| 503 | Yes | Yes | [Service unavailable](troubleshoot-service-unavailable.md) |
+
+In the table above, all the status codes marked with **Yes** on the second column should have some degree of retry coverage in your application.
 
 ### HTTP 403
 
```
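A minimal sketch of how an application could encode the retry guidance from the updated table; the class and method names are illustrative and are not part of this commit or of any Cosmos DB SDK:

```java
// Illustrative only: maps the status codes from the table above to an
// application-level "should I add a retry?" decision. Not part of any SDK.
public final class RetryClassification {

    /** Returns true when the table marks the status code as safe to retry. */
    public static boolean shouldRetry(int statusCode) {
        switch (statusCode) {
            case 408: // Request timed out
            case 410: // Gone (transient, shouldn't violate SLA)
            case 429: // Request rate too large
            case 449: // Transient write conflict under high concurrency
            case 503: // Service unavailable
                return true;
            case 403: // Forbidden: retry is optional and scenario-specific
            default:  // 400, 401, 404, 409, 412, 413, 500: don't blindly retry
                return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("408 retryable? " + shouldRetry(408)); // true
        System.out.println("409 retryable? " + shouldRetry(409)); // false
    }
}
```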

```diff
@@ -97,6 +97,13 @@ Because of the nature of timeouts and connectivity failures, these might not app
 
 It's recommended for applications to have their own retry policy for these scenarios and take into consideration how to resolve write timeouts. For example, retrying on a Create timeout can yield an HTTP 409 (Conflict) if the previous request did reach the service, but it would succeed if it didn't.
 
+### Language specific implementation details
+
+For further implementation details regarding a language see:
+
+* [.NET SDK implementation information](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/docs/)
+* [Java SDK implementation information](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos/docs/)
+
 ## Do retries affect my latency?
 
 From the client perspective, any retries will affect the end to end latency of an operation. When your application P99 latency is being affected, understanding the retries that are happening and how to address them is important.
```
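A rough sketch of the write-timeout caveat described in the diff (a retried Create surfacing a 409 when the first, timed-out attempt actually reached the service), assuming the Java SDK's `CosmosContainer` and `CosmosException` types; the attempt count and class name are illustrative:

```java
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.CosmosException;

// Sketch only: application-side retry for a Create that may time out.
// Assumes the item's id is generated once per logical write, so a 409 on a
// retry means the earlier, timed-out attempt already reached the service.
public final class CreateWithTimeoutRetry {

    public static <T> void create(CosmosContainer container, T item) {
        final int maxAttempts = 3;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                container.createItem(item);
                return; // write succeeded
            } catch (CosmosException ex) {
                if (ex.getStatusCode() == 409 && attempt > 1) {
                    // The earlier attempt landed; the item already exists.
                    return;
                }
                boolean transientFailure = ex.getStatusCode() == 408
                        || ex.getStatusCode() == 410
                        || ex.getStatusCode() == 503;
                if (!transientFailure || attempt == maxAttempts) {
                    throw ex; // not retryable here, or retries exhausted
                }
                // Otherwise fall through and retry the create.
            }
        }
    }
}
```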

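To understand which calls retries are slowing down, one approach is to time each operation and log the SDK diagnostics for slow calls; this sketch assumes the Java SDK's response `getDiagnostics()`, and the latency threshold and names are illustrative:

```java
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosItemResponse;
import com.azure.cosmos.models.PartitionKey;

// Sketch only: time a point read and dump the SDK diagnostics when the
// end-to-end latency crosses a threshold, to inspect any retries that happened.
public final class LatencyProbe {

    public static <T> T readWithLatencyLogging(
            CosmosContainer container, String id, PartitionKey pk, Class<T> type) {
        long start = System.nanoTime();
        CosmosItemResponse<T> response = container.readItem(id, pk, type);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        if (elapsedMs > 100) { // illustrative tail-latency threshold
            // The diagnostics string includes per-attempt and retry details.
            System.err.println("Slow read (" + elapsedMs + " ms): "
                    + response.getDiagnostics());
        }
        return response.getItem();
    }
}
```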