Resiliency policies proactively prevent, detect, and recover from your container app failures. In this article, you learn how to apply resiliency policies for applications that use Dapr to integrate with different cloud services, like state stores, pub/sub message brokers, secret stores, and more.
You can configure resiliency policies like retries, timeouts, and circuit breakers for the following outbound and inbound operation directions via a Dapr component:
- **Outbound operations:** Calls from the Dapr sidecar to a component, such as:
| Metadata | Required | Description | Example |
| -------- | --------- | ----------- | ------- |
| `retryBackOff.initialDelayInMilliseconds` | Yes | Delay between first error and first retry. | `1000` |
| `retryBackOff.maxIntervalInMilliseconds` | Yes | Maximum delay between retries. | `10000` |
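
For context, these backoff settings sit inside a retry policy on the `outbound` or `inbound` block, alongside the circuit breaker example shown later in this article. The following is only an illustrative sketch: the `httpRetryPolicy` and `maxRetries` names are assumptions, while the `retryBackOff` values come from the table above.

```bicep
properties: {
  outbound: {
    // Hypothetical retry policy sketch. Only the retryBackOff.* settings are
    // documented in the table above; httpRetryPolicy and maxRetries are
    // assumed names used here for illustration.
    httpRetryPolicy: {
      maxRetries: 5
      retryBackOff: {
        initialDelayInMilliseconds: 1000
        maxIntervalInMilliseconds: 10000
      }
    }
  }
}
```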
### Circuit breakers
Define a `circuitBreakerPolicy` to monitor requests that are causing elevated failure rates and to shut off all traffic to the impacted service when certain criteria are met.
```bicep
properties: {
  outbound: {
    circuitBreakerPolicy: {
      intervalInSeconds: 15
      consecutiveErrors: 10
      timeoutInSeconds: 5
    }
  }
  inbound: {
    circuitBreakerPolicy: {
      intervalInSeconds: 15
      consecutiveErrors: 10
      timeoutInSeconds: 5
    }
  }
}
```
| Metadata | Required | Description | Example |
| -------- | --------- | ----------- | ------- |
| `intervalInSeconds` | No | Cyclical period of time (in seconds) used by the circuit breaker to clear its internal counts. If not provided, the interval is set to the same value as provided for `timeoutInSeconds`. | `15` |
| `consecutiveErrors` | Yes | Number of request errors allowed to occur before the circuit trips and opens. | `10` |
| `timeoutInSeconds` | Yes | Time period (in seconds) of open state, directly after failure. | `5` |
#### Circuit breaker process
Specifying `consecutiveErrors` (the circuit trip condition as `consecutiveFailures > $(consecutiveErrors)-1`) sets the number of errors allowed to occur before the circuit trips and opens halfway.
The circuit waits in the half-open state for the `timeoutInSeconds` amount of time, during which `consecutiveErrors` requests must consecutively succeed.
- _If the requests succeed,_ the circuit fully opens again.
- _If the requests fail,_ the circuit remains in a half-open state.
If you didn't set any `intervalInSeconds` value, the circuit resets to a closed state after the amount of time you set for `timeoutInSeconds`, regardless of consecutive request success or failure. If you set `intervalInSeconds` to `0`, the circuit never automatically resets, only moving from half-open to closed state by successfully completing `consecutiveErrors` requests in a row.
If you did set an `intervalInSeconds` value, it determines the amount of time before the circuit resets to the closed state, regardless of whether the requests sent in the half-open state succeeded.
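
As a minimal sketch of the `intervalInSeconds: 0` case described above, reusing the shape of the earlier Bicep example:

```bicep
outbound: {
  circuitBreakerPolicy: {
    // With intervalInSeconds set to 0, the circuit never resets on a timer;
    // it only closes after consecutiveErrors requests succeed in a row while
    // in the half-open state.
    intervalInSeconds: 0
    consecutiveErrors: 10
    timeoutInSeconds: 5
  }
}
```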
## Resiliency logs
From the *Monitoring* section of your container app, select **Logs**.