Resiliency policies proactively prevent, detect, and recover from your container app failures. In this article, you learn how to apply resiliency policies for applications that use Dapr to integrate with different cloud services, like state stores, pub/sub message brokers, secret stores, and more.
You can configure resiliency policies like retries, timeouts, and circuit breakers for the following outbound and inbound operation directions via a Dapr component:
- **Outbound operations:** Calls from the Dapr sidecar to a component, such as:
  - Persisting or retrieving state
The following screenshot shows how an application uses a retry policy to attempt to recover from failed requests.
- [Timeouts](#timeouts)
- [Retries (HTTP)](#retries)
- [Circuit breakers](#circuit-breakers)
## Configure resiliency policies
You can choose whether to create resiliency policies using Bicep, the CLI, or the Azure portal.
The following resiliency example demonstrates all of the available configurations.
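The sketch below illustrates the general shape of such a policy in Bicep. The `timeoutPolicy`, `responseTimeoutInSeconds`, `httpRetryPolicy`, and `maxRetries` names are assumptions used here for illustration; the `retryBackOff` and `circuitBreakerPolicy` fields match the metadata tables later in this article.

```bicep
properties: {
  outbound: {
    // Assumed timeout policy shape: fail calls that take longer than 15 seconds.
    timeoutPolicy: {
      responseTimeoutInSeconds: 15
    }
    // Assumed HTTP retry policy shape: up to 5 retries, backing off between the
    // initial and maximum delays described in the retry metadata table.
    httpRetryPolicy: {
      maxRetries: 5
      retryBackOff: {
        initialDelayInMilliseconds: 1000
        maxIntervalInMilliseconds: 10000
      }
    }
    // Circuit breaker fields as documented in the circuit breakers section below.
    circuitBreakerPolicy: {
      intervalInSeconds: 15
      consecutiveErrors: 10
      timeoutInSeconds: 5
    }
  }
}
```

The same policy types can also be applied under `inbound:`, as shown in the circuit breaker example later in this article.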
In the resiliency policy pane, select **Outbound** or **Inbound** to set policies for that operation direction.
Click **Save** to save the resiliency policies.
> [!NOTE]
> Currently, you can only set timeout and retry policies via the Azure portal.
You can edit or remove the resiliency policies by selecting **Edit resiliency**.
:::image type="content" source="media/dapr-component-resiliency/edit-dapr-component-resiliency.png" alt-text="Screenshot showing how you can edit existing resiliency policies for the applicable Dapr component.":::
| Metadata | Required | Description | Example |
| -------- | --------- | ----------- | ------- |
| `retryBackOff.initialDelayInMilliseconds` | Yes | Delay between first error and first retry. | `1000` |
| `retryBackOff.maxIntervalInMilliseconds` | Yes | Maximum delay between retries. | `10000` |
### Circuit breakers
Define a `circuitBreakerPolicy` to monitor requests that cause elevated failure rates and to shut off all traffic to the impacted service when certain criteria are met.
```bicep
properties: {
  outbound: {
    circuitBreakerPolicy: {
      intervalInSeconds: 15
      consecutiveErrors: 10
      timeoutInSeconds: 5
    }
  }
  inbound: {
    circuitBreakerPolicy: {
      intervalInSeconds: 15
      consecutiveErrors: 10
      timeoutInSeconds: 5
    }
  }
}
```
| Metadata | Required | Description | Example |
| -------- | --------- | ----------- | ------- |
| `intervalInSeconds` | No | Cyclical period of time (in seconds) used by the circuit breaker to clear its internal counts. If not provided, the interval is set to the same value as provided for `timeoutInSeconds`. | `15` |
| `consecutiveErrors` | Yes | Number of request errors allowed to occur before the circuit trips and opens. | `10` |
| `timeoutInSeconds` | Yes | Time period (in seconds) of open state, directly after failure. | `5` |
#### Circuit breaker process
Specifying `consecutiveErrors` (the circuit trip condition as `consecutiveFailures > $(consecutiveErrors)-1`) sets the number of errors allowed to occur before the circuit trips and opens halfway.
The circuit waits in a half-open state for the `timeoutInSeconds` amount of time, during which `consecutiveErrors` requests must consecutively succeed.
- _If the requests succeed,_ the circuit closes.
- _If the requests fail,_ the circuit remains in a half-open state.
If you didn't set any `intervalInSeconds` value, the circuit resets to a closed state after the amount of time you set for `timeoutInSeconds`, regardless of consecutive request success or failure. If you set `intervalInSeconds` to `0`, the circuit never automatically resets, only moving from half-open to closed state by successfully completing `consecutiveErrors` requests in a row.
If you did set an `intervalInSeconds` value, it determines the amount of time before the circuit resets to a closed state, independent of whether the requests sent in the half-open state succeeded or not.
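As a sketch of the second case, setting `intervalInSeconds` to `0` keeps the circuit from resetting on a timer, so it only closes again after enough consecutive successes (the other values reuse the earlier circuit breaker example):

```bicep
outbound: {
  circuitBreakerPolicy: {
    // With intervalInSeconds set to 0, the circuit never resets automatically;
    // it moves from half-open back to closed only after `consecutiveErrors`
    // (here, 10) consecutive successful requests.
    intervalInSeconds: 0
    consecutiveErrors: 10
    timeoutInSeconds: 5
  }
}
```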
## Resiliency logs
From the *Monitoring* section of your container app, select **Logs**.