Commit 4f86237

initial add for circuit breaker policy
Signed-off-by: Hannah Hunter <[email protected]>
1 parent 853b650 commit 4f86237

articles/container-apps/dapr-component-resiliency.md

Lines changed: 63 additions & 3 deletions
@@ -16,7 +16,7 @@ ms.custom: ignite-fall-2023, ignite-2023, devx-track-azurecli
 
 Resiliency policies proactively prevent, detect, and recover from your container app failures. In this article, you learn how to apply resiliency policies for applications that use Dapr to integrate with different cloud services, like state stores, pub/sub message brokers, secret stores, and more.
 
-You can configure resiliency policies like retries and timeouts for the following outbound and inbound operation directions via a Dapr component:
+You can configure resiliency policies like retries, timeouts, and circuit breakers for the following outbound and inbound operation directions via a Dapr component:
 
 - **Outbound operations:** Calls from the Dapr sidecar to a component, such as:
   - Persisting or retrieving state
@@ -58,7 +58,12 @@ resource myPolicyDoc 'Microsoft.App/managedEnvironments/daprComponents/resilienc
           initialDelayInMilliseconds: 1000
           maxIntervalInMilliseconds: 10000
         }
-      }
+      }
+      circuitBreakerPolicy: {
+        intervalInSeconds: 15
+        consecutiveErrors: 10
+        timeoutInSeconds: 5
+      }
     }
     inboundPolicy: {
       timeoutPolicy: {
@@ -70,7 +75,12 @@ resource myPolicyDoc 'Microsoft.App/managedEnvironments/daprComponents/resilienc
           initialDelayInMilliseconds: 1000
           maxIntervalInMilliseconds: 10000
         }
-      }
+      }
+      circuitBreakerPolicy: {
+        intervalInSeconds: 15
+        consecutiveErrors: 10
+        timeoutInSeconds: 5
+      }
     }
   }
 }
@@ -125,12 +135,20 @@ outboundPolicy:
       maxIntervalInMilliseconds: 10000
   timeoutPolicy:
     responseTimeoutInSeconds: 15
+  circuitBreakerPolicy:
+    intervalInSeconds: 15
+    consecutiveErrors: 10
+    timeoutInSeconds: 5
 inboundPolicy:
   httpRetryPolicy:
     maxRetries: 3
     retryBackOff:
       initialDelayInMilliseconds: 500
       maxIntervalInMilliseconds: 5000
+  circuitBreakerPolicy:
+    intervalInSeconds: 15
+    consecutiveErrors: 10
+    timeoutInSeconds: 5
 ```
 
 ### Update specific policies
@@ -256,6 +274,48 @@ properties: {
 | `retryBackOff.initialDelayInMilliseconds` | Yes | Delay between first error and first retry. | `1000` |
 | `retryBackOff.maxIntervalInMilliseconds` | Yes | Maximum delay between retries. | `10000` |
 
+### Circuit breakers
+
+Define a `circuitBreakerPolicy` to monitor requests causing elevated failure rates and shut off all traffic to the impacted service when certain criteria are met.
+
+```bicep
+properties: {
+  outbound: {
+    circuitBreakerPolicy: {
+      intervalInSeconds: 15
+      consecutiveErrors: 10
+      timeoutInSeconds: 5
+    }
+  },
+  inbound: {
+    circuitBreakerPolicy: {
+      intervalInSeconds: 15
+      consecutiveErrors: 10
+      timeoutInSeconds: 5
+    }
+  }
+}
+```
+
+| Metadata | Required | Description | Example |
+| -------- | --------- | ----------- | ------- |
+| `intervalInSeconds` | No | Cyclical period of time (in seconds) used by the circuit breaker to clear its internal counts. If not provided, the interval is set to the same value as provided for `timeoutInSeconds`. | `15` |
+| `consecutiveErrors` | Yes | Number of request errors allowed to occur before the circuit trips and opens. | `10` |
+| `timeoutInSeconds` | Yes | Time period (in seconds) of open state, directly after failure. | `5` |
+
+#### Circuit breaker process
+
+Specifying `consecutiveErrors` (the circuit trip condition, `consecutiveFailures > $(consecutiveErrors)-1`) sets the number of errors allowed to occur before the circuit trips and moves to a half-open state.
+
+The circuit waits half-open for the `timeoutInSeconds` amount of time, during which the `consecutiveErrors` number of requests must consecutively succeed.
+- _If the requests succeed,_ the circuit closes again.
+- _If the requests fail,_ the circuit remains in a half-open state.
+
+If you didn't set any `intervalInSeconds` value, the circuit resets to a closed state after the amount of time you set for `timeoutInSeconds`, regardless of consecutive request success or failure. If you set `intervalInSeconds` to `0`, the circuit never automatically resets, only moving from the half-open to the closed state by successfully completing `consecutiveErrors` requests in a row.
+
+If you did set an `intervalInSeconds` value, it determines the amount of time before the circuit resets to a closed state, independent of whether the requests sent in the half-open state succeeded or not.
+
 ## Resiliency logs
 
 From the *Monitoring* section of your container app, select **Logs**.
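Editor's note on the new "Circuit breaker process" text: the transitions it describes can be summarized in a short sketch. The Python below is a minimal, illustrative model only, under the assumption that the prose above is the full specification; it is not Dapr's or Azure Container Apps' implementation, and the class, method, and attribute names are invented for this example.

```python
# Minimal, illustrative model of the transitions described in the
# "Circuit breaker process" section. Not Dapr's implementation; names
# and structure here are assumptions for illustration only.
import time


class CircuitBreakerSketch:
    def __init__(self, consecutive_errors=10, timeout_in_seconds=5, interval_in_seconds=None):
        self.threshold = consecutive_errors
        # If intervalInSeconds is not provided, it defaults to timeoutInSeconds.
        self.reset_after = timeout_in_seconds if interval_in_seconds is None else interval_in_seconds
        self.state = "closed"            # "closed" (traffic flows) or "half-open" (tripped)
        self.consecutive_failures = 0    # counted while closed
        self.consecutive_successes = 0   # counted while half-open
        self.tripped_at = None

    def record(self, success):
        """Record the outcome of one request and apply the transition rules."""
        now = time.monotonic()
        if self.state == "closed":
            self.consecutive_failures = 0 if success else self.consecutive_failures + 1
            # Trip condition: consecutiveFailures > consecutiveErrors - 1
            if self.consecutive_failures > self.threshold - 1:
                self.state = "half-open"
                self.consecutive_successes = 0
                self.tripped_at = now
            return
        # Half-open: consecutiveErrors successes in a row close the circuit again.
        self.consecutive_successes = self.consecutive_successes + 1 if success else 0
        if self.consecutive_successes >= self.threshold:
            self._close()
        # A non-zero interval (or the timeout, when no interval is set) resets the
        # circuit to closed after that time, regardless of success or failure.
        # An interval of 0 never auto-resets.
        elif self.reset_after > 0 and now - self.tripped_at >= self.reset_after:
            self._close()

    def _close(self):
        self.state = "closed"
        self.consecutive_failures = 0
```

For example, `CircuitBreakerSketch(consecutive_errors=10, timeout_in_seconds=5, interval_in_seconds=15)` mirrors the policy values used in the snippets added by this commit.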
