learn-pr/aspnetcore/microservices-resiliency-aspnet-core/includes/2-application-infrastructure-resiliency.md
In designing resilient applications, you often have to choose between failing fast and graceful degradation. Failing fast means the application immediately throws an error or exception when something goes wrong, rather than trying to recover or work around the problem. This allows issues to be identified and fixed quickly. Graceful degradation means the application tries to keep operating in a limited capacity even when some component fails.
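As an illustration (not part of the original module), the contrast between the two approaches can be sketched in code. Here `_recommendationClient`, `Product`, and `_defaultRecommendations` are all hypothetical names introduced for the example:

```csharp
// Hypothetical example contrasting graceful degradation with failing fast.
public async Task<List<Product>> GetRecommendationsAsync()
{
    try
    {
        // Normal path: call the (assumed) downstream recommendations service.
        return await _recommendationClient.GetRecommendationsAsync();
    }
    catch (HttpRequestException)
    {
        // Graceful degradation: fall back to a cached default list so the
        // page still renders. A fail-fast design would rethrow here instead,
        // surfacing the error immediately to callers.
        return _defaultRecommendations;
    }
}
```

A fail-fast variant would omit the `catch` block entirely and let the exception propagate, which makes the failure visible sooner but takes the feature down with the dependency.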
In cloud-native applications, it's important for services to handle failures gracefully rather than fail fast. Since microservices are decentralized and independently deployable, partial failures are expected. Failing fast would allow a failure in one service to quickly take down dependent services, which reduces overall system resiliency. Instead, microservices should be coded to anticipate and tolerate both internal and external service failures. This graceful degradation allows the overall system to continue operating even if some services are disrupted. Critical user-facing functions can be sustained, avoiding a complete outage. Graceful failure also gives disrupted services time to recover or self-heal before they affect the rest of the system. For microservices-based applications, graceful degradation therefore aligns better with resiliency best practices like fault isolation and rapid recovery. It prevents local incidents from cascading across the system.
There are two fundamental approaches to supporting graceful degradation with resiliency: application and infrastructure. Each approach has benefits and drawbacks, and both can be appropriate depending on the situation. This module explains how to implement both *code-based* and *infrastructure-based* resiliency.
Running this command from the terminal in the app's project folder adds the package reference to the project file.
Then add the following using statement in your application's startup class:
```csharp
using Microsoft.Extensions.Http.Resilience;
```
You can now add a standard resilience strategy to your HttpClient service.
:::image type="content" source="../media/3-standard-reslience-strategies.png" alt-text="A diagram showing the strategies included in the Standard Resilience Handler: overall timeout, retry, bulkhead, circuit breaker, and attempt timeout." border="false":::
The request handler goes through each of these strategies in order, from left to right:
- **Total request timeout strategy**: This sets the total amount of time the request can take. You can think of it as the upper time limit for all the other strategies.
- **Retry strategy**: This strategy controls the number of retries, the backoff, and the jitter. These options can't exceed the total timeout set by the previous strategy.
- **Circuit breaker strategy**: This strategy opens the circuit if the failure ratio exceeds the configured threshold.
- **Attempt timeout strategy**: This strategy sets a timeout for each individual request. If the request takes longer than this time, an exception is thrown.
You can add this standard strategy, with all its default values, by adding this extension method:
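The referenced code block isn't included in this excerpt. A minimal sketch of registering the handler with `Microsoft.Extensions.Http.Resilience`, assuming a hypothetical `ProductService` typed client, might look like:

```csharp
// Hypothetical registration; ProductService is an assumed typed-client name.
builder.Services.AddHttpClient<ProductService>()
    // Adds the standard resilience pipeline (total timeout, retry,
    // circuit breaker, attempt timeout) with its default settings.
    .AddStandardResilienceHandler();
```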
The first line of the preceding code adds a standard resilience handler to the HttpClient. This uses all the default settings for the retry and circuit breaker strategies.
### Configure the resilience strategy
You can change the default values of any of the strategies by specifying new options.
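The configuration code itself is elided in this excerpt. A sketch of overriding the retry defaults, using the option names from `Microsoft.Extensions.Http.Resilience` (the `ProductService` client name is an assumption), could look like:

```csharp
// Hypothetical configuration overriding the standard retry defaults.
builder.Services.AddHttpClient<ProductService>()
    .AddStandardResilienceHandler(options =>
    {
        // Allow up to 10 retries instead of the default.
        options.Retry.MaxRetryAttempts = 10;
        // Use a linear backoff between attempts.
        options.Retry.BackoffType = DelayBackoffType.Linear;
        // Base delay of 1 second between retries.
        options.Retry.Delay = TimeSpan.FromSeconds(1);
    });
```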
This code changes the retry strategy defaults to allow a maximum of 10 retries, use a linear backoff, and use a base delay of 1 second.
The options you choose have to be compatible with each other. For example, if the total timeout remains at its default of 30 seconds, the retry options above cause an exception, because an exponential backoff setting would make the total time to complete the 10 retries 2,046 seconds. This is a runtime exception, not a compile-time error.
The following table lists the options available for each of the strategies.
:::image type="content" source="../media/3-calling-pattern-with-resiliency.png" alt-text="A sequence diagram showing the flow of events in an application using a resiliency strategy." border="false":::
The sequence diagram shows how the strategies work together in a standard resiliency strategy. The total timeout strategy sets the upper limit on how long a request can take. The retry strategy must then be configured with a maximum number of retries that can complete within that total timeout. The circuit breaker strategy opens the circuit if the failure ratio exceeds its threshold, and the attempt timeout strategy sets a timeout for each individual request; if a request takes longer than this, an exception is thrown.