
Commit a88508b

Line edits2
1 parent b6f6455 commit a88508b


2 files changed (+7, -7 lines)


learn-pr/aspnetcore/microservices-resiliency-aspnet-core/includes/2-application-infrastructure-resiliency.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ Because microservice environments can be volatile, design your apps to expect an

In designing resilient applications, you often have to choose between failing fast and graceful degradation. Failing fast means the application immediately throws an error or exception when something goes wrong, rather than trying to recover or work around the problem. This allows issues to be identified and fixed quickly. Graceful degradation means the application tries to keep operating in a limited capacity even when a component fails.

- In cloud-native applications it's important for services to handle failures gracefully rather than fail fast. Since microservices are decentralized and independently deployable, partial failures are expected. Failing fast would allow a failure in one service to quickly take down dependent services, which reduce overall system resiliency. Instead, microservices should be coded to anticipate and tolerate both internal and external service failures. This graceful degradation allows the overall system to continue operating even if some services are disrupted. Critical user-facing functions can be sustained, avoiding a complete outage. Graceful failure also allows disturbed services time to recover or self-heal before impacting the rest of the system. So for microservices-based applications, graceful degradation better aligns with resiliency best practices like fault isolation and rapid recovery. It prevents local incidents from cascading across the system.
+ In cloud-native applications, it's important for services to handle failures gracefully rather than fail fast. Since microservices are decentralized and independently deployable, partial failures are expected. Failing fast would allow a failure in one service to quickly take down dependent services, which reduces overall system resiliency. Instead, microservices should be coded to anticipate and tolerate both internal and external service failures. This graceful degradation allows the overall system to continue operating even if some services are disrupted. Critical user-facing functions can be sustained, avoiding a complete outage. Graceful failure also gives disrupted services time to recover or self-heal before they affect the rest of the system. For microservices-based applications, then, graceful degradation better aligns with resiliency best practices like fault isolation and rapid recovery. It prevents local incidents from cascading across the system.

There are two fundamental approaches to supporting graceful degradation: application-based and infrastructure-based resiliency. Each approach has benefits and drawbacks, and both can be appropriate depending on the situation. This module explains how to implement both *code-based* and *infrastructure-based* resiliency.

learn-pr/aspnetcore/microservices-resiliency-aspnet-core/includes/3-implement-application-resiliency.md

Lines changed: 6 additions & 6 deletions
@@ -18,7 +18,7 @@ dotnet add package Microsoft.Extensions.Http.Resilience

Running this command from the terminal in the app's project folder adds the package reference to the project file.

- In your application's startup class then add the following using statement:
+ Then add the following using statement in your application's startup class:

```csharp
using Microsoft.Extensions.Http.Resilience;
```
@@ -30,12 +30,12 @@ You can now add a standard resilience strategy to your HttpClient service. .NET

:::image type="content" source="../media/3-standard-reslience-strategies.png" alt-text="A diagram showing the strategies included in the standard resilience handler. These are total request timeout, retry, bulkhead, circuit breaker, and attempt timeout." border="false":::

- The request handler goes through each of the above strategies in order form left to right:
+ The request handler goes through each of these strategies in order, from left to right:

- **Total request timeout strategy**: This sets the total amount of time that the request can take. You can think of this as the upper time limit for all the other strategies.
- **Retry strategy**: This strategy controls the number of retries, the backoff type, and jitter. These options can't exceed the total timeout set by the previous strategy.
- **Circuit breaker strategy**: This strategy opens the circuit if the failure ratio exceeds the threshold.
- - **Attempt timeout strategy**: This strategy sets a timeout for each individual request. If the request takes longer than this time then an exception is thrown.
+ - **Attempt timeout strategy**: This strategy sets a timeout for each individual request. If the request takes longer than this time, an exception is thrown.
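As an illustrative sketch (not code from this module), each of these strategies maps to a property on the options object that `AddStandardResilienceHandler` exposes. The client name `catalog` and the specific values here are hypothetical:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Http.Resilience;

var services = new ServiceCollection();

services.AddHttpClient("catalog") // hypothetical named client
    .AddStandardResilienceHandler(options =>
    {
        // Total request timeout: the upper time limit for everything below.
        options.TotalRequestTimeout.Timeout = TimeSpan.FromSeconds(30);

        // Circuit breaker: open the circuit when the failure ratio
        // over the sampling window exceeds this threshold.
        options.CircuitBreaker.FailureRatio = 0.1;

        // Attempt timeout: the limit for each individual try; exceeding
        // it throws a timeout exception for that attempt.
        options.AttemptTimeout.Timeout = TimeSpan.FromSeconds(10);
    });
```

The retry strategy is configured the same way, through `options.Retry`.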

You can add this standard strategy, with all its default values, by adding this extension method:

@@ -52,7 +52,7 @@ builder.Services.AddHttpClient<ServiceBeingCalled>(httpClient =>
}).AddStandardResilienceHandler();
```

- The first line of the above code adds a standard resilience handler to the HTTPClient. This will use all the default settings for the retry and circuit breaker strategies.
+ The first line of the preceding code adds a standard resilience handler to the HttpClient. It uses the default settings for the retry and circuit breaker strategies.

### Configure the resilience strategy

@@ -68,7 +68,7 @@ You can change the default values of any of the strategies by specifying new opt

This code changes the retry strategy defaults to a maximum of 10 retries, a linear backoff, and a base delay of 1 second.
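The configuration that paragraph describes falls outside this hunk's context. As a hedged sketch of what it likely looks like (the client name `catalog` is hypothetical; the option names come from `Microsoft.Extensions.Http.Resilience` and Polly):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Http.Resilience;
using Polly; // DelayBackoffType lives in the Polly namespace

var services = new ServiceCollection();

services.AddHttpClient("catalog") // hypothetical named client
    .AddStandardResilienceHandler(options =>
    {
        options.Retry.MaxRetryAttempts = 10;                 // default is 3
        options.Retry.BackoffType = DelayBackoffType.Linear; // default is exponential
        options.Retry.Delay = TimeSpan.FromSeconds(1);       // base delay between attempts
    });
```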

- The options you choose have to be compatible with each other. For example, if the total time remains as its default of 30 seconds, then the retry options above will cause an exception. This is an error because the exponential backoff setting would cause the total time to complete the 10 retries to be 2046 seconds. This is a runtime exception, not a compile time error.
+ The options you choose have to be compatible with each other. For example, if the total request timeout remains at its default of 30 seconds, the preceding retry options cause an exception, because with exponential backoff the total time to complete the 10 retries would be 2,046 seconds. This is a runtime exception, not a compile-time error.
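A quick check of that 2,046-second figure, assuming the backoff roughly doubles a 2-second base delay on each attempt (the standard handler's default base delay; real Polly exponential backoff also adds jitter, so actual delays vary):

```csharp
using System;

// Illustrative arithmetic only: attempt n (1-based) waits roughly
// baseDelay * 2^(n-1), so 10 retries wait 2 + 4 + ... + 1024 seconds.
double baseDelaySeconds = 2;
double totalSeconds = 0;
for (int n = 1; n <= 10; n++)
{
    totalSeconds += baseDelaySeconds * Math.Pow(2, n - 1);
}
Console.WriteLine(totalSeconds); // 2046, far beyond the 30-second total timeout
```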

The following table lists the options available for each of the strategies.

@@ -108,4 +108,4 @@ The following table lists the options available for each of the strategies.

:::image type="content" source="../media/3-calling-pattern-with-resiliency.png" alt-text="A sequence diagram showing the flow of events in an application using a resiliency strategy." border="false":::

- The sequence diagram above shows how each of the strategies work together in a standard resiliency strategy. To begin with the limiting factor of how long a request can take is controlled by the total timeout strategy. The retry strategy must then be set to have a maximum number of retries that will complete within the total timeout. The circuit breaker strategy will open the circuit if the failure ratio exceeds the threshold set for it. The attempt timeout strategy sets a timeout for each individual request. If the request takes longer than this time then an exception is thrown.
+ The sequence diagram shows how the strategies work together in a standard resiliency strategy. To begin with, the total timeout strategy sets the limit on how long a request can take. The retry strategy must then be set to a maximum number of retries that can complete within the total timeout. The circuit breaker strategy opens the circuit if the failure ratio exceeds its threshold. The attempt timeout strategy sets a timeout for each individual request; if the request takes longer than this time, an exception is thrown.
