
Commit 99625b6

Author: Kamil Sykora
Commit message: Updated links
1 parent c2c9930 commit 99625b6

File tree

1 file changed: +3 -3 lines changed


articles/app-service/routine-maintenance-downtime.md

Lines changed: 3 additions & 3 deletions
@@ -15,7 +15,7 @@ Azure App Service is a Platform as a Service (PaaS) for hosting web applications
## Background

- Our planned maintenance mechanism revolves around the architecture of the scale units that host the servers on which deployed applications run. Any given scale unit contains several different types of roles that all work together. The two roles that are most relevant to our planned maintenance update mechanism are the Worker and File Server roles. For a more detailed description of all the different roles and other details about the App Service architecture, review [Inside the Azure App Service Architecture](https://learn.microsoft.com/en-us/archive/msdn-magazine/2017/february/azure-inside-the-azure-app-service-architecture).
+ Our planned maintenance mechanism revolves around the architecture of the scale units that host the servers on which deployed applications run. Any given scale unit contains several different types of roles that all work together. The two roles that are most relevant to our planned maintenance update mechanism are the Worker and File Server roles. For a more detailed description of all the different roles and other details about the App Service architecture, review [Inside the Azure App Service Architecture](/archive/msdn-magazine/2017/february/azure-inside-the-azure-app-service-architecture).

There are different ways that an update strategy could be designed, and each design would have its own benefits and downsides. One of the strategies we use for major updates is that updates don't run on servers/roles that are currently used by our customers. Instead, our update process updates instances in waves, and the instances undergoing updates aren't used by applications. Instances being used by applications are gradually swapped out and replaced by updated instances. The resulting effect on an application is that the application experiences a start, or restart. From a statistical perspective and from empirical observations, application restarts are much less disruptive than performing maintenance on servers that are actively being used by applications.

@@ -52,7 +52,7 @@ Improving application start-up speed and ensuring it's consistently successful h
#### Application Initialization (AppInit)

- When an application starts on a Windows Worker, the Azure App Service infrastructure tries to determine when the application is ready to serve requests before external requests are routed to this worker. By default, a successful request to the root (/) of the application is a signal that the application is ready to serve requests. For some applications, this default behavior isn't sufficient to ensure that the application is fully warmed up. Typically that happens if the root of the application has limited dependencies but other paths rely on more libraries or external dependencies to work. The [IIS Application Initialization Module](https://learn.microsoft.com/en-us/iis/get-started/whats-new-in-iis-8/iis-80-application-initialization) works well to fine-tune warm-up behavior. At a high level, it allows the application owner to define which path or paths serve as indicators that the application is in fact ready to serve requests. For a detailed discussion of how to implement this mechanism, review the following article: [App Service Warm-Up Demystified](https://michaelcandido.com/app-service-warm-up-demystified/). When correctly implemented, this feature can result in zero downtime even if the application start-up is more complex.
+ When an application starts on a Windows Worker, the Azure App Service infrastructure tries to determine when the application is ready to serve requests before external requests are routed to this worker. By default, a successful request to the root (/) of the application is a signal that the application is ready to serve requests. For some applications, this default behavior isn't sufficient to ensure that the application is fully warmed up. Typically that happens if the root of the application has limited dependencies but other paths rely on more libraries or external dependencies to work. The [IIS Application Initialization Module](/iis/get-started/whats-new-in-iis-8/iis-80-application-initialization) works well to fine-tune warm-up behavior. At a high level, it allows the application owner to define which path or paths serve as indicators that the application is in fact ready to serve requests. For a detailed discussion of how to implement this mechanism, review the following article: [App Service Warm-Up Demystified](https://michaelcandido.com/app-service-warm-up-demystified/). When correctly implemented, this feature can result in zero downtime even if the application start-up is more complex.
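
To make the AppInit mechanism described above concrete, here's a minimal sketch of the kind of `web.config` fragment the IIS Application Initialization Module reads; the `/warmup` and `/api/ready` paths are hypothetical placeholders rather than paths prescribed by the article, and a real application would list whichever endpoints genuinely exercise its dependencies.

```xml
<configuration>
  <system.webServer>
    <!-- Paths requested during warm-up, before external traffic is routed to the worker. -->
    <applicationInitialization>
      <add initializationPage="/warmup" />
      <add initializationPage="/api/ready" />
    </applicationInitialization>
  </system.webServer>
</configuration>
```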

Linux applications can utilize a similar mechanism by using the WEBSITE_WARMUP_PATH application setting.
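
For the Linux case, a hedged sketch of setting that application setting with the Azure CLI; the app name, resource group, and `/warmup` path are placeholders, not values from the article.

```bash
# Point warm-up checks at a custom path instead of the root (/).
az webapp config appsettings set \
  --resource-group <resource-group> \
  --name <app-name> \
  --settings WEBSITE_WARMUP_PATH="/warmup"
```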

@@ -80,7 +80,7 @@ We recommend testing several scenarios
#### Start-up Logging

- Having the ability to retroactively troubleshoot start-up failures in production is a consideration that is separate from using testing to improve start-up consistency. However, it's equally or even more important, since despite all our efforts we might not be able to simulate every type of real-world failure in a test or QA environment. Start-up is also commonly the weakest area for logging, because initializing the logging infrastructure is itself another start-up activity that must be performed. For this reason, the order of operations for initializing the application is an important consideration and can become a chicken-and-egg problem. For example, if we need to configure logging based on a Key Vault reference and we fail to obtain the Key Vault value, how do we log that failure? Consider duplicating start-up logging with a separate logging mechanism that doesn't depend on any external factors, for example, logging these types of start-up failures to the local disk. Simply turning on a general logging feature, such as [.NET Core stdout logging](https://learn.microsoft.com/en-us/aspnet/core/test/troubleshoot-azure-iis?view=aspnetcore-3.1#aspnet-core-module-stdout-log-azure-app-service), can be counter-productive because it keeps generating log data even after start-up, which can fill up the disk over time; that feature is better used strategically for troubleshooting reproducible start-up failures.
+ Having the ability to retroactively troubleshoot start-up failures in production is a consideration that is separate from using testing to improve start-up consistency. However, it's equally or even more important, since despite all our efforts we might not be able to simulate every type of real-world failure in a test or QA environment. Start-up is also commonly the weakest area for logging, because initializing the logging infrastructure is itself another start-up activity that must be performed. For this reason, the order of operations for initializing the application is an important consideration and can become a chicken-and-egg problem. For example, if we need to configure logging based on a Key Vault reference and we fail to obtain the Key Vault value, how do we log that failure? Consider duplicating start-up logging with a separate logging mechanism that doesn't depend on any external factors, for example, logging these types of start-up failures to the local disk. Simply turning on a general logging feature, such as [.NET Core stdout logging](/aspnet/core/test/troubleshoot-azure-iis#aspnet-core-module-stdout-log-azure-app-service), can be counter-productive because it keeps generating log data even after start-up, which can fill up the disk over time; that feature is better used strategically for troubleshooting reproducible start-up failures.
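
As one possible shape for that kind of fallback, here's a minimal ASP.NET Core sketch (an assumption for illustration, not code from the article) that writes start-up failures to the App Service local disk before any configured logging pipeline exists; the file name and path are illustrative.

```csharp
using System;
using System.IO;
using Microsoft.AspNetCore.Builder;

var builder = WebApplication.CreateBuilder(args);

try
{
    // Everything that can fail before regular logging is available,
    // for example resolving configuration that uses Key Vault references.
    var app = builder.Build();
    app.MapGet("/", () => "OK");
    app.Run();
}
catch (Exception ex)
{
    // Fallback logging with no external dependencies: append the failure
    // to a file on the local disk (%HOME%\LogFiles on App Service).
    var home = Environment.GetEnvironmentVariable("HOME") ?? ".";
    var logFile = Path.Combine(home, "LogFiles", "startup-failure.log");
    Directory.CreateDirectory(Path.GetDirectoryName(logFile)!);
    File.AppendAllText(logFile, $"{DateTimeOffset.UtcNow:o} {ex}{Environment.NewLine}");
    throw; // rethrow so the platform still observes the failed start
}
```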

### Strategies for Minimizing Restarts
