Since #83 back in April 2013, Riff Raff autoscaling deploys have always disabled ASG scaling alarms at the start of a deploy (SuspendAlarmNotifications), and only re-enabled them at the end of the deploy, once deployment has successfully completed:
riff-raff/magenta-lib/src/main/scala/magenta/deployment_type/AutoScaling.scala (lines 170 to 205 at 60eb09f):

```scala
SuspendAlarmNotifications(autoScalingGroup, target.region),
TagCurrentInstancesWithTerminationTag(autoScalingGroup, target.region),
ProtectCurrentInstances(autoScalingGroup, target.region),
DoubleSize(autoScalingGroup, target.region),
HealthcheckGrace(
  autoScalingGroup,
  target.region,
  healthcheckGrace(pkg, target, reporter) * 1000
),
WaitForStabilization(
  autoScalingGroup,
  secondsToWait(pkg, target, reporter) * 1000,
  target.region
),
WarmupGrace(
  autoScalingGroup,
  target.region,
  warmupGrace(pkg, target, reporter) * 1000
),
WaitForStabilization(
  autoScalingGroup,
  secondsToWait(pkg, target, reporter) * 1000,
  target.region
),
CullInstancesWithTerminationTag(autoScalingGroup, target.region),
TerminationGrace(
  autoScalingGroup,
  target.region,
  terminationGrace(pkg, target, reporter) * 1000
),
WaitForStabilization(
  autoScalingGroup,
  secondsToWait(pkg, target, reporter) * 1000,
  target.region
),
ResumeAlarmNotifications(autoScalingGroup, target.region)
```
There are good reasons for this, but it leads to two problems:
- Even during a successful deploy, there's a window of ~3 minutes when the app cannot scale
- More severely, if a deploy fails, the app is left with ASG scaling alarms disabled until developers manually re-enable them (or run another deploy) - the response time here can be much longer, even hours.
For apps that see sudden, unpredictable bursts of traffic and that deploy many times a day, this adds up to a significant amount of time with scaling disabled - eventually a deploy will coincide with a traffic spike that the app is unable to respond to.
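The failure mode above can be sketched as a simulation: tasks run in order, Riff Raff stops at the first failing task, and `ResumeAlarmNotifications` only runs if every preceding task succeeds. This is a minimal, self-contained sketch with a simplified step list (not the real Riff Raff task machinery):

```scala
object AlarmWindowSketch {
  // Simplified deploy step names, mirroring the ordering in AutoScaling.scala.
  val deploySteps: List[String] = List(
    "SuspendAlarmNotifications",
    "DoubleSize",
    "CullInstancesWithTerminationTag",
    "WaitForStabilization",
    "ResumeAlarmNotifications"
  )

  /** Simulate running the steps in order, stopping when `failingStep` is
    * reached (None means the deploy succeeds). Returns whether ASG alarms
    * are enabled once the run has finished.
    */
  def alarmsEnabledAfter(
      steps: List[String],
      failingStep: Option[String]
  ): Boolean =
    steps
      .takeWhile(step => !failingStep.contains(step))
      .foldLeft(true) {
        case (_, "SuspendAlarmNotifications") => false
        case (_, "ResumeAlarmNotifications")  => true
        case (enabled, _)                     => enabled
      }

  def main(args: Array[String]): Unit = {
    // A successful deploy re-enables alarms at the end.
    assert(alarmsEnabledAfter(deploySteps, None))
    // A deploy that fails at the final WaitForStabilization (as in the
    // outage below) leaves alarms disabled indefinitely.
    assert(!alarmsEnabledAfter(deploySteps, Some("WaitForStabilization")))
    println("ok")
  }
}
```

The second assertion is exactly the hazard this issue describes: any failure between suspend and resume strands the ASG with alarms off.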
Ophan Tracker outage - 22nd May 2024
- 16:04 - Ophan PR #6109, a minor change to the Ophan Dashboard, is merged. This triggers a deploy of all Ophan apps, including the Ophan Tracker.
- 16:11 - An App Notification for the major news story "Rishi Sunak will call general election for July this afternoon in surprise move, senior sources tell the Guardian" is sent out.
- 16:12:02 - The Riff Raff deploy disables auto-scaling alarms, with the ASG size set to 3 instances.
- 16:13:32 - The Ophan Tracker's scale-up alarm enters ALARM state. The Tracker ASG would normally scale up after 2 consecutive ALARM states 1 minute apart, but ASG scale-up has been disabled by the deploy.
- 16:14:26 - The deploy culls the 3 old instances, taking the ASG back to 3 instances - the cluster is now severely under-scaled for the spike in traffic.
- 16:14:37 - The deploy starts the final WaitForStabilization, the last step before alarms are re-enabled. The servers are so overloaded that they never stabilise, and the step has a 15-minute timeout.
- 16:29:42 - The deploy finally fails as WaitForStabilization times out, and the alarms are left disabled.
- 17:19:30 - The Tracker ASG is manually scaled up to 6 instances by the Ophan team.
- 17:23:12 - The Tracker ASG stops terminating unhealthy instances - the outage has lasted just over 1 hour.
- 17:30:41 - Alarms are finally re-enabled by the Ophan team performing a new deploy.
In this case, had ResumeAlarmNotifications run immediately before the final WaitForStabilization, the deploy would still have failed, but the ASG could have scaled up within a minute or two of 16:14 - a roughly 2-minute outage rather than a 1-hour one.
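The proposed reordering can be expressed as a pure transformation of the step list: remove `ResumeAlarmNotifications` from the end and re-insert it just before the final `WaitForStabilization`. This is a hypothetical sketch over simplified step names, not a patch to AutoScaling.scala itself:

```scala
object ReorderSketch {
  // The tail of the current step order: resume only happens after the
  // final stabilisation check succeeds.
  val currentTail: List[String] = List(
    "CullInstancesWithTerminationTag",
    "TerminationGrace",
    "WaitForStabilization",
    "ResumeAlarmNotifications"
  )

  /** Move ResumeAlarmNotifications to just before the last
    * WaitForStabilization, so a stabilisation timeout can no longer
    * leave alarms disabled.
    */
  def moveResumeBeforeFinalWait(steps: List[String]): List[String] = {
    val withoutResume = steps.filterNot(_ == "ResumeAlarmNotifications")
    val i = withoutResume.lastIndexOf("WaitForStabilization")
    val (before, after) = withoutResume.splitAt(i)
    before ::: ("ResumeAlarmNotifications" :: after)
  }

  def main(args: Array[String]): Unit = {
    assert(
      moveResumeBeforeFinalWait(currentTail) == List(
        "CullInstancesWithTerminationTag",
        "TerminationGrace",
        "ResumeAlarmNotifications",
        "WaitForStabilization"
      )
    )
    println("ok")
  }
}
```

With this ordering, alarms are live again by the time the final stabilisation wait begins, so even a 15-minute timeout failure leaves the ASG able to scale.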