Fixes #452
Currently, when using `minMetricSamplesToAlarm`, the number of samples is
evaluated for a different period than the main alarm. This makes
monitoring sensitive to false positives, because not every breaching
datapoint is required to have a sufficient number of samples (see #452 for more details).
Moreover, the current approach to honouring
`minMetricSamplesToAlarm` is to create two extra alarms: one for
`NoSamples` and one top-level composite alarm. Each of these
incurs extra cost ($0.10 for the `NoSamples` alarm and $0.50 for the
composite; see https://aws.amazon.com/cloudwatch/pricing/ for
reference). This means that using `minMetricSamplesToAlarm` raises
the cost from $0.10 per alarm to $0.70 per alarm ($0.60 of overhead!).
It's possible to use a math expression instead. Rather than adding a separate
alarm for `NoSamples`, we can model it as a Sample Count metric, and
rather than the composite, we can use a `MathExpression` that
conditionally emits a datapoint based on the number of samples behind it.
Math expression-based alarms are charged per metric referenced in the
expression, so the cost comes down to $0.20 per alarm, roughly a 70% improvement.
Additionally, it reduces the overall number of alarms, making it easier to
stay within the CloudWatch alarm quota and decluttering the UI.
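
For illustration, a minimal standalone sketch of the shape of the resulting alarm (the `MyApp`/`Latency` metric, the `>= 5` sample threshold, and the alarm settings are placeholders for this example, not part of this PR):

```ts
import { Construct } from "constructs";
import {
  MathExpression,
  Metric,
  TreatMissingData,
} from "aws-cdk-lib/aws-cloudwatch";

// `scope`, the namespace, the metric name, and the thresholds are placeholders.
declare const scope: Construct;

// The metric the alarm actually watches.
const latency = new Metric({
  namespace: "MyApp",
  metricName: "Latency",
  statistic: "Average",
});

// The same metric aggregated as SampleCount, replacing the separate NoSamples alarm.
const samples = new Metric({
  namespace: "MyApp",
  metricName: "Latency",
  statistic: "SampleCount",
});

// Emit a datapoint only when it is backed by at least 5 samples;
// periods with fewer samples produce no data instead of a false positive.
const guarded = new MathExpression({
  expression: "IF(samples >= 5, value)",
  usingMetrics: { value: latency, samples },
});

// One alarm over two metrics (~$0.20) replaces the alarm + NoSamples alarm + composite (~$0.70).
guarded.createAlarm(scope, "LatencyAlarm", {
  threshold: 1000,
  evaluationPeriods: 3,
  treatMissingData: TreatMissingData.NOT_BREACHING,
});
```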
To avoid breaking customers that rely on `minMetricSamplesToAlarm`
generating these extra alarms (e.g.
#403),
this PR deprecates it and adds `minSampleCountToEvaluateDatapoint` with the
updated behaviour alongside it, as sketched below.
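
From the user's side, the switch would look roughly like this (the facade call and the error-count threshold are assumptions about the existing monitoring API; only the two sample-count options are the subject of this PR):

```ts
import { MonitoringFacade } from "cdk-monitoring-constructs";
import { IFunction } from "aws-cdk-lib/aws-lambda";

// Assumed surrounding setup; only the two sample-count options come from this PR.
declare const monitoring: MonitoringFacade;
declare const myFunction: IFunction;

monitoring.monitorLambdaFunction({
  lambdaFunction: myFunction,
  addFaultCountAlarm: {
    Warning: {
      maxErrorCount: 1,
      // minMetricSamplesToAlarm: 10,        // deprecated: extra NoSamples + composite alarms
      minSampleCountToEvaluateDatapoint: 10, // new: sample-count guard inside the alarm's math expression
    },
  },
});
```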
---
_By submitting this pull request, I confirm that my contribution is made
under the terms of the Apache-2.0 license_