/alerting/fundamentals/alert-rules/organising-alerts"
-[organising-alerts]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/organising-alerts"
{{% /docs/reference %}}
diff --git a/docs/sources/alerting/fundamentals/alert-rules/alert-instances.md b/docs/sources/alerting/fundamentals/alert-rules/alert-instances.md
deleted file mode 100644
index f7a3793c1d4..00000000000
--- a/docs/sources/alerting/fundamentals/alert-rules/alert-instances.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/alert-instances/
-description: Learn about alert instances
-keywords:
- - grafana
- - alerting
- - instances
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Alert instances
-weight: 105
----
-
-# Alert instances
-
-Grafana managed alerts support multi-dimensional alerting. Each alert rule can create multiple alert instances. This is exceptionally powerful if you are observing multiple series in a single expression.
-
-Consider the following PromQL expression:
-
-```promql
-sum by(cpu) (
- rate(node_cpu_seconds_total{mode!="idle"}[1m])
-)
-```
-
-A rule using this expression will create as many alert instances as the amount of CPUs we are observing after the first evaluation, allowing a single rule to report the status of each CPU.
-
-{{< figure src="/static/img/docs/alerting/unified/multi-dimensional-alert.png" caption="A multi-dimensional Grafana managed alert rule" >}}
diff --git a/docs/sources/alerting/fundamentals/alert-rules/alert-rule-types.md b/docs/sources/alerting/fundamentals/alert-rules/alert-rule-types.md
deleted file mode 100644
index dd25109c20d..00000000000
--- a/docs/sources/alerting/fundamentals/alert-rules/alert-rule-types.md
+++ /dev/null
@@ -1,77 +0,0 @@
----
-canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/alert-rule-types/
-description: Learn about the different alert rule types that Grafana Alerting supports
-keywords:
- - grafana
- - alerting
- - rule types
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Alert rule types
-weight: 102
----
-
-# Alert rule types
-
-Grafana supports two different alert rule types. Learn more about each of the alert rule types, how they work, and decide which one is best for your use case.
-
-## Grafana-managed alert rules
-
-Grafana-managed alert rules are the most flexible alert rule type. They allow you to create alerts that can act on data from any of our supported data sources.
-
-In addition to supporting multiple data sources, you can also add expressions to transform your data and set alert conditions. Using images in alert notifications is also supported. This is the only type of rule that allows alerting from multiple data sources in a single rule definition.
-
-The following diagram shows how Grafana-managed alerting works.
-
-{{< figure src="/media/docs/alerting/grafana-managed-rule.png" max-width="750px" caption="Grafana-managed alerting" >}}
-
-1. Alert rules are created within Grafana based on one or more data sources.
-
-1. Alert rules are evaluated by the Alert Rule Evaluation Engine from within Grafana.
-
-1. Alerts are delivered using the internal Grafana Alertmanager.
-
-**Note:**
-
-You can also configure alerts to be delivered using an external Alertmanager; or use both internal and external alertmanagers.
-For more information, see Add an external Alertmanager.
-
-## Data source-managed alert rules
-
-To create data source-managed alert rules, you must have a compatible Prometheus or Loki data source.
-
-You can check if your data source supports rule creation via Grafana by testing the data source and observing if the Ruler API is supported.
-
-For more information on the Ruler API, refer to [Ruler API](/docs/loki/latest/api/#ruler).
-
-The following diagram shows how data source-managed alerting works.
-
-{{< figure src="/media/docs/alerting/loki-mimir-rule.png" max-width="750px" caption="Grafana Mimir/Loki-managed alerting" >}}
-
-1. Alert rules are created and stored within the data source itself.
-1. Alert rules can only be created based on Prometheus data.
-1. Alert rule evaluation and delivery is distributed across multiple nodes for high availability and fault tolerance.
-
-## Choose an alert rule type
-
-When choosing which alert rule type to use, consider the following comparison between Grafana-managed alert rules and Grafana Mimir or Loki alert rules.
-
-{{< responsive-table >}}
-| Feature | Grafana-managed alert rule | Loki/Mimir-managed alert rule |
-| ----------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Create alert rules based on data from any of our supported data sources | Yes | No: You can only create alert rules that are based on Prometheus data. The data source must have the Ruler API enabled. |
-| Mix and match data sources | Yes | No |
-| Includes support for recording rules | No | Yes |
-| Add expressions to transform your data and set alert conditions | Yes | No |
-| Use images in alert notifications | Yes | No |
-| Scaling | More resource intensive, depend on the database, and are likely to suffer from transient errors. They only scale vertically. | Store alert rules within the data source itself and allow for “infinite” scaling. Generate and send alert notifications from the location of your data. |
-| Alert rule evaluation and delivery | Alert rule evaluation and delivery is done from within Grafana, using an external Alertmanager; or both. | Alert rule evaluation and alert delivery is distributed, meaning there is no single point of failure. |
-
-{{< /responsive-table >}}
-
-**Note:**
-
-If you are using non-Prometheus data, we recommend choosing Grafana-managed alert rules. Otherwise, choose Grafana Mimir or Grafana Loki alert rules where possible.
diff --git a/docs/sources/alerting/fundamentals/alert-rules/annotation-label.md b/docs/sources/alerting/fundamentals/alert-rules/annotation-label.md
new file mode 100644
index 00000000000..5317dd7b3b1
--- /dev/null
+++ b/docs/sources/alerting/fundamentals/alert-rules/annotation-label.md
@@ -0,0 +1,143 @@
+---
+aliases:
+ - ../../fundamentals/annotation-label/ # /docs/grafana//alerting/fundamentals/annotation-label/
+ - ../../fundamentals/annotation-label/labels-and-label-matchers/ # /docs/grafana//alerting/fundamentals/annotation-label/labels-and-label-matchers/
+ - ../../fundamentals/annotation-label/how-to-use-labels/ # /docs/grafana//alerting/fundamentals/annotation-label/how-to-use-labels/
+ - ../../alerting-rules/alert-annotation-label/ # /docs/grafana//alerting/alerting-rules/alert-annotation-label/
+ - ../../unified-alerting/alerting-rules/alert-annotation-label/ # /docs/grafana//alerting/unified-alerting/alerting-rules/alert-annotation-label/
+canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/annotation-label/
+description: Learn how to use annotations and labels to store key information about alerts
+keywords:
+ - grafana
+ - alerting
+ - guide
+ - rules
+ - create
+labels:
+ products:
+ - cloud
+ - enterprise
+ - oss
+title: Labels and annotations
+weight: 105
+---
+
+# Labels and annotations
+
+Labels and annotations contain information about an alert. Labels are used to differentiate an alert from all other alerts, while annotations are used to add additional information to an existing alert.
+
+## Labels
+
+Labels contain information that identifies an alert. An example of a label might be `server=server1` or `team=backend`. Each alert can have more than one label, and the complete set of labels for an alert is called its label set. It is this label set that identifies the alert.
+
+For example, an alert might have the label set `{alertname="High CPU usage",server="server1"}` while another alert might have the label set `{alertname="High CPU usage",server="server2"}`. These are two separate alerts because although their `alertname` labels are the same, their `server` labels are different.
+
+Labels are a fundamental component of alerting:
+
+- The complete set of labels for an alert is what uniquely identifies an alert within Grafana alerts.
+- The alerting UI shows labels for every alert instance generated during the evaluation of an alert rule.
+- Contact points can access labels to send notification messages that contain specific alert information.
+- The Alertmanager uses labels to match alerts for silences and alert groups in notification policies.
+
+Note that two alert rules cannot produce alerts with the same label set. If they do, for example both produce `foo=bar,bar=baz`, then one of the alerts is discarded.
+
+### How label matching works
+
+Use labels and label matchers to link alert rules to notification policies and silences. This provides a flexible way to manage your alert instances, specify which policy should handle them, and choose which alerts to silence.
+
+A label matcher consists of three distinct parts: the **label**, the **value**, and the **operator**.
+
+- The **Label** field is the name of the label to match. It must exactly match the label name.
+
+- The **Value** field matches against the corresponding value for the specified **Label** name. How it matches depends on the **Operator** value.
+
+- The **Operator** field is the operator to match against the label value. The available operators are:
+
+ | Operator | Description |
+ | -------- | -------------------------------------------------- |
+ | `=` | Select labels that are exactly equal to the value. |
+ | `!=` | Select labels that are not equal to the value. |
+ | `=~` | Select labels that regex-match the value. |
+ | `!~` | Select labels that do not regex-match the value. |
+
+If you are using multiple label matchers, they are combined using the AND logical operator. This means that all matchers must match in order to link a rule to a policy.
+
+{{< collapse title="Label matching example" >}}
+
+If you define the following set of labels for your alert:
+
+`{ foo=bar, baz=qux, id=12 }`
+
+then:
+
+- A label matcher defined as `foo=bar` matches this alert rule.
+- A label matcher defined as `foo!=bar` does _not_ match this alert rule.
+- A label matcher defined as `id=~[0-9]+` matches this alert rule.
+- A label matcher defined as `baz!~[0-9]+` matches this alert rule.
+- Two label matchers defined as `foo=bar` and `id=~[0-9]+` match this alert rule.
+
+**Exclude labels**
+
+You can also write label matchers to exclude labels.
+
+Here is an example that shows how to exclude the label `team`. You can use any of the matchers below to exclude the label.
+
+| Label | Operator | Value |
+| ------ | -------- | ----- |
+| `team` | `=` | `""` |
+| `team` | `!~` | `.+` |
+| `team` | `=~` | `^$` |
+
+{{< /collapse >}}
+
+## Label types
+
+An alert's label set can contain three types of labels:
+
+- Labels from the data source.
+- Custom labels specified in the alert rule.
+- Reserved labels added by Grafana, such as `alertname` or `grafana_folder`.
+
+### Custom Labels
+
+Custom labels are additional labels configured manually in the alert rule.
+
+Ensure that the label set for an alert does not have two or more labels with the same name. If a custom label has the same name as a label from the data source, it replaces that label. However, if a custom label has the same name as a reserved label, the custom label is omitted from the alert.
+
+{{< collapse title="Key format" >}}
+
+Grafana's built-in Alertmanager supports both Unicode label keys and values. If you are using an external Prometheus Alertmanager, label keys must be compatible with their [data model](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels).
+This means that label keys must only contain **ASCII letters**, **numbers**, as well as **underscores** and match the regex `[a-zA-Z_][a-zA-Z0-9_]*`.
+Any invalid characters will be removed or replaced by the Grafana alerting engine before being sent to the external Alertmanager according to the following rules:
+
+- `Whitespace` will be removed.
+- Invalid `ASCII` characters will be replaced with `_`.
+- All other (non-ASCII) characters will be replaced with their lower-case hex representation. If this is the first character, it will be prefixed with `_`.
+
+Example: A label key/value pair `Alert! 🔔="🔥"` will become `Alert_0x1f514="🔥"`.
+
+If multiple label keys are sanitized to the same value, the duplicates will have a short hash of the original label appended as a suffix.
+
+{{< /collapse >}}
+
+### Reserved labels
+
+Reserved labels can be used in the same way as manually configured custom labels. The current list of available reserved labels is:
+
+| Label | Description |
+| -------------- | ----------------------------------------- |
+| alertname      | The name of the alert rule.               |
+| grafana_folder | Title of the folder containing the alert. |
+
+Labels prefixed with `grafana_` are reserved by Grafana for special use. To stop Grafana Alerting from adding a reserved label, you can disable it via the `disabled_labels` option in [unified_alerting.reserved_labels](/docs/grafana//setup-grafana/configure-grafana#unified_alertingreserved_labels) configuration.
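+
+For example, a minimal sketch of this setting in the Grafana configuration file, assuming you want to stop the `grafana_folder` label from being added:
+
+```ini
+[unified_alerting.reserved_labels]
+# Comma-separated list of reserved labels that Grafana Alerting should not add.
+disabled_labels = grafana_folder
+```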
+
+## Annotations
+
+Both labels and annotations have the same structure: a set of named values. However, their intended uses are different. The purpose of annotations is to add additional information to existing alerts.
+
+There are a number of suggested annotations in Grafana such as `description`, `summary`, `runbook_url`, `dashboardUId` and `panelId`. Like custom labels, annotations must have a name, and their value can contain a combination of text and template code that is evaluated when an alert is fired.
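+
+For example, a `summary` annotation might combine plain text with template code that references the alert's labels. This is only a sketch; `server` is a hypothetical label, and the available template functions are described in [templating labels and annotations][variables-label-annotation]:
+
+```
+CPU usage for {{ $labels.server }} is above 75%.
+```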
+
+{{% docs/reference %}}
+[variables-label-annotation]: "/docs/grafana/ -> /docs/grafana//alerting/alerting-rules/templating-labels-annotations"
+[variables-label-annotation]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/templating-labels-annotations"
+{{% /docs/reference %}}
diff --git a/docs/sources/alerting/fundamentals/alert-rules/organising-alerts.md b/docs/sources/alerting/fundamentals/alert-rules/organising-alerts.md
index 7952929c852..09113a1622e 100644
--- a/docs/sources/alerting/fundamentals/alert-rules/organising-alerts.md
+++ b/docs/sources/alerting/fundamentals/alert-rules/organising-alerts.md
@@ -1,7 +1,7 @@
---
aliases:
- - ../unified-alerting/alerting-rules/edit-cortex-loki-namespace-group/
- - ../unified-alerting/alerting-rules/edit-mimir-loki-namespace-group/
+ - ../../unified-alerting/alerting-rules/edit-cortex-loki-namespace-group/ # /docs/grafana//alerting/unified-alerting/alerting-rules/edit-cortex-loki-namespace-group/
+ - ../../unified-alerting/alerting-rules/edit-mimir-loki-namespace-group/ # /docs/grafana//alerting/unified-alerting/alerting-rules/edit-mimir-loki-namespace-group/
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/organising-alerts/
description: Learn about organizing alerts using namespaces, folders, and groups
keywords:
@@ -14,7 +14,7 @@ labels:
- enterprise
- oss
title: Namespaces, folders, and groups
-weight: 105
+weight: 107
---
## Namespaces, folders, and groups
diff --git a/docs/sources/alerting/fundamentals/alert-rules/queries-conditions.md b/docs/sources/alerting/fundamentals/alert-rules/queries-conditions.md
new file mode 100644
index 00000000000..41e0f2d2101
--- /dev/null
+++ b/docs/sources/alerting/fundamentals/alert-rules/queries-conditions.md
@@ -0,0 +1,221 @@
+---
+aliases:
+ - ../../fundamentals/evaluate-grafana-alerts/ # /docs/grafana//alerting/fundamentals/evaluate-grafana-alerts/
+ - ../../unified-alerting/fundamentals/evaluate-grafana-alerts/ # /docs/grafana//alerting/unified-alerting/fundamentals/evaluate-grafana-alerts/
+canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/queries-conditions/
+description: Define queries to get the data you want to measure and conditions that need to be met before an alert rule fires
+keywords:
+ - grafana
+ - alerting
+ - queries
+ - conditions
+labels:
+ products:
+ - cloud
+ - enterprise
+ - oss
+title: Queries and conditions
+weight: 104
+---
+
+# Queries and conditions
+
+In Grafana, queries play a vital role in fetching and transforming data from supported data sources, which include databases like MySQL and PostgreSQL, time series databases like Prometheus, InfluxDB and Graphite, and services like Elasticsearch, AWS CloudWatch, Azure Monitor and Google Cloud Monitoring.
+
+For more information on supported data sources, see [Data sources][data-source-alerting].
+
+The process of executing a query involves defining the data source, specifying the desired data to retrieve, and applying relevant filters or transformations. Query languages or syntaxes specific to the chosen data source are utilized for constructing these queries.
+
+In Alerting, you define a query to get the data you want to measure and a condition that needs to be met before an alert rule fires.
+
+An alert rule consists of one or more queries and expressions that select the data you want to measure.
+
+For more information on queries and expressions, see [Query and transform data][query-transform-data].
+
+## Data source queries
+
+Queries in Grafana can be applied in various ways, depending on the data source and query language being used. Each data source’s query editor provides a customized user interface that helps you write queries that take advantage of its unique capabilities.
+
+Because of the differences between query languages, each data source query editor looks and functions differently. Depending on your data source, the query editor might provide auto-completion features, metric names, variable suggestions, or a visual query-building interface.
+
+Some common types of query components include:
+
+**Metrics or data fields**: Specify the specific metrics or data fields you want to retrieve, such as CPU usage, network traffic, or sensor readings.
+
+**Time range**: Define the time range for which you want to fetch data, such as the last hour, a specific day, or a custom time range.
+
+**Filters**: Apply filters to narrow down the data based on specific criteria, such as filtering data by a specific tag, host, or application.
+
+**Aggregations**: Perform aggregations on the data to calculate metrics like averages, sums, or counts over a given time period.
+
+**Grouping**: Group the data by specific dimensions or tags to create aggregated views or breakdowns.
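+
+For example, the following PromQL query (a sketch only; metric names and labels depend on your environment) combines a metric, a filter, a time range, an aggregation, and grouping:
+
+```promql
+# Per-CPU rate of non-idle CPU time over the last 5 minutes,
+# aggregated (sum) and grouped by the "cpu" label.
+sum by (cpu) (
+  rate(node_cpu_seconds_total{mode!="idle"}[5m])
+)
+```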
+
+**Note**:
+
+Grafana does not support alert queries with template variables. For more information, refer to the [community discussion on template variables in alert queries](https://community.grafana.com/t/template-variables-are-not-supported-in-alert-queries-while-setting-up-alert/2514).
+
+## Expression queries
+
+In Grafana, an expression is used to perform calculations, transformations, or aggregations on data returned from data source queries. It allows you to create custom metrics or modify existing metrics based on mathematical operations, functions, or logical expressions.
+
+By leveraging expression queries, users can perform tasks such as calculating the percentage change between two values, applying functions like logarithmic or trigonometric functions, aggregating data over specific time ranges or dimensions, and implementing conditional logic to handle different scenarios.
+
+In Alerting, you can only use expressions for Grafana-managed alert rules. For each expression, you can choose from the math, reduce, and resample expressions. These are called multi-dimensional rules, because they generate a separate alert for each series.
+
+You can also use classic condition, which creates an alert rule that triggers a single alert when its condition is met. As a result, Grafana sends only a single alert even when alert conditions are met for multiple series.
+
+**Note:**
+
+Classic conditions exist mainly for compatibility reasons and should be avoided if possible.
+
+**Reduce**
+
+Aggregates time series values in the selected time range into a single value.
+
+**Math**
+
+Performs free-form math functions/operations on time series and number data. Can be used to preprocess time series data or to define an alert condition for number data.
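+
+For example, assuming two queries with the reference IDs `A` and `B`, a Math expression along the following lines computes a ratio and compares it against a threshold, returning `1` (alerting) or `0` (normal). This is only a sketch; adapt the reference IDs and threshold to your own rule:
+
+```
+$A / $B > 0.05
+```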
+
+**Resample**
+
+Realigns a time range to a new set of timestamps. This is useful when comparing time series data from different data sources whose timestamps would otherwise not align.
+
+**Threshold**
+
+Checks if any time series data matches the threshold condition.
+
+The threshold expression allows you to compare two single values. It returns `0` when the condition is false and `1` if the condition is true. The following threshold functions are available:
+
+- Is above (x > y)
+- Is below (x < y)
+- Is within range (x > y1 AND x < y2)
+- Is outside range (x < y1 OR x > y2)
+
+**Classic condition**
+
+Checks if any time series data matches the alert condition.
+
+**Note**:
+
+Classic condition expression queries always produce one alert instance only, no matter how many time series meet the condition.
+Classic conditions exist mainly for compatibility reasons and should be avoided if possible.
+
+## Aggregations
+
+Grafana Alerting provides the following aggregation functions to enable you to further refine your query.
+
+These functions are available for **Reduce** and **Classic condition** expressions only.
+
+| Function | Expression | What it does |
+| ---------------- | ---------------- | ------------------------------------------------------------------------------- |
+| avg | Reduce / Classic | Displays the average of the values |
+| min | Reduce / Classic | Displays the lowest value |
+| max | Reduce / Classic | Displays the highest value |
+| sum | Reduce / Classic | Displays the sum of all values |
+| count | Reduce / Classic | Counts the number of values in the result |
+| last | Reduce / Classic | Displays the last value |
+| median | Reduce / Classic | Displays the median value |
+| diff | Classic | Displays the difference between the newest and oldest value |
+| diff_abs | Classic | Displays the absolute value of diff |
+| percent_diff | Classic | Displays the percentage value of the difference between newest and oldest value |
+| percent_diff_abs | Classic | Displays the absolute value of percent_diff |
+| count_non_null | Classic | Displays a count of values in the result set that aren't `null` |
+
+## Alert condition
+
+An alert condition is the query or expression that determines whether the alert fires, based on the value it yields. There can be only one condition per alert rule, and it determines whether the alert is triggered.
+
+After you have defined your queries and/or expressions, choose one of them as the alert rule condition.
+
+When the queried data satisfies the defined condition, Grafana triggers the associated alert, which can be configured to send notifications through various channels like email, Slack, or PagerDuty. The notifications inform you about the condition being met, allowing you to take appropriate actions or investigate the underlying issue.
+
+By default, the last expression added is used as the alert condition.
+
+## Recovery threshold
+
+{{% admonition type="note" %}}
+The recovery threshold feature is currently only available in OSS.
+{{% /admonition %}}
+
+To reduce the noise of flapping alerts, you can set a recovery threshold that is different from the alert threshold.
+
+Flapping alerts occur when a metric hovers around the alert threshold condition and may lead to frequent state changes, resulting in too many notifications being generated.
+
+Grafana-managed alert rules are evaluated for a specific interval of time. During each evaluation, the result of the query is checked against the threshold set in the alert rule. If the value of a metric is above the threshold, an alert rule fires and a notification is sent. When the value goes below the threshold and there is an active alert for this metric, the alert is resolved, and another notification is sent.
+
+It can be tricky to create an alert rule for a noisy metric: one whose value continually crosses back and forth over the threshold. This is called flapping, and it results in a series of firing - resolved - firing notifications and a noisy alert state history.
+
+For example, if you have an alert rule for latency with a threshold of 1000ms and the value fluctuates around 1000ms (say 980 -> 1010 -> 990 -> 1020, and so on), then each crossing of the threshold triggers a notification.
+
+To solve this problem, you can set a (custom) recovery threshold, which effectively gives you two thresholds instead of one. An alert is triggered when the first threshold is crossed and is resolved only when the second threshold is crossed.
+
+For example, you could set a threshold of 1000ms and a recovery threshold of 900ms. This way, the alert rule only stops firing when the value drops below 900ms, and flapping is reduced.
+
+## Alert on numeric data
+
+With certain data sources, numeric data that is not time series can be alerted on directly, or passed into Server Side Expressions (SSE). This allows for more processing and greater efficiency within the data source, and it can also simplify alert rules.
+When alerting on numeric data instead of time series data, there is no need to reduce each labeled time series into a single number. Instead, labeled numbers are returned to Grafana.
+
+### Tabular Data
+
+This feature is supported with backend data sources that query tabular data:
+
+- SQL data sources such as MySQL, Postgres, MSSQL, and Oracle.
+- The Azure Kusto based services: Azure Monitor (Logs), Azure Monitor (Azure Resource Graph), and Azure Data Explorer.
+
+A query with Grafana-managed alerts or SSE is considered numeric with these data sources if:
+
+- The "Format AS" option is set to "Table" in the data source query.
+- The table response returned to Grafana from the query includes only one numeric (e.g. int, double, float) column, and optionally additional string columns.
+
+If there are string columns, those columns become labels. The name of the column becomes the label name, and the value for each row becomes the value of the corresponding label. If multiple rows are returned, each row should be uniquely identified by its labels.
+
+**Example**
+
+For a MySQL table called "DiskSpace":
+
+| Time | Host | Disk | PercentFree |
+| ----------- | ---- | ---- | ----------- |
+| 2021-June-7 | web1 | /etc | 3 |
+| 2021-June-7 | web2 | /var | 4 |
+| 2021-June-7 | web3 | /var | 8 |
+| ... | ... | ... | ... |
+
+You can query the data, filtering on time, without returning a time series to Grafana. For example, an alert that triggers per Host and Disk when there is less than 5% free space:
+
+```sql
+SELECT Host, Disk, CASE WHEN PercentFree < 5.0 THEN PercentFree ELSE 0 END AS PercentFree
+FROM (
+  -- Average free space per Host and Disk within the alert rule's time range
+  SELECT
+    Host,
+    Disk,
+    AVG(PercentFree) AS PercentFree
+  FROM DiskSpace
+  WHERE $__timeFilter(Time)
+  GROUP BY
+    Host,
+    Disk
+) AS avg_disk
+```
+
+This query returns the following Table response to Grafana:
+
+| Host | Disk | PercentFree |
+| ---- | ---- | ----------- |
+| web1 | /etc | 3 |
+| web2 | /var | 4 |
+| web3 | /var | 0 |
+
+When this query is used as the **condition** in an alert rule, rows with a non-zero value are in the alerting state. As a result, three alert instances are produced:
+
+| Labels | Status |
+| --------------------- | -------- |
+| {Host=web1,Disk=/etc} | Alerting |
+| {Host=web2,Disk=/var} | Alerting |
+| {Host=web3,Disk=/var} | Normal   |
+
+{{% docs/reference %}}
+[data-source-alerting]: "/docs/grafana/ -> /docs/grafana//alerting/fundamentals/alert-rules#supported-data-sources"
+[data-source-alerting]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules#supported-data-sources"
+
+[query-transform-data]: "/docs/grafana/ -> /docs/grafana//panels-visualizations/query-transform-data"
+[query-transform-data]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/visualizations/panels-visualizations/query-transform-data"
+{{% /docs/reference %}}
diff --git a/docs/sources/alerting/fundamentals/alert-rules/queries-conditions/_index.md b/docs/sources/alerting/fundamentals/alert-rules/queries-conditions/_index.md
deleted file mode 100644
index c4191ad3844..00000000000
--- a/docs/sources/alerting/fundamentals/alert-rules/queries-conditions/_index.md
+++ /dev/null
@@ -1,157 +0,0 @@
----
-canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/queries-conditions/
-description: Define queries to get the data you want to measure and conditions that need to be met before an alert rule fires
-keywords:
- - grafana
- - alerting
- - queries
- - conditions
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Queries and conditions
-weight: 104
----
-
-# Queries and conditions
-
-In Grafana, queries play a vital role in fetching and transforming data from supported data sources, which include databases like MySQL and PostgreSQL, time series databases like Prometheus, InfluxDB and Graphite, and services like Elasticsearch, AWS CloudWatch, Azure Monitor and Google Cloud Monitoring.
-
-For more information on supported data sources, see [Data sources][data-source-alerting].
-
-The process of executing a query involves defining the data source, specifying the desired data to retrieve, and applying relevant filters or transformations. Query languages or syntaxes specific to the chosen data source are utilized for constructing these queries.
-
-In Alerting, you define a query to get the data you want to measure and a condition that needs to be met before an alert rule fires.
-
-An alert rule consists of one or more queries and expressions that select the data you want to measure.
-
-For more information on queries and expressions, see [Query and transform data][query-transform-data].
-
-## Data source queries
-
-Queries in Grafana can be applied in various ways, depending on the data source and query language being used. Each data source’s query editor provides a customized user interface that helps you write queries that take advantage of its unique capabilities.
-
-Because of the differences between query languages, each data source query editor looks and functions differently. Depending on your data source, the query editor might provide auto-completion features, metric names, variable suggestions, or a visual query-building interface.
-
-Some common types of query components include:
-
-**Metrics or data fields**: Specify the specific metrics or data fields you want to retrieve, such as CPU usage, network traffic, or sensor readings.
-
-**Time range**: Define the time range for which you want to fetch data, such as the last hour, a specific day, or a custom time range.
-
-**Filters**: Apply filters to narrow down the data based on specific criteria, such as filtering data by a specific tag, host, or application.
-
-**Aggregations**: Perform aggregations on the data to calculate metrics like averages, sums, or counts over a given time period.
-
-**Grouping**: Group the data by specific dimensions or tags to create aggregated views or breakdowns.
-
-**Note**:
-
-Grafana does not support alert queries with template variables. More information is available [here](https://community.grafana.com/t/template-variables-are-not-supported-in-alert-queries-while-setting-up-alert/2514).
-
-## Expression queries
-
-In Grafana, an expression is used to perform calculations, transformations, or aggregations on the data source queried data. It allows you to create custom metrics or modify existing metrics based on mathematical operations, functions, or logical expressions.
-
-By leveraging expression queries, users can perform tasks such as calculating the percentage change between two values, applying functions like logarithmic or trigonometric functions, aggregating data over specific time ranges or dimensions, and implementing conditional logic to handle different scenarios.
-
-In Alerting, you can only use expressions for Grafana-managed alert rules. For each expression, you can choose from the math, reduce, and resample expressions. These are called multi-dimensional rules, because they generate a separate alert for each series.
-
-You can also use classic condition, which creates an alert rule that triggers a single alert when its condition is met. As a result, Grafana sends only a single alert even when alert conditions are met for multiple series.
-
-**Note:**
-
-Classic conditions exist mainly for compatibility reasons and should be avoided if possible.
-
-**Reduce**
-
-Aggregates time series values in the selected time range into a single value.
-
-**Math**
-
-Performs free-form math functions/operations on time series and number data. Can be used to preprocess time series data or to define an alert condition for number data.
-
-**Resample**
-
-Realigns a time range to a new set of timestamps, this is useful when comparing time series data from different data sources where the timestamps would otherwise not align.
-
-**Threshold**
-
-Checks if any time series data matches the threshold condition.
-
-The threshold expression allows you to compare two single values. It returns `0` when the condition is false and `1` if the condition is true. The following threshold functions are available:
-
-- Is above (x > y)
-- Is below (x < y)
-- Is within range (x > y1 AND x < y2)
-- Is outside range (x < y1 AND x > y2)
-
-**Classic condition**
-
-Checks if any time series data matches the alert condition.
-
-**Note**:
-
-Classic condition expression queries always produce one alert instance only, no matter how many time series meet the condition.
-Classic conditions exist mainly for compatibility reasons and should be avoided if possible.
-
-## Aggregations
-
-Grafana Alerting provides the following aggregation functions to enable you to further refine your query.
-
-These functions are available for **Reduce** and **Classic condition** expressions only.
-
-| Function | Expression | What it does |
-| ---------------- | ---------------- | ------------------------------------------------------------------------------- |
-| avg | Reduce / Classic | Displays the average of the values |
-| min | Reduce / Classic | Displays the lowest value |
-| max | Reduce / Classic | Displays the highest value |
-| sum | Reduce / Classic | Displays the sum of all values |
-| count | Reduce / Classic | Counts the number of values in the result |
-| last | Reduce / Classic | Displays the last value |
-| median | Reduce / Classic | Displays the median value |
-| diff | Classic | Displays the difference between the newest and oldest value |
-| diff_abs | Classic | Displays the absolute value of diff |
-| percent_diff | Classic | Displays the percentage value of the difference between newest and oldest value |
-| percent_diff_abs | Classic | Displays the absolute value of percent_diff |
-| count_non_null | Classic | Displays a count of values in the result set that aren't `null` |
-
-## Alert condition
-
-An alert condition is the query or expression that determines whether the alert will fire or not depending on the value it yields. There can be only one condition which will determine the triggering of the alert.
-
-After you have defined your queries and/or expressions, choose one of them as the alert rule condition.
-
-When the queried data satisfies the defined condition, Grafana triggers the associated alert, which can be configured to send notifications through various channels like email, Slack, or PagerDuty. The notifications inform you about the condition being met, allowing you to take appropriate actions or investigate the underlying issue.
-
-By default, the last expression added is used as the alert condition.
-
-## Recovery threshold
-
-{{% admonition type="note" %}}
-The recovery threshold feature is currently only available in OSS.
-{{% /admonition %}}
-
-To reduce the noise of flapping alerts, you can set a recovery threshold different to the alert threshold.
-
-Flapping alerts occur when a metric hovers around the alert threshold condition and may lead to frequent state changes, resulting in too many notifications being generated.
-
-Grafana-managed alert rules are evaluated for a specific interval of time. During each evaluation, the result of the query is checked against the threshold set in the alert rule. If the value of a metric is above the threshold, an alert rule fires and a notification is sent. When the value goes below the threshold and there is an active alert for this metric, the alert is resolved, and another notification is sent.
-
-It can be tricky to create an alert rule for a noisy metric. That is, when the value of a metric continually goes above and below a threshold. This is called flapping and results in a series of firing - resolved - firing notifications and a noisy alert state history.
-
-For example, if you have an alert for latency with a threshold of 1000ms and the number fluctuates around 1000 (say 980 ->1010 -> 990 -> 1020, and so on) then each of those will trigger a notification.
-
-To solve this problem, you can set a (custom) recovery threshold, which basically means having two thresholds instead of one. An alert is triggered when the first threshold is crossed and is resolved only when the second threshold is crossed.
-
-For example, you could set a threshold of 1000ms and a recovery threshold of 900ms. This way, an alert rule will only stop firing when it goes under 900ms and flapping is reduced.
-
-{{% docs/reference %}}
-[data-source-alerting]: "/docs/grafana/ -> /docs/grafana//alerting/fundamentals/data-source-alerting"
-[data-source-alerting]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/data-source-alerting"
-
-[query-transform-data]: "/docs/grafana/ -> /docs/grafana//panels-visualizations/query-transform-data"
-[query-transform-data]: "/docs/grafana-cloud/ -> /docs/grafana//panels-visualizations/query-transform-data"
-{{% /docs/reference %}}
diff --git a/docs/sources/alerting/fundamentals/alert-rules/recording-rules/_index.md b/docs/sources/alerting/fundamentals/alert-rules/recording-rules/_index.md
deleted file mode 100644
index 3065ca94086..00000000000
--- a/docs/sources/alerting/fundamentals/alert-rules/recording-rules/_index.md
+++ /dev/null
@@ -1,27 +0,0 @@
----
-canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/recording-rules/
-description: Create recording rules to pre-compute frequently needed or computationally expensive expressions and save the result as a new set of time series
-keywords:
- - grafana
- - alerting
- - recording rules
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Recording rules
-weight: 103
----
-
-# Recording rules
-
-_Recording rules are only available for compatible Prometheus or Loki data sources._
-
-A recording rule allows you to pre-compute frequently needed or computationally expensive expressions and save their result as a new set of time series. This is useful if you want to run alerts on aggregated data or if you have dashboards that query computationally expensive expressions repeatedly.
-
-Querying this new time series is faster, especially for dashboards since they query the same expression every time the dashboards refresh.
-
-Grafana Enterprise offers an alternative to recorded rules in the form of recorded queries that can be executed against any data source.
-
-For more information on recording rules in Prometheus, refer to [recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/).
diff --git a/docs/sources/alerting/fundamentals/alert-rules/rule-evaluation.md b/docs/sources/alerting/fundamentals/alert-rules/rule-evaluation.md
new file mode 100644
index 00000000000..a34a508601c
--- /dev/null
+++ b/docs/sources/alerting/fundamentals/alert-rules/rule-evaluation.md
@@ -0,0 +1,72 @@
+---
+canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/rule-evaluation/
+description: Use alert rule evaluation to determine how frequently an alert rule should be evaluated and how quickly it should change its state
+keywords:
+ - grafana
+ - alerting
+ - evaluation
+labels:
+ products:
+ - cloud
+ - enterprise
+ - oss
+title: Alert rule evaluation
+weight: 108
+---
+
+# Alert rule evaluation
+
+Use alert rule evaluation to determine how frequently an alert rule should be evaluated and how quickly it should change its state.
+
+To do this, you need to make sure that your alert rule is in the right evaluation group and set a pending period time that works best for your use case.
+
+## Evaluation group
+
+Every alert rule is part of an evaluation group. Each evaluation group contains an evaluation interval that determines how frequently the alert rule is checked.
+
+**Data-source managed** alert rules within the same group are evaluated one after the other, while alert rules in different groups can be evaluated simultaneously. This feature is especially useful when you want to ensure that recording rules are evaluated before any alert rules.
+
+**Grafana-managed** alert rules are evaluated at the same time, regardless of alert rule group. The default evaluation interval is set at 10 seconds, which means that Grafana-managed alert rules are evaluated every 10 seconds to the closest 10-second window on the clock, for example, 10:00:00, 10:00:10, 10:00:20, and so on. You can also configure your own evaluation interval, if required.
+
+**Note:**
+
+Evaluation groups and alert grouping in notification policies are two separate things. Grouping in notification policies allows multiple alerts that share the same labels to be sent in the same notification message.
+
+## Pending period
+
+By setting a pending period, you can avoid unnecessary alerts for temporary problems.
+
+The pending period specifies how long the condition must be breached before the alert rule fires.
+
+**Example**
+
+Imagine you have an alert rule with the evaluation interval set to 30 seconds and the pending period set to 90 seconds.
+
+Evaluation will occur as follows:
+
+[00:30] First evaluation - condition not met.
+
+[01:00] Second evaluation - condition breached.
+Pending counter starts. **Alert starts pending.**
+
+[01:30] Third evaluation - condition breached. Pending counter = 30s. **Pending state.**
+
+[02:00] Fourth evaluation - condition breached. Pending counter = 60s. **Pending state.**
+
+[02:30] Fifth evaluation - condition breached. Pending counter = 90s. **Alert starts firing.**
+
+If the alert rule has a condition that needs to be in breach for a certain amount of time before it takes action, then its state changes as follows:
+
+- When the condition is first breached, the rule goes into a "pending" state.
+
+- The rule stays in the "pending" state until the condition has been breached for the required amount of time, that is, the pending period.
+
+- Once the required time has passed, the rule goes into a "firing" state.
+
+- If the condition is no longer broken during the pending period, the rule goes back to its normal state.
+
+**Note:**
+
+If you want to skip the pending state, set the pending period to 0. This effectively skips the pending period, and your alert rule starts firing as soon as the condition is breached.
+
+When an alert rule fires, alert instances are produced, which are then sent to the Alertmanager.
diff --git a/docs/sources/alerting/fundamentals/alert-rules/rule-evaluation/_index.md b/docs/sources/alerting/fundamentals/alert-rules/rule-evaluation/_index.md
deleted file mode 100644
index c5250a07eb7..00000000000
--- a/docs/sources/alerting/fundamentals/alert-rules/rule-evaluation/_index.md
+++ /dev/null
@@ -1,72 +0,0 @@
----
-canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/rule-evaluation/
-description: Use alert rule evaluation to determine how frequently an alert rule should be evaluated and how quickly it should change its state
-keywords:
- - grafana
- - alerting
- - evaluation
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Alert rule evaluation
-weight: 106
----
-
-# Alert rule evaluation
-
-Use alert rule evaluation to determine how frequently an alert rule should be evaluated and how quickly it should change its state.
-
-To do this, you need to make sure that your alert rule is in the right evaluation group and set a pending period time that works best for your use case.
-
-## Evaluation group
-
-Every alert rule is part of an evaluation group. Each evaluation group contains an evaluation interval that determines how frequently the alert rule is checked.
-
-**Data-source managed** alert rules within the same group are evaluated one after the other, while alert rules in different groups can be evaluated simultaneously. This feature is especially useful when you want to ensure that recording rules are evaluated before any alert rules.
-
-**Grafana-managed** alert rules are evaluated at the same time, regardless of alert rule group. The default evaluation interval is set at 10 seconds, which means that Grafana-managed alert rules are evaluated every 10 seconds to the closest 10-second window on the clock, for example, 10:00:00, 10:00:10, 10:00:20, and so on. You can also configure your own evaluation interval, if required.
-
-**Note:**
-
-Evaluation groups and alerts grouping in notification policies are two separate things. Grouping in notification policies allows multiple alerts sharing the same labels to be sent in the same time message.
-
-## Pending period
-
-By setting a pending period, you can avoid unnecessary alerts for temporary problems.
-
-In the pending period, you select the period in which an alert rule can be in breach of the condition until it fires.
-
-**Example**
-
-Imagine you have an alert rule evaluation interval set at every 30 seconds and the pending period to 90 seconds.
-
-Evaluation will occur as follows:
-
-[00:30] First evaluation - condition not met.
-
-[01:00] Second evaluation - condition breached.
-Pending counter starts. **Alert starts pending.**
-
-[01:30] Third evaluation - condition breached. Pending counter = 30s. **Pending state.**
-
-[02:00] Fourth evaluation - condition breached. Pending counter = 60s **Pending state.**
-
-[02:30] Fifth evaluation - condition breached. Pending counter = 90s. **Alert starts firing**
-
-If the alert rule has a condition that needs to be in breach for a certain amount of time before it takes action, then its state changes as follows:
-
-- When the condition is first breached, the rule goes into a "pending" state.
-
-- The rule stays in the "pending" state until the condition has been broken for the required amount of time - pending period.
-
-- Once the required time has passed, the rule goes into a "firing" state.
-
-- If the condition is no longer broken during the pending period, the rule goes back to its normal state.
-
-**Note:**
-
-If you want to skip the pending state, you can simply set the pending period to 0. This effectively skips the pending period and your alert rule will start firing as soon as the condition is breached.
-
-When an alert rule fires, alert instances are produced, which are then sent to the Alertmanager.
diff --git a/docs/sources/alerting/fundamentals/alert-rules/state-and-health.md b/docs/sources/alerting/fundamentals/alert-rules/state-and-health.md
index aebbce8ecc2..06ab9b8f2f0 100644
--- a/docs/sources/alerting/fundamentals/alert-rules/state-and-health.md
+++ b/docs/sources/alerting/fundamentals/alert-rules/state-and-health.md
@@ -1,6 +1,7 @@
---
aliases:
- - ../unified-alerting/alerting-rules/state-and-health/
+ - ../../fundamentals/state-and-health/ # /docs/grafana//alerting/fundamentals/state-and-health/
+ - ../../unified-alerting/alerting-rules/state-and-health/ # /docs/grafana//alerting/unified-alerting/alerting-rules/state-and-health
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/state-and-health/
description: Learn about the state and health of alert rules to understand several key status indicators about your alerts
keywords:
@@ -15,7 +16,7 @@ labels:
- enterprise
- oss
title: State and health of alert rules
-weight: 405
+weight: 109
---
# State and health of alert rules
@@ -28,11 +29,13 @@ There are three key components: [alert rule state](#alert-rule-state), [alert in
An alert rule can be in either of the following states:
-| State | Description |
-| ----------- | ---------------------------------------------------------------------------------------------- |
-| **Normal** | None of the time series returned by the evaluation engine is in a `Pending` or `Firing` state. |
-| **Pending** | At least one time series returned by the evaluation engine is `Pending`. |
-| **Firing** | At least one time series returned by the evaluation engine is `Firing`. |
+| State | Description |
+| ----------- | -------------------------------------------------------------------------------------------------- |
+| **Normal** | None of the alert instances returned by the evaluation engine is in a `Pending` or `Firing` state. |
+| **Pending** | At least one alert instance returned by the evaluation engine is `Pending`.                         |
+| **Firing**  | At least one alert instance returned by the evaluation engine is `Firing`.                          |
+
+The alert rule state is determined by the “worst case” state of the alert instances produced. For example, if one alert instance is firing, the alert rule state will also be firing.
{{% admonition type="note" %}}
Alerts will transition first to `pending` and then `firing`, thus it will take at least two evaluation cycles before an alert is fired.
diff --git a/docs/sources/alerting/fundamentals/alertmanager.md b/docs/sources/alerting/fundamentals/alertmanager.md
deleted file mode 100644
index 9bdd9fbb082..00000000000
--- a/docs/sources/alerting/fundamentals/alertmanager.md
+++ /dev/null
@@ -1,59 +0,0 @@
----
-aliases:
- - ../fundamentals/alertmanager/
- - ../metrics/
- - ../unified-alerting/fundamentals/alertmanager/
- - alerting/manage-notifications/alertmanager/
-canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alertmanager/
-description: Learn about Alertmanagers and the Alertmanager options for Grafana Alerting
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Alertmanager
-weight: 140
----
-
-# Alertmanager
-
-Alertmanager enables you to quickly and efficiently manage and respond to alerts. It receives alerts, handles silencing, inhibition, grouping, and routing by sending notifications out via your channel of choice, for example, email or Slack.
-
-In Grafana, you can use the Cloud Alertmanager, Grafana Alertmanager, or an external Alertmanager. You can also run multiple Alertmanagers; your decision depends on your set up and where your alerts are being generated.
-
-**Cloud Alertmanager**
-
-Cloud Alertmanager runs in Grafana Cloud and it can receive alerts from Grafana, Mimir, and Loki.
-
-**Grafana Alertmanager**
-
-Grafana Alertmanager is an internal Alertmanager that is pre-configured and available for selection by default if you run Grafana on-premises or open-source.
-
-The Grafana Alertmanager can receive alerts from Grafana, but it cannot receive alerts from outside Grafana, for example, from Mimir or Loki.
-
-**Note that inhibition rules are not supported in the Grafana Alertmanager.**
-
-**External Alertmanager**
-
-If you want to use a single Alertmanager to receive all your Grafana, Loki, Mimir, and Prometheus alerts, you can set up Grafana to use an external Alertmanager. This external Alertmanager can be configured and administered from within Grafana itself.
-
-Here are two examples of when you may want to configure your own external alertmanager and send your alerts there instead of the Grafana Alertmanager:
-
-1. You may already have Alertmanagers on-premises in your own Cloud infrastructure that you have set up and still want to use, because you have other alert generators, such as Prometheus.
-
-2. You want to use both Prometheus on-premises and hosted Grafana to send alerts to the same Alertmanager that runs in your Cloud infrastructure.
-
-Alertmanagers are visible from the drop-down menu on the Alerting Contact Points, Notification Policies, and Silences pages.
-
-If you are provisioning your data source, set the flag `handleGrafanaManagedAlerts` in the `jsonData` field to `true` to send Grafana-managed alerts to this Alertmanager.
-
-**Useful links**
-
-[Prometheus Alertmanager documentation](https://prometheus.io/docs/alerting/latest/alertmanager/)
-
-[Add an external Alertmanager][configure-alertmanager]
-
-{{% docs/reference %}}
-[configure-alertmanager]: "/docs/grafana/ -> /docs/grafana//alerting/set-up/configure-alertmanager"
-[configure-alertmanager]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/set-up/configure-alertmanager"
-{{% /docs/reference %}}
diff --git a/docs/sources/alerting/fundamentals/annotation-label/_index.md b/docs/sources/alerting/fundamentals/annotation-label/_index.md
deleted file mode 100644
index 690902b7cc9..00000000000
--- a/docs/sources/alerting/fundamentals/annotation-label/_index.md
+++ /dev/null
@@ -1,53 +0,0 @@
----
-aliases:
- - ../alerting-rules/alert-annotation-label/
- - ../unified-alerting/alerting-rules/alert-annotation-label/
-canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/annotation-label/
-description: Learn how to use annotations and labels to store key information about alerts
-keywords:
- - grafana
- - alerting
- - guide
- - rules
- - create
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Labels and annotations
-weight: 120
----
-
-# Labels and annotations
-
-Labels and annotations contain information about an alert. Both labels and annotations have the same structure: a set of named values; however their intended uses are different. An example of label, or the equivalent annotation, might be `alertname="test"`.
-
-The main difference between a label and an annotation is that labels are used to differentiate an alert from all other alerts, while annotations are used to add additional information to an existing alert.
-
-For example, consider two high CPU alerts: one for `server1` and another for `server2`. In such an example we might have a label called `server` where the first alert has the label `server="server1"` and the second alert has the label `server="server2"`. However, we might also want to add a description to each alert such as `"The CPU usage for server1 is above 75%."`, where `server1` and `75%` are replaced with the name and CPU usage of the server (please refer to the documentation on [templating labels and annotations][variables-label-annotation] for how to do this). This kind of description would be more suitable as an annotation.
-
-## Labels
-
-Labels contain information that identifies an alert. An example of a label might be `server=server1`. Each alert can have more than one label, and the complete set of labels for an alert is called its label set. It is this label set that identifies the alert.
-
-For example, an alert might have the label set `{alertname="High CPU usage",server="server1"}` while another alert might have the label set `{alertname="High CPU usage",server="server2"}`. These are two separate alerts because although their `alertname` labels are the same, their `server` labels are different.
-
-The label set for an alert is a combination of the labels from the datasource, custom labels from the alert rule, and a number of reserved labels such as `alertname`.
-
-### Custom Labels
-
-Custom labels are additional labels from the alert rule. Like annotations, custom labels must have a name, and their value can contain a combination of text and template code that is evaluated when an alert is fired. Documentation on how to template custom labels can be found [here][variables-label-annotation].
-
-When using custom labels with templates it is important to make sure that the label value does not change between consecutive evaluations of the alert rule as this will end up creating large numbers of distinct alerts. However, it is OK for the template to produce different label values for different alerts. For example, do not put the value of the query in a custom label as this will end up creating a new set of alerts each time the value changes. Instead use annotations.
-
-It is also important to make sure that the label set for an alert does not have two or more labels with the same name. If a custom label has the same name as a label from the datasource then it will replace that label. However, should a custom label have the same name as a reserved label then the custom label will be omitted from the alert.
-
-## Annotations
-
-Annotations are key-value pairs that add additional information to existing alerts. There are a number of suggested annotations in Grafana, such as `description`, `summary`, `runbook_url`, `dashboardUId` and `panelId`. Like custom labels, annotations must have a name, and their value can contain a combination of text and template code that is evaluated when an alert is fired. If an annotation contains template code, the template is evaluated once when the alert is fired. It is not re-evaluated, even when the alert is resolved. For more information, refer to [templating labels and annotations][variables-label-annotation].
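-
-For example, a summary annotation might combine a query label with a query value. The following is a sketch that assumes an instant query with Ref ID A, in the style of the examples in [templating labels and annotations][variables-label-annotation]:
-
-```
-CPU usage for {{ index $labels "instance" }} is {{ humanize (index $values "A").Value }} percent.
-```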
-
-{{% docs/reference %}}
-[variables-label-annotation]: "/docs/grafana/ -> /docs/grafana//alerting/fundamentals/annotation-label/variables-label-annotation"
-[variables-label-annotation]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/annotation-label/variables-label-annotation"
-{{% /docs/reference %}}
diff --git a/docs/sources/alerting/fundamentals/annotation-label/how-to-use-labels.md b/docs/sources/alerting/fundamentals/annotation-label/how-to-use-labels.md
deleted file mode 100644
index f287f23a7a0..00000000000
--- a/docs/sources/alerting/fundamentals/annotation-label/how-to-use-labels.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/annotation-label/how-to-use-labels/
-description: Learn how to use labels to link alert rules to notification policies and silences
-keywords:
- - grafana
- - alerting
- - guide
- - fundamentals
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Labels in Grafana Alerting
-weight: 117
----
-
-# Labels in Grafana Alerting
-
-This topic explains why labels are a fundamental component of alerting.
-
-- The complete set of labels for an alert is what uniquely identifies an alert within Grafana Alerting.
-- The Alertmanager uses labels to match alerts for silences and alert groups in notification policies.
-- The alerting UI shows labels for every alert instance generated during evaluation of that rule.
-- Contact points can access labels to dynamically generate notifications that contain information specific to the alert that is resulting in a notification.
-- You can add labels to an [alerting rule][alerting-rules]. Labels are manually configurable, use template functions, and can reference other labels. Labels added to an alerting rule take precedence in the event of a collision between labels (except in the case of [Grafana reserved labels](#grafana-reserved-labels)).
-
-{{< figure src="/static/img/docs/alerting/unified/rule-edit-details-8-0.png" max-width="550px" caption="Alert details" >}}
-
-## External Alertmanager Compatibility
-
-Grafana's built-in Alertmanager supports both Unicode label keys and values. If you are using an external Prometheus Alertmanager, label keys must be compatible with their [data model](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels).
-This means that label keys must contain only **ASCII letters**, **numbers**, and **underscores**, and match the regex `[a-zA-Z_][a-zA-Z0-9_]*`.
-Any invalid characters will be removed or replaced by the Grafana alerting engine before being sent to the external Alertmanager according to the following rules:
-
-- `Whitespace` will be removed.
-- `Invalid ASCII characters` will be replaced with `_`.
-- `All other characters` will be replaced with their lower-case hex representation. If this is the first character it will be prefixed with `_`.
-
-Example: A label key/value pair `Alert! 🔔="🔥"` will become `Alert_0x1f514="🔥"`.
-
-**Note:** If multiple label keys are sanitized to the same value, the duplicates will have a short hash of the original label appended as a suffix.
-
-## Grafana reserved labels
-
-{{% admonition type="note" %}}
-Labels prefixed with `grafana_` are reserved by Grafana for special use. If you add a manually configured label that begins with `grafana_`, it may be overwritten in the event of a collision.
-To stop the Grafana Alerting engine from adding a reserved label, you can disable it via the `disabled_labels` option in [unified_alerting.reserved_labels][unified-alerting-reserved-labels] configuration.
-{{% /admonition %}}
-
-Grafana reserved labels can be used in the same way as manually configured labels. The current list of available reserved labels is:
-
-| Label | Description |
-| -------------- | ----------------------------------------- |
-| grafana_folder | Title of the folder containing the alert. |
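-
-Reserved labels can be referenced in label and annotation templates in the same way as any other label. For example, this annotation template sketch prints the folder title alongside the alert name (both label names are taken from the standard label set; the surrounding sentence is illustrative):
-
-```
-Alert {{ index $labels "alertname" }} in folder {{ index $labels "grafana_folder" }} is firing.
-```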
-
-{{% docs/reference %}}
-[alerting-rules]: "/docs/grafana/ -> /docs/grafana//alerting/alerting-rules"
-[alerting-rules]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules"
-
-[unified-alerting-reserved-labels]: "/docs/grafana/ -> /docs/grafana//setup-grafana/configure-grafana#unified_alertingreserved_labels"
-[unified-alerting-reserved-labels]: "/docs/grafana-cloud/ -> /docs/grafana//setup-grafana/configure-grafana#unified_alertingreserved_labels"
-{{% /docs/reference %}}
diff --git a/docs/sources/alerting/fundamentals/annotation-label/labels-and-label-matchers.md b/docs/sources/alerting/fundamentals/annotation-label/labels-and-label-matchers.md
deleted file mode 100644
index 44f780fb17e..00000000000
--- a/docs/sources/alerting/fundamentals/annotation-label/labels-and-label-matchers.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/annotation-label/labels-and-label-matchers/
-description: Learn how to use label matchers to link alert rules to notification policies and silences
-keywords:
- - grafana
- - alerting
- - guide
- - fundamentals
-labels:
- products:
- - cloud
- - enterprise
- - oss
-menuTitle: Label matchers
-title: How label matching works
-weight: 117
----
-
-# How label matching works
-
-Use labels and label matchers to link alert rules to notification policies and silences. This gives you a flexible way to manage your alert instances, specify which policy should handle them, and choose which alerts to silence.
-
-A label matcher consists of three distinct parts: the **label**, the **value**, and the **operator**.
-
-- The **Label** field is the name of the label to match. It must exactly match the label name.
-
-- The **Value** field matches against the corresponding value for the specified **Label** name. How it matches depends on the **Operator** value.
-
-- The **Operator** field is the operator to match against the label value. The available operators are:
-
-| Operator | Description |
-| -------- | -------------------------------------------------- |
-| `=` | Select labels that are exactly equal to the value. |
-| `!=` | Select labels that are not equal to the value. |
-| `=~` | Select labels that regex-match the value. |
-| `!~` | Select labels that do not regex-match the value. |
-
-If you are using multiple label matchers, they are combined using the AND logical operator. This means that all matchers must match in order to link a rule to a policy.
-
-## Example scenario
-
-If you define the following set of labels for your alert:
-
-`{ foo=bar, baz=qux, id=12 }`
-
-then:
-
-- A label matcher defined as `foo=bar` matches this alert rule.
-- A label matcher defined as `foo!=bar` does _not_ match this alert rule.
-- A label matcher defined as `id=~[0-9]+` matches this alert rule.
-- A label matcher defined as `baz!~[0-9]+` matches this alert rule.
-- Two label matchers defined as `foo=bar` and `id=~[0-9]+` match this alert rule.
-
-## Exclude labels
-
-You can also write label matchers to exclude labels.
-
-Here is an example that shows how to exclude the label `team`. You can use any of the following matchers to exclude the label.
-
-| Label | Operator | Value |
-| ------ | -------- | ----- |
-| `team` | `=` | `""` |
-| `team` | `!~` | `.+` |
-| `team` | `=~` | `^$` |
diff --git a/docs/sources/alerting/fundamentals/annotation-label/variables-label-annotation.md b/docs/sources/alerting/fundamentals/annotation-label/variables-label-annotation.md
deleted file mode 100644
index c3d67b9eac6..00000000000
--- a/docs/sources/alerting/fundamentals/annotation-label/variables-label-annotation.md
+++ /dev/null
@@ -1,450 +0,0 @@
----
-canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/annotation-label/variables-label-annotation/
-description: Learn about how to template labels and annotations
-keywords:
- - grafana
- - alerting
- - templating
- - labels
- - annotations
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Templating labels and annotations
-weight: 117
----
-
-# Templating labels and annotations
-
-You can use templates to include data from queries and expressions in labels and annotations. For example, you might want to set the severity label for an alert based on the value of the query, or use the instance label from the query in a summary annotation so you know which server is experiencing high CPU usage.
-
-All templates should be written in [text/template](https://pkg.go.dev/text/template). Regardless of whether you are templating a label or an annotation, you should write each template inline inside the label or annotation that you are templating. This means you cannot share templates between labels and annotations, and instead you will need to copy templates wherever you want to use them.
-
-Each template is evaluated whenever the alert rule is evaluated, and is evaluated for every alert separately. For example, if your alert rule has a templated summary annotation, and the alert rule has 10 firing alerts, then the template will be executed 10 times, once for each alert. You should try to avoid doing expensive computations in your templates as much as possible.
-
-## Examples
-
-Rather than write a complete tutorial on text/template, the following examples show the most common use cases for templates. You can use these examples verbatim, or adapt them as necessary for your use case. For more information on how to write text/template, refer to the [text/template](https://pkg.go.dev/text/template) documentation.
-
-### Print all labels, comma separated
-
-To print all labels, comma separated, print the `$labels` variable:
-
-```
-{{ $labels }}
-```
-
-For example, given an alert with the labels `alertname=High CPU usage`, `grafana_folder=CPU alerts` and `instance=server1`, this would print:
-
-```
-alertname=High CPU usage, grafana_folder=CPU alerts, instance=server1
-```
-
-> If you are using classic conditions then `$labels` will not contain any labels from the query. Refer to [the $labels variable](#the-labels-variable) for more information.
-
-### Print all labels, one per line
-
-To print all labels, one per line, use a `range` to iterate over each key/value pair and print them individually. Here `$k` refers to the name and `$v` refers to the value of the current label:
-
-```
-{{ range $k, $v := $labels -}}
-{{ $k }}={{ $v }}
-{{ end }}
-```
-
-For example, given an alert with the labels `alertname=High CPU usage`, `grafana_folder=CPU alerts` and `instance=server1`, this would print:
-
-```
-alertname=High CPU usage
-grafana_folder=CPU alerts
-instance=server1
-```
-
-> If you are using classic conditions then `$labels` will not contain any labels from the query. Refer to [the $labels variable](#the-labels-variable) for more information.
-
-### Print an individual label
-
-To print an individual label use the `index` function with the `$labels` variable:
-
-```
-The host {{ index $labels "instance" }} has exceeded 80% CPU usage for the last 5 minutes
-```
-
-For example, given an alert with the labels `instance=server1`, this would print:
-
-```
-The host server1 has exceeded 80% CPU usage for the last 5 minutes
-```
-
-> If you are using classic conditions then `$labels` will not contain any labels from the query. Refer to [the $labels variable](#the-labels-variable) for more information.
-
-### Print the value of a query
-
-To print the value of an instant query you can print its Ref ID using the `index` function and the `$values` variable:
-
-```
-{{ index $values "A" }}
-```
-
-For example, given an instant query that returns the value 81.2345, this will print:
-
-```
-81.2345
-```
-
-To print the value of a range query you must first reduce it from a time series to an instant vector with a reduce expression. You can then print the result of the reduce expression by using its Ref ID instead. For example, if the reduce expression takes the average of A and has the Ref ID B you would write:
-
-```
-{{ index $values "B" }}
-```
-
-### Print the humanized value of a query
-
-To print the humanized value of an instant query use the `humanize` function:
-
-```
-{{ humanize (index $values "A").Value }}
-```
-
-For example, given an instant query that returns the value 81.2345, this will print:
-
-```
-81.234
-```
-
-To print the humanized value of a range query you must first reduce it from a time series to an instant vector with a reduce expression. You can then print the result of the reduce expression by using its Ref ID instead. For example, if the reduce expression takes the average of A and has the Ref ID B you would write:
-
-```
-{{ humanize (index $values "B").Value }}
-```
-
-### Print the value of a query as a percentage
-
-To print the value of an instant query as a percentage use the `humanizePercentage` function:
-
-```
-{{ humanizePercentage (index $values "A").Value }}
-```
-
-This function expects the value to be a decimal number between 0 and 1. If the value is instead a decimal number between 0 and 100, you can either divide it by 100 in your query or use a math expression. If the query is a range query, you must first reduce it from a time series to an instant vector with a reduce expression.
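-
-For example, if the reduce expression has Ref ID B (mirroring the earlier examples in this topic), you would write:
-
-```
-{{ humanizePercentage (index $values "B").Value }}
-```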
-
-### Set a severity from the value of a query
-
-To set a severity label from the value of a query, use an if statement and the greater than comparison function. Make sure to use decimals (`80.0`, `50.0`, `0.0`, and so on) when comparing against `$values`, as text/template does not support type coercion. You can find a list of all the supported comparison functions in the [text/template documentation](https://pkg.go.dev/text/template#hdr-Functions).
-
-```
-{{ if (gt $values.A.Value 80.0) -}}
-high
-{{ else if (gt $values.A.Value 50.0) -}}
-medium
-{{ else -}}
-low
-{{- end }}
-```
-
-### Print all labels from a classic condition
-
-You cannot use `$labels` to print labels from the query if you are using classic conditions, and must use `$values` instead. The reason for this is that classic conditions discard these labels to enforce uni-dimensional behavior (at most one alert per alert rule). If classic conditions didn't discard these labels, then queries that returned many time series would cause alerts to flap between firing and resolved constantly, as the labels would change every time the alert rule was evaluated.
-
-Instead, the `$values` variable contains the reduced values of all time series for all conditions that are firing. For example, if you have an alert rule with a query A that returns two time series, and a classic condition B with two conditions, then `$values` would contain `B0`, `B1`, `B2` and `B3`. If the classic condition B had just one condition, then `$values` would contain just `B0` and `B1`.
-
-To print all labels of all firing time series use the following template (make sure to replace `B` in the regular expression with the Ref ID of the classic condition if it's different):
-
-```
-{{ range $k, $v := $values -}}
-{{ if (match "B[0-9]+" $k) -}}
-{{ $k }}: {{ $v.Labels }}{{ end }}
-{{ end }}
-```
-
-For example, a classic condition for two time series exceeding a single condition would print:
-
-```
-B0: instance=server1
-B1: instance=server2
-```
-
-If the classic condition has two or more conditions, and a time series exceeds multiple conditions at the same time, then its labels will be duplicated for each condition that is exceeded:
-
-```
-B0: instance=server1
-B1: instance=server2
-B2: instance=server1
-B3: instance=server2
-```
-
-If you need to print unique labels you should consider changing your alert rules from uni-dimensional to multi-dimensional instead. You can do this by replacing your classic condition with reduce and math expressions.
-
-### Print all values from a classic condition
-
-To print all values from a classic condition take the previous example and replace `$v.Labels` with `$v.Value`:
-
-```
-{{ range $k, $v := $values -}}
-{{ if (match "B[0-9]+" $k) -}}
-{{ $k }}: {{ $v.Value }}{{ end }}
-{{ end }}
-```
-
-For example, a classic condition for two time series exceeding a single condition would print:
-
-```
-B0: 81.2345
-B1: 84.5678
-```
-
-If the classic condition has two or more conditions, and a time series exceeds multiple conditions at the same time, then `$values` will contain the values of all conditions:
-
-```
-B0: 81.2345
-B1: 92.3456
-B2: 84.5678
-B3: 95.6789
-```
-
-## Variables
-
-The following variables are available to you when templating labels and annotations:
-
-### The labels variable
-
-The `$labels` variable contains all labels from the query. For example, suppose you have a query that returns CPU usage for all of your servers, and you have an alert rule that fires when any of your servers have exceeded 80% CPU usage for the last 5 minutes. You want to add a summary annotation to the alert that tells you which server is experiencing high CPU usage. With the `$labels` variable you can write a template that prints a human-readable sentence such as:
-
-```
-CPU usage for {{ index $labels "instance" }} has exceeded 80% for the last 5 minutes
-```
-
-> If you are using a classic condition then `$labels` will not contain any labels from the query. Classic conditions discard these labels in order to enforce uni-dimensional behavior (at most one alert per alert rule). If you want to use labels from the query in your template then use the example [here](#print-all-labels-from-a-classic-condition).
-
-### The value variable
-
-The `$value` variable is a string containing the labels and values of all instant queries; threshold, reduce, and math expressions; and classic conditions in the alert rule. It does not contain the results of range queries, as these can return anywhere from tens to tens of thousands of rows or metrics. If it did, a single alert for an especially large query could use tens of megabytes of memory and Grafana would run out of memory very quickly.
-
-To print the `$value` variable in the summary you would write something like this:
-
-```
-CPU usage for {{ index $labels "instance" }} has exceeded 80% for the last 5 minutes: {{ $value }}
-```
-
-The output looks something like this:
-
-```
-CPU usage for instance1 has exceeded 80% for the last 5 minutes: [ var='A' labels={instance=instance1} value=81.234 ]
-```
-
-Here `var='A'` refers to the instant query with Ref ID A, `labels={instance=instance1}` refers to the labels, and `value=81.234` refers to the average CPU usage over the last 5 minutes.
-
-If you want to print just part of the string instead of the full string, use the `$values` variable. It contains the same information as `$value`, but in a structured table, and is much easier to use than writing a regular expression to match just the text you want.
-
-### The values variable
-
-The `$values` variable is a table containing the labels and floating point values of all instant queries and expressions, indexed by their Ref IDs.
-
-To print the value of the instant query with Ref ID A:
-
-```
-CPU usage for {{ index $labels "instance" }} has exceeded 80% for the last 5 minutes: {{ index $values "A" }}
-```
-
-For example, given an alert with the labels `instance=server1` and an instant query with the value `81.2345`, this would print:
-
-```
-CPU usage for server1 has exceeded 80% for the last 5 minutes: 81.2345
-```
-
-If the query in Ref ID A is a range query rather than an instant query then add a reduce expression with Ref ID B and replace `(index $values "A")` with `(index $values "B")`:
-
-```
-CPU usage for {{ index $labels "instance" }} has exceeded 80% for the last 5 minutes: {{ index $values "B" }}
-```
-
-## Functions
-
-The following functions are available to you when templating labels and annotations:
-
-### args
-
-The `args` function translates a list of objects to a map with keys `arg0`, `arg1`, and so on. This is intended to allow multiple arguments to be passed to templates:
-
-```
-{{define "x"}}{{.arg0}} {{.arg1}}{{end}}{{template "x" (args 1 "2")}}
-```
-
-```
-1 2
-```
-
-### externalURL
-
-The `externalURL` function returns the external URL of the Grafana server as configured in the ini file(s):
-
-```
-{{ externalURL }}
-```
-
-```
-https://example.com/grafana
-```
-
-### graphLink
-
-The `graphLink` function returns the path to the graphical view in [Explore][explore] for the given expression and data source:
-
-```
-{{ graphLink "{\"expr\": \"up\", \"datasource\": \"gdev-prometheus\"}" }}
-```
-
-```
-/explore?left=["now-1h","now","gdev-prometheus",{"datasource":"gdev-prometheus","expr":"up","instant":false,"range":true}]
-```
-
-### humanize
-
-The `humanize` function humanizes decimal numbers:
-
-```
-{{ humanize 1000.0 }}
-```
-
-```
-1k
-```
-
-### humanize1024
-
-The `humanize1024` function works like `humanize` but uses 1024 as the base rather than 1000:
-
-```
-{{ humanize1024 1024.0 }}
-```
-
-```
-1ki
-```
-
-### humanizeDuration
-
-The `humanizeDuration` function humanizes a duration in seconds:
-
-```
-{{ humanizeDuration 60.0 }}
-```
-
-```
-1m 0s
-```
-
-### humanizePercentage
-
-The `humanizePercentage` function humanizes a ratio value to a percentage:
-
-```
-{{ humanizePercentage 0.2 }}
-```
-
-```
-20%
-```
-
-### humanizeTimestamp
-
-The `humanizeTimestamp` function humanizes a Unix timestamp:
-
-```
-{{ humanizeTimestamp 1577836800.0 }}
-```
-
-```
-2020-01-01 00:00:00 +0000 UTC
-```
-
-### match
-
-The `match` function matches the text against a regular expression pattern:
-
-```
-{{ match "a.*" "abc" }}
-```
-
-```
-true
-```
-
-### pathPrefix
-
-The `pathPrefix` function returns the path of the Grafana server as configured in the ini file(s):
-
-```
-{{ pathPrefix }}
-```
-
-```
-/grafana
-```
-
-### tableLink
-
-The `tableLink` function returns the path to the tabular view in [Explore][explore] for the given expression and data source:
-
-```
-{{ tableLink "{\"expr\": \"up\", \"datasource\": \"gdev-prometheus\"}" }}
-```
-
-```
-/explore?left=["now-1h","now","gdev-prometheus",{"datasource":"gdev-prometheus","expr":"up","instant":true,"range":false}]
-```
-
-### title
-
-The `title` function capitalizes the first character of each word:
-
-```
-{{ title "hello, world!" }}
-```
-
-```
-Hello, World!
-```
-
-### toLower
-
-The `toLower` function returns all text in lowercase:
-
-```
-{{ toLower "Hello, world!" }}
-```
-
-```
-hello, world!
-```
-
-### toUpper
-
-The `toUpper` function returns all text in uppercase:
-
-```
-{{ toUpper "Hello, world!" }}
-```
-
-```
-HELLO, WORLD!
-```
-
-### reReplaceAll
-
-The `reReplaceAll` function replaces text matching the regular expression:
-
-```
-{{ reReplaceAll "localhost:(.*)" "example.com:$1" "localhost:8080" }}
-```
-
-```
-example.com:8080
-```
-
-{{% docs/reference %}}
-[explore]: "/docs/grafana/ -> /docs/grafana//explore"
-[explore]: "/docs/grafana-cloud/ -> /docs/grafana//explore"
-{{% /docs/reference %}}
diff --git a/docs/sources/alerting/fundamentals/contact-points/index.md b/docs/sources/alerting/fundamentals/contact-points/index.md
deleted file mode 100644
index 5494a9b6c0a..00000000000
--- a/docs/sources/alerting/fundamentals/contact-points/index.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-aliases:
- - /docs/grafana/latest/alerting/contact-points/
- - /docs/grafana/latest/alerting/unified-alerting/contact-points/
- - /docs/grafana/latest/alerting/fundamentals/contact-points/contact-point-types/
-canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/contact-points/
-description: Learn about contact points and the supported contact point integrations
-keywords:
- - grafana
- - alerting
- - guide
- - contact point
- - notification channel
- - create
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Contact points
-weight: 150
----
-
-# Contact points
-
-Contact points contain the configuration for sending notifications. A contact point is a list of integrations, each of which sends a notification to a particular email address, service, or URL. Contact points can have multiple integrations of the same kind, or a combination of integrations of different kinds. For example, a contact point could contain a Pagerduty integration; an email and Slack integration; or a Pagerduty integration, a Slack integration, and two email integrations. You can also configure a contact point with no integrations, in which case no notifications are sent.
-
-A contact point cannot send notifications until it has been added to a notification policy. A notification policy can only send alerts to one contact point, but a contact point can be added to a number of notification policies at the same time. When an alert matches a notification policy, the alert is sent to the contact point in that notification policy, which then sends a notification to each integration in its configuration.
-
-Contact points can be configured for the Grafana Alertmanager as well as external alertmanagers.
-
-You can also use notification templating to customize notification messages for contact point integrations.
-
-**Note:**
-
-If you've created an OnCall contact point in the Grafana OnCall application, you can view it in the Alerting application.
-
-## Supported contact point integrations
-
-The following table lists the contact point integrations supported by Grafana.
-
-| Name | Type | Grafana Alertmanager | Other Alertmanagers |
-| ------------------------------------------------ | ------------------------- | -------------------- | -------------------------------------------------------------------------------------------------------- |
-| [DingDing](https://www.dingtalk.com/en) | `dingding` | Supported | N/A |
-| [Discord](https://discord.com/) | `discord` | Supported | N/A |
-| Email | `email` | Supported | Supported |
-| [Google Chat](https://chat.google.com/) | `googlechat` | Supported | N/A |
-| [Kafka](https://kafka.apache.org/) | `kafka` | Supported | N/A |
-| [Line](https://line.me/en/) | `line` | Supported | N/A |
-| [Microsoft Teams](https://teams.microsoft.com/) | `teams` | Supported | Supported |
-| [Opsgenie](https://atlassian.com/opsgenie/) | `opsgenie` | Supported | Supported |
-| [Pagerduty](https://www.pagerduty.com/) | `pagerduty` | Supported | Supported |
-| [Prometheus Alertmanager](https://prometheus.io) | `prometheus-alertmanager` | Supported | N/A |
-| [Pushover](https://pushover.net/) | `pushover` | Supported | Supported |
-| [Sensu Go](https://docs.sensu.io/sensu-go/) | `sensugo` | Supported | N/A |
-| [Slack](https://slack.com/) | `slack` | Supported | Supported |
-| [Telegram](https://telegram.org/) | `telegram` | Supported | N/A |
-| [Threema](https://threema.ch/) | `threema` | Supported | N/A |
-| [VictorOps](https://help.victorops.com/) | `victorops` | Supported | Supported |
-| Webhook | `webhook` | Supported | Supported ([different format](https://prometheus.io/docs/alerting/latest/configuration/#webhook_config)) |
-| Cisco Webex Teams | `webex` | Supported | Supported |
-| WeCom | `wecom` | Supported | N/A |
-| [Zenduty](https://www.zenduty.com/) | `webhook` | Supported | N/A |
diff --git a/docs/sources/alerting/fundamentals/data-source-alerting.md b/docs/sources/alerting/fundamentals/data-source-alerting.md
deleted file mode 100644
index b7471e23f42..00000000000
--- a/docs/sources/alerting/fundamentals/data-source-alerting.md
+++ /dev/null
@@ -1,95 +0,0 @@
----
-canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/data-source-alerting/
-description: Learn about the data sources supported by Grafana Alerting
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Data sources and Grafana Alerting
-weight: 100
----
-
-# Data sources and Grafana Alerting
-
-There are a number of data sources that are compatible with Grafana Alerting. Each data source is supported by a plugin. You can use one of the built-in data sources listed below, use [external data source plugins](/grafana/plugins/?type=datasource), or create your own data source plugin.
-
-If you are creating your own data source plugin, make sure it is a backend plugin, as Grafana Alerting requires this to evaluate rules using the data source. Frontend data sources are not supported because the evaluation engine runs on the backend.
-
-Specifying `{ "alerting": true, "backend": true }` in the plugin.json file indicates that the data source plugin is compatible with Grafana Alerting and includes the backend data-fetching code. For more information, refer to [Build a data source backend plugin](/tutorials/build-a-data-source-backend-plugin/).
-
-These are the data sources that are compatible with and supported by Grafana Alerting.
-
-- [AWS CloudWatch][]
-- [Azure Monitor][]
-- [Elasticsearch][]
-- [Google Cloud Monitoring][]
-- [Graphite][]
-- [InfluxDB][]
-- [Loki][]
-- [Microsoft SQL Server (MSSQL)][]
-- [MySQL][]
-- [Open TSDB][]
-- [PostgreSQL][]
-- [Prometheus][]
-- [Jaeger][]
-- [Zipkin][]
-- [Tempo][]
-- [Testdata][]
-
-## Useful links
-
-- [Grafana data sources][]
-
-{{% docs/reference %}}
-[Grafana data sources]: "/docs/grafana/ -> /docs/grafana//datasources"
-[Grafana data sources]: "/docs/grafana-cloud/ -> /docs/grafana//datasources"
-
-[AWS CloudWatch]: "/docs/grafana/ -> /docs/grafana//datasources/aws-cloudwatch"
-[AWS CloudWatch]: "/docs/grafana-cloud/ -> /docs/grafana//datasources/aws-cloudwatch"
-
-[Azure Monitor]: "/docs/grafana/ -> /docs/grafana//datasources/azure-monitor"
-[Azure Monitor]: "/docs/grafana-cloud/ -> /docs/grafana//datasources/azure-monitor"
-
-[Elasticsearch]: "/docs/grafana/ -> /docs/grafana//datasources/elasticsearch"
-[Elasticsearch]: "/docs/grafana-cloud/ -> /docs/grafana//datasources/elasticsearch"
-
-[Google Cloud Monitoring]: "/docs/grafana/ -> /docs/grafana//datasources/google-cloud-monitoring"
-[Google Cloud Monitoring]: "/docs/grafana-cloud/ -> /docs/grafana//datasources/google-cloud-monitoring"
-
-[Graphite]: "/docs/grafana/ -> /docs/grafana//datasources/graphite"
-[Graphite]: "/docs/grafana-cloud/ -> /docs/grafana//datasources/graphite"
-
-[InfluxDB]: "/docs/grafana/ -> /docs/grafana//datasources/influxdb"
-[InfluxDB]: "/docs/grafana-cloud/ -> /docs/grafana//datasources/influxdb"
-
-[Loki]: "/docs/grafana/ -> /docs/grafana//datasources/loki"
-[Loki]: "/docs/grafana-cloud/ -> /docs/grafana//datasources/loki"
-
-[Microsoft SQL Server (MSSQL)]: "/docs/grafana/ -> /docs/grafana//datasources/mssql"
-[Microsoft SQL Server (MSSQL)]: "/docs/grafana-cloud/ -> /docs/grafana//datasources/mssql"
-
-[MySQL]: "/docs/grafana/ -> /docs/grafana//datasources/mysql"
-[MySQL]: "/docs/grafana-cloud/ -> /docs/grafana//datasources/mysql"
-
-[Open TSDB]: "/docs/grafana/ -> /docs/grafana//datasources/opentsdb"
-[Open TSDB]: "/docs/grafana-cloud/ -> /docs/grafana//datasources/opentsdb"
-
-[PostgreSQL]: "/docs/grafana/ -> /docs/grafana//datasources/postgres"
-[PostgreSQL]: "/docs/grafana-cloud/ -> /docs/grafana//datasources/postgres"
-
-[Prometheus]: "/docs/grafana/ -> /docs/grafana//datasources/prometheus"
-[Prometheus]: "/docs/grafana-cloud/ -> /docs/grafana//datasources/prometheus"
-
-[Jaeger]: "/docs/grafana/ -> /docs/grafana//datasources/jaeger"
-[Jaeger]: "/docs/grafana-cloud/ -> /docs/grafana//datasources/jaeger"
-
-[Zipkin]: "/docs/grafana/ -> /docs/grafana//datasources/zipkin"
-[Zipkin]: "/docs/grafana-cloud/ -> /docs/grafana//datasources/zipkin"
-
-[Tempo]: "/docs/grafana/ -> /docs/grafana//datasources/tempo"
-[Tempo]: "/docs/grafana-cloud/ -> /docs/grafana//datasources/tempo"
-
-[Testdata]: "/docs/grafana/ -> /docs/grafana//datasources/testdata"
-[Testdata]: "/docs/grafana-cloud/ -> /docs/grafana//datasources/testdata"
-{{% /docs/reference %}}
diff --git a/docs/sources/alerting/fundamentals/evaluate-grafana-alerts.md b/docs/sources/alerting/fundamentals/evaluate-grafana-alerts.md
deleted file mode 100644
index 2a5ca6996a2..00000000000
--- a/docs/sources/alerting/fundamentals/evaluate-grafana-alerts.md
+++ /dev/null
@@ -1,114 +0,0 @@
----
-aliases:
- - ../metrics/
- - ../unified-alerting/fundamentals/evaluate-grafana-alerts/
-canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/evaluate-grafana-alerts/
-description: Learn how Grafana-managed alerts are evaluated by the backend engine as well as how Grafana handles alerting on numeric rather than time series data
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Alerting on numeric data
-weight: 110
----
-
-# Alerting on numeric data
-
-This topic describes how Grafana managed alerts are evaluated by the backend engine as well as how Grafana handles alerting on numeric rather than time series data.
-
-- [Alerting on numeric data](#alerting-on-numeric-data)
- - [Alert evaluation](#alert-evaluation)
- - [Metrics from the alerting engine](#metrics-from-the-alerting-engine)
- - [Alerting on numeric data](#alerting-on-numeric-data-1)
- - [Tabular Data](#tabular-data)
- - [Example](#example)
-
-## Alert evaluation
-
-Grafana managed alerts query the following backend data sources that have alerting enabled:
-
-- built-in data sources or those developed and maintained by Grafana: `Graphite`, `Prometheus`, `Loki`, `InfluxDB`, `Elasticsearch`,
- `Google Cloud Monitoring`, `Cloudwatch`, `Azure Monitor`, `MySQL`, `PostgreSQL`, `MSSQL`, `OpenTSDB`, and `Oracle`
-- community developed backend data sources with alerting enabled (`backend` and `alerting` properties are set in the [plugin.json](/developers/plugin-tools/reference-plugin-json))
-
-### Metrics from the alerting engine
-
-The alerting engine publishes some internal metrics about itself. You can read more about how Grafana publishes [internal metrics][set-up-grafana-monitoring].
-
-| Metric Name | Type | Description |
-| ------------------------------------------------- | --------- | ---------------------------------------------------------------------------------------- |
-| `grafana_alerting_alerts` | gauge | How many alerts by state |
-| `grafana_alerting_request_duration` | histogram | Histogram of requests to the Alerting API |
-| `grafana_alerting_active_configurations` | gauge | The number of active, non-default Alertmanager configurations for Grafana-managed alerts |
-| `grafana_alerting_rule_evaluations_total` | counter | The total number of rule evaluations |
-| `grafana_alerting_rule_evaluation_failures_total` | counter | The total number of rule evaluation failures |
-| `grafana_alerting_rule_evaluation_duration` | summary | The duration for a rule to execute |
-| `grafana_alerting_rule_group_rules` | gauge | The number of rules |
-
-## Alerting on numeric data
-
-With certain data sources, numeric data that is not time series can be alerted on directly, or passed into Server Side Expressions (SSE). This allows for more processing and improved efficiency within the data source, and can also simplify alert rules.
-When alerting on numeric data instead of time series data, there is no need to reduce each labeled time series into a single number. Instead, labeled numbers are returned to Grafana.
-
-### Tabular Data
-
-This feature is supported with backend data sources that query tabular data:
-
-- SQL data sources such as MySQL, Postgres, MSSQL, and Oracle.
-- The Azure Kusto based services: Azure Monitor (Logs), Azure Monitor (Azure Resource Graph), and Azure Data Explorer.
-
-A query with Grafana-managed alerts or SSE is considered numeric with these data sources if:
-
-- The "Format AS" option is set to "Table" in the data source query.
-- The table response returned to Grafana from the query includes only one numeric (e.g. int, double, float) column, and optionally additional string columns.
-
-If there are string columns, those columns become labels. The name of the column becomes the label name, and the value for each row becomes the value of the corresponding label. If multiple rows are returned, each row should be uniquely identified by its labels.
-
-### Example
-
-For a MySQL table called "DiskSpace":
-
-| Time | Host | Disk | PercentFree |
-| ----------- | ---- | ---- | ----------- |
-| 2021-June-7 | web1 | /etc | 3 |
-| 2021-June-7 | web2 | /var | 4 |
-| 2021-June-7 | web3 | /var | 8 |
-| ... | ... | ... | ... |
-
-You can query the data filtering on time, but without returning the time series to Grafana. For example, an alert that triggers per Host and Disk when there is less than 5% free space:
-
-```sql
-SELECT Host, Disk, CASE WHEN PercentFree < 5.0 THEN PercentFree ELSE 0 END AS PercentFree
-FROM (
-  SELECT
-    Host,
-    Disk,
-    Avg(PercentFree) AS PercentFree
-  FROM DiskSpace
-  WHERE $__timeFilter(Time)
-  GROUP BY
-    Host,
-    Disk
-) AS avg_disk -- alias for the derived table (the name is illustrative)
-```
-
-This query returns the following Table response to Grafana:
-
-| Host | Disk | PercentFree |
-| ---- | ---- | ----------- |
-| web1 | /etc | 3 |
-| web2 | /var | 4 |
-| web3 | /var | 0 |
-
-When this query is used as the **condition** in an alert rule, rows with a non-zero value are alerting. As a result, three alert instances are produced:
-
-| Labels                | Status   |
-| --------------------- | -------- |
-| {Host=web1,Disk=/etc} | Alerting |
-| {Host=web2,Disk=/var} | Alerting |
-| {Host=web3,Disk=/var} | Normal   |
-
-{{% docs/reference %}}
-
-[set-up-grafana-monitoring]: "/docs/grafana/ -> /docs/grafana//setup-grafana/set-up-grafana-monitoring"
-[set-up-grafana-monitoring]: "/docs/grafana-cloud/ -> /docs/grafana//setup-grafana/set-up-grafana-monitoring"
-{{% /docs/reference %}}
diff --git a/docs/sources/alerting/fundamentals/high-availability/_index.md b/docs/sources/alerting/fundamentals/high-availability/_index.md
deleted file mode 100644
index c1d5db933b8..00000000000
--- a/docs/sources/alerting/fundamentals/high-availability/_index.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-aliases:
- - ../high-availability/
- - ../unified-alerting/high-availability/
-canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/high-availability/
-description: Learn about high availability in Grafana Alerting
-keywords:
- - grafana
- - alerting
- - tutorials
- - ha
- - high availability
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Alerting high availability
-weight: 170
----
-
-# Alerting high availability
-
-Grafana Alerting uses the Prometheus model of separating the evaluation of alert rules from the delivery of notifications. In this model, the evaluation of alert rules is done in the alert generator and the delivery of notifications is done in the alert receiver. In Grafana Alerting, the alert generator is the Scheduler and the receiver is the Alertmanager.
-
-{{< figure src="/static/img/docs/alerting/unified/high-availability-ua.png" class="docs-image--no-shadow" max-width= "750px" caption="High availability" >}}
-
-When running multiple instances of Grafana, all alert rules are evaluated on all instances. You can think of the evaluation of alert rules as being duplicated. This is how Grafana Alerting makes sure that as long as at least one Grafana instance is working, alert rules are still evaluated and notifications for alerts are still sent. You will see this duplication in state history, and it is a good way to tell whether you are using high availability.
-
-While the alert generator evaluates all alert rules on all instances, the alert receiver makes a best-effort attempt to avoid sending duplicate notifications. Alertmanager chooses availability over consistency, which may result in occasional duplicated or out-of-order notifications. It takes the opinion that duplicate or out-of-order notifications are better than no notifications.
-
-The Alertmanager uses a gossip protocol to share information about notifications between Grafana instances. It also gossips silences, which means a silence created on one Grafana instance is replicated to all other Grafana instances. Both notifications and silences are persisted to the database periodically, and during graceful shutdown.
-
-It is important to make sure that gossiping is configured and tested. For more information, refer to [Configure alerting high availability][configure-high-availability].
-
-## Useful links
-
-- [Configure alerting high availability][configure-high-availability]
-
-{{% docs/reference %}}
-[configure-high-availability]: "/docs/grafana/ -> /docs/grafana//alerting/set-up/configure-high-availability"
-[configure-high-availability]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/set-up/configure-high-availability"
-{{% /docs/reference %}}
diff --git a/docs/sources/alerting/fundamentals/notification-policies/_index.md b/docs/sources/alerting/fundamentals/notification-policies/_index.md
deleted file mode 100644
index 56dc8e06bd3..00000000000
--- a/docs/sources/alerting/fundamentals/notification-policies/_index.md
+++ /dev/null
@@ -1,53 +0,0 @@
----
-canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/notification-policies/
-description: Learn about how notification policies work
-keywords:
- - grafana
- - alerting
- - notification policies
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Notifications
-weight: 160
----
-
-# Notifications
-
-Choosing how, when, and where to send your alert notifications is an important part of setting up your alerting system. These decisions will have a direct impact on your ability to resolve issues quickly and not miss anything important.
-
-As a first step, define your contact points: where to send your alert notifications. A contact point is a set of one or more integrations that are used to deliver notifications. Add notification templates to contact points for reuse and consistent messaging in your notifications.
-
-Next, create a notification policy, which is a set of rules for where, when, and how your alerts are routed to contact points. In a notification policy, you define where to send your alert notifications by choosing one of the contact points you created.
-
-## Alertmanagers
-
-Grafana uses Alertmanagers to send notifications for firing and resolved alerts. Grafana has its own Alertmanager, referred to as "Grafana" in the user interface, but also supports sending notifications from other Alertmanagers too, such as the [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/). The Grafana Alertmanager uses notification policies and contact points to configure how and where a notification is sent; how often a notification should be sent; and whether alerts should all be sent in the same notification, sent in grouped notifications based on a set of labels, or as separate notifications.
-
-## Notification policies
-
-Notification policies control when and where notifications are sent. A notification policy can choose to send all alerts together in the same notification, send alerts in grouped notifications based on a set of labels, or send alerts as separate notifications. You can configure each notification policy to control how often notifications should be sent as well as having one or more mute timings to inhibit notifications at certain times of the day and on certain days of the week.
-
-Notification policies are organized in a tree structure where at the root of the tree there is a notification policy called the default policy. There can be only one default policy and the default policy cannot be deleted.
-
-Specific routing policies are children of the default policy and can be used to match either all alerts or a subset of alerts based on a set of matching labels. A notification policy matches an alert when its matching labels match the labels in the alert.
-
-A nested policy can have its own nested policies, which allow for additional matching of alerts. For example, a nested policy could send infrastructure alerts to the Ops team, while a further nested policy might send high-priority alerts to Pagerduty and low-priority alerts as emails.
-
-All alerts, irrespective of their labels, match the default policy. However, when the default policy receives an alert, it looks at each nested policy and sends the alert to the first nested policy that matches the alert. If that nested policy has further nested policies, it can attempt to match the alert against one of its own nested policies. If no nested policies match the alert, the policy itself is the matching policy. If there are no nested policies, or no nested policies match the alert, the default policy is the matching policy.
-
-
-
-## Notification templates
-
-You can customize notifications with templates. For example, templates can be used to change the subject and message of an email, or the title and message of notifications sent to Slack.
-
-Templates are not limited to an individual integration or contact point, but instead can be used in a number of integrations in the same contact point and even integrations across different contact points. For example, a Grafana user can create a template called `custom_subject_or_title` and use it for both templating subjects in emails and titles of Slack messages without having to create two separate templates.
-
-All notifications templates are written in [Go's templating language](https://pkg.go.dev/text/template), and are in the Contact points tab on the Alerting page.
-
-## Silences
-
-You can use silences to mute notifications from one or more firing rules. Silences do not stop alerts from firing or being resolved, or hide firing alerts in the user interface. A silence lasts as long as its duration, which can be configured in minutes, hours, days, months, or years.
diff --git a/docs/sources/alerting/fundamentals/notification-policies/notifications.md b/docs/sources/alerting/fundamentals/notification-policies/notifications.md
deleted file mode 100644
index 0bb333a5eb2..00000000000
--- a/docs/sources/alerting/fundamentals/notification-policies/notifications.md
+++ /dev/null
@@ -1,137 +0,0 @@
----
-aliases:
- - ../notifications/
- - alerting/manage-notifications/create-notification-policy/
-canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/notification-policies/notifications/
-description: Learn about how notification policies work and are structured
-keywords:
- - grafana
- - alerting
- - alertmanager
- - notification policies
- - contact points
- - silences
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Notification policies
-weight: 410
----
-
-# Notification policies
-
-Notification policies provide you with a flexible way of routing alerts to various receivers. Using label matchers, you can modify alert notification delivery without having to update every individual alert rule.
-
-Learn more about how notification policies work and are structured, so that you can make the most out of setting up your notification policies.
-
-## Policy tree
-
-Notification policies are _not_ a list, but rather are structured according to a [tree structure](https://en.wikipedia.org/wiki/Tree_structure). This means that each policy can have child policies, and so on. The root of the notification policy tree is called the **Default notification policy**.
-
-Each policy consists of a set of label matchers (zero or more) that specify which alerts the policy handles.
-
-For more information on label matching, see [how label matching works][labels-and-label-matchers].
-
-{{% admonition type="note" %}}
-If you haven't configured any label matchers for your notification policy, your notification policy will match _all_ alert instances. This may prevent child policies from being evaluated unless you have enabled **Continue matching siblings** on the notification policy.
-{{% /admonition %}}
-
-## Routing
-
-To determine which notification policy handles which alert instances, look at the existing set of notification policies, beginning with the default notification policy.
-
-If no policies other than the default policy are configured, the default policy will handle the alert instance.
-
-If policies other than the default policy are defined, Grafana evaluates those notification policies in the order they are displayed.
-
-If a notification policy has label matchers that match the labels of the alert instance, it descends into its child policies and, if there are any, continues to look for any child policies that might have label matchers that further narrow down the set of labels, and so forth until no more child policies have been found.
-
-If no child policies are defined in a matching notification policy, or none of its child policies match the alert instance's labels, that policy is the one that handles the alert instance.
-
-As soon as a matching policy is found, the system does not continue to look for other matching policies. If you want to continue to look for other policies that may match, enable **Continue matching siblings** on that particular policy.
-
-Lastly, if none of the notification policies are selected, the default notification policy is used.
-
-### Routing example
-
-Here is an example of a relatively simple notification policy tree and some alert instances.
-
-{{< figure src="/media/docs/alerting/notification-routing.png" max-width="750px" caption="Notification policy routing" >}}
-
-Here's a breakdown of how these policies are selected:
-
-**Pod stuck in CrashLoop** does not have a `severity` label, so none of its child policies are matched. It does have a `team=operations` label, so the first policy is matched.
-
-The `team=security` policy is not evaluated since we already found a match and **Continue matching siblings** was not configured for that policy.
-
-**Disk Usage – 80%** has both a `team` and `severity` label, and matches a child policy of the operations team.
-
-**Unauthorized log entry** has a `team` label but does not match the first policy (`team=operations`) since the values are not the same, so it will continue searching and match the `team=security` policy. It does not have any child policies, so the additional `severity=high` label is ignored.
-
-## Inheritance
-
-In addition to child policies being a useful concept for routing alert instances, they also inherit properties from their parent policy. This also applies to any policies that are child policies of the default notification policy.
-
-The following properties are inherited by child policies:
-
-- Contact point
-- Grouping options
-- Timing options
-- Mute timings
-
-Each of these properties can be overwritten by an individual policy should you wish to override the inherited properties.
-
-To inherit a contact point from the parent policy, leave it blank. To override the inherited grouping options, enable **Override grouping**. To override the inherited timing options, enable **Override general timings**.
-
-### Inheritance example
-
-The example below shows how the notification policy tree from our previous example allows the child policies of the `team=operations` policy to inherit its contact point.
-
-In this way, we can avoid having to specify the same contact point multiple times for each child policy.
-
-{{< figure src="/media/docs/alerting/notification-inheritance.png" max-width="750px" caption="Notification policy inheritance" >}}
-
-## Additional configuration options
-
-### Grouping
-
-Grouping is an important feature of Grafana Alerting as it allows you to batch relevant alerts together into a smaller number of notifications. This is particularly important if notifications are delivered to first responders, such as engineers on call, where receiving lots of notifications in a short period of time can be overwhelming and in some cases can negatively impact a first responder's ability to respond to an incident. For example, consider a large outage where many of your systems are down. In this case, grouping can be the difference between receiving 1 phone call and 100 phone calls.
-
-You choose how alerts are grouped together using the Group by option in a notification policy. By default, notification policies in Grafana group alerts together by alert rule using the `alertname` and `grafana_folder` labels (since alert names are not unique across multiple folders). Should you wish to group alerts by something other than the alert rule, change the grouping to any other combination of labels.
-
-#### Disable grouping
-
-Should you wish to receive every alert as a separate notification, you can do so by grouping by a special label called `...`. This is useful when your alerts are being delivered to an automated system instead of a first-responder.
-
-#### A single group for all alerts
-
-Should you wish to receive all alerts together in a single notification, you can do so by leaving Group by empty.
-
-### Timing options
-
-The timing options decide how often notifications are sent for each group of alerts. There are three timers that you need to know about: Group wait, Group interval, and Repeat interval.
-
-#### Group wait
-
-Group wait is the amount of time Grafana waits before sending the first notification for a new group of alerts. The longer Group wait is, the more time you have for other alerts to arrive. The shorter Group wait is, the earlier the first notification is sent, but at the risk of sending incomplete notifications. You should always choose a Group wait that makes the most sense for your use case.
-
-**Default** 30 seconds
-
-#### Group interval
-
-Once the first notification has been sent for a new group of alerts, Grafana starts the Group interval timer. This is the amount of time Grafana waits before sending notifications about changes to the group. For example, another firing alert might have just been added to the group while an existing alert might have resolved. If an alert was too late to be included in the first notification due to Group wait, it is included in subsequent notifications after Group interval. Once Group interval has elapsed, Grafana resets the Group interval timer. This repeats until there are no more alerts in the group, after which the group is deleted.
-
-**Default** 5 minutes
-
-#### Repeat interval
-
-Repeat interval decides how often notifications are repeated if the group has not changed since the last notification. You can think of these as reminders that some alerts are still firing. Repeat interval is closely related to Group interval: your Repeat interval must not only be greater than or equal to Group interval, but must also be a multiple of Group interval. If Repeat interval is not a multiple of Group interval, it is coerced into one. For example, if your Group interval is 5 minutes and your Repeat interval is 9 minutes, the Repeat interval is rounded up to the nearest multiple of 5 minutes, which is 10 minutes.
-
-**Default** 4 hours
-
-{{% docs/reference %}}
-[labels-and-label-matchers]: "/docs/grafana/ -> /docs/grafana//alerting/fundamentals/annotation-label/labels-and-label-matchers"
-[labels-and-label-matchers]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/annotation-label/labels-and-label-matchers"
-{{% /docs/reference %}}
diff --git a/docs/sources/alerting/fundamentals/notifications/_index.md b/docs/sources/alerting/fundamentals/notifications/_index.md
new file mode 100644
index 00000000000..693c673beda
--- /dev/null
+++ b/docs/sources/alerting/fundamentals/notifications/_index.md
@@ -0,0 +1,61 @@
+---
+aliases:
+ - ./notification-policies/ # /docs/grafana//alerting/fundamentals/notification-policies/
+canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/notifications/
+description: Learn about how notifications work
+keywords:
+ - grafana
+ - alerting
+ - notification policies
+labels:
+ products:
+ - cloud
+ - enterprise
+ - oss
+title: Notifications
+weight: 110
+---
+
+# Notifications
+
+Choosing how, when, and where to send your alert notifications is an important part of setting up your alerting system. These decisions will have a direct impact on your ability to resolve issues quickly and not miss anything important.
+
+As a first step, define your contact points: where to send your alert notifications. A contact point is a set of one or more integrations that are used to deliver notifications. Add notification templates to contact points for reuse and consistent messaging in your notifications.
+
+Next, create a notification policy, which is a set of rules for where, when, and how your alerts are routed to contact points. In a notification policy, you define where to send your alert notifications by choosing one of the contact points you created.
+
+## Alertmanagers
+
+Grafana uses Alertmanagers to send notifications for firing and resolved alerts. Grafana has its own Alertmanager, referred to as "Grafana" in the user interface, but also supports sending notifications from other Alertmanagers, such as the [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/). The Grafana Alertmanager uses notification policies and contact points to configure how and where a notification is sent; how often a notification should be sent; and whether alerts should all be sent in the same notification, sent in grouped notifications based on a set of labels, or sent as separate notifications.
+
+## Contact points
+
+Contact points contain the configuration for sending alert notifications, specifying destinations like email, Slack, OnCall, webhooks, and their notification messages. They allow the customization of notification messages and the use of notification templates.
+
+A contact point is a list of integrations, each sending a message to a specific destination. You can configure them via notification policies or alert rules.
+
+## Notification policies
+
+Notification policies control when and where notifications are sent. A notification policy can choose to send all alerts together in the same notification, send alerts in grouped notifications based on a set of labels, or send alerts as separate notifications. You can configure each notification policy to control how often notifications are sent, and you can add one or more mute timings to suppress notifications at certain times of the day and on certain days of the week.
+
+Notification policies are organized in a tree structure where at the root of the tree there is a notification policy called the default policy. There can be only one default policy and the default policy cannot be deleted.
+
+Specific routing policies are children of the default policy and can be used to match either all alerts or a subset of alerts based on a set of matching labels. A notification policy matches an alert when its matching labels match the labels in the alert.
+
+A nested policy can have its own nested policies, which allow for additional matching of alerts. An example of a nested policy could be sending infrastructure alerts to the Ops team, while a further nested policy might send high-priority alerts to Pagerduty and low-priority alerts as emails.
+
+All alerts, irrespective of their labels, match the default policy. However, when the default policy receives an alert, it looks at each nested policy and sends the alert to the first nested policy that matches the alert. If that nested policy has further nested policies, it attempts to match the alert against one of its nested policies. If no nested policies match the alert, the policy itself is the matching policy. If there are no nested policies, or no nested policies match the alert, then the default policy is the matching policy.
+
+
+## Notification templates
+
+You can customize notifications with templates. For example, templates can be used to change the subject and message of an email, or the title and message of notifications sent to Slack.
+
+Templates are not limited to an individual integration or contact point, but instead can be used in a number of integrations in the same contact point and even integrations across different contact points. For example, a Grafana user can create a template called `custom_subject_or_title` and use it for both templating subjects in emails and titles of Slack messages without having to create two separate templates.
+
+All notification templates are written in [Go's templating language](https://pkg.go.dev/text/template), and are managed in the Contact points tab on the Alerting page.
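+
+For example, a minimal sketch of defining the `custom_subject_or_title` template through Grafana's alerting file provisioning could look like this (the file path is an example, and the field names should be checked against the provisioning reference):
+
+```yaml
+# provisioning/alerting/templates.yaml (example path)
+apiVersion: 1
+templates:
+  - orgId: 1
+    name: custom_subject_or_title
+    # The template body is Go templating; .Status and .CommonLabels come from
+    # the notification data passed to the template.
+    template: |
+      {{ define "custom_subject_or_title" }}{{ .Status }}: {{ .CommonLabels.alertname }}{{ end }}
+```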
+
+## Silences
+
+You can use silences to mute notifications from one or more firing rules. Silences do not stop alerts from firing or being resolved, nor do they hide firing alerts in the user interface. A silence lasts as long as its duration, which can be configured in minutes, hours, days, months, or years.
diff --git a/docs/sources/alerting/fundamentals/notifications/alertmanager.md b/docs/sources/alerting/fundamentals/notifications/alertmanager.md
new file mode 100644
index 00000000000..a7a55219947
--- /dev/null
+++ b/docs/sources/alerting/fundamentals/notifications/alertmanager.md
@@ -0,0 +1,46 @@
+---
+aliases:
+ - ../../fundamentals/alertmanager/ # /docs/grafana//alerting/fundamentals/alertmanager/
+ - ../../unified-alerting/fundamentals/alertmanager/ # /docs/grafana//alerting/unified-alerting/fundamentals/alertmanager/
+ - ../../manage-notifications/alertmanager/ # /docs/grafana//alerting/manage-notifications/alertmanager/
+canonical: https://grafana.com/docs/grafana/latest/alerting/notifications/alertmanager/
+description: Learn about Alertmanagers and the Alertmanager options for Grafana Alerting
+labels:
+ products:
+ - cloud
+ - enterprise
+ - oss
+title: Alertmanager
+weight: 111
+---
+
+# Alertmanager
+
+Grafana sends firing and resolved alerts to Alertmanagers. The Alertmanager receives alerts; handles silencing, inhibition, and grouping; and routes notifications via your channel of choice, for example, email or Slack.
+
+Grafana has its own Alertmanager, referred to as "Grafana" in the user interface, but also supports sending alerts to other Alertmanagers, such as the [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/). You can use both internal and external Alertmanagers.
+
+The Grafana Alertmanager uses notification policies and contact points to configure how and where a notification is sent; how often a notification should be sent; and whether alerts should all be sent in the same notification, sent in grouped notifications based on a set of labels, or sent as separate notifications.
+
+Alertmanagers are visible from the drop-down menu on the Alerting Contact Points, Notification Policies, and Silences pages.
+
+In Grafana, you can use the Cloud Alertmanager, Grafana Alertmanager, or an external Alertmanager. You can also run multiple Alertmanagers; your decision depends on your setup and where your alerts are generated.
+
+- **Grafana Alertmanager** is an internal Alertmanager that is pre-configured and available for selection by default if you run Grafana on-premises or open-source.
+
+ The Grafana Alertmanager can receive alerts from Grafana, but it cannot receive alerts from outside Grafana, for example, from Mimir or Loki. Note that inhibition rules are not supported.
+
+- **Cloud Alertmanager** runs in Grafana Cloud and it can receive alerts from Grafana, Mimir, and Loki.
+
+- **External Alertmanager** can receive all your Grafana, Loki, Mimir, and Prometheus alerts. External Alertmanagers can be configured and administered from within Grafana itself.
+
+Here are two examples of when you may want to [add your own external Alertmanager][configure-alertmanager] and send your alerts there instead of to the Grafana Alertmanager (a provisioning sketch follows the examples):
+
+1. You may already have Alertmanagers on-premises or in your own cloud infrastructure that you have set up and still want to use, because you have other alert generators, such as Prometheus.
+
+2. You want to use both Prometheus on-premises and hosted Grafana to send alerts to the same Alertmanager that runs in your Cloud infrastructure.
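+
+An external Alertmanager is added to Grafana as an Alertmanager data source. The following is only a minimal provisioning sketch; the `jsonData` option names are assumptions that you should verify against the Alertmanager data source documentation:
+
+```yaml
+apiVersion: 1
+datasources:
+  - name: external-alertmanager
+    type: alertmanager
+    url: http://alertmanager.example.com:9093 # placeholder URL
+    access: proxy
+    jsonData:
+      # Alertmanager implementation running at the URL (assumed values: prometheus, mimir, cortex)
+      implementation: prometheus
+      # Whether Grafana-managed alerts are also forwarded to this Alertmanager (assumed option name)
+      handleGrafanaManagedAlerts: true
+```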
+
+{{% docs/reference %}}
+[configure-alertmanager]: "/docs/grafana/ -> /docs/grafana//alerting/set-up/configure-alertmanager"
+[configure-alertmanager]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/set-up/configure-alertmanager"
+{{% /docs/reference %}}
diff --git a/docs/sources/alerting/fundamentals/notifications/contact-points.md b/docs/sources/alerting/fundamentals/notifications/contact-points.md
new file mode 100644
index 00000000000..98aa75a0f8c
--- /dev/null
+++ b/docs/sources/alerting/fundamentals/notifications/contact-points.md
@@ -0,0 +1,64 @@
+---
+aliases:
+ - ../../fundamentals/contact-points/ # /docs/grafana//alerting/fundamentals/contact-points/
+ - ../../fundamentals/contact-points/contact-point-types/ # /docs/grafana//alerting/fundamentals/contact-points/contact-point-types/
+ - ../../contact-points/ # /docs/grafana//alerting/contact-points/
+ - ../../unified-alerting/contact-points/ # /docs/grafana//alerting/unified-alerting/contact-points/
+canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/notifications/contact-points/
+description: Learn about contact points and the supported contact point integrations
+keywords:
+ - grafana
+ - alerting
+ - guide
+ - contact point
+ - notification channel
+ - create
+labels:
+ products:
+ - cloud
+ - enterprise
+ - oss
+title: Contact points
+weight: 112
+---
+
+# Contact points
+
+Contact points contain the configuration for sending notifications. A contact point is a list of integrations, each of which sends a notification to a particular email address, service, or URL. Contact points can have multiple integrations of the same kind, or a combination of integrations of different kinds. For example, a contact point could contain a Pagerduty integration; an email and Slack integration; or a Pagerduty integration, a Slack integration, and two email integrations. You can also configure a contact point with no integrations, in which case no notifications are sent.
+
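+As a minimal sketch, assuming Grafana's alerting file provisioning format, a contact point with an email and a Slack integration could be defined like this (the `settings` keys differ per integration, and the values shown are placeholders):
+
+```yaml
+apiVersion: 1
+contactPoints:
+  - orgId: 1
+    name: ops-oncall
+    receivers:
+      # Each receiver is one integration in the contact point.
+      - uid: ops-email
+        type: email
+        settings:
+          addresses: oncall@example.com
+      - uid: ops-slack
+        type: slack
+        settings:
+          url: https://hooks.slack.com/services/XXX/YYY/ZZZ # placeholder webhook URL
+```
+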
+A contact point cannot send notifications until it has been added to a notification policy. A notification policy can only send alerts to one contact point, but a contact point can be added to a number of notification policies at the same time. When an alert matches a notification policy, the alert is sent to the contact point in that notification policy, which then sends a notification to each integration in its configuration.
+
+Contact points can be configured for the Grafana Alertmanager as well as external alertmanagers.
+
+You can also use notification templating to customize notification messages for contact point integrations.
+
+**Note:**
+
+If you've created an OnCall contact point in the Grafana OnCall application, you can view it in the Alerting application.
+
+## Supported contact point integrations
+
+The following table lists the contact point integrations supported by Grafana.
+
+| Name | Type | Grafana Alertmanager | Other Alertmanagers |
+| ------------------------------------------------ | ------------------------- | -------------------- | -------------------------------------------------------------------------------------------------------- |
+| [DingDing](https://www.dingtalk.com/en) | `dingding` | Supported | N/A |
+| [Discord](https://discord.com/) | `discord` | Supported | N/A |
+| Email | `email` | Supported | Supported |
+| [Google Chat](https://chat.google.com/) | `googlechat` | Supported | N/A |
+| [Kafka](https://kafka.apache.org/) | `kafka` | Supported | N/A |
+| [Line](https://line.me/en/) | `line` | Supported | N/A |
+| [Microsoft Teams](https://teams.microsoft.com/) | `teams` | Supported | Supported |
+| [Opsgenie](https://atlassian.com/opsgenie/) | `opsgenie` | Supported | Supported |
+| [Pagerduty](https://www.pagerduty.com/) | `pagerduty` | Supported | Supported |
+| [Prometheus Alertmanager](https://prometheus.io) | `prometheus-alertmanager` | Supported | N/A |
+| [Pushover](https://pushover.net/) | `pushover` | Supported | Supported |
+| [Sensu Go](https://docs.sensu.io/sensu-go/) | `sensugo` | Supported | N/A |
+| [Slack](https://slack.com/) | `slack` | Supported | Supported |
+| [Telegram](https://telegram.org/) | `telegram` | Supported | N/A |
+| [Threema](https://threema.ch/) | `threema` | Supported | N/A |
+| [VictorOps](https://help.victorops.com/) | `victorops` | Supported | Supported |
+| Webhook | `webhook` | Supported | Supported ([different format](https://prometheus.io/docs/alerting/latest/configuration/#webhook_config)) |
+| Cisco Webex Teams | `webex` | Supported | Supported |
+| WeCom | `wecom` | Supported | N/A |
+| [Zenduty](https://www.zenduty.com/) | `webhook` | Supported | N/A |
diff --git a/docs/sources/alerting/fundamentals/alert-rules/message-templating.md b/docs/sources/alerting/fundamentals/notifications/message-templating.md
similarity index 86%
rename from docs/sources/alerting/fundamentals/alert-rules/message-templating.md
rename to docs/sources/alerting/fundamentals/notifications/message-templating.md
index b9aa80e72ed..9e6d33eae91 100644
--- a/docs/sources/alerting/fundamentals/alert-rules/message-templating.md
+++ b/docs/sources/alerting/fundamentals/notifications/message-templating.md
@@ -1,9 +1,9 @@
---
aliases:
- - ../../contact-points/message-templating/
- - ../../message-templating/
- - ../../unified-alerting/message-templating/
-canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/message-templating/
+ - ../../contact-points/message-templating/ # /docs/grafana//alerting/contact-points/message-templating/
+ - ../../alert-rules/message-templating/ # /docs/grafana//alerting/alert-rules/message-templating/
+ - ../../unified-alerting/message-templating/ # /docs/grafana//alerting/unified-alerting/message-templating/
+canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/notifications/message-templating/
description: Learn about notification templating
keywords:
- grafana
@@ -16,11 +16,11 @@ labels:
- cloud
- enterprise
- oss
-title: Notification templating
-weight: 415
+title: Notification templates
+weight: 114
---
-# Notification templating
+# Notification templates
Notifications sent via contact points are built using notification templates. Grafana's default templates are based on the [Go templating system](https://golang.org/pkg/text/template) where some fields are evaluated as text, while others are evaluated as HTML (which can affect escaping).
diff --git a/docs/sources/alerting/fundamentals/notifications/notification-policies.md b/docs/sources/alerting/fundamentals/notifications/notification-policies.md
new file mode 100644
index 00000000000..1e4d1d2dcf6
--- /dev/null
+++ b/docs/sources/alerting/fundamentals/notifications/notification-policies.md
@@ -0,0 +1,136 @@
+---
+aliases:
+ - ../notification-policies/notifications/ # /docs/grafana//alerting/fundamentals/notification-policies/notifications/
+canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/notifications/notification-policies/
+description: Learn about how notification policies work and are structured
+keywords:
+ - grafana
+ - alerting
+ - alertmanager
+ - notification policies
+ - contact points
+ - silences
+labels:
+ products:
+ - cloud
+ - enterprise
+ - oss
+title: Notification policies
+weight: 113
+---
+
+# Notification policies
+
+Notification policies provide you with a flexible way of routing alerts to different receivers. Using label matchers, you can modify alert notification delivery without having to update every individual alert rule.
+
+Learn more about how notification policies work and are structured, so that you can make the most out of setting up your notification policies.
+
+## Policy tree
+
+Notification policies are _not_ a list; they are organized in a [tree structure](https://en.wikipedia.org/wiki/Tree_structure), which means that each policy can have child policies, and so on. The root of the notification policy tree is called the **Default notification policy**.
+
+Each policy consists of a set of label matchers (zero or more) that specify which labels it is or isn't interested in handling.
+
+For more information on label matching, see [how label matching works][labels-and-label-matchers].
+
+{{% admonition type="note" %}}
+If you haven't configured any label matchers for your notification policy, your notification policy will match _all_ alert instances. This may prevent child policies from being evaluated unless you have enabled **Continue matching siblings** on the notification policy.
+{{% /admonition %}}
+
+## Routing
+
+To determine which notification policy handles which alert instances, start from the default notification policy and work through the existing set of notification policies.
+
+If no policies other than the default policy are configured, the default policy will handle the alert instance.
+
+If policies other than the default policy are defined, Grafana evaluates those notification policies in the order they are displayed.
+
+If a notification policy has label matchers that match the labels of the alert instance, it descends into its child policies and, if there are any, continues to look for child policies whose label matchers further narrow down the set of labels, and so forth until no more matching child policies are found.
+
+If a matching notification policy has no child policies, or none of its child policies have label matchers that match the alert instance's labels, that policy itself handles the alert instance.
+
+As soon as a matching policy is found, the system does not continue to look for other matching policies. If you want to continue to look for other policies that may match, enable **Continue matching siblings** on that particular policy.
+
+Lastly, if none of the notification policies are selected, the default notification policy is used.
+
+### Routing example
+
+Here is an example of a relatively simple notification policy tree and some alert instances.
+
+{{< figure src="/media/docs/alerting/notification-routing.png" max-width="750px" caption="Notification policy routing" >}}
+
+Here's a breakdown of how these policies are selected:
+
+**Pod stuck in CrashLoop** has a `team=operations` label, so the first policy matches. It does not have a `severity` label, so none of that policy's child policies match, and the alert is handled by the `team=operations` policy itself.
+
+The `team=security` policy is not evaluated since we already found a match and **Continue matching siblings** was not configured for that policy.
+
+**Disk Usage – 80%** has both a `team` and `severity` label, and matches a child policy of the operations team.
+
+**Unauthorized log entry** has a `team` label but does not match the first policy (`team=operations`) since the values are not the same, so it will continue searching and match the `team=security` policy. It does not have any child policies, so the additional `severity=high` label is ignored.
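+
+The policy tree from this example could be provisioned roughly as follows, assuming Grafana's alerting file provisioning format (the contact point names are placeholders, and each `object_matchers` entry is a `[label, operator, value]` triple):
+
+```yaml
+apiVersion: 1
+policies:
+  - orgId: 1
+    receiver: default-contact-point # default notification policy
+    group_by: ['alertname', 'grafana_folder']
+    routes:
+      - receiver: operations-team
+        object_matchers:
+          - ['team', '=', 'operations']
+        routes:
+          # Child policy of the operations team policy.
+          - receiver: operations-pager
+            object_matchers:
+              - ['severity', '=', 'high']
+      - receiver: security-team
+        object_matchers:
+          - ['team', '=', 'security']
+```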
+
+## Inheritance
+
+Child policies are not only useful for routing alert instances; they also inherit properties from their parent policy. This also applies to any policies that are child policies of the default notification policy.
+
+The following properties are inherited by child policies:
+
+- Contact point
+- Grouping options
+- Timing options
+- Mute timings
+
+An individual policy can override any of these inherited properties.
+
+To inherit a contact point from the parent policy, leave it blank. To override the inherited grouping options, enable **Override grouping**. To override the inherited timing options, enable **Override general timings**.
+
+### Inheritance example
+
+The example below shows how the notification policy tree from the previous example allows the child policies of the `team=operations` policy to inherit its contact point.
+
+In this way, we can avoid having to specify the same contact point multiple times for each child policy.
+
+{{< figure src="/media/docs/alerting/notification-inheritance.png" max-width="750px" caption="Notification policy inheritance" >}}
+
+## Additional configuration options
+
+### Grouping
+
+Grouping is an important feature of Grafana Alerting as it allows you to batch relevant alerts together into a smaller number of notifications. This is particularly important if notifications are delivered to first responders, such as engineers on-call, where receiving lots of notifications in a short period of time can be overwhelming and in some cases can negatively impact a first responder's ability to respond to an incident. For example, consider a large outage where many of your systems are down. In this case, grouping can be the difference between receiving 1 phone call and 100 phone calls.
+
+You choose how alerts are grouped together using the Group by option in a notification policy. By default, notification policies in Grafana group alerts by alert rule using the `alertname` and `grafana_folder` labels (since alert names are not unique across multiple folders). To group alerts by something other than the alert rule, change the grouping to any other combination of labels.
+
+#### Disable grouping
+
+Should you wish to receive every alert as a separate notification, you can do so by grouping by a special label called `...`. This is useful when your alerts are being delivered to an automated system instead of a first-responder.
+
+#### A single group for all alerts
+
+Should you wish to receive all alerts together in a single notification, you can do so by leaving Group by empty.
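+
+In the same provisioning format as the sketch above, these grouping choices correspond to the `group_by` field on a notification policy:
+
+```yaml
+apiVersion: 1
+policies:
+  - orgId: 1
+    receiver: default-contact-point
+    # Default grouping: one group per alert rule.
+    group_by: ['alertname', 'grafana_folder']
+    # To disable grouping entirely, group by the special label instead:
+    #   group_by: ['...']
+    # To send all alerts in a single group, leave Group by empty (omit group_by).
+```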
+
+### Timing options
+
+The timing options decide how often notifications are sent for each group of alerts. There are three timers that you need to know about: Group wait, Group interval, and Repeat interval.
+
+#### Group wait
+
+Group wait is the amount of time Grafana waits before sending the first notification for a new group of alerts. The longer the Group wait, the more time you have for other alerts to arrive. The shorter the Group wait, the earlier the first notification is sent, but at the risk of sending incomplete notifications. Choose a Group wait that makes the most sense for your use case.
+
+**Default** 30 seconds
+
+#### Group interval
+
+Once the first notification has been sent for a new group of alerts, Grafana starts the Group interval timer. This is the amount of time Grafana waits before sending notifications about changes to the group. For example, another firing alert might have just been added to the group while an existing alert might have resolved. If an alert was too late to be included in the first notification because of Group wait, it is included in subsequent notifications after Group interval. Once Group interval has elapsed, Grafana resets the Group interval timer. This repeats until there are no more alerts in the group, after which the group is deleted.
+
+**Default** 5 minutes
+
+#### Repeat interval
+
+Repeat interval decides how often notifications are repeated if the group has not changed since the last notification. You can think of these as reminders that some alerts are still firing. Repeat interval is closely related to Group interval: your Repeat interval must not only be greater than or equal to Group interval, but must also be a multiple of Group interval. If Repeat interval is not a multiple of Group interval, it is coerced into one. For example, if your Group interval is 5 minutes and your Repeat interval is 9 minutes, the Repeat interval is rounded up to the nearest multiple of 5, which is 10 minutes.
+
+**Default** 4 hours
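+
+As a sketch in the same provisioning format, the three timing options map to the following fields on a notification policy, shown here with the default values described above:
+
+```yaml
+apiVersion: 1
+policies:
+  - orgId: 1
+    receiver: default-contact-point
+    group_by: ['alertname', 'grafana_folder']
+    group_wait: 30s # wait before sending the first notification for a new group
+    group_interval: 5m # wait before sending notifications about changes to the group
+    repeat_interval: 4h # resend notifications if the group has not changed
+```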
+
+{{% docs/reference %}}
+[labels-and-label-matchers]: "/docs/grafana/ -> /docs/grafana//alerting/fundamentals/alert-rules/annotation-label#how-label-matching-works"
+[labels-and-label-matchers]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/annotation-label#how-label-matching-works"
+{{% /docs/reference %}}
diff --git a/docs/sources/alerting/manage-notifications/_index.md b/docs/sources/alerting/manage-notifications/_index.md
index 1ea9bc24920..9adb839afcc 100644
--- a/docs/sources/alerting/manage-notifications/_index.md
+++ b/docs/sources/alerting/manage-notifications/_index.md
@@ -1,47 +1,22 @@
---
canonical: https://grafana.com/docs/grafana/latest/alerting/manage-notifications/
-description: Manage your alerts by creating silences, mute timings, and more
+description: Detect and respond to issues with day-to-day triage and analysis of what’s going on and what action you need to take
keywords:
- grafana
- - alert
- - notifications
+ - detect
+ - respond
labels:
products:
- cloud
- enterprise
- oss
-menuTitle: Manage
-title: Manage your alerts
+menuTitle: Detect and respond
+title: Detect and respond
weight: 130
---
-# Manage your alerts
+# Detect and respond
-Once you have set up your alert rules, contact points, and notification policies, you can use Grafana Alerting to:
+Use Grafana Alerting to generate and track alerts and send notifications, providing an efficient way for engineers to monitor, respond to, and triage issues within their services.
-[Create silences][create-silence]
-
-[Create mute timings][mute-timings]
-
-[Declare incidents from firing alerts][declare-incident-from-firing-alert]
-
-[View the state and health of alert rules][view-state-health]
-
-[View and filter alert rules][view-alert-rules]
-
-{{% docs/reference %}}
-[create-silence]: "/docs/grafana/ -> /docs/grafana//alerting/manage-notifications/create-silence"
-[create-silence]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/manage-notifications/create-silence"
-
-[declare-incident-from-firing-alert]: "/docs/grafana/ -> /docs/grafana//alerting/manage-notifications/declare-incident-from-alert"
-[declare-incident-from-firing-alert]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/manage-notifications/declare-incident-from-alert"
-
-[mute-timings]: "/docs/grafana/ -> /docs/grafana//alerting/manage-notifications/mute-timings"
-[mute-timings]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/manage-notifications/mute-timings"
-
-[view-alert-rules]: "/docs/grafana/ -> /docs/grafana//alerting/manage-notifications/view-alert-rules"
-[view-alert-rules]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/manage-notifications/view-alert-rules"
-
-[view-state-health]: "/docs/grafana/ -> /docs/grafana//alerting/manage-notifications/view-state-health"
-[view-state-health]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/manage-notifications/view-state-health"
-{{% /docs/reference %}}
+Alerts and alert notifications are key indicators of issues during the triage process, providing engineers with the information they need to understand what is going on in their system or service.
diff --git a/docs/sources/alerting/manage-notifications/declare-incident-from-alert.md b/docs/sources/alerting/manage-notifications/declare-incident-from-alert.md
index 10459fa355e..019efae8699 100644
--- a/docs/sources/alerting/manage-notifications/declare-incident-from-alert.md
+++ b/docs/sources/alerting/manage-notifications/declare-incident-from-alert.md
@@ -1,6 +1,6 @@
---
aliases:
- - alerting/alerting-rules/declare-incident-from-alert/
+ - ../../alerting/alerting-rules/declare-incident-from-alert/ # /docs/grafana//alerting/alerting-rules/declare-incident-from-alert/
canonical: https://grafana.com/docs/grafana/latest/alerting/manage-notifications/declare-incident-from-alert/
description: Declare an incident from a firing alert
keywords:
diff --git a/docs/sources/alerting/manage-notifications/manage-contact-points.md b/docs/sources/alerting/manage-notifications/manage-contact-points.md
deleted file mode 100644
index bbfd98e4b1e..00000000000
--- a/docs/sources/alerting/manage-notifications/manage-contact-points.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-canonical: https://grafana.com/docs/grafana/latest/alerting/manage-notifications/manage-contact-points/
-description: View, edit, copy, or delete your contact points and notification templates
-keywords:
- - grafana
- - alerting
- - contact points
- - search
- - export
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Manage contact points
-weight: 410
----
-
-# Manage contact points
-
-The Contact points list view lists all existing contact points and notification templates.
-
-On the **Contact Points** tab, you can:
-
-- Search for name and type of contact points and integrations
-- View all existing contact points and integrations
-- View how many notification policies each contact point is being used for and navigate directly to the linked notification policies
-- View the status of notification deliveries
-- Export individual contact points or all contact points in JSON, YAML, or Terraform format
-- Delete contact points that are not in use by a notification policy
-
-On the **Notification templates** tab, you can:
-
-- View, edit, copy or delete existing notification templates
diff --git a/docs/sources/alerting/manage-notifications/template-notifications/_index.md b/docs/sources/alerting/manage-notifications/template-notifications/_index.md
deleted file mode 100644
index 1a5507f577a..00000000000
--- a/docs/sources/alerting/manage-notifications/template-notifications/_index.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-canonical: https://grafana.com/docs/grafana/latest/alerting/manage-notifications/template-notifications/
-description: Customize your notifications using notification templates
-keywords:
- - grafana
- - alerting
- - notifications
- - templates
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Customize notifications
-weight: 400
----
-
-# Customize notifications
-
-Customize your notifications with notifications templates.
-
-You can use notification templates to change the title, message, and format of the message in your notifications.
-
-Notification templates are not tied to specific contact point integrations, such as email or Slack. However, you can choose to create separate notification templates for different contact point integrations.
-
-You can use notification templates to:
-
-- Customize the subject of an email or the title of a message.
-- Add, change or remove text in notifications. For example, to select or omit certain labels, annotations and links.
-- Format text in bold and italic, and add or remove line breaks.
-
-You cannot use notification templates to:
-
-- Add HTML and CSS to email notifications to change their visual appearance.
-- Change the design of notifications in instant messaging services such as Slack and Microsoft Teams. For example, to add or remove custom blocks with Slack Block Kit or adaptive cards with Microsoft Teams.
-- Choose the number and size of images, or where in the notification images are shown.
-- Customize the data in webhooks, including the fields or structure of the JSON data or send the data in other formats such as XML.
-- Add or remove HTTP headers in webhooks other than those in the contact point configuration.
-
-[Using Go's templating language][using-go-templating-language]
-
-Learn how to write the content of your notification templates in Go’s templating language.
-
-Create reusable notification templates for your contact points.
-
-[Use notification templates][use-notification-templates]
-
-Use notification templates to send notifications to your contact points.
-
-[Reference][reference]
-
-Data that is available when writing templates.
-
-{{% docs/reference %}}
-[reference]: "/docs/grafana/ -> /docs/grafana//alerting/manage-notifications/template-notifications/reference"
-[reference]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/manage-notifications/template-notifications/reference"
-
-[use-notification-templates]: "/docs/grafana/ -> /docs/grafana//alerting/manage-notifications/template-notifications/use-notification-templates"
-[use-notification-templates]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/manage-notifications/template-notifications/use-notification-templates"
-
-[using-go-templating-language]: "/docs/grafana/ -> /docs/grafana//alerting/manage-notifications/template-notifications/using-go-templating-language"
-[using-go-templating-language]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/manage-notifications/template-notifications/using-go-templating-language"
-{{% /docs/reference %}}
diff --git a/docs/sources/alerting/manage-notifications/template-notifications/use-notification-templates.md b/docs/sources/alerting/manage-notifications/template-notifications/use-notification-templates.md
deleted file mode 100644
index a2a341e02c0..00000000000
--- a/docs/sources/alerting/manage-notifications/template-notifications/use-notification-templates.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-canonical: https://grafana.com/docs/grafana/latest/alerting/manage-notifications/template-notifications/use-notification-templates/
-description: Use notification templates in contact points to customize your notifications
-keywords:
- - grafana
- - alerting
- - notifications
- - templates
- - use templates
-labels:
- products:
- - cloud
- - enterprise
- - oss
-title: Use notification templates
-weight: 300
----
-
-# Use notification templates
-
-Use templates in contact points to customize your notifications.
-
-In the Contact points tab, you can see a list of your contact points.
-
-1. To create a new contact point, click New.
-
- **Note:** You can edit an existing contact by clicking the Edit icon.
-
-1. Execute a template from one or more fields such as Message and Subject:
-
- {{< figure max-width="940px" src="/static/img/docs/alerting/unified/use-notification-template-9-4.png" caption="Use notification template" >}}
-
- For more information on how to write and execute templates, refer to [Using Go's templating language][using-go-templating-language] and [Create notification templates][create-notification-templates].
-
-1. Click **Save contact point**.
-
-{{% docs/reference %}}
-[create-notification-templates]: "/docs/grafana/ -> /docs/grafana//alerting/manage-notifications/template-notifications/create-notification-templates"
-[create-notification-templates]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/manage-notifications/template-notifications/create-notification-templates"
-
-[using-go-templating-language]: "/docs/grafana/ -> /docs/grafana//alerting/manage-notifications/template-notifications/using-go-templating-language"
-[using-go-templating-language]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/manage-notifications/template-notifications/using-go-templating-language"
-{{% /docs/reference %}}
diff --git a/docs/sources/alerting/manage-notifications/view-alert-groups.md b/docs/sources/alerting/manage-notifications/view-alert-groups.md
index 105806767dd..f050af7373f 100644
--- a/docs/sources/alerting/manage-notifications/view-alert-groups.md
+++ b/docs/sources/alerting/manage-notifications/view-alert-groups.md
@@ -1,10 +1,9 @@
---
aliases:
- - -docs/grafana/latest/alerting/manage-notifications/view-alert-groups/
- - ../alert-groups/
- - ../alert-groups/filter-alerts/
- - ../alert-groups/view-alert-grouping/
- - ../unified-alerting/alert-groups/
+ - ../../alerting/alert-groups/ # /docs/grafana//alerting/alert-groups/
+ - ../../alerting/alert-groups/filter-alerts/ # /docs/grafana//alerting/alert-groups/filter-alerts/
+ - ../../alerting/alert-groups/view-alert-grouping/ # /docs/grafana//alerting/alert-groups/view-alert-grouping/
+ - ../../alerting/unified-alerting/alert-groups/ # /docs/grafana//alerting/unified-alerting/alert-groups/
canonical: https://grafana.com/docs/grafana/latest/alerting/manage-notifications/view-alert-groups/
description: Alert groups
keywords:
diff --git a/docs/sources/alerting/manage-notifications/view-alert-rules.md b/docs/sources/alerting/manage-notifications/view-alert-rules.md
index 3bc2ff79585..8f70591b733 100644
--- a/docs/sources/alerting/manage-notifications/view-alert-rules.md
+++ b/docs/sources/alerting/manage-notifications/view-alert-rules.md
@@ -1,10 +1,10 @@
---
aliases:
- - ../unified-alerting/alerting-rules/rule-list/
- - ../view-alert-rules/
- - rule-list/
+ - ../../alerting/unified-alerting/alerting-rules/rule-list/ # /docs/grafana//alerting/unified-alerting/alerting-rules/rule-list
+ - ../../alerting/alerting-rules/view-alert-rules/ # /docs/grafana//alerting/alerting-rules/view-alert-rules
+ - ../../alerting/alerting-rules/rule-list/ # /docs/grafana//alerting/alerting-rules/rule-list
canonical: https://grafana.com/docs/grafana/latest/alerting/manage-notifications/view-alert-rules/
-description: View and filter by alert rules
+description: View alert rules
keywords:
- grafana
- alerting
@@ -16,66 +16,61 @@ labels:
- cloud
- enterprise
- oss
-title: View and filter alert rules
+title: View alert rules
weight: 410
---
-# View and filter alert rules
+# View alert rules
-The Alerting page lists all existing alert rules. By default, rules are grouped by types of data sources. The Grafana section lists all Grafana managed rules. Alert rules for Prometheus compatible data sources are also listed here. You can view alert rules for Prometheus compatible data sources but you cannot edit them.
-
-The Mimir/Cortex/Loki rules section lists all rules for Mimir, Cortex, or Loki data sources. Cloud alert rules are also listed in this section.
+The Alert rules list view page lists all existing alert rules. By default, alert rules are grouped by alert rule type: Grafana-managed (Grafana) or data source-managed (Mimir/Cortex/Loki). The Grafana section also contains alert rules for Prometheus-compatible data sources. You can view alert rules for Prometheus-compatible data sources, but you cannot edit them.
When managing large volumes of alerts, you can use extended alert rule search capabilities to filter on folders, evaluation groups, and rules. Additionally, you can filter alert rules by their properties like labels, state, type, and health.
-- [View and filter alert rules](#view-and-filter-alert-rules)
- - [View alert rules](#view-alert-rules)
- - [Export alert rules](#export-alert-rules)
- - [View query definitions for provisioned alerts](#view-query-definitions-for-provisioned-alerts)
- - [Grouped view](#grouped-view)
- - [State view](#state-view)
- - [Filter alert rules](#filter-alert-rules)
+From the Alert rules list page, you can also duplicate alert rules so you can reuse existing configurations.
## View alert rules
-To view alerting details:
+To view your alert rules, complete the following steps.
+
+1. Click **Alerts & IRM** -> **Alert rules**.
+1. In **View as**, toggle between Grouped, List, or State views.
+1. Expand each alert rule row to view the state, health, evaluation details, as well as a list of alert instances resulting from the alert rule.
+1. Use **Search by data sources** to view alert rules that query the selected data source.
-1. Click **Alerts & IRM** -> **Alert rules**. By default, the List view displays.
-1. In **View as**, toggle between Grouped, List, or State views by clicking the relevant option. See [Grouped view](#grouped-view) and [State view](#state-view) for more information.
-1. Expand the rule row to view the rule labels, annotations, data sources the rule queries, and a list of alert instances resulting from this rule.
+### Grouped view
-{{< figure src="/static/img/docs/alerting/unified/rule-details-8-0.png" max-width="650px" caption="Alerting rule details" >}}
+Grouped view shows Grafana alert rules grouped by folder and Loki or Prometheus alert rules grouped by `namespace` + `group`. This is the default rule list view, intended for managing alert rules. You can expand each group to view a list of rules in this group.
-From the Alert list page, you can also make copies of alert rules to help you reuse existing alert rules.
+### State view
-## Export alert rules
+State view shows alert rules grouped by state. Use this view to get an overview of which rules are in which state. You can expand each group to view more details.
-Click the **Export rule group** icon next to each alert rule group to export to YAML, JSON, or Terraform.
+## View alert rule details
-Click **Export rules** to export all Grafana-managed alert rules to YAML, JSON, or Terraform.
+To view alert rule details, complete the following steps.
-Click **More** -> **Modify export** next to each individual alert rule within a group to edit provisioned alert rules and export a modified version.
+1. Click **Alerts & IRM** -> **Alert rules**.
+1. Click to expand an alert rule.
+1. In **Actions**, click **View** (the eye icon).
-## View query definitions for provisioned alerts
+ The namespace and group are shown in the breadcrumb navigation. They are interactive and can be used to filter rules by namespace or group.
-View read-only query definitions for provisioned alerts. Check quickly if your alert rule queries are correct, without diving into your "as-code" repository for rule definitions.
+ The rest of the alert detail content is split up into tabs:
-### Grouped view
+ **Query and conditions**
-Grouped view shows Grafana alert rules grouped by folder and Loki or Prometheus alert rules grouped by `namespace` + `group`. This is the default rule list view, intended for managing rules. You can expand each group to view a list of rules in this group. Expand a rule further to view its details. You can also expand action buttons and alerts resulting from the rule to view their details.
+ View the details of the query that is used for the alert rule, including the expressions and intermediate values for each step of the expression pipeline. A graph view is included for range queries and data sources that return time series-like data frames.
-{{< figure src="/static/img/docs/alerting/unified/rule-list-group-view-8-0.png" max-width="800px" caption="Alerting grouped view" >}}
+ **Instances**
-### State view
+ Explore each alert instance, its status, labels and various other metadata for multi-dimensional alert rules.
-State view shows alert rules grouped by state. Use this view to get an overview of which rules are in what state. Each rule can be expanded to view its details. Action buttons and any alerts generated by this rule, and each alert can be further expanded to view its details.
+ Use **Search by label** to enter search criteria using label selectors. For example, `environment=production,region=~US|EU,severity!=warning`.
-{{< figure src="/static/img/docs/alerting/unified/rule-list-state-view-8-0.png" max-width="800px" caption="Alerting state view" >}}
+ **History**
-## Filter alert rules
+ Explore the recorded history for an alert rule. You can also filter by alert state.
-To filter alert rules:
+ **Details**
-- From **Select data sources**, select a data source. You can see alert rules that query the selected data source.
-- In the **Search by label**, enter search criteria using label selectors. For example, `environment=production,region=~US|EU,severity!=warning`.
-- From **Filter alerts by state**, select an alerting state you want to see. You can see alerting rules that match the state. Rules matching other states are hidden.
+ Debug or audit using the alert rule metadata and view the alert rule annotations.
diff --git a/docs/sources/alerting/manage-notifications/view-state-health.md b/docs/sources/alerting/manage-notifications/view-state-health.md
index 0b4a5fb7190..0a376099295 100644
--- a/docs/sources/alerting/manage-notifications/view-state-health.md
+++ b/docs/sources/alerting/manage-notifications/view-state-health.md
@@ -1,8 +1,6 @@
---
aliases:
- - ../fundamentals/state-and-health/
- - ../unified-alerting/alerting-rules/state-and-health/
- - ../view-state-health/
+ - ../../alerting/alerting-rules/view-state-health/ # /docs/grafana//alerting/alerting-rules/view-state-health
canonical: https://grafana.com/docs/grafana/latest/alerting/manage-notifications/view-state-health/
description: View the state and health of alert rules
keywords:
diff --git a/docs/sources/alerting/set-up/_index.md b/docs/sources/alerting/set-up/_index.md
index 37ec1194491..e98b0e5e12d 100644
--- a/docs/sources/alerting/set-up/_index.md
+++ b/docs/sources/alerting/set-up/_index.md
@@ -1,6 +1,6 @@
---
aliases:
- - unified-alerting/set-up/
+ - unified-alerting/set-up/ # /docs/grafana//alerting/unified-alerting/set-up/
canonical: https://grafana.com/docs/grafana/latest/alerting/set-up/
description: Set up or upgrade your implementation of Grafana Alerting
labels:
@@ -57,7 +57,7 @@ Grafana Alerting supports many additional configuration options, from configurin
The following topics provide you with advanced configuration options for Grafana Alerting.
-- [Provision alert rules using file provisioning](/docs/grafana//alerting/set-up/provision-alerting-resources/file-provisioning)
+- [Provision alert rules using file provisioning][file-provisioning]
- [Provision alert rules using Terraform][terraform-provisioning]
- [Add an external Alertmanager][configure-alertmanager]
- [Configure high availability][configure-high-availability]
@@ -69,11 +69,12 @@ The following topics provide you with advanced configuration options for Grafana
[configure-high-availability]: "/docs/grafana/ -> /docs/grafana/