This repository was archived by the owner on Sep 2, 2025. It is now read-only.
Merged
14 changes: 13 additions & 1 deletion _includes/gdi/available-azure.rst
@@ -1,4 +1,4 @@
By default, Splunk Observability Cloud collects metrics from the Azure services listed in the following table, as explained in :ref:`connect-to-azure`.

.. list-table::
:header-rows: 1
@@ -251,3 +251,15 @@ You can collect data from the following Azure services out-of-the-box:
* - VPN Gateway
- microsoft.network/virtualnetworkgateways

Add other services
============================================

If you want to collect data from other Azure services, add them as custom services in the UI, or use the ``additionalServices`` field if you're using the API. Splunk Observability Cloud syncs the resource types that you specify in services and custom services. If you add a resource type to both fields, Splunk Observability Cloud ignores the duplicate.

Any resource type you specify as a custom service must meet the following criteria:

* The resource must be an Azure GenericResource type.

* If the resource type has a hierarchical structure, only the root resource type is a GenericResource. For example, a Storage Account type can have a File Service type, which in turn can have a File Storage type. In this case, only Storage Account is a GenericResource.

* The resource type stores its metrics in Azure Monitor. To learn more about Azure Monitor, refer to the Microsoft Azure documentation.
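
The rules above can be sketched in code. The following is an illustrative helper, not an official client: the ``additionalServices`` field name comes from the text above, but the function name and payload shape are assumptions about how you might build the request body for the integrations API.

```python
# Hypothetical sketch: build an Azure integration update payload that adds
# custom services through the "additionalServices" field. Resource types
# listed in both fields are deduplicated, mirroring the documented behavior
# that Splunk Observability Cloud ignores the duplication.

def build_azure_integration_update(services, additional_services):
    """Return a JSON-serializable body with deduplicated custom services."""
    known = set(services)
    additional = [s for s in additional_services if s not in known]
    return {
        "type": "Azure",
        "services": list(services),
        "additionalServices": additional,
    }

payload = build_azure_integration_update(
    ["microsoft.network/virtualnetworkgateways"],
    [
        "microsoft.search/searchservices",
        "microsoft.network/virtualnetworkgateways",  # duplicate, dropped
    ],
)
print(payload["additionalServices"])
```

You would then send this body to the integrations API with your usual HTTP client; the endpoint and authentication details are covered by the API documentation rather than this sketch.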
@@ -34,7 +34,9 @@ Basic settings

* - :strong:`Cycle length`
- Integer >= 1, followed by time indicator (s, m, h, d, w). For example, 30s, 10m, 2h, 5d, 1w. Set this value to be significantly larger than the native resolution.
- | The time range that reflects the cycle of your signal. For example, a value of ``1w`` indicates your signal follows a weekly cycle, and a value of ``1d`` indicates your signal follows a daily cycle.
  | Cycle length works together with the duration of the time window used for data comparison, represented by the :strong:`Current window` parameter. Data from the current window is compared against data from one or more previous cycles to detect historical anomalies, depending on the value of the :strong:`Number of previous cycles` parameter.
  | For example, if the current window is ``1h`` and the cycle length is ``1w``, data from the past hour ([-1h, now]) is compared against data from the [-1w1h, -1w] hour, the [-2w1h, -2w] hour, and so on.

* - :strong:`Alert when`
- ``Too high``, ``Too low``, ``Too high or Too low``
@@ -62,7 +64,7 @@ Advanced settings
- If the short-term variation in a signal is small relative to the scale of the signal, and the scale is somehow natural, using ``Mean plus percentage change`` is recommended; using ``Mean plus standard deviation`` might trigger alerts even for a large number of standard deviations. In addition, ``Mean plus percentage change`` is recommended for metrics which admit a direct business interpretation. For instance, if ``user_sessions`` drops by 20%, revenue drops by 5%.

* - :strong:`Current window`
- Integer >= 1, followed by time indicator (s, m, h, d, w). For example, 30s, 10m, 2h, 5d, 1w. Set this value to be shorter than cycle length, and significantly larger than the native resolution.
- The time range against which to compare the data; you can think of this as the moving average window. Higher values compute the mean over more data points, which generally smooths the value, resulting in lower sensitivity and potentially fewer alerts.

* - :strong:`Number of previous cycles`
11 changes: 3 additions & 8 deletions gdi/get-data-in/connect/azure/azure-metrics.rst
@@ -7,18 +7,13 @@ Azure metrics in Splunk Observability Cloud
.. meta::
:description: These are the metrics available for the Azure integration with Splunk Observability Cloud, grouped according to Azure resource.

.. include:: /_includes/gdi/available-azure.rst

Azure services metric information
================================================

Metric names and descriptions are generated dynamically from data provided by Microsoft. See all details in Microsoft's :new-page:`Supported metrics with Azure Monitor <https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/metrics-supported>`.


Every metric can be either a counter or a gauge, depending on its dimensions. If the MTS contains the dimension ``aggregation_type: total`` or ``aggregation_type: count``, it is sent as a counter. Otherwise, it is sent as a gauge. To learn more, see :ref:`metric-types` and :ref:`metric-time-series`.
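
The counter-versus-gauge rule described above can be expressed as a small predicate. This is an illustrative sketch, not Splunk code; ``classify_metric_type`` is a hypothetical helper name.

```python
# Sketch of the documented rule: an Azure MTS is sent as a counter when its
# aggregation_type dimension is "total" or "count", and as a gauge otherwise.

def classify_metric_type(dimensions):
    """Return "counter" or "gauge" for an MTS, given its dimensions dict."""
    if dimensions.get("aggregation_type") in ("total", "count"):
        return "counter"
    return "gauge"

print(classify_metric_type({"aggregation_type": "total"}))    # counter
print(classify_metric_type({"aggregation_type": "average"}))  # gauge
```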

Azure functions metrics
72 changes: 39 additions & 33 deletions metrics-and-metadata/relatedcontent.rst
@@ -73,7 +73,7 @@ The following table describes when and where in Splunk Observability Cloud you c
Use the Splunk Distribution of the OpenTelemetry Collector to enable Related Content
==========================================================================================================

Splunk Observability Cloud uses OpenTelemetry to correlate telemetry types. To do this, your telemetry field names or metadata key names must exactly match the metadata key names used by both OpenTelemetry and Splunk Observability Cloud.

Related Content works out of the box when you deploy the Splunk Distribution of the OpenTelemetry Collector with its default configuration to send your telemetry data to Splunk Observability Cloud. With the default configuration, the Collector automatically maps your metadata key names correctly. To learn more about the Collector, see :ref:`otel-intro`.

@@ -108,7 +108,7 @@ When the field names in APM and Log Observer match, the trace and the log with t
Required Collector components
=================================================================

If you're using the Splunk Distribution of the OpenTelemetry Collector, any other distribution of the Collector, or the :ref:`upstream Collector <using-upstream-otel>` and want to ensure Related Content in Splunk Observability Cloud behaves correctly, verify that the SignalFx exporter is included in your configuration. This exporter aggregates the metrics from the ``hostmetrics`` receiver and must be enabled for the ``metrics`` and ``traces`` pipelines.

The Collector uses the correlation flag of the SignalFx exporter to make relevant API calls to correlate your spans with the infrastructure metrics. This flag is enabled by default. To adjust the correlation option further, see the SignalFx exporter's options at :ref:`signalfx-exporter-settings`.
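
The pipeline requirement above can be sketched as a Collector configuration fragment. This is a minimal, illustrative excerpt under stated assumptions, not a complete or official configuration: the receiver names and the ``us0`` realm are placeholders, and the SignalFx exporter's correlation behavior is enabled by default, so no extra setting is shown.

```yaml
# Illustrative fragment: the SignalFx exporter included in both the
# metrics and traces pipelines, as Related Content requires.
exporters:
  signalfx:
    access_token: ${SPLUNK_ACCESS_TOKEN}   # placeholder environment variable
    realm: us0                             # placeholder realm

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]             # assumed receiver
      exporters: [signalfx]
    traces:
      receivers: [otlp]                    # assumed receiver
      exporters: [signalfx]
```

See :ref:`signalfx-exporter-settings` for the exporter's full set of options, including the correlation settings mentioned above.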

@@ -124,10 +124,12 @@ The following sections list the metadata key names required to enable Related Co
Splunk APM
-----------------------------------------------------------------

To enable Related Content for APM, use one of these span tags:

- ``service.name``
- ``trace_id``

Optionally, you can also use ``deployment.environment`` with ``service.name``.

The default configuration of the Splunk Distribution of the OpenTelemetry Collector already provides these span tags. To ensure full functionality of Related Content, do not change any of the metadata key names or span tags provided by the Splunk OTel Collector.

@@ -154,39 +156,58 @@ For example, consider a scenario in which Related Content needs to return data f
Splunk Infrastructure Monitoring
-----------------------------------------------------------------

To enable Related Content for Infrastructure Monitoring, use one of these metadata combinations:

- ``host.name``, which falls back on ``host``, ``aws_private_dns_name`` (AWS), ``instance_name`` (GCP), or ``azure_computer_name`` (Azure)
- ``k8s.cluster.name`` + ``k8s.node.name``
- ``k8s.cluster.name`` + ``k8s.node.name`` (optional) + ``k8s.pod.name``
- ``k8s.cluster.name`` + ``k8s.node.name`` (optional) + ``k8s.pod.name`` (optional) + ``container.id``
- ``service.name``
- ``service.name`` + ``deployment.environment`` (optional) + ``k8s.cluster.name`` (optional)

If you're using the default configuration of the Splunk Distribution of the OpenTelemetry Collector for Kubernetes, the required Infrastructure Monitoring metadata is provided. See more at :ref:`otel-install-k8s`.

If you're using other distributions of the OpenTelemetry Collector or non-default configurations of the Splunk Distribution to collect infrastructure data, Related Content won't work out of the box.

.. _relatedcontent-log-observer:

Splunk logs
-----------------------------------------------------------------


To enable Related Content for logs, use one of these fields:

- ``service.name``
- ``span_id``
- ``trace_id``

To ensure full functionality of both Log Observer and Related Content, verify that your log event fields are correctly mapped. Correct log field mappings activate built-in log filtering, embed logs in APM and Infrastructure Monitoring functionality, and support fast searches as well as the Related Content bar.

If the key names in the preceding list appear under different names in your log fields, remap them to the key names listed here. For example, if you don't see values for :strong:`host.name` in the Log Observer UI, check whether your logs use a different field name, such as :strong:`host_name`. If your logs don't contain the default field names exactly as they appear in the preceding list, remap your logs using one of the methods in the following section.

.. include:: /_includes/log-observer-transition.rst

Kubernetes log fields
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Splunk Distribution of the OpenTelemetry Collector injects the following fields into your Kubernetes logs. Do not modify them if you want to use Related Content.

- ``k8s.cluster.name``
- ``k8s.node.name``
- ``k8s.pod.name``
- ``container.id``
- ``k8s.namespace.name``
- ``kubernetes.workload.name``

Use one of these tag combinations to enable Related Content:

- ``k8s.cluster.name`` + ``k8s.node.name``
- ``k8s.cluster.name`` + ``k8s.node.name`` (optional) + ``k8s.pod.name``
- ``k8s.cluster.name`` + ``k8s.node.name`` (optional) + ``k8s.pod.name`` (optional) + ``container.id``

Learn more about the Collector for Kubernetes at :ref:`collector-kubernetes-intro`.

.. _remap-log-fields:

Remap log fields
@@ -207,7 +228,6 @@ The following table describes the four methods for remapping log fields:
* - Client-side
- Configure your app to remap the necessary fields.


When to use Log Field Aliasing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

@@ -218,20 +238,6 @@ Use Log Field Aliasing to remap fields in Splunk Observability Cloud when you ca
- You do not want to transform your data at index time.
- You want the new alias to affect every log message, even those that came in from a time before you created the alias.


How to change your metadata key names
=================================================================

2 changes: 1 addition & 1 deletion rum/rum-rules.rst
Expand Up @@ -122,7 +122,7 @@ This example shows how to use a ``<?>`` symbol to apply a single token wildcard
<??> Wildcard for one or more trailing tokens
--------------------------------------------------------

This example shows how to use a ``<??>`` wildcard to group together URLs by one or more tokens. The ``<??>`` wildcard is supported only as the last wildcard in a pattern at this time.


.. list-table::
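
The matching behavior described above can be sketched as a small function. This is an illustrative model of the documented rules, not Splunk RUM code: ``<?>`` matches exactly one path token, and ``<??>`` matches one or more trailing tokens and is only valid as the last wildcard.

```python
# Illustrative sketch of RUM URL grouping rules: "<?>" matches exactly one
# token, "<??>" (last position only) matches one or more trailing tokens.

def matches(pattern, url):
    """Return True if the URL path matches the token pattern."""
    p_tokens = pattern.strip("/").split("/")
    u_tokens = url.strip("/").split("/")
    for i, p in enumerate(p_tokens):
        if p == "<??>":
            # Must be the last pattern token; consumes one or more tokens.
            return i == len(p_tokens) - 1 and len(u_tokens) > i
        if i >= len(u_tokens):
            return False
        if p != "<?>" and p != u_tokens[i]:
            return False
    return len(u_tokens) == len(p_tokens)

print(matches("/docs/<??>", "/docs/a/b/c"))  # True: "<??>" absorbs a/b/c
print(matches("/docs/<??>", "/docs"))        # False: needs at least one token
```
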
14 changes: 8 additions & 6 deletions synthetics/test-config/synth-alerts.rst
@@ -47,12 +47,15 @@ You can set up a detector while initially creating or editing a test, or from th

To set up a detector, do one of the following:

* While creating or editing a test, select :guilabel:`Create detector`. The detector dialog box opens.
* From the :guilabel:`Test results` page for a particular test, select :guilabel:`Create detector`. The detector dialog box opens.

In the detector dialog box, enter the following fields:

#. In the test name list, select the tests you want to include in your detector. If you want to include all tests you see in the list, select the :strong:`All tests` check box.

.. note:: The :strong:`All tests` option uses a wildcard (``*``) in the program text and always covers all tests of the same type.

#. In the metric list, select the metric you want to receive alerts for. By default, a detector tracks the :strong:`Uptime` metric.
#. The default :guilabel:`Static threshold` alert condition can't be changed.
#. Select :strong:`+ Add filters` to scope the alerts by dimension. For Browser tests, you can use this selector to scope the detector to the entire test, a particular page within the test, or a particular synthetic transaction within the test. See the following sections for details:
@@ -63,13 +66,12 @@ In the detector dialog box, enter the following fields:
#. In the :guilabel:`Alert details` section, enter the following:

* :guilabel:`Trigger threshold`: The threshold to trigger the alert.
* :guilabel:`Orientation`: Only available for the uptime metric. Specify whether the metric must fall below or exceed the threshold to trigger the alert.
* :guilabel:`Violates threshold`: How many times the metric must violate the threshold to trigger the alert.
* :guilabel:`Split by location`: Select whether to split the detector by test location. If you don't filter by location, the detector monitors the average value across all locations.

#. Use the severity selector to select the severity of the alert.
#. (Optional) Add a URL to a runbook.
#. Add recipients.
#. Select :guilabel:`Activate`.

.. _page-level-detector: