_includes/gdi/available-azure.rst (13 additions, 1 deletion)
@@ -1,4 +1,4 @@
- You can collect data from the following Azure services out-of-the-box:
+ By default, Splunk Observability Cloud collects metrics from the Azure services listed in the table below, as explained in :ref:`connect-to-azure`.

  .. list-table::
     :header-rows: 1
@@ -251,3 +251,15 @@ You can collect data from the following Azure services out-of-the-box:
     * - VPN Gateway
       - microsoft.network/virtualnetworkgateways

+ Add additional services
+ ============================================
+
+ If you want to collect data from other Azure services, you need to add them as a custom service in the UI, or with the ``additionalServices`` field if you're using the API. Splunk Observability Cloud syncs the resource types that you specify in services and custom services. If you add a resource type to both fields, Splunk Observability Cloud ignores the duplication.
+
+ Any resource type you specify as a custom service must meet the following criteria:
+
+ * The resource must be an Azure GenericResource type.
+
+ * If the resource type has a hierarchical structure, only the root resource type is a GenericResource. For example, a Storage Account type can have a File Service type, which in turn can have a File Storage type. In this case, only Storage Account is a GenericResource.
+
+ * The resource type stores its metrics in Azure Monitor. To learn more about Azure Monitor, refer to the Microsoft Azure documentation.
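For reference, a minimal sketch of adding custom services through the API. The endpoint path, token header, resource type string, and payload shape are assumptions for illustration; only the ``additionalServices`` field name comes from the text above, and the API usually expects the rest of the integration object as well.

.. code-block:: python

   # Hypothetical sketch only: verify the endpoint, headers, and payload against
   # the Splunk Observability Cloud API reference before using.
   import requests

   url = "https://api.<realm>.signalfx.com/v2/integration/<integration-id>"  # placeholders
   token = "<org-access-token>"  # placeholder

   payload = {
       "additionalServices": [
           "microsoft.example/resourcetype",  # invented example resource type
       ],
   }

   response = requests.put(
       url,
       headers={"X-SF-TOKEN": token, "Content-Type": "application/json"},
       json=payload,
       timeout=30,
   )
   response.raise_for_status()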
alerts-detectors-notifications/alerts-and-detectors/alert-condition-reference/hist-anomaly.rst (4 additions, 2 deletions)
@@ -34,7 +34,9 @@ Basic settings

     * - :strong:`Cycle length`
       - Integer >= 1, followed by time indicator (s, m, h, d, w). For example, 30s, 10m, 2h, 5d, 1w. Set this value to be significantly larger than the native resolution.
-      - The time range that reflects the cyclicity of your signal. For example, a value of 1w indicates your signal follows a weekly cycle (you want to compare data for a Monday morning with previous Monday mornings). A value of 1d indicates your signal follows a daily cycle (you want to compare today's data with data from the same time yesterday, the day before, and so on.)
+      - | The time range that reflects the cycle of your signal. For example, a value of ``1w`` indicates your signal follows a weekly cycle, and a value of ``1d`` indicates your signal follows a daily cycle.
+        | Cycle length works in conjunction with the duration of the time window used for data comparison, represented by the :strong:`Current window` parameter. Data from the current window is compared against data from one or more previous cycles to detect historical anomalies, depending on the value of the :strong:`Number of previous cycles` parameter.
+        | For example, if the current window is ``1h`` and the cycle length is ``1w``, data in the past hour ([-1h, now]) is compared against data from the [-1w1h, -1w] hour, [-2w1h, -2w] hour, and so on.

     * - :strong:`Alert when`
       - ``Too high``, ``Too low``, ``Too high or Too low``
@@ -62,7 +64,7 @@ Advanced settings
       - If the short-term variation in a signal is small relative to the scale of the signal, and the scale is somehow natural, using ``Mean plus percentage change`` is recommended; using ``Mean plus standard deviation`` might trigger alerts even for a large number of standard deviations. In addition, ``Mean plus percentage change`` is recommended for metrics which admit a direct business interpretation. For instance, if ``user_sessions`` drops by 20%, revenue drops by 5%.

     * - :strong:`Current window`
-      - Integer >= 1, followed by time indicator (s, m, h, d, w). For example, 30s, 10m, 2h, 5d, 1w. Set this value to be smaller than Cycle length, and significantly larger than the native resolution.
+      - Integer >= 1, followed by time indicator (s, m, h, d, w). For example, 30s, 10m, 2h, 5d, 1w. Set this value to be shorter than cycle length, and significantly larger than the native resolution.
       - The time range against which to compare the data; you can think of this as the moving average window. Higher values compute the mean over more data points, which generally smoothes the value, resulting in lower sensitivity and potentially fewer alerts.
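As an illustration of how the cycle length and current window settings combine, the following sketch computes the comparison windows for a ``1h`` current window and a ``1w`` cycle length. It only demonstrates the window arithmetic described in the new text; it is not the detector implementation.

.. code-block:: python

   # Illustration of the comparison-window arithmetic only.
   from datetime import datetime, timedelta, timezone

   now = datetime.now(timezone.utc)
   current_window = timedelta(hours=1)    # Current window = 1h
   cycle_length = timedelta(weeks=1)      # Cycle length = 1w
   previous_cycles = 4                    # Number of previous cycles

   # Data in [-1h, now] ...
   current = (now - current_window, now)

   # ... is compared against [-1w1h, -1w], [-2w1h, -2w], and so on.
   history = [
       (now - n * cycle_length - current_window, now - n * cycle_length)
       for n in range(1, previous_cycles + 1)
   ]
   print(current)
   print(history)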
gdi/get-data-in/connect/azure/azure-metrics.rst (3 additions, 8 deletions)
@@ -7,18 +7,13 @@ Azure metrics in Splunk Observability Cloud
  .. meta::
    :description: These are the metrics available for the Azure integration with Splunk Observability Cloud, grouped according to Azure resource.

- By default Splunk Observability Cloud includes all available metrics from any Azure integration.
+ .. include:: /_includes/gdi/available-azure.rst

- Azure services metrics
- =================================
+ Azure services metric information
+ ================================================

  Metric names and descriptions are generated dynamically from data provided by Microsoft. See all details in Microsoft's :new-page:`Supported metrics with Azure Monitor <https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/metrics-supported>`.

- .. include:: /_includes/gdi/available-azure.rst
-
- Types of available metrics
- -------------------------------------------
-
  Every metric can either be a counter or a gauge, depending on what dimension is being looked at. If the MTS contains the dimension ``aggregation_type: total`` or ``aggregation_type: count``, then it is sent as a counter. Otherwise, it is sent as a gauge. To learn more, see :ref:`metric-types` and :ref:`metric-time-series`.
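The counter-versus-gauge rule in the paragraph above can be restated as a small helper for illustration; this is not code from the Azure integration itself.

.. code-block:: python

   # Illustration of the stated rule: aggregation_type "total" or "count" means
   # counter, anything else means gauge.
   def azure_metric_type(dimensions: dict) -> str:
       """Return "counter" or "gauge" based on the aggregation_type dimension."""
       if dimensions.get("aggregation_type") in ("total", "count"):
           return "counter"
       return "gauge"

   print(azure_metric_type({"aggregation_type": "total"}))    # counter
   print(azure_metric_type({"aggregation_type": "average"}))  # gauge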
- Splunk Observability Cloud uses OpenTelemetry to correlate telemetry types. To enable this ability, your telemetry field names or metadata key names must exactly match the metadata key names used by both OpenTelemetry and Splunk Observability Cloud.
+ Splunk Observability Cloud uses OpenTelemetry to correlate telemetry types. To do this, your telemetry field names or metadata key names must exactly match the metadata key names used by both OpenTelemetry and Splunk Observability Cloud.

  Related Content works out-of-the-box when you deploy the Splunk Distribution of the OpenTelemetry Collector with its default configuration to send your telemetry data to Splunk Observability Cloud. With the default configuration the Collector automatically maps your metadata key names correctly. To learn more about the Collector, see :ref:`otel-intro`.
@@ -108,7 +108,7 @@ When the field names in APM and Log Observer match, the trace and the log with t
- If you're using the Splunk Distribution of the OpenTelemetry Collector, another distribution of the Collector, or the :ref:`upstream Collector <using-upstream-otel>` and want to ensure Related Content in Splunk Observability Cloud behaves correctly, verify that the SignalFx exporter is included in your configuration. This exporter aggregates the metrics from the ``hostmetrics`` receiver and must be enabled for the ``metrics`` and ``traces`` pipelines.
+ If you're using the Splunk Distribution of the OpenTelemetry Collector, any other distribution of the Collector, or the :ref:`upstream Collector <using-upstream-otel>` and want to ensure Related Content in Splunk Observability Cloud behaves correctly, verify that the SignalFx exporter is included in your configuration. This exporter aggregates the metrics from the ``hostmetrics`` receiver and must be enabled for the ``metrics`` and ``traces`` pipelines.

  The Collector uses the correlation flag of the SignalFx exporter to make relevant API calls to correlate your spans with the infrastructure metrics. This flag is enabled by default. To adjust the correlation option further, see the SignalFx exporter's options at :ref:`signalfx-exporter-settings`.
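To make the pipeline requirement above easier to verify, here is a hedged sketch that inspects a local Collector configuration file for a SignalFx exporter in the ``metrics`` and ``traces`` pipelines. The configuration file name is an assumption, and the check is approximate.

.. code-block:: python

   # Approximate check only; adjust the file name to your deployment. Exporter
   # keys may carry a suffix such as "signalfx/something".
   import yaml  # PyYAML

   with open("collector-config.yaml") as f:
       config = yaml.safe_load(f)

   pipelines = config.get("service", {}).get("pipelines", {})
   for name in ("metrics", "traces"):
       exporters = pipelines.get(name, {}).get("exporters", []) or []
       found = any(str(e).split("/")[0] == "signalfx" for e in exporters)
       print(f"{name} pipeline: signalfx exporter {'present' if found else 'missing'}")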
@@ -124,10 +124,12 @@ The following sections list the metadata key names required to enable Related Co
- The following APM span tags are required to enable Related Content:
+ To enable Related Content for APM, use one of these span tags:

  - ``service.name``
- - ``deployment.environment``
+ - ``trace_id``
+
+ Optionally, you can also use ``deployment.environment`` with ``service.name``.

  The default configuration of the Splunk Distribution of the OpenTelemetry Collector already provides these span tags. To ensure full functionality of Related Content, do not change any of the metadata key names or span tags provided by the Splunk OTel Collector.
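As a sketch of how these attributes are typically set when instrumenting with the OpenTelemetry Python SDK (an assumption; your services might use a different SDK or rely entirely on the Collector defaults), the service and environment names below are invented examples. ``trace_id`` is generated automatically for every span, so only the resource attributes need to be set explicitly.

.. code-block:: python

   # Sketch assuming the OpenTelemetry Python SDK; example values only.
   from opentelemetry import trace
   from opentelemetry.sdk.resources import Resource
   from opentelemetry.sdk.trace import TracerProvider

   resource = Resource.create({
       "service.name": "example-service",
       "deployment.environment": "example-env",  # optional, used with service.name
   })
   trace.set_tracer_provider(TracerProvider(resource=resource))

   tracer = trace.get_tracer(__name__)
   with tracer.start_as_current_span("example-span"):
       pass  # trace_id is assigned automatically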
@@ -154,39 +156,58 @@ For example, consider a scenario in which Related Content needs to return data f
  If you're using the default configuration of the Splunk Distribution of the OpenTelemetry Collector for Kubernetes, the required Infrastructure Monitoring metadata is provided. See more at :ref:`otel-install-k8s`.

  If you're using other distributions of the OpenTelemetry Collector or non-default configurations of the Splunk Distribution to collect infrastructure data, Related Content won't work out of the box.
- The following key names are required to enable Related Content for Log Observer:
+ To enable Related Content for logs, use one of these fields:

- - ``service.name``
- - ``deployment.environment``
  - ``host.name``
- - ``trace_id``
+ - ``service.name``
  - ``span_id``
+ - ``trace_id``

  To ensure full functionality of both Log Observer and Related Content, verify that your log events fields are correctly mapped. Correct log field mappings enable built-in log filtering, embed logs in APM and Infrastructure Monitoring functionality, and enable fast searches as well as the Related Content bar.

  If the key names in the preceding list use different names in your log fields, remap them to the key names listed here. For example, if you don't see values for :strong:`host.name` in the Log Observer UI, check to see whether your logs use a different field name, such as :strong:`host_name`. If your logs do not contain the default field names exactly as they appear in the preceding list, remap your logs using one of the methods in the following section.
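For illustration, a minimal sketch of the remapping idea applied to a log record represented as a Python dictionary. In practice the remapping happens in your log pipeline rather than in application code, and only the ``host_name`` to ``host.name`` rename comes from the paragraph above; the record values are invented.

.. code-block:: python

   # Minimal sketch: rename non-default field names to the expected key names.
   FIELD_MAP = {"host_name": "host.name"}

   def remap(record: dict) -> dict:
       return {FIELD_MAP.get(key, key): value for key, value in record.items()}

   print(remap({"host_name": "web-01", "trace_id": "abc123", "message": "hello"}))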
+ The Splunk Distribution of the OpenTelemetry Collector injects the following fields into your Kubernetes logs. Do not modify them if you want to use Related Content.
+
+ - ``k8s.cluster.name``
+ - ``k8s.node.name``
+ - ``k8s.pod.name``
+ - ``container.id``
+ - ``k8s.namespace.name``
+ - ``kubernetes.workload.name``
+
  Use one of these tag combinations to enable Related Content:
- The Splunk Distribution of the OpenTelemetry Collector injects the following fields into your Kubernetes logs. Do not modify them if you want to use Related Content.
-
- - ``k8s.cluster.name``
- - ``k8s.node.name``
- - ``k8s.pod.name``
- - ``container.id``
- - ``k8s.namespace.name``
- - ``kubernetes.workload.name``
-
  Learn more about the Collector for Kubernetes at :ref:`collector-kubernetes-intro`.
- This example shows how to use a ``<??>`` wildcard to group together URLs by one or more tokens. The ``<??>`` wildcard is supported only as the last wild card in a pattern at this time.
+ This example shows how to use a ``<??>`` wildcard to group together URLs by one or more tokens. The ``<??>`` wildcard is supported only as the last wildcard in a pattern at this time.
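As a rough illustration only: the grouping effect of a trailing ``<??>`` wildcard resembles a prefix match that absorbs one or more remaining path tokens. The pattern, URLs, and regex below are invented stand-ins; the actual matching rules are defined by Splunk Observability Cloud.

.. code-block:: python

   # Rough approximation of how a trailing <??> wildcard groups URLs.
   import re

   pattern_label = "/api/v2/customer/<??>"                       # invented example pattern
   regex = re.compile(r"^/api/v2/customer/[^/]+(?:/[^/]+)*$")    # one or more trailing tokens

   urls = ["/api/v2/customer/42", "/api/v2/customer/42/orders/7", "/api/v2/health"]
   print({u: pattern_label if regex.match(u) else u for u in urls})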
synthetics/test-config/synth-alerts.rst (8 additions, 6 deletions)
@@ -47,12 +47,15 @@ You can set up a detector while initially creating or editing a test, or from th

  To set up a detector, do one of the following:

- * While creating or editing a test, select :guilabel:`+ Create detector`. The detector dialog box opens.
- * From the :guilabel:`Test results` page for a particular test, select :guilabel:`+ Create detector`. The detector dialog box opens.
+ * While creating or editing a test, select :guilabel:`Create detector`. The detector dialog box opens.
+ * From the :guilabel:`Test results` page for a particular test, select :guilabel:`Create detector`. The detector dialog box opens.

  In the detector dialog box, enter the following fields:

- #. In the test name list, select the tests you want to include in your detector. If you want to include all tests of the same type, select :strong:`All tests`.
+ #. In the test name list, select the tests you want to include in your detector. If you want to include all tests you see in the list, select the :strong:`All tests` check box.
+
+    .. note:: The :strong:`All tests` option uses a wildcard (``*``) in the program text and always covers all tests of the same type.
+
  #. In the metric list, select the metric you want to receive alerts for. By default, a detector tracks :strong:`Uptime` metric.
  #. The default :guilabel:`Static threshold` alert condition can't be changed.
  #. Select :strong:`+ Add filters` to scope the alerts by dimension. For Browser tests, you can use this selector to scope the detector to the entire test, a particular page within the test, or a particular synthetic transaction within the test. See the following sections for details:
@@ -63,13 +66,12 @@ In the detector dialog box, enter the following fields:
  #. In the :guilabel:`Alert details` section, enter the following:

     * :guilabel:`Trigger threshold`: The threshold to trigger the alert.
-    * :guilabel:`Orientation`: Specify whether the metric must fall below or exceed the threshold to trigger the alert.
+    * :guilabel:`Orientation`: Only available for the uptime metric. Specify whether the metric must fall below or exceed the threshold to trigger the alert.
     * :guilabel:`Violates threshold`: How many times the metric must violate the threshold to trigger the alert.
     * :guilabel:`Split by location`: Select whether to split the detector by test location. If you don't filter by location, the detector monitors the average value across all locations.

  #. Use the severity selector to select the severity of the alert.