File: alerts-detectors-notifications/slo/create-slo.rst (15 additions, 14 deletions)
@@ -19,7 +19,7 @@ Follow these steps to create an SLO.

 #. From the landing page of Splunk Observability Cloud, go to :strong:`Detectors & SLOs`.
 #. Select the :strong:`SLOs` tab.
 #. Select :guilabel:`Create SLO`.
-#. Configure the service level indicator (SLI) for your SLO.
+#. Configure the service level indicator (SLI) for your SLO. You can use a service or any metric of your choice as the system health indicator.

    To use a service as the system health indicator for your SLI configuration, follow these steps:
@@ -46,21 +46,22 @@ Follow these steps to create an SLO.

     * - :guilabel:`Filters`
       - Enter any additional dimension names and values you want to apply this SLO to. Alternatively, use the ``NOT`` filter, represented by an exclamation point ( ! ), to exclude any dimension values from this SLO configuration.

-   To use a custom metric as the system health indicator for your SLI configuration, follow these steps:
+   To use a metric of your choice as the system health indicator for your SLI configuration, follow these steps:

-   .. list-table::
-      :header-rows: 1
-      :widths: 40 60
-      :width: 100%
+   #. For the :guilabel:`Metric type` field, select :guilabel:`Custom metric` from the dropdown menu. The SignalFlow editor appears.
+   #. In the SignalFlow editor, you can see the following code sample:

-      * - :strong:`Field name`
-        - :strong:`Actions`
-      * - :guilabel:`Metric type`
-        - Select :guilabel:`Custom metric` from the dropdown menu
-      * - :guilabel:`Good events (numerator)`
-        - Search for the metric you want to use for the success request count
-      * - :guilabel:`Total events (denominator)`
-        - Search for the metric you want to use for the total request count
+      .. code-block:: python
+
+         G = data('good.metric', filter=filter('sf_error', 'false'))
+         T = data('total.metric')
+
+      * Line 1 defines ``G`` as a data stream of ``good.metric`` metric time series (MTS). The SignalFlow ``filter()`` function queries for a collection of MTS with value ``false`` for the ``sf_error`` dimension. The filter distinguishes successful requests from total requests, making ``G`` the good events variable.
+      * Line 2 defines ``T`` as a data stream of ``total.metric`` MTS. ``T`` is the total events variable.
+
+      Replace the code sample with your own SignalFlow program. You can define good events and total events variables using any metric and supported SignalFlow function. For more information, see :new-page:`Analyze data using SignalFlow <https://dev.splunk.com/observability/docs/signalflow>` in the Splunk Observability Cloud Developer Guide.
+
+   #. Select appropriate variable names for the :guilabel:`Good events (numerator)` and :guilabel:`Total events (denominator)` dropdown menus.

    .. note:: Custom metric SLO works by calculating the percentage of successful requests over a given compliance period. This calculation works better for counter and histogram metrics than for gauge metrics. Gauge metrics are not suitable for custom metric SLO, so you might get confusing data when selecting gauge metrics in your configuration.
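As a reference for the SignalFlow steps in the hunk above, the following is a minimal sketch of a customized SLI program. The metric name ``demo.request.count`` and the dimensions ``sf_error`` and ``sf_service`` are placeholder assumptions for illustration only; substitute a counter metric and dimensions that exist in your organization.

.. code-block:: python

   # Hypothetical SLI program (placeholder metric and dimension names):
   # count good and total requests, grouped per service.
   G = data('demo.request.count', filter=filter('sf_error', 'false')).sum(by=['sf_service'])
   T = data('demo.request.count').sum(by=['sf_service'])

With a program like this, you would then assign ``G`` to :guilabel:`Good events (numerator)` and ``T`` to :guilabel:`Total events (denominator)` in the final step.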
File: alerts-detectors-notifications/slo/custom-metric-scenario.rst (13 additions, 26 deletions)
@@ -17,32 +17,22 @@ Use custom metric as service level indicator (SLI)

 From the :guilabel:`Detectors & SLOs` page, Kai configures the SLI and sets up a target for their SLO. Kai follows these steps:

-#. Kai wants to use custom metrics as the system health indicators, so they select the :guilabel:`Custom metric` from the :guilabel:`Metric type` menu.
-#. Kai enters the custom metrics they want to measure in the following fields:
+#. Kai wants to use a Synthetics metric as the system health indicator, so they select :guilabel:`Custom metric` from the :guilabel:`Metric type` menu.
+#. Kai enters the following program into the SignalFlow editor:
+
+   .. code-block:: python
+
+      G = data('synthetics.run.count', filter=filter('test', 'Monitoring Services - Emby check') and filter('success', 'true'))
+      T = data('synthetics.run.count', filter=filter('test', 'Monitoring Services - Emby check'))

-   * - :guilabel:`Good events (numerator)`
-     - :strong:`synthetics.run.count`
-     - Kai adds the following filters for this metric:
-
-       * :strong:`test = Emby check`
-       * :strong:`success = true`
-
-       Kai uses the :strong:`success = true` filter to count the number of successful requests for the Emby service on the Buttercup Games website.
+
+   Kai defines variables ``G`` and ``T`` as two streams of ``synthetics.run.count`` metric time series (MTS) measuring the health of requests sent to the Emby service. To distinguish between the two data streams, Kai applies an additional filter on the ``success`` dimension in the definition for ``G``. This filter queries for a specific collection of MTS that track successful requests for the Emby service. In Kai's SignalFlow program, ``G`` is a data stream of good events and ``T`` is a data stream of total events.

       :alt: This image shows Kai's SLO configuration using the ``synthetics.run.count`` metric and appropriate filters.

-       * :strong:`test = Emby check`
-
-     - Kai uses the same metric name and the :strong:`test = Emby check` filter to track the same Synthetics Browser test. However, Kai doesn't include the :strong:`success = true` dimension filter in order to count the number of total requests for the Emby service on the Buttercup Games website.
+
+   #. Kai assigns ``G`` to the :guilabel:`Good events (numerator)` dropdown menu and ``T`` to the :guilabel:`Total events (denominator)` dropdown menu.

 #. Kai enters the following fields to define a target for their SLO:

@@ -64,11 +54,6 @@ From the :guilabel:`Detectors & SLOs` page, Kai configures the SLI and sets up a

 #. Kai subscribes to receive an alert whenever there is a breach event for the SLO target.

-      :alt: This image shows Kai's SLO configuration using the ``synthetics.run.count`` metric and appropriate filters.
-
-
 Summary
 =======================

@@ -80,3 +65,5 @@ Learn more

 For more information about creating an SLO, see :ref:`create-slo`.

 For more information about the Synthetics Browser test, see :ref:`browser-test`.
+
+For more information on SignalFlow, see :new-page:`Analyze data using SignalFlow <https://dev.splunk.com/observability/docs/signalflow>` in the Splunk Observability Cloud Developer Guide.
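As a rough illustration of the compliance calculation described in the create-slo.rst note above, the following sketch expresses the percentage of successful Emby checks from Kai's two streams. This shows only the good-over-total ratio, not the exact computation Splunk Observability Cloud performs; the ``G`` definition assumes the same ``test`` filter as ``T`` plus the ``success = true`` filter described in the scenario.

.. code-block:: python

   # Illustration only: percentage of successful Emby checks (good / total * 100).
   G = data('synthetics.run.count', filter=filter('test', 'Monitoring Services - Emby check') and filter('success', 'true'))
   T = data('synthetics.run.count', filter=filter('test', 'Monitoring Services - Emby check'))
   success_rate = (G.sum() / T.sum()) * 100
   success_rate.publish('emby_success_rate')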
File: apm/intro-to-apm.rst (2 additions, 0 deletions)
@@ -8,6 +8,8 @@ Introduction to Splunk APM

 Collect :ref:`traces and spans<apm-traces-spans>` to monitor your distributed applications with Splunk Application Performance Monitoring (APM). A trace is a collection of actions, or spans, that occur to complete a transaction. Splunk APM collects and analyzes every span and trace from each of the services that you have connected to Splunk Observability Cloud to give you full-fidelity access to all of your application data.

+To keep up to date with changes in APM, see the Splunk Observability Cloud :ref:`release notes <release-notes-overview>`.
+
 For scenarios using Splunk APM, see :ref:`apm-scenarios-intro`.
File: infrastructure/intro-to-infrastructure.rst (1 addition, 0 deletions)
@@ -10,6 +10,7 @@ Introduction to Splunk Infrastructure Monitoring

 Gain insights into and perform powerful, capable analytics on your infrastructure and resources across hybrid and multi-cloud environments with Splunk Infrastructure Monitoring. Infrastructure Monitoring offers support for a broad range of integrations for collecting all kinds of data, from system metrics for infrastructure components to custom data from your applications.

+To keep up to date with changes in Infrastructure Monitoring, see the Splunk Observability Cloud :ref:`release notes <release-notes-overview>`.
File: references/glossary.rst (6 additions, 0 deletions)
@@ -23,6 +23,12 @@ A

 analytics
     Analytics are the mathematical functions that can be applied to a collection of data points. For a full list of analytics that can be applied in Splunk Infrastructure Monitoring, see the :ref:`analytics-ref`.

+automatic discovery
+    Automatic discovery is a feature of the Splunk Distribution of the OpenTelemetry Collector that identifies the services, such as third-party databases and web servers, running in your environment and sends telemetry data from them to Splunk Application Performance Monitoring (APM) and Infrastructure Monitoring. The Collector configures service-specific receivers that collect data from an endpoint exposed on each service. For more information, see :ref:`discovery_mode`.
+
+automatic instrumentation
+    Automatic instrumentation allows you to instrument your applications and export telemetry data without having to modify the application source files. The language-specific instrumentation agent configures the source application to export data in a supported format to an OTLP endpoint, on either an OTLP receiver or the Splunk Observability Cloud back end. Automatic instrumentation is available for applications written in Java, Node.js, .NET, Go, Python, Ruby, and PHP and automatically collects telemetry data for code written using supported libraries in each language. For more information, see :ref:`get-started-application`.
+Splunk Observability Cloud released the following new features and enhancements on October 1, 2024. This is not an exhaustive list of changes in the observability ecosystem. For a detailed breakdown of changes in versioned components, see the :ref:`list of changelogs <changelogs>`.
+
+.. _loc-2024-10-01:
+
+Log Observer Connect
+====================
+
+.. list-table::
+   :header-rows: 1
+   :widths: 1 2
+   :width: 100%
+
+   * - New feature or enhancement
+     - Description
+   * - Splunk virtual compute (SVC) optimization
+     - You can optimize SVC, resulting in performance improvements and cost savings, by using new :guilabel:`Play`, :guilabel:`Pause`, and :guilabel:`Run` search buttons in the UI. The default limit is 150,000 logs. For more information, see :ref:`logs-keyword`.
+
+.. _ingest-2024-10-01:
+
+Data ingest
+===========
+
+.. list-table::
+   :header-rows: 1
+   :widths: 1 2
+   :width: 100%
+
+   * - New feature or enhancement
+     - Description
+   * - Kubernetes control plane metrics
+     - In a continued effort to replace Smart Agent monitors with OpenTelemetry Collector receivers, a collection of Kubernetes control plane metrics is available using OpenTelemetry Prometheus receivers that target Prometheus endpoints. For more information, see :ref:`kubernetes-control-plane-prometheus`.
+
+.. _data-mngt-2024-10-01:
+
+Data management
+===============
+
+.. list-table::
+   :header-rows: 1
+   :widths: 1 2
+   :width: 100%
+
+   * - New feature or enhancement
+     - Description
+   * - Data retention for archived metrics extended from 8 to 31 days
+     - To facilitate long-term data and historical trend analysis, you can store archived metrics for up to 31 days. You can also customize your restoration time window when creating exception rules.
+   * - Terraform implementation
+     - You can use Terraform to archive metrics and create exception rules, such as routing a subset of metrics to the real-time tier rather than the archival tier.
+
+.. _slo-2024-10-01:
+
+Service level objective (SLO)
+=============================
+
+.. list-table::
+   :header-rows: 1
+   :widths: 1 2
+   :width: 100%
+
+   * - New feature or enhancement
+     - Description
+   * - SignalFlow editor for custom metrics SLO
+     - You can use SignalFlow to define metrics and filters when creating a custom metric SLO. For more information, see :ref:`create-slo`. This feature was released on October 2, 2024.