
Commit 33dea21

Author: Tracey Carter
Commit message: resolved merge conflict
2 parents: 762d59d + 04139ee

13 files changed (+199, -42 lines)


admin/subscription-usage/synthetics-usage.rst

Lines changed: 2 additions & 1 deletion
@@ -25,7 +25,8 @@ Splunk Synthetic Monitoring offers metrics you can use to track your subscription
 - Total number of synthetic runs by organization. To filter by test type:
 - ``test_type=browser``
 - ``test_type=API``
-- ``test_type=uptime``
+- ``test_type=http``
+- ``test_type=port``


 See also
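
As a quick illustration of what the new ``test_type`` values enable, here is a minimal SignalFlow sketch that charts synthetic runs per test type. It assumes the run metric is exposed as ``synthetics.run.count`` with a ``test_type`` dimension (the metric name is borrowed from the SLO scenario later in this commit), so treat the exact metric and dimension names as assumptions rather than a documented usage query.

.. code-block:: python

   # Sketch only: count synthetic runs for the new HTTP and port test types.
   # Assumes a 'synthetics.run.count' metric with a 'test_type' dimension.
   http_runs = data('synthetics.run.count', filter=filter('test_type', 'http'))
   port_runs = data('synthetics.run.count', filter=filter('test_type', 'port'))

   http_runs.sum().publish('http_runs')
   port_runs.sum().publish('port_runs')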

alerts-detectors-notifications/slo/create-slo.rst

Lines changed: 15 additions & 14 deletions
@@ -19,7 +19,7 @@ Follow these steps to create an SLO.
 #. From the landing page of Splunk Observability Cloud, go to :strong:`Detectors & SLOs`.
 #. Select the :strong:`SLOs` tab.
 #. Select :guilabel:`Create SLO`.
-#. Configure the service level indicator (SLI) for your SLO.
+#. Configure the service level indicator (SLI) for your SLO. You can use a service or any metric of your choice as the system health indicator.

 To use a service as the system health indicator for your SLI configuration, follow these steps:

@@ -46,21 +46,22 @@ Follow these steps to create an SLO.
 * - :guilabel:`Filters`
   - Enter any additional dimension names and values you want to apply this SLO to. Alternatively, use the ``NOT`` filter, represented by an exclamation point ( ! ), to exclude any dimension values from this SLO configuration.

-To use a custom metric as the system health indicator for your SLI configuration, follow these steps:
+To use a metric of your choice as the system health indicator for your SLI configuration, follow these steps:

-.. list-table::
-   :header-rows: 1
-   :widths: 40 60
-   :width: 100%
+#. For the :guilabel:`Metric type` field, select :guilabel:`Custom metric` from the dropdown menu. The SignalFlow editor appears.
+#. In the SignalFlow editor, you can see the following code sample:

-   * - :strong:`Field name`
-     - :strong:`Actions`
-   * - :guilabel:`Metric type`
-     - Select :guilabel:`Custom metric` from the dropdown menu
-   * - :guilabel:`Good events (numerator)`
-     - Search for the metric you want to use for the success request count
-   * - :guilabel:`Total events (denominator)`
-     - Search for the metric you want to use for the total request count
+   .. code-block:: python
+
+      G = data('good.metric', filter=filter('sf_error', 'false'))
+      T = data('total.metric')
+
+   * Line 1 defines ``G`` as a data stream of ``good.metric`` metric time series (MTS). The SignalFlow ``filter()`` function queries for a collection of MTS with value ``false`` for the ``sf_error`` dimension. The filter distinguishes successful requests from total requests, making ``G`` the good events variable.
+   * Line 2 defines ``T`` as a data stream of ``total.metric`` MTS. ``T`` is the total events variable.
+
+   Replace the code sample with your own SignalFlow program. You can define good events and total events variables using any metric and supported SignalFlow function. For more information, see :new-page:`Analyze data using SignalFlow <https://dev.splunk.com/observability/docs/signalflow>` in the Splunk Observability Cloud Developer Guide.
+
+#. Select appropriate variable names for the :guilabel:`Good events (numerator)` and :guilabel:`Total events (denominator)` dropdown menus.

 .. note:: Custom metric SLO works by calculating the percentage of successful requests over a given compliance period. This calculation works better for counter and histogram metrics than for gauge metrics. Gauge metrics are not suitable for custom metric SLO, so you might get confusing data when selecting gauge metrics in your configuration.
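
The committed sample only defines the ``G`` and ``T`` variables; the SLO builder derives the success percentage from whichever variables you assign to the numerator and denominator. For orientation, here is a hedged sketch of the equivalent ratio computed directly in SignalFlow, reusing the placeholder metric names from the sample above; the ratio step is an illustration, not part of the committed sample.

.. code-block:: python

   # Sketch only: the SLO builder performs this ratio once G and T are
   # assigned; computing it by hand looks roughly like this.
   G = data('good.metric', filter=filter('sf_error', 'false'))
   T = data('total.metric')

   # Percentage of good events over total events across all MTS in each stream.
   (G.sum() / T.sum() * 100).publish('sli_percentage')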

alerts-detectors-notifications/slo/custom-metric-scenario.rst

Lines changed: 13 additions & 26 deletions
@@ -17,32 +17,22 @@ Use custom metric as service level indicator (SLI)

 From the :guilabel:`Detectors & SLOs` page, Kai configures the SLI and sets up a target for their SLO. Kai follows these steps:

-#. Kai wants to use custom metrics as the system health indicators, so they select the :guilabel:`Custom metric` from the :guilabel:`Metric type` menu.
-#. Kai enters the custom metrics they want to measure in the following fields:
+#. Kai wants to use a Synthetics metric as the system health indicator, so they select :guilabel:`Custom metric` from the :guilabel:`Metric type` menu.
+#. Kai enters the following program into the SignalFlow editor:

-   .. list-table::
-      :header-rows: 1
-      :widths: 10 20 30 40
+   .. code-block:: python

-      * - Field
-        - Metric name
-        - Filters
-        - Description
+      G = data('synthetics.run.count', filter=filter('test', 'Monitoring Services - Emby check') and filter('success', 'true'))
+      T = data('synthetics.run.count', filter=filter('test', 'Monitoring Services - Emby check'))

-      * - :guilabel:`Good events (numerator)`
-        - :strong:`synthetics.run.count`
-        - Kai adds the following filters for this metric:
-
-          * :strong:`test = Emby check`
-          * :strong:`success = true`
-        - Kai uses the :strong:`success = true` filter to count the number of successful requests for the Emby service on the Buttercup Games website.
+   Kai defines variables ``G`` and ``T`` as two streams of ``synthetics.run.count`` metric time series (MTS) measuring the health of requests sent to the Emby service. To distinguish between the two data streams, Kai applies an additional filter on the ``success`` dimension in the definition for ``G``. This filter queries for a specific collection of MTS that track successful requests for the Emby service. In Kai's SignalFlow program, ``G`` is a data stream of good events and ``T`` is a data stream of total events.

-      * - :guilabel:`Total events (denominator)`
-        - :strong:`synthetics.run.count`
-        - Kai adds the following filter for this metric:
+   .. image:: /_images/images-slo/custom-metric-slo-scenario.png
+      :width: 100%
+      :alt: This image shows Kai's SLO configuration using the ``synthetics.run.count`` metric and appropriate filters.

-          * :strong:`test = Emby check`
-        - Kai uses the same metric name and the :strong:`test = Emby check` filter to track the same Synthetics Browser test. However, Kai doesn't include the :strong:`success = true` dimension filter in order to count the number of total requests for the Emby service on the Buttercup Games website.
+
+#. Kai assigns ``G`` to the :guilabel:`Good events (numerator)` dropdown menu and ``T`` to the :guilabel:`Total events (denominator)` dropdown menu.

 #. Kai enters the following fields to define a target for their SLO:

@@ -64,11 +54,6 @@ From the :guilabel:`Detectors & SLOs` page, Kai configures the SLI and sets up a

 #. Kai subscribes to receive an alert whenever there is a breach event for the SLO target.

-.. image:: /_images/images-slo/custom-metric-slo-scenario.png
-   :width: 100%
-   :alt: This image shows Kai's SLO configuration using the ``synthetics.run.count`` metric and appropriate filters.
-
-
 Summary
 =======================

@@ -80,3 +65,5 @@ Learn more
 For more information about creating an SLO, see :ref:`create-slo`.

 For more information about the Synthetics Browser test, see :ref:`browser-test`.
+
+For more information on SignalFlow, see :new-page:`Analyze data using SignalFlow <https://dev.splunk.com/observability/docs/signalflow>` in the Splunk Observability Cloud Developer Guide.
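
To make the scenario concrete, here is a hedged sketch of the success-rate signal implied by Kai's two streams. The ratio step is illustrative only: the SLO builder computes it after Kai assigns ``G`` and ``T``. The test name and dimensions are copied from the committed program above.

.. code-block:: python

   # Sketch only: Kai's committed program defines G and T; the SLO builder
   # derives the success rate. A standalone equivalent might look like this.
   emby = filter('test', 'Monitoring Services - Emby check')

   G = data('synthetics.run.count', filter=emby and filter('success', 'true'))
   T = data('synthetics.run.count', filter=emby)

   # Percentage of successful Emby checks out of all Emby checks.
   (G.sum() / T.sum() * 100).publish('emby_success_rate')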

apm/intro-to-apm.rst

Lines changed: 2 additions & 0 deletions
@@ -8,6 +8,8 @@ Introduction to Splunk APM

 Collect :ref:`traces and spans<apm-traces-spans>` to monitor your distributed applications with Splunk Application Performance Monitoring (APM). A trace is a collection of actions, or spans, that occur to complete a transaction. Splunk APM collects and analyzes every span and trace from each of the services that you have connected to Splunk Observability Cloud to give you full-fidelity access to all of your application data.

+To keep up to date with changes in APM, see the Splunk Observability Cloud :ref:`release notes <release-notes-overview>`.
+
 For scenarios using Splunk APM, see :ref:`apm-scenarios-intro`.

 .. raw:: html

index.rst

Lines changed: 13 additions & 1 deletion
@@ -259,6 +259,12 @@ Collect traces :ref:`get-started-cpp`
 :strong:`All supported integrations`
 View a list of all supported integrations :ref:`supported-data-sources`

+.. role:: icon-info
+.. rst-class:: newparawithicon
+
+:icon-info:`.` :strong:`Release notes`
+To keep up to date with changes in the products, see the Splunk Observability Cloud :ref:`release notes <release-notes-overview>`.
+
 .. ----- This comment separates the landing page from the TOC -----

 .. toctree::

@@ -886,7 +892,13 @@ View a list of all supported integrations :ref:`supported-data-sources`
 .. toctree::
    :maxdepth: 3

-   Integrations with Splunk On-Call TOGGLE <sp-oncall/spoc-integrations/integrations-main>
+   Integrations with Splunk On-Call TOGGLE <sp-oncall/spoc-integrations/integrations-main>
+
+.. toctree::
+   :caption: Release notes
+   :maxdepth: 3
+
+   Release notes overview TOGGLE <release-notes/release-notes-overview.rst>

 .. toctree::
    :caption: Reference and Legal

infrastructure/intro-to-infrastructure.rst

Lines changed: 1 addition & 0 deletions
@@ -10,6 +10,7 @@ Introduction to Splunk Infrastructure Monitoring

 Gain insights into and perform powerful, capable analytics on your infrastructure and resources across hybrid and multi-cloud environments with Splunk Infrastructure Monitoring. Infrastructure Monitoring offers support for a broad range of integrations for collecting all kinds of data, from system metrics for infrastructure components to custom data from your applications.

+To keep up to date with changes in Infrastructure Monitoring, see the Splunk Observability Cloud :ref:`release notes <release-notes-overview>`.

 ==========================================================
 Splunk Infrastructure Monitoring hierarchy

logs/lo-connect-landing.rst

Lines changed: 1 addition & 0 deletions
@@ -79,3 +79,4 @@ Splunk Log Observer Connect

 - :ref:`lo-connect-limits`

+To keep up to date with changes in Log Observer Connect, see the Splunk Observability Cloud :ref:`release notes <release-notes-overview>`.

references/glossary.rst

Lines changed: 6 additions & 0 deletions
@@ -23,6 +23,12 @@ A
 analytics
     Analytics are the mathematical functions that can be applied to a collection of data points. For a full list of analytics that can be applied in Splunk Infrastructure Monitoring, see the :ref:`analytics-ref`.

+automatic discovery
+    Automatic discovery is a feature of the Splunk Distribution of the OpenTelemetry Collector that identifies the services, such as third-party databases and web servers, running in your environment and sends telemetry data from them to Splunk Application Performance Monitoring (APM) and Infrastructure Monitoring. The Collector configures service-specific receivers that collect data from an endpoint exposed on each service. For more information, see :ref:`discovery_mode`.
+
+automatic instrumentation
+    Automatic instrumentation allows you to instrument your applications and export telemetry data without having to modify the application source files. The language-specific instrumentation agent configures the source application to export data in a supported format to an OTLP endpoint, on either an OTLP receiver or the Splunk Observability Cloud back end. Automatic instrumentation is available for applications written in Java, Node.js, .NET, Go, Python, Ruby, and PHP and automatically collects telemetry data for code written using supported libraries in each language. For more information, see :ref:`get-started-application`.
+
 C
 ==

release-notes/2024-10-01-rn.rst

Lines changed: 69 additions & 0 deletions
@@ -0,0 +1,69 @@
+.. _2024-10-01-rn:
+
+***************
+October 1, 2024
+***************
+
+Splunk Observability Cloud released the following new features and enhancements on October 1, 2024. This is not an exhaustive list of changes in the observability ecosystem. For a detailed breakdown of changes in versioned components, see the :ref:`list of changelogs <changelogs>`.
+
+.. _loc-2024-10-01:
+
+Log Observer Connect
+====================
+
+.. list-table::
+   :header-rows: 1
+   :widths: 1 2
+   :width: 100%
+
+   * - New feature or enhancement
+     - Description
+   * - Splunk virtual compute (SVC) optimization
+     - You can optimize SVC, resulting in performance improvements and cost savings, by using the new :guilabel:`Play`, :guilabel:`Pause`, and :guilabel:`Run` search buttons in the UI. The default limit is 150,000 logs. For more information, see :ref:`logs-keyword`.
+
+.. _ingest-2024-10-01:
+
+Data ingest
+===========
+
+.. list-table::
+   :header-rows: 1
+   :widths: 1 2
+   :width: 100%
+
+   * - New feature or enhancement
+     - Description
+   * - Kubernetes control plane metrics
+     - In a continued effort to replace Smart Agent monitors with OpenTelemetry Collector receivers, a collection of Kubernetes control plane metrics is available using OpenTelemetry Prometheus receivers that target Prometheus endpoints. For more information, see :ref:`kubernetes-control-plane-prometheus`.
+
+.. _data-mngt-2024-10-01:
+
+Data management
+===============
+
+.. list-table::
+   :header-rows: 1
+   :widths: 1 2
+   :width: 100%
+
+   * - New feature or enhancement
+     - Description
+   * - Data retention for archived metrics extended from 8 to 31 days
+     - To facilitate long-term data and historical trend analysis, you can store archived metrics for up to 31 days. You can also customize your restoration time window when creating exception rules.
+   * - Terraform implementation
+     - You can use Terraform to archive metrics and create exception rules, such as routing a subset of metrics to the real-time tier rather than the archival tier.
+
+.. _slo-2024-10-01:
+
+Service level objective (SLO)
+=============================
+
+.. list-table::
+   :header-rows: 1
+   :widths: 1 2
+   :width: 100%
+
+   * - New feature or enhancement
+     - Description
+   * - SignalFlow editor for custom metrics SLO
+     - You can use SignalFlow to define metrics and filters when creating a custom metric SLO. For more information, see :ref:`create-slo`. This feature was released on October 2, 2024.