diff --git a/_images/images-slo/custom-metric-slo-scenario.png b/_images/images-slo/custom-metric-slo-scenario.png index b46cffeba..3c9296689 100644 Binary files a/_images/images-slo/custom-metric-slo-scenario.png and b/_images/images-slo/custom-metric-slo-scenario.png differ diff --git a/_images/logs/LogObserverEnhancementsUI.png b/_images/logs/LogObserverEnhancementsUI.png new file mode 100644 index 000000000..d5d8e853e Binary files /dev/null and b/_images/logs/LogObserverEnhancementsUI.png differ diff --git a/_images/logs/lo-openinsplunk.png b/_images/logs/lo-openinsplunk.png index 089eb2782..1e3b162b3 100644 Binary files a/_images/logs/lo-openinsplunk.png and b/_images/logs/lo-openinsplunk.png differ diff --git a/_includes/logs/query-logs.rst b/_includes/logs/query-logs.rst index 0f630e954..adcb5f7d4 100644 --- a/_includes/logs/query-logs.rst +++ b/_includes/logs/query-logs.rst @@ -1,12 +1,27 @@ -#. Navigate to :guilabel:`Log Observer`. In the content control bar, enter a time range in the time picker if you know it. -#. Select :guilabel:`Index` next to :guilabel:`Saved Queries`, then select the indexes you want to query. If you want to search your Splunk platform (Splunk Cloud Platform or Splunk Enterprise) data, select the integration for the appropriate Splunk platform instance first, then select which index you want to query in Log Observer. You can only query indexes from one Splunk platform instance or Splunk Observability Cloud instance at a time. You can only query Splunk platform indexes if you have the appropriate role and permissions in the Splunk platform instance. Select :guilabel:`Apply`. -#. In the content control bar next to the index picker, select :guilabel:`Add Filter`. -#. To search on a keyword, select the :guilabel:`Keyword` tab, type the keyword or phrase you want to search on, then press Enter. If you want to search on a field, select the :guilabel:`Fields` tab, enter the field name, then press Enter. -#. To continue adding keywords or fields to the search, select :guilabel:`Add Filter`. -#. Review the top values for your query on the the :guilabel:`Fields` panel on right. This list includes the count of each value in the log records. To include log records with a particular value, select the field name, then select ``=``. To exclude log records with a particular value from your results, select the field name, then select ``!=``. To see the full list of values and distribution for this field, select :guilabel:`Explore all values`. -#. Optionally, if you are viewing Splunk platform (Splunk Cloud Platform or Splunk Enterprise) data, you can open your query results in the Splunk platform to use SPL to further filter or work with the query results. You must have an account in Splunk platform. To open the log results in the Splunk platform, select the :guilabel:`Open in Splunk platform` icon at the top of the Logs table. +1. Navigate to :guilabel:`Log Observer`. Upon opening, Log Observer runs an initial search of all indexes you have access to and returns the most recent 150,000 logs. The search then defaults to Pause in order to save Splunk Virtual Compute (SVC) resources. Control your SVC resources, which impact performance and cost, by leaving your search on Pause when you are not monitoring incoming logs, and select Play when you want to see more incoming logs. + + .. image:: /_images/logs/LogObserverEnhancementsUI.png + :width: 90% + :alt: The Log Observer UI is displayed. + + +2. 
In the content control bar, enter a time range in the time picker if you want to see logs from a specific historical period. To select a time range, you must select :guilabel:`Unlimited` from the :guilabel:`Search Records` field in step 5 below. When you select :guilabel:`150,000`, Log Observer returns only the most recent 150,000 logs regardless of the time range you select. + +3. Select :guilabel:`Index` next to :guilabel:`Saved Queries`, then select the indexes you want to query. When you do not select an index, Log Observer runs your query on all indexes to which you have access. If you want to search your Splunk platform (Splunk Cloud Platform or Splunk Enterprise) data, select the integration for the appropriate Splunk platform instance first, then select which index you want to query in Log Observer. You can query indexes from only one Splunk platform instance or Splunk Observability Cloud instance at a time. You can query Splunk platform indexes only if you have the appropriate role and permissions. + +4. In the content control bar next to the index picker, select :guilabel:`Add Filter`. Select the :guilabel:`Keyword` tab to search on a keyword or phrase. Select the :guilabel:`Fields` tab to search on a field. Then press Enter. To continue adding keywords or fields to the search, select :guilabel:`Add Filter` again. + +5. Select :guilabel:`Unlimited` or :guilabel:`150,000` from the :guilabel:`Search Records` field to determine the number of logs you want to return on a single search. Select :guilabel:`150,000` to optimize your Splunk Virtual Compute (SVC) resources and control performance and cost. However, only the most recent 150,000 logs display. To see a specific time range, you must select :guilabel:`Unlimited`. + +6. To narrow your search, use the :guilabel:`Group by` drop-down list to select the field or fields by which you want to group your results, then select :guilabel:`Apply`. To learn more about aggregations, see :ref:`logs-aggregations`. + +7. Select :guilabel:`Run search`. + +8. Review the top values for your query on the :guilabel:`Fields` panel on the right. This list includes the count of each value in the log records. To include log records with a particular value, select the field name, then select ``=``. To exclude log records with a particular value from your results, select the field name, then select ``!=``. To see the full list of values and distribution for this field, select :guilabel:`Explore all values`. + +9. Optionally, if you are viewing Splunk platform data, you can open your query results in the Splunk platform and use SPL to further query the resulting logs. You must have an account in the Splunk platform. To open the log results in the Splunk platform, select the :guilabel:`Open in Splunk platform` icon at the top of the Logs table. .. image:: /_images/logs/lo-openinsplunk.png - :width: 100% + :width: 90% :alt: The Open in Splunk platform icon is at the top, right-hand side of the Logs table. \ No newline at end of file diff --git a/admin/references/data-retention.rst b/admin/references/data-retention.rst index bdea4bfa4..e4bf4a598 100644 --- a/admin/references/data-retention.rst +++ b/admin/references/data-retention.rst @@ -87,7 +87,8 @@ The following table shows the retention time period for each data type in APM. Data retention in Log Observer ============================================ -The retention period for indexed logs in Splunk Log Observer is 30 days.
If you send logs to S3 through the Infinite Logging feature, then the data retention period depends on the policy you purchased for your Amazon S3 bucket. To learn how to set up Infinite Logging rules, see :ref:`logs-infinite`. +The retention period for indexed logs in Splunk Log Observer is 30 days. + .. _oncall-data-retention: diff --git a/admin/subscription-usage/subscription-usage-overview.rst b/admin/subscription-usage/subscription-usage-overview.rst index 7d8da0a3d..e1d6ddf90 100644 --- a/admin/subscription-usage/subscription-usage-overview.rst +++ b/admin/subscription-usage/subscription-usage-overview.rst @@ -64,6 +64,6 @@ Learn more at :ref:`per-product-limits` and the following docs: * Data ingest can be limited at the source by Cloud providers. You can track this with the metric ``sf.org.num.ServiceClientCallCountThrottles``. -* :ref:`Log Observer Connect limits ` and :ref:`Log Observer limits ` +* :ref:`Log Observer Connect limits ` * :ref:`System limits for Splunk RUM ` \ No newline at end of file diff --git a/admin/subscription-usage/synthetics-usage.rst b/admin/subscription-usage/synthetics-usage.rst index 875f0baac..d157de765 100644 --- a/admin/subscription-usage/synthetics-usage.rst +++ b/admin/subscription-usage/synthetics-usage.rst @@ -25,7 +25,8 @@ Splunk Synthetic Monitoring offers metrics you can use to track your subscriptio - Total number of synthetic runs by organization. To filter by test type: - ``test_type=browser`` - ``test_type=API`` - - ``test_type=uptime`` + - ``test_type=http`` + - ``test_type=port`` See also diff --git a/alerts-detectors-notifications/slo/create-slo.rst b/alerts-detectors-notifications/slo/create-slo.rst index 17a21f83a..54897b55f 100644 --- a/alerts-detectors-notifications/slo/create-slo.rst +++ b/alerts-detectors-notifications/slo/create-slo.rst @@ -19,7 +19,7 @@ Follow these steps to create an SLO. #. From the landing page of Splunk Observability Cloud, go to :strong:`Detectors & SLOs`. #. Select the :strong:`SLOs` tab. #. Select :guilabel:`Create SLO`. -#. Configure the service level indicator (SLI) for your SLO. +#. Configure the service level indicator (SLI) for your SLO. You can use a service or any metric of your choice as the system health indicator. To use a service as the system health indicator for your SLI configuration, follow these steps: @@ -46,21 +46,22 @@ Follow these steps to create an SLO. * - :guilabel:`Filters` - Enter any additional dimension names and values you want to apply this SLO to. Alternatively, use the ``NOT`` filter, represented by an exclamation point ( ! ), to exclude any dimension values from this SLO configuration. - To use a custom metric as the system health indicator for your SLI configuration, follow these steps: + To use a metric of your choice as the system health indicator for your SLI configuration, follow these steps: - .. list-table:: - :header-rows: 1 - :widths: 40 60 - :width: 100% + #. For the :guilabel:`Metric type` field, select :guilabel:`Custom metric` from the dropdown menu. The SignalFlow editor appears. + #. In the SignalFlow editor, you can see the following code sample: - * - :strong:`Field name` - - :strong:`Actions` - * - :guilabel:`Metric type` - - Select :guilabel:`Custom metric` from the dropdown menu - * - :guilabel:`Good events (numerator)` - - Search for the metric you want to use for the success request count - * - :guilabel:`Total events (denominator)` - - Search for the metric you want to use for the total request count + .. 
code-block:: python + + G = data('good.metric', filter=filter('sf_error', 'false')) + T = data('total.metric') + + * Line 1 defines ``G`` as a data stream of ``good.metric`` metric time series (MTS). The SignalFlow ``filter()`` function queries for a collection of MTS with value ``false`` for the ``sf_error`` dimension. The filter distinguishes successful requests from total requests, making ``G`` the good events variable. + * Line 2 defines ``T`` as a data stream of ``total.metric`` MTS. ``T`` is the total events variable. + + Replace the code sample with your own SignalFlow program. You can define good events and total events variables using any metric and supported SignalFlow function. For more information, see :new-page:`Analyze data using SignalFlow ` in the Splunk Observability Cloud Developer Guide. + + #. Select the appropriate variable names for the :guilabel:`Good events (numerator)` and :guilabel:`Total events (denominator)` dropdown menus. .. note:: Custom metric SLO works by calculating the percentage of successful requests over a given compliance period. This calculation works better for counter and histogram metrics than for gauge metrics. Gauge metrics are not suitable for custom metric SLO, so you might get confusing data when selecting gauge metrics in your configuration. diff --git a/alerts-detectors-notifications/slo/custom-metric-scenario.rst b/alerts-detectors-notifications/slo/custom-metric-scenario.rst index 89925bad5..31335a117 100644 --- a/alerts-detectors-notifications/slo/custom-metric-scenario.rst +++ b/alerts-detectors-notifications/slo/custom-metric-scenario.rst @@ -17,32 +17,22 @@ Use custom metric as service level indicator (SLI) From the :guilabel:`Detectors & SLOs` page, Kai configures the SLI and sets up a target for their SLO. Kai follows these steps: -#. Kai wants to use custom metrics as the system health indicators, so they select the :guilabel:`Custom metric` from the :guilabel:`Metric type` menu. -#. Kai enters the custom metrics they want to measure in the following fields: +#. Kai wants to use a Synthetics metric as the system health indicator, so they select :guilabel:`Custom metric` from the :guilabel:`Metric type` menu. +#. Kai enters the following program into the SignalFlow editor: - .. list-table:: - :header-rows: 1 - :widths: 10 20 30 40 + .. code-block:: python - * - Field - - Metric name - - Filters - - Description + G = data('synthetics.run.count', filter=filter('test', 'Monitoring Services - Emby check') and filter('success', 'true')) + T = data('synthetics.run.count', filter=filter('test', 'Monitoring Services - Emby check')) - * - :guilabel:`Good events (numerator)` - - :strong:`synthetics.run.count` - - Kai adds the following filters for this metric: - - * :strong:`test = Emby check` - * :strong:`success = true` - - Kai uses the :strong:`success = true` filter to count the number of successful requests for the Emby service on the Buttercup Games website. + Kai defines variables ``G`` and ``T`` as two streams of ``synthetics.run.count`` metric time series (MTS) measuring the health of requests sent to the Emby service. To distinguish between the two data streams, Kai applies an additional filter on the ``success`` dimension in the definition for ``G``. This filter queries for a specific collection of MTS that track successful requests for the Emby service. In Kai's SignalFlow program, ``G`` is a data stream of good events and ``T`` is a data stream of total events.
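+   As a quick sanity check, the same two streams can also be combined in a chart to preview the success percentage that the SLO measures. The following sketch is illustrative and is not part of the required SLO configuration; the ``sum()`` aggregation and the ``emby_success_rate`` label are assumptions, not values from the original scenario:

+   .. code-block:: python

+      G = data('synthetics.run.count', filter=filter('test', 'Monitoring Services - Emby check') and filter('success', 'true'))
+      T = data('synthetics.run.count', filter=filter('test', 'Monitoring Services - Emby check'))
+      # Aggregate each stream, then publish the good-to-total ratio as a percentage.
+      (G.sum() / T.sum() * 100).publish('emby_success_rate')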
- * - :guilabel:`Total events (denominator)` - - :strong:`synthetics.run.count` - - Kai adds the following filter for this metric: + .. image:: /_images/images-slo/custom-metric-slo-scenario.png + :width: 100% + :alt: This image shows Kai's SLO configuration using the ``synthetics.run.count`` metric and appropriate filters. - * :strong:`test = Emby check` - - Kai uses the same metric name and the :strong:`test = Emby check` filter to track the same Synthetics Browser test. However, Kai doesn't include the :strong:`success = true` dimension filter in order to count the number of total requests for the Emby service on the Buttercup Games website. + +#. Kai assigns ``G`` to the :guilabel:`Good events (numerator)` dropdown menu and ``T`` to the :guilabel:`Total events (denominator)` dropdown menu. #. Kai enters the following fields to define a target for their SLO: @@ -64,11 +54,6 @@ From the :guilabel:`Detectors & SLOs` page, Kai configures the SLI and sets up a #. Kai subscribes to receive an alert whenever there is a breach event for the SLO target. -.. image:: /_images/images-slo/custom-metric-slo-scenario.png - :width: 100% - :alt: This image shows Kai's SLO configuration using the ``synthetics.run.count`` metric and appropriate filters. - - Summary ======================= @@ -80,3 +65,5 @@ Learn more For more information about creating an SLO, see :ref:`create-slo`. For more information about the Synthetics Browser test, see :ref:`browser-test`. + +For more information on SignalFlow, see :new-page:`Analyze data using SignalFlow ` in the Splunk Observability Cloud Developer Guide. \ No newline at end of file diff --git a/apm/apm-scenarios/troubleshoot-business-workflows.rst b/apm/apm-scenarios/troubleshoot-business-workflows.rst index 5411ec245..87ea9e114 100644 --- a/apm/apm-scenarios/troubleshoot-business-workflows.rst +++ b/apm/apm-scenarios/troubleshoot-business-workflows.rst @@ -81,4 +81,4 @@ Learn more * For details about business workflows, see :ref:`apm-workflows`. * For details about using Related Content, see :ref:`get-started-relatedcontent`. -* For more information about using Splunk Log Observer to detect the source of problems, see :ref:`get-started-logs`. +* For more information about using Splunk Log Observer Connect to detect the source of problems, see :ref:`logs-intro-logconnect`. diff --git a/apm/apm-scenarios/troubleshoot-tag-spotlight.rst b/apm/apm-scenarios/troubleshoot-tag-spotlight.rst index 3abe4b2e2..8f40b9779 100644 --- a/apm/apm-scenarios/troubleshoot-tag-spotlight.rst +++ b/apm/apm-scenarios/troubleshoot-tag-spotlight.rst @@ -81,4 +81,4 @@ Learn more * For details about Tag Spotlight, see :ref:`apm-tag-spotlight`. * For details about using Related Content, see :ref:`get-started-relatedcontent`. -* For more information about using Splunk Log Observer to detect the source of problems, see :ref:`get-started-logs`. +* For more information about using Splunk Log Observer Connect to detect the source of problems, see :ref:`logs-intro-logconnect`. diff --git a/apm/intro-to-apm.rst b/apm/intro-to-apm.rst index 20b43ea66..5da7d12a3 100644 --- a/apm/intro-to-apm.rst +++ b/apm/intro-to-apm.rst @@ -8,6 +8,8 @@ Introduction to Splunk APM Collect :ref:`traces and spans` to monitor your distributed applications with Splunk Application Performance Monitoring (APM). A trace is a collection of actions, or spans, that occur to complete a transaction. 
Splunk APM collects and analyzes every span and trace from each of the services that you have connected to Splunk Observability Cloud to give you full-fidelity access to all of your application data. +To keep up to date with changes in APM, see the Splunk Observability Cloud :ref:`release notes `. + For scenarios using Splunk APM, see :ref:`apm-scenarios-intro`. .. raw:: html diff --git a/gdi/get-data-in/application/otel-dotnet/sfx/instrumentation/connect-traces-logs.rst b/gdi/get-data-in/application/otel-dotnet/sfx/instrumentation/connect-traces-logs.rst index 0c782d940..fb06d0a74 100644 --- a/gdi/get-data-in/application/otel-dotnet/sfx/instrumentation/connect-traces-logs.rst +++ b/gdi/get-data-in/application/otel-dotnet/sfx/instrumentation/connect-traces-logs.rst @@ -159,7 +159,6 @@ The instrumentation uses the underscore character as separator for field names ( - ``service_version`` to ``service.version`` - ``deployment_environment`` to ``deployment.environment`` -See :ref:`logs-processors` for more information on how to define log transformation rules. ILogger ------------------------- diff --git a/gdi/get-data-in/gdi-guide/additional-resources.rst b/gdi/get-data-in/gdi-guide/additional-resources.rst index c487664c2..8ce2f0920 100644 --- a/gdi/get-data-in/gdi-guide/additional-resources.rst +++ b/gdi/get-data-in/gdi-guide/additional-resources.rst @@ -36,5 +36,5 @@ See the following resources for more information about each component in Splunk - :ref:`get-started-apm` - :ref:`get-started-infrastructure` -- :ref:`get-started-logs` +- :ref:`logs-intro-logconnect` - :ref:`get-started-rum` \ No newline at end of file diff --git a/gdi/get-data-in/get-data-in.rst b/gdi/get-data-in/get-data-in.rst index 6fffb3fdc..e36fb20ef 100644 --- a/gdi/get-data-in/get-data-in.rst +++ b/gdi/get-data-in/get-data-in.rst @@ -21,7 +21,7 @@ Use Splunk Observability Cloud to achieve full-stack observability of all your d - :ref:`Splunk Infrastructure Monitoring ` - :ref:`Splunk Application Performance Monitoring (APM) ` - :ref:`Splunk Real User Monitoring (RUM) ` -- :ref:`Splunk Log Observer ` and :ref:`Log Observer Connect ` +- :ref:`Splunk Log Observer Connect ` This guide provides four chapters that guide you through the process of setting up each component of Splunk Observability Cloud. diff --git a/gdi/monitors-cache/opcache.rst b/gdi/monitors-cache/opcache.rst index afe697ae3..4aaab9de2 100644 --- a/gdi/monitors-cache/opcache.rst +++ b/gdi/monitors-cache/opcache.rst @@ -6,10 +6,7 @@ OPcache .. meta:: :description: Use this Splunk Observability Cloud integration for the Collectd OPcache monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``collectd/opcache`` monitor type to retrieve metrics from OPcache using -the ``opcache_get_status()`` function, which improves PHP performance by -storing precompiled script bytecode in shared memory. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``collectd/opcache`` monitor type to retrieve metrics from OPcache using the ``opcache_get_status()`` function, which improves PHP performance by storing precompiled script bytecode in shared memory. This integration is available on Kubernetes and Linux. 
diff --git a/gdi/monitors-databases/apache-spark.rst b/gdi/monitors-databases/apache-spark.rst index 465a3d7c2..546de3f19 100644 --- a/gdi/monitors-databases/apache-spark.rst +++ b/gdi/monitors-databases/apache-spark.rst @@ -17,11 +17,7 @@ endpoints: - Mesos - Hadoop YARN -This collectd plugin is not compatible with Kubernetes cluster mode. You need -to select distinct monitor configurations and discovery rules -for primary and worker processes. For the primary configuration, set -``isMaster`` to ``true``. When you run Apache Spark on Hadoop YARN, this -integration can only report application metrics from the primary node. +This collectd plugin is not compatible with Kubernetes cluster mode. You need to select distinct monitor configurations and discovery rules for primary and worker processes. For the primary configuration, set ``isMaster`` to ``true``. When you run Apache Spark on Hadoop YARN, this integration can only report application metrics from the primary node. This integration is only available on Linux. diff --git a/gdi/monitors-databases/etcd.rst b/gdi/monitors-databases/etcd.rst index 0eb62f5a0..c86242bb0 100644 --- a/gdi/monitors-databases/etcd.rst +++ b/gdi/monitors-databases/etcd.rst @@ -6,11 +6,7 @@ etcd server .. meta:: :description: Use this Splunk Observability Cloud integration for the etcd monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -etcd monitor type to report etcd server metrics under the ``/metrics`` -path on its client port. Optionally, you can ediy location using -``--listen-metrics-urls``. This integration only collects metrics from -the Prometheus endpoint. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the etcd monitor type to report etcd server metrics under the ``/metrics`` path on its client port. Optionally, you can edit the location using ``--listen-metrics-urls``. This integration only collects metrics from the Prometheus endpoint. Benefits -------- diff --git a/gdi/monitors-databases/hadoop.rst b/gdi/monitors-databases/hadoop.rst index a67c4e6de..69bf270d0 100644 --- a/gdi/monitors-databases/hadoop.rst +++ b/gdi/monitors-databases/hadoop.rst @@ -6,7 +6,7 @@ Hadoop .. meta:: :description: Use this Splunk Observability Cloud integration for the hadoop monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the Hadoop monitor type to collect metrics from the following components of a Hadoop 2.0 or higher cluster: @@ -17,8 +17,7 @@ a Hadoop 2.0 or higher cluster: - MapReduce Jobs This integration uses the REST API. If a remote JMX port is exposed in -the Hadoop cluster, then you can also configure the ``hadoopjmx`` -monitor to collect additional metrics about the Hadoop cluster. +the Hadoop cluster, then you can also configure the ``hadoopjmx`` monitor to collect additional metrics about the Hadoop cluster. This integration is only available on Kubernetes and Linux. diff --git a/gdi/monitors-databases/logparser.rst b/gdi/monitors-databases/logparser.rst index 55fbb5193..df6627bcd 100644 --- a/gdi/monitors-databases/logparser.rst +++ b/gdi/monitors-databases/logparser.rst @@ -7,8 +7,7 @@ Logparser :description: Use this Splunk Observability Cloud integration for the telegraf/logparser plugin monitor. 
See benefits, install, configuration, and metrics. -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``telegraf/logparser`` monitor type to tail log files. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``telegraf/logparser`` monitor type to tail log files. This integration is based on the Telegraf logparser plugin, and all emitted metrics have the plugin dimension set to ``telegraf-logparser``. diff --git a/gdi/monitors-databases/logstash.rst b/gdi/monitors-databases/logstash.rst index 02ecd8099..5b7d05639 100644 --- a/gdi/monitors-databases/logstash.rst +++ b/gdi/monitors-databases/logstash.rst @@ -6,9 +6,7 @@ Logstash .. meta:: :description: Use this Splunk Observability Cloud integration for the Logstash monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``logstash`` monitor type to monitor the health and performance of -Logstash deployments through Logstash Monitoring APIs. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``logstash`` monitor type to monitor the health and performance of Logstash deployments through Logstash Monitoring APIs. Installation ------------ diff --git a/gdi/monitors-databases/postgresql.rst b/gdi/monitors-databases/postgresql.rst index 507e1dd83..1245241ef 100644 --- a/gdi/monitors-databases/postgresql.rst +++ b/gdi/monitors-databases/postgresql.rst @@ -12,9 +12,7 @@ PostgreSQL (deprecated) To monitor your PostgreSQL databases you can use the native OpenTelemetry PostgreSQL receiver instead. See more at :ref:`postgresql-receiver`. -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``postgresql`` monitor type to pull metrics from all PostgreSQL -databases from a specific Postgres server instance using SQL queries. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``postgresql`` monitor type to pull metrics from all PostgreSQL databases from a specific Postgres server instance using SQL queries. Configuration settings ---------------------- diff --git a/gdi/monitors-databases/redis.rst b/gdi/monitors-databases/redis.rst index 676922a95..a4c8a32bf 100644 --- a/gdi/monitors-databases/redis.rst +++ b/gdi/monitors-databases/redis.rst @@ -10,8 +10,7 @@ Redis (deprecated) To monitor your Redis databases, you can instead use the native OpenTelemetry Redis receiver. To learn more, see :ref:`redis-receiver`. -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``redis`` monitor type to capture the following metrics: +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``redis`` monitor type to capture the following metrics: - Memory used - Commands processed per second diff --git a/gdi/monitors-gitlab/gitlab.rst b/gdi/monitors-gitlab/gitlab.rst index f493b8abd..c6e99de65 100644 --- a/gdi/monitors-gitlab/gitlab.rst +++ b/gdi/monitors-gitlab/gitlab.rst @@ -24,8 +24,7 @@ This integration allows you to monitor the following: - GitLab Sidekiq: It scrapes the Gitlab Sidekiq Prometheus Exporter. - GitLab Unicorn server: It comes with a Prometheus exporter. The IP address of the container or host needs to be allowed for the - collector to access the endpoint. See the ``IP allowlist`` - documentation on GitLab Docs for more information. + collector to access the endpoint. 
See the ``IP allowlist`` documentation on GitLab Docs for more information. - GitLab Webservice: It provides the GitLab Rails webserver with two Webservice workers per pod. - GitLab Workhorse: The GitLab service that handles slow HTTP requests. Workhorse includes a built-in Prometheus exporter that this monitor diff --git a/gdi/monitors-hosts/collectd-plugin.rst b/gdi/monitors-hosts/collectd-plugin.rst index e150e74ec..58d26297e 100644 --- a/gdi/monitors-hosts/collectd-plugin.rst +++ b/gdi/monitors-hosts/collectd-plugin.rst @@ -6,9 +6,7 @@ Collectd custom plugin .. meta:: :description: Use this Splunk Observability Cloud integration for the Collectd custom plugin monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``collectd/custom`` monitor type to customize the collectd configuration -of your managed collectd instances. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``collectd/custom`` monitor type to customize the configuration of your managed collectd instances. This integration is only available on Kubernetes and Linux. @@ -128,7 +126,7 @@ integration: Metrics ------- -The Splunk Distribution of OpenTelemetry Collector does not do any +The Splunk Distribution of the OpenTelemetry Collector does not do any built-in filtering of metrics coming out of this integration. Troubleshooting diff --git a/gdi/monitors-hosts/collectd-uptime.rst b/gdi/monitors-hosts/collectd-uptime.rst index 8d864bc3c..81ccc5958 100644 --- a/gdi/monitors-hosts/collectd-uptime.rst +++ b/gdi/monitors-hosts/collectd-uptime.rst @@ -6,10 +6,7 @@ Collectd uptime .. meta:: :description: Use this Splunk Observability Cloud integration for the Collectd Uptime monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``collectd/uptime`` monitor type to send a single metric of the total -number of seconds the host has been up, using the collectd uptime -plugin. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``collectd/uptime`` monitor type to send a single metric of the total number of seconds the host has been up, using the collectd uptime plugin. This integration is only available on Kubernetes and Linux. diff --git a/gdi/monitors-hosts/coredns.rst b/gdi/monitors-hosts/coredns.rst index 8aca916e3..cf736276d 100644 --- a/gdi/monitors-hosts/coredns.rst +++ b/gdi/monitors-hosts/coredns.rst @@ -6,11 +6,11 @@ CoreDNS .. meta:: :description: Use this Splunk Observability Cloud integration for the CoreDNS monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -CoreDNS monitor type to scrape Prometheus metrics exposed by CoreDNS. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the CoreDNS monitor type to scrape Prometheus metrics exposed by CoreDNS. -The default port for these metrics are exposed on port 9153, at the -``/metrics`` path. +By default, these metrics are exposed on port 9153, at the ``/metrics`` path. + +.. note:: If you're using the Splunk Distribution of the OpenTelemetry Collector and want to collect Prometheus metrics, see :ref:`prometheus-generic`.
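+To confirm that the CoreDNS metrics endpoint is reachable before configuring the monitor, you can run a quick check like the following sketch. The address ``10.0.0.10`` is a placeholder assumption; substitute the IP address of your CoreDNS pod or service:

+.. code-block:: python

+   import urllib.request

+   # Placeholder address with the default CoreDNS metrics port; adjust for your cluster.
+   url = "http://10.0.0.10:9153/metrics"
+   with urllib.request.urlopen(url, timeout=5) as response:
+       # Print the beginning of the Prometheus exposition output to verify the endpoint responds.
+       print(response.read().decode("utf-8")[:500])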
Benefits -------- diff --git a/gdi/monitors-hosts/docker.rst b/gdi/monitors-hosts/docker.rst index fb9b72d75..ce0f3841d 100644 --- a/gdi/monitors-hosts/docker.rst +++ b/gdi/monitors-hosts/docker.rst @@ -6,10 +6,7 @@ Docker Containers .. meta:: :description: Use this Splunk Observability Cloud integration for the Docker monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``docker-container-stats`` monitor type to read container stats from a -Docker API server. Note it doesn't currently support CPU share/quota -metrics. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``docker-container-stats`` monitor type to read container stats from a Docker API server. Note it doesn't currently support CPU share/quota metrics. This integration is available for Kubernetes, Linux, and Windows. diff --git a/gdi/monitors-hosts/elasticsearch.rst b/gdi/monitors-hosts/elasticsearch.rst index 6ae2da26f..12a51b277 100644 --- a/gdi/monitors-hosts/elasticsearch.rst +++ b/gdi/monitors-hosts/elasticsearch.rst @@ -6,14 +6,9 @@ Elasticsearch stats .. meta:: :description: Use this Splunk Observability Cloud integration for the Elasticsearch monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -Elasticsearch monitor type to collect node, cluster, and index level -stats from Elasticsearch. - -By default, this integration only collects cluster-level and index-level -stats from the current primary in an Elasticsearch cluster. You can -override this using the ``clusterHealthStatsMasterOnly`` and -``indexStatsMasterOnly`` configuration options respectively. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the Elasticsearch monitor type to collect node, cluster, and index level stats from Elasticsearch. + +By default, this integration only collects cluster-level and index-level stats from the current primary in an Elasticsearch cluster. You can override this using the ``clusterHealthStatsMasterOnly`` and ``indexStatsMasterOnly`` configuration options respectively. Benefits -------- diff --git a/gdi/monitors-hosts/jenkins.rst b/gdi/monitors-hosts/jenkins.rst index e5e7220f2..9f8a2137b 100644 --- a/gdi/monitors-hosts/jenkins.rst +++ b/gdi/monitors-hosts/jenkins.rst @@ -6,13 +6,12 @@ Jenkins .. meta:: :description: Use this Splunk Observability Cloud integration for the Jenkins monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``jenkins`` monitor type to collect metrics from Jenkins instances by hitting the following endpoints: -- Job metrics with the ``../api/json`` endpoint. -- Codahale or Dropwizard JVM metrics with the - ``metrics//..`` endpoint. +- Job metrics with the ``../api/json`` endpoint. +- Codahale or Dropwizard JVM metrics with the ``metrics//..`` endpoint. This integration is only available on Kubernetes and Linux. diff --git a/gdi/monitors-hosts/microsoft-windows-iis.rst b/gdi/monitors-hosts/microsoft-windows-iis.rst index 30aed774a..f8101c22e 100644 --- a/gdi/monitors-hosts/microsoft-windows-iis.rst +++ b/gdi/monitors-hosts/microsoft-windows-iis.rst @@ -6,8 +6,7 @@ Microsoft Windows IIS .. 
meta:: :description: Use this Splunk Observability Cloud integration for the Windows IIS monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``windows-iis`` monitor type to report metrics for Windows Internet +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``windows-iis`` monitor type to report metrics for Windows Internet Information Services (IIS) and drive the Windows IIS dashboard content. Windows Performance Counters are the underlying source for these diff --git a/gdi/monitors-hosts/nginx.rst b/gdi/monitors-hosts/nginx.rst index e6268c1c9..b3bc117de 100644 --- a/gdi/monitors-hosts/nginx.rst +++ b/gdi/monitors-hosts/nginx.rst @@ -6,8 +6,7 @@ NGINX .. meta:: :description: Use this Splunk Observability Cloud integration for the NGINX monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the -``nginx`` monitor type to to retrieve metrics from NGINX instances. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``nginx`` monitor type to retrieve metrics from NGINX instances. This integration is available on Linux and Windows. diff --git a/gdi/monitors-hosts/ntpq.rst b/gdi/monitors-hosts/ntpq.rst index a57d6cce2..d5a21d219 100644 --- a/gdi/monitors-hosts/ntpq.rst +++ b/gdi/monitors-hosts/ntpq.rst @@ -6,10 +6,7 @@ NTPQ .. meta:: :description: Use this Splunk Observability Cloud integration for the Telegraf NTPQ monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``net-io`` monitor type to retrieve metrics from NTPQ. This is an -embedded form of the Telegraf NTPQ plugin and requires the ``ntpq`` -executable available on the path of the agent. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``net-io`` monitor type to retrieve metrics from NTPQ. This is an embedded form of the Telegraf NTPQ plugin and requires the ``ntpq`` executable available on the path of the agent. This monitor is available on Kubernetes, Linux, and Windows. diff --git a/gdi/monitors-hosts/supervisor.rst b/gdi/monitors-hosts/supervisor.rst index 995c5c6ce..8297a8fc3 100644 --- a/gdi/monitors-hosts/supervisor.rst +++ b/gdi/monitors-hosts/supervisor.rst @@ -6,9 +6,7 @@ Supervisor .. meta:: :description: Use this Splunk Observability Cloud integration for the Supervisor monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``supervisor`` monitor type to retrieve the state of processes running -by the Supervisor. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``supervisor`` monitor type to retrieve the state of processes running by Supervisor. This integration is available for Kubernetes, Windows, and Linux. diff --git a/gdi/monitors-hosts/systemd.rst b/gdi/monitors-hosts/systemd.rst index 628a2d496..4ec62a568 100644 --- a/gdi/monitors-hosts/systemd.rst +++ b/gdi/monitors-hosts/systemd.rst @@ -6,9 +6,7 @@ systemd .. meta:: :description: Use this Splunk Observability Cloud integration for the Collectd Systemd monitor. 
See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``collectd/systemd`` monitor type to collect metrics about the state of -configured systemd services. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``collectd/systemd`` monitor type to collect metrics about the state of configured systemd services. This integration is available on Kubernetes and Linux. diff --git a/gdi/monitors-hosts/varnish.rst b/gdi/monitors-hosts/varnish.rst index 57762af98..1eb33cdd9 100644 --- a/gdi/monitors-hosts/varnish.rst +++ b/gdi/monitors-hosts/varnish.rst @@ -6,8 +6,7 @@ Varnish .. meta:: :description: Use this Splunk Observability Cloud integration for the Telegraf Varnish monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``telegraf/varnish`` monitor type to collect Varnish metrics. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``telegraf/varnish`` monitor type to collect Varnish metrics. This integration is available on Kubernetes and Linux. diff --git a/gdi/monitors-hosts/vsphere.rst b/gdi/monitors-hosts/vsphere.rst index b6fbd2dc7..276f1ab9f 100644 --- a/gdi/monitors-hosts/vsphere.rst +++ b/gdi/monitors-hosts/vsphere.rst @@ -6,9 +6,7 @@ VMware vSphere .. meta:: :description: Use this Splunk Observability Cloud integration for the vSphere monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``vsphere`` monitor type to collect metrics from vSphere through the -vSphere API. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``vsphere`` monitor type to collect metrics from vSphere through the vSphere API. This integration is available on Kubernetes, Linux, and Windows. You can install it on the same server used by vSphere if it's running on Linux diff --git a/gdi/monitors-messaging/rabbitmq.rst b/gdi/monitors-messaging/rabbitmq.rst index 9ae9dc665..938d235ed 100644 --- a/gdi/monitors-messaging/rabbitmq.rst +++ b/gdi/monitors-messaging/rabbitmq.rst @@ -6,8 +6,7 @@ RabbitMQ .. meta:: :description: Use this Splunk Observability Cloud integration for the RabbitMQ monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the -``rabbitmq`` monitor type to keep track of an instance of RabbitMQ. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``rabbitmq`` monitor type to keep track of an instance of RabbitMQ. .. note:: To monitor RabbitMQ instances with the OpenTelemetry Collector using native OpenTelemetry components refer to the :ref:`rabbitmq-receiver`. diff --git a/gdi/monitors-network/statsd.rst b/gdi/monitors-network/statsd.rst index 97e260022..0d721cf24 100644 --- a/gdi/monitors-network/statsd.rst +++ b/gdi/monitors-network/statsd.rst @@ -6,9 +6,7 @@ Statsd .. meta:: :description: Use this Splunk Observability Cloud integration for the Statsd monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``statsd`` monitor type to collect statsd metrics. It listens on a -configured address and port to receive the statsd metrics. 
+The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``statsd`` monitor type to collect statsd metrics. It listens on a configured address and port to receive the statsd metrics. This integration supports the ``Counter``, ``Timer``, ``Gauge``, and ``Set`` types, which are dispatched as the Splunk Observability Cloud diff --git a/gdi/monitors-network/traefik.rst b/gdi/monitors-network/traefik.rst index 09882ea7f..faee7ea29 100644 --- a/gdi/monitors-network/traefik.rst +++ b/gdi/monitors-network/traefik.rst @@ -6,8 +6,7 @@ Traefik .. meta:: :description: Use this Splunk Observability Cloud integration for the Traefik monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``traefik`` monitor type to collect metrics from Traefik. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``traefik`` monitor type to collect metrics from Traefik. This monitor is available on Kubernetes, Linux, and Windows. diff --git a/gdi/monitors-prometheus/prometheus-exporter.rst b/gdi/monitors-prometheus/prometheus-exporter.rst index 5568683ed..8649c54f6 100644 --- a/gdi/monitors-prometheus/prometheus-exporter.rst +++ b/gdi/monitors-prometheus/prometheus-exporter.rst @@ -6,9 +6,7 @@ Prometheus Exporter .. meta:: :description: Use this Splunk Observability Cloud integration for the Prometheus Exporter monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``prometheus-exporter`` monitor type to read all metric types from a -Prometheus Exporter endpoint. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``prometheus-exporter`` monitor type to read all metric types from a Prometheus Exporter endpoint. A Prometheus Exporter is a piece of software that fetches statistics from another, non-Prometheus system, and turns them into Prometheus diff --git a/gdi/monitors-prometheus/prometheus-velero.rst b/gdi/monitors-prometheus/prometheus-velero.rst index 80c7792a5..47362ae74 100644 --- a/gdi/monitors-prometheus/prometheus-velero.rst +++ b/gdi/monitors-prometheus/prometheus-velero.rst @@ -6,8 +6,7 @@ Prometheus Velero .. meta:: :description: Use this Splunk Observability Cloud integration for the Prometehus Velero monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``prometheus/velero`` monitor type to gets metrics from Velero. +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``prometheus/velero`` monitor type to get metrics from Velero. This integration is available on Kubernetes, Linux, and Windows. diff --git a/gdi/opentelemetry/data-processing.rst b/gdi/opentelemetry/data-processing.rst index 4fbc20ee1..0a3553092 100644 --- a/gdi/opentelemetry/data-processing.rst +++ b/gdi/opentelemetry/data-processing.rst @@ -117,7 +117,6 @@ See and manage logs To see and manage your logs, use :ref:`lo-connect-landing`. -.. caution:: Splunk Log Observer is no longer available for new users. You can continue to use Log Observer if you already have an entitlement. Learn more in :ref:`logs-logs`. 
See and manage metrics --------------------------------------- diff --git a/get-started/o11y.rst b/get-started/o11y.rst index 6b7058fed..d934ea737 100644 --- a/get-started/o11y.rst +++ b/get-started/o11y.rst @@ -73,7 +73,7 @@ Once you have data coming into Splunk Observability Cloud, it's time to do some - Use :ref:`RUM ` to analyze the performance of web and mobile applications and keep track of how users are interacting with your front-end services, including page load times and responsiveness. -- Use :ref:`Log Observer ` or :ref:`Log Observer Connect ` to pinpoint interesting log events and troubleshoot issues with your infrastructure and cloud services. +- Use :ref:`Log Observer Connect ` to pinpoint interesting log events and troubleshoot issues with your infrastructure and cloud services. - As described in step :ref:`get-started-gdi`, if you turned on :ref:`get-started-relatedcontent` when setting up your data integrations, you can select options in the Related Content bar to seamlessly navigate between APM, Log Observer, and Infrastructure Monitoring with your selected filters and context automatically applied to each view. @@ -95,8 +95,6 @@ Now that you've explored and familiarized yourself with the data you have coming - Customize your APM experience by setting up business workflows and creating span tags that add metadata to traces sent to APM. For more information, see :ref:`apm-workflows` and :ref:`apm-add-context-trace-span`. -- Customize your :ref:`logs pipeline ` to add value to your raw logs. - .. _get-started-datalinks: diff --git a/get-started/welcome.rst b/get-started/welcome.rst index cebc9fee4..535c38628 100644 --- a/get-started/welcome.rst +++ b/get-started/welcome.rst @@ -107,10 +107,10 @@ For more information, see the :ref:`intro-synthetics`. .. _welcome-logobs: -Splunk Log Observer -=================== +Splunk Log Observer Connect +====================================== -Troubleshoot your application and infrastructure behavior using high-context logs in Splunk Observability Cloud. With Splunk Log Observer, you can perform codeless queries on logs to detect the source of problems in your systems. You can also extract fields from logs in Log Observer to set up log processing rules and transform your data as it arrives. +Troubleshoot your application and infrastructure behavior using high-context logs in Splunk Observability Cloud. With Splunk Log Observer Connect, you can perform codeless queries on logs to detect the source of problems in your systems. For more information, see :ref:`LogObserverFeatures`. diff --git a/index.rst b/index.rst index 5db944fc3..95e9fc165 100644 --- a/index.rst +++ b/index.rst @@ -144,11 +144,6 @@ Use span tags to add useful metadata to traces :ref:`apm-add-context-trace-span` .. rst-class:: newcard -:strong:`Logs pipeline` -Add value to your raw logs by customizing your pipeline :ref:`logs-pipeline` - -.. rst-class:: newcard - :strong:`Related Content` Enable users to seamlessly move across product views :ref:`get-started-relatedcontent` @@ -264,6 +259,12 @@ Collect traces :ref:`get-started-cpp` :strong:`All supported integrations` View a list of all supported integrations :ref:`supported-data-sources` +.. role:: icon-info +.. rst-class:: newparawithicon + +:icon-info:`.` :strong:`Release notes` +To keep up to date with changes in the products, see the Splunk Observability Cloud :ref:`release notes `. + .. ----- This comment separates the landing page from the TOC ----- .. 
toctree:: @@ -705,15 +706,10 @@ View a list of all supported integrations :ref:`supported-data-sources` Resolution and data retention (DPM) .. toctree:: - :caption: Log Observer + :caption: Log Observer Connect :maxdepth: 3 - Splunk Log Observer Connect TOGGLE - -.. toctree:: - :maxdepth: 3 - - Splunk Log Observer TOGGLE + logs/lo-connect-landing .. toctree:: :caption: Real User Monitoring @@ -896,7 +892,13 @@ View a list of all supported integrations :ref:`supported-data-sources` .. toctree:: :maxdepth: 3 - Integrations with Splunk On-Call TOGGLE + Integrations with Splunk On-Call TOGGLE + +.. toctree:: + :caption: Release notes + :maxdepth: 3 + + Release notes overview TOGGLE .. toctree:: :caption: Reference and Legal diff --git a/infrastructure/intro-to-infrastructure.rst b/infrastructure/intro-to-infrastructure.rst index cc7e14f52..3f1bbd758 100644 --- a/infrastructure/intro-to-infrastructure.rst +++ b/infrastructure/intro-to-infrastructure.rst @@ -10,6 +10,7 @@ Introduction to Splunk Infrastructure Monitoring Gain insights into and perform powerful, capable analytics on your infrastructure and resources across hybrid and multi-cloud environments with Splunk Infrastructure Monitoring. Infrastructure Monitoring offers support for a broad range of integrations for collecting all kinds of data, from system metrics for infrastructure components to custom data from your applications. +To keep up to date with changes in Infrastructure Monitoring, see the Splunk Observability Cloud :ref:`release notes `. ========================================================== Splunk Infrastructure Monitoring hierarchy diff --git a/infrastructure/monitor/hosts.rst b/infrastructure/monitor/hosts.rst index ba92ac5e0..83bad2f93 100644 --- a/infrastructure/monitor/hosts.rst +++ b/infrastructure/monitor/hosts.rst @@ -12,7 +12,7 @@ You can monitor host metrics with Splunk Observability Cloud. Before you can sta - :ref:`get-started-linux` - :ref:`get-started-windows` -Splunk Observability Cloud provides infrastructure monitoring capabilities powered by the :ref:`Splunk Distribution of OpenTelemetry Collector `. If you're also exporting logs from hosts and want to learn about how to view logs in Splunk Observability Cloud, see :ref:`get-started-logs`. +Splunk Observability Cloud provides infrastructure monitoring capabilities powered by the :ref:`Splunk Distribution of OpenTelemetry Collector `. You can also export and monitor data related to hosts, as described in the following table. diff --git a/infrastructure/monitor/k8s-nav.rst b/infrastructure/monitor/k8s-nav.rst index d941ac8dd..dbc76a432 100644 --- a/infrastructure/monitor/k8s-nav.rst +++ b/infrastructure/monitor/k8s-nav.rst @@ -158,9 +158,6 @@ The Analyzer displays overrepresented metrics properties for known conditions, s Next steps ===================== - -If you're also exporting logs from Kubernetes and want to learn about how to view logs in Splunk Observability Cloud, see :ref:`get-started-logs`. - You can also export and monitor data related to your Kubernetes clusters, as described in the following table. .. list-table:: diff --git a/infrastructure/monitor/k8s.rst b/infrastructure/monitor/k8s.rst index 3c9932e8f..b264093af 100644 --- a/infrastructure/monitor/k8s.rst +++ b/infrastructure/monitor/k8s.rst @@ -12,7 +12,7 @@ Monitor Kubernetes (classic version) Before you can start monitoring any Kubernetes resources, :ref:`get-started-k8s`, and log in with your administrator credentials. 
-You can monitor Kubernetes metrics with Splunk Observability Cloud. Splunk Observability Cloud uses the Splunk Distribution of OpenTelemetry Collector for Kubernetes to provide robust infrastructure monitoring capabilities. If you're also exporting logs from Kubernetes and want to learn about how to view logs in Splunk Observability Cloud, see :ref:`get-started-logs`. +You can monitor Kubernetes metrics with Splunk Observability Cloud. Splunk Observability Cloud uses the Splunk Distribution of OpenTelemetry Collector for Kubernetes to provide robust infrastructure monitoring capabilities. You can also export and monitor data related to your Kubernetes clusters, as described in the following table. diff --git a/logs/LOconnect-scenario.rst b/logs/LOconnect-scenario.rst index 63eddba2c..a9e4f2900 100644 --- a/logs/LOconnect-scenario.rst +++ b/logs/LOconnect-scenario.rst @@ -8,7 +8,6 @@ Scenario: Aisha troubleshoots workflow failures with Log Observer Connect .. meta:: :description: Aisha troubleshoots problems in a workflow using Log Observer where Log Observer accesses Splunk platform logs through Log Observer Connect. -.. include:: /_includes/log-observer-transition.rst Buttercup Games, a fictitious company, runs an e-commerce site to sell its products. They analyze logs in Splunk Cloud Platform. They recently refactored their site to use a cloud-native approach with a microservices architecture and Kubernetes for the infrastructure. They purchased Splunk Observability Cloud as their observability solution. Buttercup Games analyzes their Splunk Cloud Platform logs in Log Observer, a point-and-click Splunk Observability Cloud tool, which they set up through Log Observer Connect. diff --git a/logs/aggregations.rst b/logs/aggregations.rst index 6e69153c8..5da34526f 100644 --- a/logs/aggregations.rst +++ b/logs/aggregations.rst @@ -7,8 +7,6 @@ Group logs by fields using log aggregation .. meta:: :description: Identify problems using log aggregation. Aggregate log records in groups, then perform analyses to see averages, sums, and other statistics for related logs. -.. include:: /_includes/log-observer-transition.rst - Aggregations group related data by one field and then perform a statistical calculation on other fields. Aggregating log records helps you visualize problems by showing averages, sums, and other statistics for related diff --git a/logs/alias.rst b/logs/alias.rst index ed093b1b9..3c16a8539 100644 --- a/logs/alias.rst +++ b/logs/alias.rst @@ -7,8 +7,6 @@ Create field aliases .. meta:: :description: Aliases are alternate names for a field that allows you to search for it by multiple names. Aliasing does not rename or remove the original field. -.. include:: /_includes/log-observer-transition.rst - An alias is an alternate name that you assign to a field, allowing you to use that name to search for events that contain that field. An alias is added to the event alongside the original field name to make it easier to find the data you want and to connect your data sources through :ref:`Related Content ` suggestions. :strong:`Field Aliasing` occurs at search time, not index time, so it does not transform your data. Field Aliasing does not rename or remove the original field name. When you alias a field, you can search for it by its original name or by any of its aliases. 
diff --git a/logs/forward-logs.rst b/logs/forward-logs.rst index 5b29b6b3d..8c6228bcb 100644 --- a/logs/forward-logs.rst +++ b/logs/forward-logs.rst @@ -6,11 +6,9 @@ Forward Log Observer logs data to the Splunk platform ***************************************************************** .. meta:: - :description: Learn how you can forward Log Observer logs to the Splunk platform as part of the Log Observer transition. + :description: Learn how you can forward Log Observer logs to the Splunk platform. -.. include:: /_includes/log-observer-transition.rst - -The Log Observer transition allows customers to analyze their Log Observer logs in the Splunk platform while still maintaining the ability to analyze them in Log Observer. Current Log Observer customers can forward their Log Observer logs data to a single index in a single instance of the Splunk platform. Splunk Observability Cloud uses an HEC token to forward new incoming Log Observer logs data to the Splunk platform in addition to storing it in Log Observer. +If you ingest logs into Log Observer, you can forward them to the Splunk platform for analysis, as well. You can only forward logs to a single index in a single instance of the Splunk platform. Splunk Observability Cloud uses an HEC token to forward new incoming Log Observer logs to the Splunk platform in addition to storing them in Log Observer. To forward logs data from Log Observer to the Splunk platform, you must do the following: diff --git a/logs/get-started-logs.rst b/logs/get-started-logs.rst deleted file mode 100644 index 335d97600..000000000 --- a/logs/get-started-logs.rst +++ /dev/null @@ -1,89 +0,0 @@ -.. _get-started-logs: - -************************************* -Introduction to Splunk Log Observer -************************************* - -.. meta:: - :description: Get started investigating issues with Splunk Log Observer. Resolve incidents faster through log filtering, aggregations, and analysis. - -.. include:: /_includes/log-observer-transition.rst - -If you do not have a Log Observer entitlement and instead use Log Observer Connect, see :ref:`logs-intro-logconnect`. - -========================================= -What is Log Observer? -========================================= - -Troubleshoot your application and infrastructure behavior using high-context logs in these applications: - -- Log Observer -- Log Observer Connect - -In Log Observer, you can perform codeless queries on logs to detect the source of problems in your systems. You can also extract fields from logs to set up log processing rules and transform your data as it arrives or send data to Infinite Logging S3 buckets for future use. See -:ref:`LogObserverFeatures` to learn more about Log Observer capabilities. - -In Log Observer Connect, you can perform codeless queries on your Splunk Enterprise or Splunk Cloud Platform logs. See :ref:`logs-intro-logconnect` to learn what you can do with the Splunk platform integration. - -.. _LogObserverFeatures: - -========================================= -What can I do with Log Observer? -========================================= -The following table lists features available to customers with a Log Observer entitlement. If you don't have a Log Observer entitlement in Splunk Observability Cloud, see :ref:`logs-intro-logconnect` to discover features available to customers of the Splunk platform integration. - -.. 
list-table:: - :header-rows: 1 - :widths: 40, 30, 30 - - * - :strong:`Do this` - - :strong:`With this tool` - - :strong:`Link to documentation` - - * - View your incoming logs grouped by severity over time and zoom in or out to the time period of your choice. - - Timeline - - :ref:`logs-timeline` - - * - Create a chart to see trends in your logs. - - Log metricization rules - - :ref:`logs-metricization` - - * - Find out which path in your API has the slowest response time. - - Log aggregations - - :ref:`logs-aggregations` - - * - Filter your logs to see only logs that contain the field :guilabel:`error`. - - Logs table - - :ref:`logs-keyword` - - * - Redact data to mask personally identifiable information in your logs. - - Field redaction processors - - :ref:`field-redaction-processors` - - * - Confirm that a recent fix stopped a problem. - - Live Tail - - :ref:`logs-live-tail` - - * - Apply processing rules across historical data to find a problem in the past. - - Search-time rules - - :ref:`logs-search-time-rules` - - * - Transform your data or a subset of your data as it arrives in Splunk Observability Cloud. - - Log processing rules - - :ref:`logs-processors` - - * - Minimize expense by archiving unindexed logs in Amazon S3 buckets for potential future use. - - Infinite Logging rules - - :ref:`logs-infinite` - - * - See the metrics, traces, and infrastructure related to a specific log. - - Related Content - - :ref:`get-started-scenario` - - -========================================= -Get started with Log Observer -========================================= -If you have a Log Observer entitlement and want to set up Log Observer and start performing queries on your logs, see :ref:`logs-logs`. - -If you don't have a Log Observer entitlement in Splunk Observability Cloud, see :ref:`logs-set-up-logconnect` or :ref:`logs-scp` to learn how to set up Log Observer Connect and begin querying your Splunk platform logs. \ No newline at end of file diff --git a/logs/individual-log.rst b/logs/individual-log.rst deleted file mode 100644 index 224dd8b65..000000000 --- a/logs/individual-log.rst +++ /dev/null @@ -1,36 +0,0 @@ -.. _logs-individual-log: - -*********************************************************************** -View individual log details and create a field extraction processor -*********************************************************************** - -.. meta:: - :description: View and search a log's fields and values in JSON. Link to related content. Extract a field to create a processing rule. - -.. include:: /_includes/log-observer-transition.rst - -After you find a set of log records that contain specific useful information, you can view the contents of an individual record to get a complete view of the data in the log, broken down by fields and values and displayed in JSON format in the :strong:`Fields` panel. You can also see the number of times each field appears in all of your logs. - -Once you have identified an interesting field, you can perform a field extraction and use it to transform your data. See :ref:`logs-processors` for more information. - -.. note:: Only customers with a Splunk Log Observer entitlement in Splunk Observability Cloud can create a field extraction processor. If you are using Log Observer Connect, you can view and search Splunk Cloud Platform or Splunk Enterprise data in Log Observer, but you cannot transform it. - - -To view the contents of an individual log record and create a field extraction rule, follow these steps: - -#. 
Select a log record line in the Logs table to display the Log Details panel. - - This panel displays the entire record in JSON format as well as a table of each field and its value. - -#. To do more with a particular field in the table, select the field value. - - Log Observer displays a drop-down list with 5 options: - - * To copy the field value to the clipboard, select :menuselection:`Copy` - * To filter to the Logs table so it only displays log records containing the selected value, select :menuselection:`Add to filter`. - * To filter the Logs table so it doesn't display log records containing the selected value, select :menuselection:`Exclude from filter`. - * To create a new log processing rule based on the selected field, select :menuselection:`Extract Field`. To learn more about extracting fields to create log processors, see :new-page-ref:`logs-processors`. - * To add the field as a new column in the Logs table, select :menuselection:`Add field as column`. - * Select :menuselection:`View ` to go to the appropriate view in the Splunk Observability Cloud. For - example, if you select a field related to Kubernetes, Splunk Observability Cloud displays related data in the Kubernetes Navigator. - If you select fields related to APM, such as :menuselection:`View trace_id` or :menuselection:`View span_id`, Splunk Observability Cloud displays the trace or span in the APM Navigator. diff --git a/logs/infinite.rst b/logs/infinite.rst deleted file mode 100644 index 0d36ded67..000000000 --- a/logs/infinite.rst +++ /dev/null @@ -1,132 +0,0 @@ -.. _logs-infinite: - -***************************************************************** -Archive your logs with infinite logging rules -***************************************************************** - -.. meta:: - :description: Archive logs in Amazon S3 buckets using infinite logging rules. Reduce the amount of logs data you index. Increase logs' retention period. - -.. include:: /_includes/log-observer-transition.rst - -Create infinite logging rules to archive all or any subset of logs in Amazon S3 buckets for compliance or possible future use while not paying to index them unless and until you want to analyze them in Splunk Log Observer. - -Only customers with a Splunk Log Observer entitlement can use infinite logging rules. Those customers must transition to Log Observer Connect. - -After the transition to Log Observer Connect -============================================================================= -You can continue using existing infinite logging rules. You can turn your existing infinite logging rules off and on. However, you cannot create new infinite logging rules or edit existing rules. - -Going forward, determine the best option for your organization by discussing with your Splunk representative the following types of data storage you can use in the Splunk platform instead of infinite logging rules: - -.. 
list-table:: - :header-rows: 1 - :widths: 30, 40 - - * - :strong:`Storage type` - - :strong:`Documentation` - - * - Dynamic Data Active Archive - - See :new-page:`Store expired Splunk Cloud Platform data in a Splunk-managed archive ` - - * - Dynamic Data Self Storage - - See :new-page:`Store expired Splunk Cloud Platform data in your private archive ` - - * - Ingest actions - - See :new-page:`Use ingest actions to improve the data input process ` - - -Use cases for archiving your logs -============================================================================= -There are two primary use cases to archive your logs: - -- :ref:`To reduce the amount of data you index ` - -- :ref:`To retain logs data longer than 30 days ` - -.. _logs-reduce: - -Reduce the amount of data you indexed ------------------------------------------------------------------------------ -Some logs may not be useful on a day-to-day basis but may still be important in case of a future incident. For example, you might not always want to index logs from a non-production environment, or index every debug message. In either case, you can create an infinite logging rule to archive those logs in S3 buckets that your team owns in AWS. - -If you want to keep a sample of your archived logs to analyze in Log Observer, you can set the sampling rate in your infinite logging rule so that some amount of the data you archive will also be indexed. You pay for only the logs that you index and analyze in Log Observer. This way, you can monitor trends across all your logs while reducing the impact on your indexing capacity. See :ref:`order-of-execution` in the next section to learn more about using pipeline rules to help reduce your indexing capacity. - -.. _logs-retain: - -Retain logs longer than 30 days ------------------------------------------------------------------------------ -Storing logs in S3 buckets gives you full control over retention time, which can, for example, help you meet compliance and audit requirements. To retain logs longer than Log Observer's 30-day retention period, you can archive and index 100% of your logs. Logs that are archived and indexed will be available for analysis in Log Observer for 30 days and will also be stored in S3 buckets for as long as you want them. - -.. _order-of-execution: - -Order of execution of logs pipeline rules -============================================================================= -Logs pipeline rules execute in the following order: - -1. All log processing rules (field extraction, field copy, and field redaction processors) - -2. All log metricization rules - -3. All infinite logging rules - -Because infinite logging rules run last, you can create field extraction rules, then use the resulting fields in infinite logging rules. You can also metricize logs, then archive them through infinite logging without impacting your ingest capacity. For more information, see :ref:`logs-pipeline-sequence`. - -Prerequisites -================================================================================ -To create new infinite logging S3 connections, You must have an administrator role in Splunk Observability Cloud. If you have a power user role, you can send data to S3 buckets using an existing infinite logging S3 connection, but you cannot create new S3 connections. See AWS documentation for permissions required to create S3 buckets in the AWS Management Console. 
- -If you have a read_only or usage role, you cannot create new S3 connections or use existing connections to send data to S3 buckets. - -Create an infinite logging rule -================================================================================ - -To create an infinite logging rule, follow these steps: - -1. From the navigation menu, go to :guilabel:`Data Configuration > Logs Pipeline Management`. - -2. Click :guilabel:`New infinite logging Rule`. - -3. Decide where to archive your data. To send your logs to an existing S3 bucket, select the infinite logging connection you want, then skip to step 9. - -4. If you want to send your data to a new S3 bucket and you are a Splunk Observability Cloud admin, select :guilabel:`Create new connection`. The :guilabel:`Establish a New S3 Connection` guided setup appears. - -5. On the :guilabel:`Choose an AWS Region and Authentication Type` tab, do the following: - - a. Select the AWS region you want to connect to. - b. Select whether you want to use the :guilabel:`External ID` or :guilabel:`Security Token` authentication type. - c. Click :guilabel:`Next`. - -6. On the :guilabel:`Prepare AWS Account` tab, follow the steps in the guided setup to do the following in the AWS Management Console: - - a. Create an AWS policy. The guided setup provides the exact policy you must copy and paste into AWS. - b. Create a role and associate it with the AWS policy. - c. Create and configure an S3 bucket. - -7. On the :guilabel:`Establish Connection` tab, do the following: - - a. Give your new S3 connection a name. - b. Paste the Role ARN from the AWS Management Console into the :guilabel:`Role ARN` field in the guided setup. - c. Give your S3 bucket a name. - d. Select :guilabel:`Save`. - -8. Select the Amazon S3 infinite logging connection that you created on the first page of the guided setup. Your data goes to your S3 bucket in a file that you configure in the following two steps. - -9. (Optional) You can add a file prefix, which prepend to the front of the file you send to your S3 bucket. - -10. (Optional) In :guilabel:`Advanced Configuration Options`, you can select the compression and file formats of the file you will send to your S3 bucket. - -11. Select :guilabel:`Next`. - -12. On the :strong:`Filter Data` page, create a filter that matches the log lines you want to archive in your S3 bucket. Only logs matching the filter are archived. If you want to index a sample of the logs going to the archive, select a percentage in :guilabel:`Define indexing behavior`. Indexing a small percentage of logs in Log Observer lets you see trends in logs that are in S3 buckets. Select :guilabel:`Next`. - -13. Add a name and description for your infinite logging rule. - -14. Review your configuration choices, then select :guilabel:`Save`. - -Your infinite logging setup is now complete. Depending on your selections, your logs are archived, indexed in Splunk Observability Cloud for analysis, or both. - -Infinite logging rules limits -================================================================================ -An organization can create a total of 128 infinite logging rules. - diff --git a/logs/intro-logconnect.rst b/logs/intro-logconnect.rst index ff8001717..927d52d4c 100644 --- a/logs/intro-logconnect.rst +++ b/logs/intro-logconnect.rst @@ -10,8 +10,6 @@ Introduction to Splunk Log Observer Connect :description: Log Observer integration with Splunk Cloud Platform or Splunk Enterprise. The introduction is an overview describing all Log Observer Connect functionality. 
-If you have a Log Observer entitlement rather than Log Observer Connect, see :ref:`get-started-logs`. - Splunk Log Observer Connect is an integration that allows you to query your Splunk Enterprise or Splunk Cloud Platform logs using the capabilities of Splunk Log Observer and :ref:`Related Content ` in Splunk Observability Cloud. With Log Observer Connect, you can troubleshoot your application and infrastructure behavior using high-context logs. Perform codeless queries on your Splunk Enterprise or Splunk Cloud Platform logs to detect the source of problems in your systems, then jump to Related Content throughout Splunk Observability Cloud in one click. Seeing your logs data correlated with metrics and traces in Splunk Observability Cloud helps your team to locate and resolve problems exponentially faster. Region and version availability @@ -19,9 +17,11 @@ Region and version availability .. include:: /_includes/logs/loc-availability.rst +.. _LogObserverFeatures: + What can I do with Log Observer Connect? ============================================================== -The following table lists features available to customers who have integrated Splunk Enterprise or Splunk Cloud Platform with Log Observer, allowing them to use Log Observer Connect. If you have a Log Observer entitlement in Splunk Observability Cloud, see :ref:`get-started-logs` for a complete list of Log Observer features. +The following table lists features available to customers who have integrated Splunk Enterprise or Splunk Cloud Platform with Splunk Observability Cloud, allowing them to use Log Observer Connect. .. list-table:: :header-rows: 1 @@ -53,7 +53,7 @@ The following table lists features available to customers who have integrated Sp * - View the JSON schema of an individual log. - Log details - - :ref:`logs-individual-log` + - :ref:`logs-individual-log-connect` * - See the metrics, traces, and infrastructure related to a specific log. - Related Content diff --git a/logs/keyword.rst b/logs/keyword.rst index b41aa152f..eedb96bd5 100644 --- a/logs/keyword.rst +++ b/logs/keyword.rst @@ -7,14 +7,11 @@ Search logs by keywords or fields .. meta:: :description: Search and filter logs by keyword, field, or field values. -.. include:: /_includes/log-observer-transition.rst -You can search Splunk Observability Cloud logs if your Splunk Observability Cloud instance ingests logs. If your organization has integrated its Splunk platform (Splunk Cloud Platform or Splunk Enterprise) instance with its Splunk Observability Cloud instance, you can search Splunk platform logs that your Splunk platform role has permissions to see in Splunk platform. If you cannot access a log in your Splunk platform instance, you cannot access it in Splunk Observability Cloud. +In Log Observer Connect, you can search Splunk platform logs that your Splunk platform role has permissions to see. If you cannot access a log in your Splunk platform instance, you cannot access it in Splunk Observability Cloud. If your Splunk Observability Cloud instance ingests logs, you can search Splunk Observability Cloud logs. -You can search logs that you have permissions to see for particular keywords, field names, or field values. - -To search your logs, follow these steps: +You can search for keywords, field names, or field values. To search your logs, follow these steps: .. 
include:: /_includes/logs/query-logs.rst -When you add keywords, field names, or field values to the filters, Log Observer narrows the results in the Timeline and the Logs table so that only records containing the selected fields and values appear. To learn how you can use a productive search in the future, see :ref:`logs-save-share`. +When you add keywords, field names, or field values to the filters, Log Observer narrows the results in the Timeline and the Logs table so that only records containing the selected fields and values appear. To learn how you can reuse a productive search in the future, see :ref:`logs-save-share`. diff --git a/logs/limits.rst b/logs/limits.rst deleted file mode 100644 index cb81fa517..000000000 --- a/logs/limits.rst +++ /dev/null @@ -1,132 +0,0 @@ -.. _logs-limits: - -********************************************************************************************* -Log Observer limits -********************************************************************************************* - -.. meta:: - :description: See Log Observer limits on MB of data ingested or indexed per month, limits on the number and type of processing rules, and search query limits. - -.. include:: /_includes/log-observer-transition.rst - -This page documents Splunk Log Observer service limits and behavior. System protection limits are meant to allow for stability and availability of multi-tenant systems and are subject to fine-tuning and change without notice. - -Log Observer ingest and index limits -============================================================================================= - -The following table lists Log Observer's log ingestion and indexing limits: - -.. list-table:: - :header-rows: 1 - :widths: 50, 50 - - * - :strong:`Limit name` - - :strong:`Default limit value` - - * - MB ingested per month - - Determined by your subscription - - * - MB indexed per month - - Determined by your subscription - -MB ingested per month ---------------------------------------------------------------------------------------------- - -The :guilabel:`Log volume ingestion` entitlement is determined by your organization's contract. It can be translated from Host purchased, or can be based on GB usage per month. The amount of this monthly capacity that you can use per hour or per minute, or "burst limit", is a multiple of your contractual limit. You can increase your contract limit MB/month. You cannot increase the burst limit MB/hour or MB/minute as it is a system limit to ensure the protection of your data. - -:guilabel:`Important:` This limit is a system protection limit and is subject to change based on system availability and fine-tuning. - -What happens when the limit is hit? -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Any log data that exceeds the limit in that bucket (hourly and minutely) will be dropped and not ingested. - -.. note:: Splunk can increase this limit on a customer's request. The customer is subject to overage charges. - -MB indexed per month ---------------------------------------------------------------------------------------------- - -The :guilabel:`Log volume indexed` entitlement is determined by your organization's contract. It can be translated from Host purchased, or can be based on GB usage per month. The amount of this monthly capacity that you can use per hour or per minute, or "burst limit", is a multiple of your contractual limit. You can increase your contract limit MB/month. 
You cannot increase the burst limit MB/hour or MB/minute as it is a system limit to ensure the protection of your data. - -:guilabel:`Important:` This limit is a system protection limit and is subject to change based on system availability and fine-tuning. - -What happens when the limit is hit? -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Any log data that exceeds the limit in that bucket (hourly and minutely) will be dropped and not indexed.ß - -.. note:: Splunk can increase this limit on a customer's request. The customer is subject to overage charges. - -Log Observer processing rule limits -============================================================================================= - -The following table lists Log Observer's processing rule limits: - -.. list-table:: - :header-rows: 1 - :widths: 50, 50 - - * - :strong:`Limit name` - - :strong:`Default limit value` - - * - Maximum number of processing rules - - 128 - -Maximum number of processing rules ---------------------------------------------------------------------------------------------- - -This is the maximum number of processing rules that an organization can create. An organization can create 128 combined log processing rules, including field extraction rules, field copy rules, and field redaction rules. An organization can also create a total of 128 infinite logging rules and 128 log metricization rules. - -What happens when the limit is hit? -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -No new log processing rules can be created. - -.. note:: Log Observer has a hard limit of 128 rules. Splunk cannot increase this limit at a customer's request. - -Log Observer search query limits -============================================================================================= - -The following table lists Log Observer's search query limits: - -.. list-table:: - :header-rows: 1 - :widths: 50, 50 - - * - :strong:`Limit name` - - :strong:`Default limit value` - - * - Maximum number of saved search queries - - 1,000 - - * - Maximum number of logs processed for Fields Summary - - 150,000 - - * - Maximum number of concurrent live tails - - 2,048 - -Maximum number of saved search queries ---------------------------------------------------------------------------------------------- -This is the maximum number of saved search queries that can be created in an organization. - -What happens when the limit is hit? -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -The user experience might degrade and is not guaranteed to be functional. - -Maximum number of logs processed for the Fields Summary ---------------------------------------------------------------------------------------------- - -The Log Observer UI displays a summary of fields and their value distribution. By default, it processes the most recent 150,0000 events to generate this view. - -What happens when the limit is hit? -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -If the search results contain more than 150,000 events, then only the latest 150,000 events are processed. - -Maximum number of concurrent live tails ---------------------------------------------------------------------------------------------- - -This is the maximum number of live tails that can be running at the same time. These queries are dispatched as the user interacts with the Log Observer Live Tail UI. 
- -What happens when the limit is hit? -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Additional live tail queries are queued until an existing live tail is canceled. Live tail queries do not return data while queued. diff --git a/logs/live-tail.rst b/logs/live-tail.rst deleted file mode 100644 index 3feeaa028..000000000 --- a/logs/live-tail.rst +++ /dev/null @@ -1,116 +0,0 @@ -.. _logs-live-tail: - -************************************************************************** -Verify changes to monitored systems with Live Tail -************************************************************************** - -.. meta:: - :description: Live Tail shows a near real-time feed of log messages as they come into Log Observer. See the impact of updates live. Verify that an integration is sending data. - -.. include:: /_includes/log-observer-transition.rst - -Live Tail displays a streaming view of log messages. Use Live Tail to do the following: - -- Verify that an integration is sending data to Splunk Observability Cloud. -- View spans and traces that your APM services are sending to Splunk Observability Cloud. -- See the impact of configuration changes on your incoming data streams. - -Only customers will a Splunk Log Observer Connect entitlement can monitor systems with Live Tail. Those customers must transition to Log Observer Connect. - -After the transition to Log Observer Connect -============================================================================= -The Log Observer Live Tail feature ends in January 2024. In Splunk Cloud Platform, you can achieve similar functionality by adjusting the time range picker to :guilabel:`All time (real-time)` or :guilabel:`30 second window`. You must select :guilabel:`Search` again and rerun your search to see the most recent log events because live events do not stream in unprompted. For more information, see :new-page:`Select time ranges to apply to your search ` - -View the Live Tail time range -================================================================================ -The Log Observer TimeLine time picker offers Live Tail as one of the time ranges. -In all other time ranges, the logs are already indexed by Splunk Cloud Platform services. -The logs displayed by Live Tail aren't indexed. - -Exit Live Tail -================================================================================ -To exit Live Tail and return to the Log Observer main page, use the time picker in the -navigation bar to select a different time range. - -The Live Tail display -================================================================================ -The Live Tail displays a sample of incoming logs because the amount of log data -is too large to display completely. Below the time picker menu in the navigation bar, -you can see the time when Live Tail started displaying logs and the percentage of logs displayed. -The number of logs visible in Live Tail depends on the amount of data you're -receiving. - -Adjust incoming log speed in Live Tail -================================================================================ -Because incoming data comes in quickly, you might have problems reading the incoming logs. -You can adjust the incoming log speed in the following ways: - -- Scroll the table. Scrolling freezes the table view, letting you read a portion of - the incoming log lines. -- Click :guilabel:`Stop` or :guilabel:`Play` in the navigation bar. -- Adjust the log speed using the :guilabel:`Logs/Second` slider. 
Next to the slider, you can see what percentage of logs are visible at the selected rate. As you increase the rate of logs per second, the :guilabel:`Showing 100% of logs` callout adjusts accordingly. - -When you are not viewing the most recent events, you can view the most recent incoming event -by clicking :guilabel:`Jump to recent` at the end of the display. - -The following examples use Live Tail to check that data is coming into the Splunk -Observability Suite after an integration with Kubernetes. - -Verify an integration using Live Tail -================================================================================ -To verify, for example, your integration of Kubernetes with Splunk Observability Cloud, use -one of of the techniques demonstrated in the following examples: - -- :ref:`verify-integration-with-live-tail-filtering` -- :ref:`verify-integration-with-live-tail-keyword-highlighting` - -.. _verify-integration-with-live-tail-filtering: - -Example: Verify an integration with Live Tail filtering --------------------------------------------------------------------------------- - -To use Live Tail filtering to verify your Kubernetes integration worked, follow these steps: - -#. In Log Observer, click the navigation bar menu, select the :menuselection:`time picker`, then select - :menuselection:`Live Tail` from the time picker drop-down list. - -#. To add a filter, in the navigation bar click :guilabel:`Add Filter`. - -#. Select the filter type you want to use: - - - To filter by keywords, click the :guilabel:`Keywords` tab. - - - To filter by fields in the log records, click the :guilabel:`Fields` tab. - -#. In the :guilabel:`Find` text box, type the keyword or field that you want to filter on, - then press Enter to filter the logs as they stream into the Live Tail display. - -#. To filter for minimum or maximum values in a numeric field, enter a range in the - :guilabel:`Min` and :guilabel:`Max` text boxes. - -For example, if you add a filter for the log record field :monospace:`K8s.container.name`, you -see this field name in all the records in the display. If you don't see the field, then you -know that your integration might have problems. - -Adding filters helps you find log records for a specific integration. - - -.. _verify-integration-with-live-tail-keyword-highlighting: - -Example: Verify an integration with Live Tail keyword highlighting --------------------------------------------------------------------------------- - -Live Tail highlighting helps you filter logs using keywords. You can specify -up to nine keywords at a time, and Live Tail displays each keyword it finds with a unique -color. - -If you highlight nine keywords, you have to remove a keyword to add -another one. - -To highlight keywords in log records, follow these steps: - -#. In Log Observer, click the navigation bar menu, select the :menuselection:`time picker`, then select - :menuselection:`Live Tail` from the time picker drop-down list. -#. In the navigation bar, type up to nine keywords in the :guilabel:`Enter keyword` text box, then press Enter. - Live Tail displays each keyword it finds with a unique color. 
- diff --git a/logs/lo-connect-landing.rst b/logs/lo-connect-landing.rst index ec7518d0e..9be5c05ad 100644 --- a/logs/lo-connect-landing.rst +++ b/logs/lo-connect-landing.rst @@ -28,6 +28,9 @@ Splunk Log Observer Connect aggregations logviews timestamp + logs-save-share + forward-logs + lo-transition lo-connect-limits @@ -62,13 +65,16 @@ Splunk Log Observer Connect - :ref:`logs-aggregations` -- :ref:`logs-save-share` - - :ref:`logs-logviews` - :ref:`logs-timestamp` -- :ref:`lo-connect-limits` +- :ref:`logs-save-share` +- :ref:`forward-logs` + +- :ref:`lo-transition` + +- :ref:`lo-connect-limits` -If you do not have Log Observer Connect and instead use Log Observer, see :ref:`log-observer-landing`. \ No newline at end of file +To keep up to date with changes in Log Observer Connect, see the Splunk Observability Cloud :ref:`release notes `. \ No newline at end of file diff --git a/logs/lo-transition.rst b/logs/lo-transition.rst index 815e676bd..7d3a41222 100644 --- a/logs/lo-transition.rst +++ b/logs/lo-transition.rst @@ -1,51 +1,23 @@ .. _lo-transition: -************************************************************************************************************** -Splunk Log Observer transition -************************************************************************************************************** +********************************************************************************************* +Accomplish logs pipeline rules in Splunk platform +********************************************************************************************* .. meta:: :description: Discover how you can transition from Splunk Log Observer to Splunk Log Observer Connect where you can ingest more logs from a wider variety of data sources, use a more advanced logs pipeline, and expand into security logging by the January 2024 deadline. -All Splunk Log Observer customers, who are sending log data to Splunk Observability cloud today, must transition to using Splunk Cloud Platform or Splunk Enterprise as the central platform for logs by the end of December 2023. Splunk Observability Cloud will continue to support Log Observer functionality and user experience with Splunk Log Observer Connect as a bridge between Splunk Observability Cloud and Splunk Cloud Platform. Transitioning to the Splunk platform, whether it is Splunk Cloud Platform or Splunk Enterprise, as the back-end for log storage does not impact your ability to use Splunk Observability Cloud to correlate logs, metrics, and traces. - -Using the Splunk platform allows you to ingest more logs from a wider variety of data sources, use a more advanced logs pipeline, and use logging for security use cases. - - -How to transition to Log Observer Connect -============================================================================================================== - -To transition to Splunk Log Observer Connect, you must take the following actions: - -1. Reach out to your Splunk regional sales manager to request assistance with the transition. The deadline is November 15, 2023. - -2. Connect your Splunk platform instance to your Log Observer Connect instance. See :ref:`logs-scp` or :ref:`logs-set-up-logconnect`, depending on the type of Splunk platform deployment you have. - -3. If you have a Splunk Cloud Platform deployment, set up an HEC token to forward or mirror your existing Log Observer logs to Splunk Cloud Platform. See :ref:`forward-logs` to learn how. 
- -Verify log data transfer -============================================================================================================== -After completing the preceding steps, you can store data in both Log Observer and your Splunk platform instance for 30 days. During the 30-day window you can verify that the data in your Splunk platform instance from Log Observer Connect matches the Log Observer data. There is no disruption to your functionality during this time. - -Changes in logging after the transition -============================================================================================================== -After your transition to Log Observer Connect, you experience changes in the following logging functionality: - -* :ref:`Log processing rules ` -* :ref:`Infinite logging rules ` -* :ref:`Search-time processing rules ` -* :ref:`Live Tail ` +All customers who ingest logs into Splunk Observability Cloud now use Log Observer Connect, a bridge between Splunk Observability Cloud and Splunk platform. Using the Splunk platform allows you to ingest more logs from a wider variety of data sources, use a more advanced logs pipeline, and use logging for security use cases. +The following sections explain how to achieve all logging pipeline features in Splunk platform. .. _transition-processing-rules: Log processing rules --------------------------------------------------------------------------------------------------------------- -You can continue using existing log processing rules. See :ref:`logs-processors` for more information. You can turn your existing log processing rules off and on. However, you cannot create new log processing rules or edit existing rules. - -Going forward, you can process data in the Splunk platform using the following methods: +--------------------------------------------------------------------------------------------- +You can process data in the Splunk platform using the following methods: .. list-table:: :header-rows: 1 @@ -70,55 +42,7 @@ Going forward, you can process data in the Splunk platform using the following m - See :new-page:`Use the Data Stream Processor `. -.. _transition-infinite-logging: - -Infinite logging rules --------------------------------------------------------------------------------------------------------------- -You can continue using existing infinite logging rules. See :ref:`logs-infinite` for more information. You can turn your existing infinite logging rules off and on. However, you cannot create new infinite logging rules or edit existing rules. - -Going forward, determine the best option for your organization by discussing with your Splunk representative the following types of data storage: - -.. list-table:: - :header-rows: 1 - :widths: 30, 40 - - * - :strong:`Storage type` - - :strong:`Documentation` - - * - Dynamic Data Active Archive - - See :new-page:`Store expired Splunk Cloud Platform data in a Splunk-managed archive ` - - * - Dynamic Data Self Storage - - See :new-page:`Store expired Splunk Cloud Platform data in your private archive ` - - * - Ingest actions - - See :new-page:`Use ingest actions to improve the data input process ` - - -.. _transition-search-time-rules: - -Search-time processing rules --------------------------------------------------------------------------------------------------------------- -You cannot use search-time processing rules in the Log Observer Connect UI. Search-time rules are the application of log processing rules across historical data. 
See :ref:`logs-search-time-rules` for more information. - -Going forward, you can utilize the following methods for processing data at search time in Splunk Cloud Platform: - -.. list-table:: - :header-rows: 1 - :widths: 30, 40 - - * - :strong:`Search-time processing method` - - :strong:`Documentation` - - * - Field extractor - - See :new-page:`Build field extractions with the field extractor ` - - * - Field aliases - - See :new-page:`Create field aliases in Splunk Web ` - - -.. _transition-live-tail: Live Tail --------------------------------------------------------------------------------------------------------------- -The Live Tail feature of Log Observer ends in January 2024. In Splunk Cloud Platform, you can achieve similar functionality by adjusting the time range picker to :guilabel:`All time (real-time)` or :guilabel:`30 second window`. You must select :guilabel:`Search` again and rerun your search to see the most recent log events because live events do not stream in unprompted. For more information, see :new-page:`Select time ranges to apply to your search ` \ No newline at end of file +-------------------------------------------------------------------------------------------- +To achieve Live Tail functionality, adjust the time range picker in the Splunk platform Search & Reporting app to :guilabel:`All time (real-time)` or :guilabel:`30 second window`. You must select :guilabel:`Search` again and rerun your search to see the most recent log events because live events do not stream in unprompted. For more information, see :new-page:`Select time ranges to apply to your search ` \ No newline at end of file diff --git a/logs/log-observer-landing.rst b/logs/log-observer-landing.rst deleted file mode 100644 index f994eb436..000000000 --- a/logs/log-observer-landing.rst +++ /dev/null @@ -1,94 +0,0 @@ -.. _log-observer-landing: - -************************************* -Splunk Log Observer -************************************* - -.. meta:: - :description: The Log Observer landing page lists and describes all capabilities. Investigate logs in context with metrics and traces in Splunk Log Observer. - - -.. include:: /_includes/log-observer-transition.rst - - -.. toctree:: - :maxdepth: 3 - :hidden: - - - lo-transition - forward-logs - get-started-logs - logs - timeline - live-tail - queries - raw-logs-display - keyword - open-logs-splunk - alias - individual-log - message-field - aggregations - search-time-rules - save-share - logviews - pipeline - processors - metricization - infinite - timestamp - limits - - - - - -- :ref:`lo-transition` - -- :ref:`forward-logs` - -- :ref:`get-started-logs` - -- :ref:`logs-logs` - -- :ref:`logs-timeline` - -- :ref:`logs-live-tail` - -- :ref:`logs-queries` - -- :ref:`logs-raw-logs-display` - -- :ref:`logs-keyword` - -- :ref:`open-logs-splunk` - -- :ref:`logs-alias` - -- :ref:`logs-individual-log` - -- :ref:`logs-message-field` - -- :ref:`logs-aggregations` - -- :ref:`logs-search-time-rules` - -- :ref:`logs-save-share` - -- :ref:`logs-logviews` - -- :ref:`logs-pipeline` - - - :ref:`logs-processors` - - - :ref:`logs-metricization` - - - :ref:`logs-infinite` - -- :ref:`logs-timestamp` - -- :ref:`logs-limits` - - -If you do not have a Log Observer entitlement and instead use Log Observer Connect, see :ref:`lo-connect-landing`. 
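As a concrete illustration of the Live Tail replacement described above, you can also set a real-time window directly in the search string with the ``rt`` time modifiers instead of using the time range picker. This is a minimal sketch; the index name and keyword are hypothetical.

.. code-block::

   index=buttercup_logs error earliest=rt-30s latest=rt

As with the time range picker approach, select :guilabel:`Search` again and rerun the search to see the most recent log events, because live events do not stream in unprompted.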
\ No newline at end of file diff --git a/logs/logs-individual-log-connect.rst b/logs/logs-individual-log-connect.rst index 407ab0a9f..2d75bdd5d 100644 --- a/logs/logs-individual-log-connect.rst +++ b/logs/logs-individual-log-connect.rst @@ -7,7 +7,6 @@ View individual log details .. meta:: :description: View the contents of an individual log, then create a field extraction to drill down further. See message, error, span ID, trace ID, and other fields. -.. include:: /_includes/log-observer-transition.rst After you find log records that contain a specific area, view the contents of an individual record to get a precise view of the data related to diff --git a/logs/save-share.rst b/logs/logs-save-share.rst similarity index 69% rename from logs/save-share.rst rename to logs/logs-save-share.rst index 2f05aebf6..e1f339205 100644 --- a/logs/save-share.rst +++ b/logs/logs-save-share.rst @@ -1,18 +1,16 @@ .. _logs-save-share: ***************************************************************** -Save and share Log Observer queries +Save and share Log Observer Connect queries ***************************************************************** .. meta:: - :description: Collaborate with team members by sharing Log Observer or Log Observer Connect queries. Saved queries include filters, aggregations, and search-time rules. + :description: Collaborate with team members by sharing Log Observer Connect queries. Saved queries include filters, aggregations, and search-time rules. -.. include:: /_includes/log-observer-transition.rst -After you create useful queries in Log Observer, you can save them and share them with team members. You can only save or share queries on the :guilabel:`Splunk Observability Cloud data` index. A saved query is made up of a filter and any aggregations or search-time rules you applied during the search. You can only save a query if you have created a filter. Only customers with a Splunk Log Observer entitlement in Splunk Observability Cloud can save and share Log Observer queries. +After you create useful queries in Log Observer Connect, you can save them and share them with team members. A saved query is made up of a filter and any aggregations or search-time rules you applied during the search. You can only save a query if you have created a filter. -To learn how to create filters, see :ref:`logs-keyword`. -Log Observer Connect has no default aggregation. Log Observer defaults to :guilabel:`All (*)`` logs grouped by :guilabel:`Severity`. To learn how to create a unique aggregation, see :ref:`logs-aggregations`. To learn how to create search-time rules, see :ref:`logs-search-time-rules`. +To learn how to create filters, see :ref:`logs-keyword`. Log Observer Connect has no default aggregation. To learn how to create a unique aggregation, see :ref:`logs-aggregations`. .. note:: All organizations have access to pre-defined queries for Kubernetes and Cassandra. These queries appear at the beginning of the list of saved queries and are a part of content packs. Content packs include pre-defined saved queries as well as log processing rules. Splunk Observability Cloud includes content packs for Kubernetes System Events and Cassandra. @@ -21,17 +19,17 @@ You can also download the results of a query as a CSV or JSON file. See :ref:`ex Prerequisites ================================================================================ -To save and share Log Observer queries, you must have an administrator or power user role. 
+To save and share Log Observer Connect queries, you must have an administrator or power user role. -Save a Log Observer query +Save a Log Observer Connect query ================================================================================ -To create a query, follow these steps: +To create and save a query, follow these steps: -#. In the control bar, select the desired time increment from the time picker, then in the :guilabel:`Index` field, select :guilabel:`Splunk Observability Cloud data`. Select :guilabel:`Add Filter`, then enter a keyword or field. +#. In the control bar, select the desired time increment from the time picker, then in the :guilabel:`Index` field, select the index you want to search. Select :guilabel:`Add Filter`, then enter a keyword or field. -#. To override the default aggregation, follow these steps: +#. To set an aggregation, follow these steps: #. Using the calculation control, set the calculation type you want from the list. The default is :guilabel:`Count`. #. Select the field that you want to aggregate by. @@ -42,12 +40,12 @@ To create a query, follow these steps: #. In the :guilabel:`Name` text box, enter a name for your query. #. Optionally, you can describe the query in the :guilabel:`Description` text box. #. Optionally, in the :guilabel:`Tags` text box, enter tags to help you and your team locate the query. - Log Observer stores tags you've used before and auto-populates the :guilabel:`Tags` text box. + Log Observer Connect stores tags you've used before and auto-populates the :guilabel:`Tags` text box. #. To save this query as a public query, select :guilabel:`Filter sharing permissions set to public`. - When you save a query as a public query, any user in your organization can view and delete it in Log Observer. + When you save a query as a public query, any user in your organization can view and delete it in Log Observer Connect. -Use Log Observer saved queries +Use Log Observer Connect saved queries ================================================================================ You can view, share, set as default, or delete saved queries in the Saved Queries @@ -77,7 +75,7 @@ The following table lists the actions you can take in the Saved Queries catalog. * - Delete a saved query from your Saved Queries catalog - Select the :guilabel:`More` icon for the query, then select :menuselection:`Delete Query`. -.. note:: If you set a saved query as default, when you open Log Observer, it displays the result of +.. note:: If you set a saved query as default, when you open Log Observer Connect, it displays the result of that query. .. _exportCSV: diff --git a/logs/logs.rst b/logs/logs.rst deleted file mode 100644 index a188b9497..000000000 --- a/logs/logs.rst +++ /dev/null @@ -1,232 +0,0 @@ -.. _logs-logs: - -************************************************** -Set up Log Observer -************************************************** - - -.. meta:: - :description: Connect Splunk Observability Cloud to your data sources. Set up Log Observer to investigate logs in context with metrics and traces. - -.. toctree:: - :hidden: - -.. include:: /_includes/log-observer-transition.rst - -Complete the instructions on this page if you have a Log Observer entitlement in Splunk Observability Cloud. If you don't have a Log Observer entitlement in Splunk Observability Cloud, see :ref:`logs-intro-logconnect` to set up the integration and begin using Log Observer to query your Splunk platform logs. 
- -By default, Log Observer indexes and stores all logs data that you send to Splunk Observability Cloud unless you choose to archive some of your logs data in Amazon S3 buckets. See :ref:`logs-infinite` to learn how to archive logs until you want to index and analyze them in Log Observer. If you use Log Observer Connect, your logs data remains in your Splunk platform instance and is never stored in Log Observer or Splunk Observability Cloud. - -What type of data is supported? -================================================== -Splunk Log Observer supports unstructured log data at ingest. - - -Prerequisites -================================================== -Before setting up Log Observer, you must meet the following criteria: - -- Your Splunk Observability Cloud organization must be provisioned with an entitlement for Log Observer. -- You must be an administrator in a Splunk Observability Cloud organization to set up integrations. - - -Start using Log Observer -================================================== -You can use Splunk Observability Cloud guided setups to send logs to Log Observer from your hosts, containers, and cloud providers. Use the :ref:`Splunk Distribution of OpenTelemetry Collector ` to capture logs from your resources and applications. Decide whether you want to see logs from each data source, only one, or any combination of data sources. The more complete your log collection in Log Observer, the more effective your use of Log Observer can be for troubleshooting your entire environment using logs. You can complete step 1, step 2, or both in the following list, depending on which logs you want to see. - -To start using Log Observer, complete the following tasks: - -1. :ref:`Collect logs from your hosts and containers ` - -2. :ref:`Collect logs from your cloud providers ` - -3. :ref:`Filter and aggregate your data in Log Observer ` - -4. :ref:`Ensure the severity key is correctly mapped ` - -.. _hosts-containers: - -Collect logs from your hosts and containers --------------------------------------------------- -To send logs from your hosts and containers to Log Observer, follow these instructions: - -1. Log in to Splunk Observability Cloud. - -2. In the left navigation menu, select :menuselection:`Data Management`. - -3. Go to the :guilabel:`Available integrations` tab, or select :guilabel:`Add Integration` in the :guilabel:`Deployed integrations` tab. - -4. Select the tile for the platform you want to import logs from. You can select Windows, Kubernetes, or Linux. The guided setup for your platform appears. - -5. Follow the instructions in the guided setup then see :ref:`work-with-data`. - -After you see data coming into Log Observer from your data source, you can send logs from another data source or continue analyzing logs from the platform you have just set up. - -.. _cloud-providers: - -Collect logs from your cloud providers --------------------------------------------------- - -Amazon Web Services -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -To send logs from Amazon Web Services to Log Observer, follow these instructions: - -1. Log in to Splunk Observability Cloud. - -2. In the left navigation menu, select :menuselection:`Data Management`. - -3. Go to the :guilabel:`Available integrations` tab, or select :guilabel:`Add Integration` in the :guilabel:`Deployed integrations` tab. - -4. In the :guilabel:`Cloud Integrations` section, select the the Amazon Web Services tile. - -5. Follow the instructions in the guided setup then see :ref:`work-with-data`. 
- -For more information about setting up an AWS connection, see :ref:`aws-logs` and :ref:`aws-cloudformation`. - -Google Cloud Platform -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -To send logs from Google Cloud Platform to Log Observer, follow the instructions in :ref:`ingest-gcp-log-data` then see :ref:`work-with-data`. - -Microsoft Azure -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -To send logs from Microsoft Azure to Log Observer, follow the instructions in :ref:`ingest-azure-log-data` then see :ref:`work-with-data`. - -After you see data coming into Log Observer from your data source, you can send logs from another data source or continue analyzing logs from the cloud provider you have just set up. - -.. note:: - - If you already have existing Fluentd or Fluent Bit deployments, you can configure them to send logs to Log Observer. However, it is important to note that the following are true when using Fluentd or Fluent Bit: - - - Logs captured by your own Fluentd or Fluent Bit agents do not include the resource metadata that automatically links log data to other related sources available within APM and Infrastructure Monitoring. - - - Although there are multiple ways to send log data to Log Observer, Splunk only provides direct support for the Splunk distribution of OpenTelemetry Collector. - - If you still want to use Fluentd to send logs to Log Observer, see :ref:`Configure Fluentd to send logs `. - -.. _work-with-data: - -Filter and aggregate your data in Log Observer --------------------------------------------------- -After you have collected some logs, use filters and aggregation to efficiently navigate your logs in Log Observer. You can verify that Log Observer is correctly processing and indexing your logs by filtering and aggregating your log data. - -You can use the Log Observer interface to filter your logs based on keywords or fields. To filter your data, follow these steps: - -1. Select :strong:`Add Filter`. - -2. To find logs containing a keyword, select the :strong:`Keyword` tab and enter a keyword. - -3. To find logs containing a specific field, select the :strong:`Fields` tab and enter a field in :strong:`Find a field` then select it from the list. If helpful, you can enter a value for the specified field. - -4. To display only results that include the keywords, fields, or field values you entered, select the equal sign (=) next to the appropriate entry. To display only results that exclude the keywords, fields, or field values you entered, select the not equal sign (!=) next to the appropriate entry. - -The resulting logs appear in the Logs table. You can add more filters, enable and disable existing filters, and select individual logs to learn more. - -Perform aggregations on logs to visualize problems in a histogram that shows averages, sums, and other statistics related to logs. Aggregations group related data by one field and then perform statistical calculation on other fields. Find the aggregations controls in the control bar at the top of the Log Observer UI. The default aggregation shows all logs grouped by severity. - -See :ref:`logs-aggregations` to learn how to perform more aggregations. - -.. _severity-key: - -Ensure correct mapping of severity key --------------------------------------------------- -The severity key is a field that all logs contain. It has the values ``DEBUG``, ``ERROR``, ``INFO``, ``UNKNOWN``, and ``WARNING``. 
Because the ``severity`` field in many logs is called ``level``, Log Observer automatically remaps the log field ``level`` to ``severity``. - -If your logs call the ``severity`` key by a different name, that's okay. To ensure that Log Observer can read your field, transform your field name to ``severity`` using a Field Copy Processor. See :ref:`field-copy-processors` to learn how. - -.. _fluentd: - -Configure Fluentd to send logs --------------------------------------------------------------------------- -If you already have Fluentd running in your environment, you can reconfigure it -to send logs to an additional output. To send logs to Splunk Observability Cloud -in addition to your current system, follow these steps: - -1. Make sure that you have the HEC plugin for Fluentd installed. - - | :strong:`Option A` - | Install the plugin and rebuild the Fluentd using the instructions in :new-page:`fluent-plugin-splunk-hec `. - - | :strong:`Option B` - | Use an existing Fluentd docker image with HEC plugin included. To get this image, enter - | `docker pull splunk/fluentd-hec`. - - | To learn more, see :new-page:`Fluentd docker image with HEC plugin included `. - -2. Add HEC output. - Change your Fluentd configuration by adding another output section. The new HEC - output section points to Splunk's SignalFx Observability ingest endpoint. - - For example, if you have one output to elasticsearch, follow these steps: - - - Change ``type`` from ``@elasticsearch`` to ``@copy`` in the match section. - - Put ``elasticsearch`` into the ```` block. - - Add another ```` block for HEC output. - - - The following is a sample of output to ``@elasticsearch``: - - .. code-block:: bash - - - @type elasticsearch - ... - - ... - - - -3. Change the ``@elasticsearch`` output to the following: - - .. code-block:: - - - @type copy - - @type elasticsearch - ... - - ... - - - - @type splunk_hec - hec_host "ingest..signalfx.com" - hec_port 443 - hec_token "" - ... - - ... - - - - -4. In the new ```` section for splunk_hec, provide at least the following fields: - - - ``hec_host`` - Set the HEC ingest host (for example, ``ingest.us1.signalfx.com hec_port``) to 443. - - - ``hec_token`` - Provide the SignalFx access token. - -5. Specify the following parameters: - - - ``sourcetype_key`` or ``sourcetype`` - Defines source type of logs by using a particular log field or static value - - - ``source_key`` or ``source`` - Defines source of logs by using a particular log field or static value - -6. Set up a buffer configuration for HEC output. The following is an example using memory buffer: - - .. code-block:: - - - @type memory - chunk_limit_records 100000 - chunk_limit_size 200k - flush_interval 2s - flush_thread_count 1 - overflow_action block - retry_max_times 10 - total_limit_size 600m - - -For more details on buffer configuration, see :new-page:`About buffer `. - -See :new-page:`HEC exporter documentation ` to learn about other optional fields. diff --git a/logs/logviews.rst b/logs/logviews.rst index 7a538504c..6f769b7e2 100644 --- a/logs/logviews.rst +++ b/logs/logviews.rst @@ -7,7 +7,6 @@ Add logs data to Splunk Observability Cloud dashboards .. meta:: :description: Add logs data to Splunk Observability Cloud dashboards without turning your logs into metrics first. Align log views, log timeline charts, and metrics charts on one dashboard. -.. include:: /_includes/log-observer-transition.rst On a dashboard, metrics charts show what changed in your systems and when the problem started. 
Logs data on the same dashboard shows you in detail what is happening and why. All the data you add to a dashboard respond to the same time selection and other dashboard filters, allowing you to drill down to the source of the problem faster. diff --git a/logs/message-field.rst b/logs/message-field.rst index c8aa59cec..c0f810725 100644 --- a/logs/message-field.rst +++ b/logs/message-field.rst @@ -7,7 +7,6 @@ Display a field separately in the log details flyout .. meta:: :description: Display the message field from your logs in an easy-to-access flyout in each individual log record. -.. include:: /_includes/log-observer-transition.rst The log details flyout in Log Observer always displays the ``message`` field in a standalone section called :strong:`MESSAGE` at the top of the log details flyout. diff --git a/logs/metricization.rst b/logs/metricization.rst deleted file mode 100644 index 8106ce6be..000000000 --- a/logs/metricization.rst +++ /dev/null @@ -1,92 +0,0 @@ -.. _logs-metricization: - -***************************************************************************** -Create metrics from your logs with log metricization rules -***************************************************************************** - -.. meta:: - :description: Log metricization rules derive metrics from logs. Show an aggregate count of logs grouped by a dimension. Embed logs data in charts, dashboards, and detectors. - -.. include:: /_includes/log-observer-transition.rst - -Log metricization rules allow you to create a log-derived metric showing an aggregate count of logs grouped by the dimension of your choice. While Log Observer visual analysis allows you to dynamically view aggregate metrics in the context of your query, log metricization rules allow you to embed metrics from log data in charts, dashboards, and detectors. Log metricization rules enable you to see trends in your full logs data set without paying to index all of your logs data. - -.. note:: Only customers with a Splunk Log Observer entitlement in Splunk Observability Cloud can create log metricization rules. If you do not have a Log Observer entitlement and are using Splunk Log Observer Connect instead, see :ref:`logs-intro-logconnect` to learn what you can do with the Splunk Enterprise integration. - -Order of execution of logs pipeline rules -============================================================================= -Logs pipeline rules execute in the following order: - -1. Log processing rules - -2. Log metricization rules - -3. Infinite logging rules - - Log Observer indexes your logs data only after executing all pipeline management rules. When you metricize then archive a set of logs, metricized logs count against your ingest capacity but not against your indexing capacity. Like any other metric, a metric derived from log metricization rules counts toward your metrics quota per your contract. For more information, see :ref:`logs-pipeline-sequence`. - -Prerequisites -================================================================================ -To create log metricization rules, you must have an administrator or power user role in Splunk Observability Cloud. - - -Create log metricization rules -============================================================================= -There are two ways to create log metricization rules: - -* :ref:`Create a log metricization rule from the logs pipeline ` -* :ref:`Create a log metricization rule in the context of a Log Observer query ` - -.. 
_metricization-rule-from-pipeline: - -Create a log metricization rule from the logs pipeline --------------------------------------------------------------------------------- - -To create a new log metricization rule from scratch in the logs pipeline, follow these steps: - -1. From the navigation menu, go to :guilabel:`Data Configuration > Logs Pipeline Management`. - -2. Click :guilabel:`New Metricization Rule`. - -3. Define a matching condition. Only matching logs will be included in the chart resulting from your metricization rule. - -4. To configure a metric, perform a Log Observer aggregation query. Select a function, an aggregate, and a dimension for this query. You can choose from the following functions: :guilabel:`Count`, :guilabel:`AVG`, :guilabel:`MAX`, :guilabel:`MIN`, and :guilabel:`SUM`. The default function is :guilabel:`Count`. The default aggregate for Log Observer is :guilabel:`All(*)`, and the default dimension is :guilabel:`severity`. Log Observer Connect has no default aggregation. To change the dimension of the aggregation, select another dimension in the :guilabel:`Group by` field. To See :ref:`logs-aggregations` for a thorough explanation of aggregation queries. - -5. Next, select a target field by which you want to aggregate logs. For example, you can choose :guilabel:`services` as your target field, then group logs by :guilabel:`status`. Fields with "#", such as :guilabel:`amount`, require a numerical value to aggregate logs. - -6. Click :guilabel:`Next`. - -7. Review your metric time series (MTS) summary to see how your metricization could affect your subscription usage. You can optionally select an ingest token to limit the MTS count. - -8. Click :guilabel:`Next`. - -9. Give your metric a name. The name defaults to the function and target fields. - -10. You can optionally change the Metric Type to :guilabel:`Gauge`, :guilabel:`Counter`, or :guilabel:`Cumulative counter`. - -11. Give your rule a name and description. - -12. Review your configuration, then click :guilabel:`Save`. Your rule appears in the list of Metricization Rules on the Logs Pipeline Management page. Click the name of your rule to view a summary of the rule. To view the output of your rule, click :guilabel:`view your new metric in a chart`. This takes you to chart builder populated with your new metric. In less than 60 seconds, you will see metrics reported within the chart. - -13. While still in chart builder, click :guilabel:`Save As` to save your new metric as a chart. You can then embed it on a new or existing dashboard. - -.. _metricization-rule-from-Log-Observer-query: - -Create a log metricization rule in the context of a Log Observer query --------------------------------------------------------------------------------- - -Often, you might notice the potential value of an existing query and decide to create a log metricization rule based on that query. You can quickly launch the creation of a new metricization rule from a Log Observer query. - -To create a new log metricization rule in the context of an existing search query, follow these steps: - -1. In the navigation menu, go to :guilabel:`Log Observer`. - -2. Create a query that aggregates logs. See :ref:`logs-aggregations` to learn how. - -3. In the :guilabel:`Save` menu, select :guilabel:`Save as Metric`. This takes you to the Configure Metric page in Logs Pipeline Management. - -4. Go to step 3 in :ref:`Create a log metricization rule from the logs pipeline ` and complete the instructions. 
- -Log metricization rules limits --------------------------------------------------------------------------------- -An organization can create a total of 128 log metricization rules. \ No newline at end of file diff --git a/logs/open-logs-splunk.rst b/logs/open-logs-splunk.rst index 5b4196276..665403240 100644 --- a/logs/open-logs-splunk.rst +++ b/logs/open-logs-splunk.rst @@ -7,7 +7,6 @@ Open logs in Splunk platform .. meta:: :description: Open your logs in Splunk Cloud or Splunk Enterprise for additional SPL queries. -.. include:: /_includes/log-observer-transition.rst You can search Splunk Observability Cloud logs if your Splunk Observability Cloud instance ingests logs. If your organization has integrated its Splunk platform (Splunk Cloud Platform or Splunk Enterprise) instance with its Splunk Observability Cloud instance, you can search Splunk platform logs that your Splunk platform role has permissions to see in Splunk platform. You can also open the logs in Splunk platform for additional SPL querying. diff --git a/logs/pipeline.rst b/logs/pipeline.rst deleted file mode 100644 index 290ee02a4..000000000 --- a/logs/pipeline.rst +++ /dev/null @@ -1,42 +0,0 @@ -.. _logs-pipeline: - -***************************************************************** -Manage the logs pipeline -***************************************************************** - -.. meta:: - :description: Manage the logs pipeline with log processing rules, log metricization rules, and Infinite Logging rules. Customize your pipeline. - -.. include:: /_includes/log-observer-transition.rst - -Add value to your raw logs by customizing your pipeline. The pipeline is a set of rules that execute sequentially. - -Splunk Observability Cloud lets you create three types of pipeline rules: - -* :ref:`Log processing rules ` transform your data or a subset of your data as it arrives in Splunk Observability Cloud. -* :ref:`Log metricization rules ` let you create charts to see trends in your logs. -* :ref:`Infinite Logging rules ` archive unindexed logs in Amazon S3 buckets for potential future use. - -.. note:: Only customers with a Splunk Log Observer entitlement in Splunk Observability Cloud can manage the Log Observer pipeline. If you do not have a Splunk Log Observer entitlement and are using Splunk Log Observer Connect instead, see :ref:`logs-intro-logconnect` to learn what you can do with the Splunk platform integration. - -.. _logs-pipeline-sequence: - -Sequence of logs pipeline rules -============================================================================= -Logs pipeline rules execute in the following order: - -1. All log processing rules (field extraction, field copy, and field redaction processors) - -2. All log metricization rules - -3. All Infinite Logging rules - -Adjust the order of custom rules by dragging and dropping their placement within their rule category. Log Observer indexes logs only after all three types of pipeline rules are executed. Any logs that you archive through Infinite Logging rules do not count toward your indexing capacity. - -Because log processing rules execute first, you can create field extraction rules, then use the resulting fields in log metricization rules or Infinite Logging rules or both. For example, say you want to archive and not index all logs that contain the values ‘START', ‘RETRY', ‘FAIL', and ‘SUCCESS' in the :guilabel:`message` field, which also contains other information. Without any processing, you might need to create a rule with a keyword search for each value. 
Instead, you can use field extraction to make Infinite Logging rules easier and more manageable. First, create a log processing rule to extract a new field called :guilabel:`status` from the portion of the field message that contains the desired values. Then, create an Infinite Logging rule that filters on :guilabel:`status` to include logs with the values , ‘START', ‘RETRY', ‘FAIL', or ‘REPORT'. - -Because Infinite Logging rules are last in the pipeline, log-derived metrics are based on 100% of ingested logs, not on a sample of logs. Thus, your organization can make use of full and accurate log-derived metrics without needing to index all the logs that you metricize. For example, say you want to create a metric to count the occurrences of “puppies” or “kittens” in the message field, but you also want to archive the logs containing those occurrences without indexing. First, create a log processing rule to extract a new field called pet from the portion of the message field that contains the desired values. Then, create a metricization rule that records the count of all log messages, grouped by pet. You can now graph or alert on the count of each pet from logs in Splunk Observability Cloud dashboards and detectors. If you don't want to see the log messages in Log Observer, create an Infinite Logging rule that archives without indexing all log messages that contain the field pet. Now you have real-time visibility into logging trends without using index capacity. - -Logs pipeline rules limits -================================================================================ -An organization can create a total of 128 log processing rules, which includes the combined sum of field extraction rules, field copy rules, and field redaction rules. In addition, an organization can create 128 log metricization rules and 128 infinite logging rules. \ No newline at end of file diff --git a/logs/processors.rst b/logs/processors.rst deleted file mode 100644 index c7d64f892..000000000 --- a/logs/processors.rst +++ /dev/null @@ -1,271 +0,0 @@ -.. _logs-processors: - -***************************************************************** -Transform your data with log processing rules -***************************************************************** - -.. meta:: - :description: Manage the logs pipeline with log processing rules, log metricization rules, and Infinite Logging rules. Customize your logs pipeline. - -.. include:: /_includes/log-observer-transition.rst - -Add value to your raw logs by creating log processing rules, also known as processors, to transform your data or a subset of your data as it arrives. To add more control to processors, you can add filters that determine which logs a processor will be applied to. - -Only customers with a Splunk Log Observer entitlement in Splunk Observability Cloud can create or manage log processing rules using the Splunk Log Observer pipeline. Those customers must transition to Log Observer Connect. - -After the transition to Log Observer Connect -============================================================================= -When you transition to Log Observer Connect, log processing rule functionality changes. At transition, you can continue using existing log processing rules. You can turn your existing log processing rules off and on. However, you cannot create new log processing rules or edit existing rules. - -Going forward after the transition to Log Observer Connect, you can process data in the Splunk platform using the following methods: - -.. 
list-table:: - :header-rows: 1 - :widths: 30, 40 - - * - :strong:`Processing method` - - :strong:`Documentation` - - * - Field extractions - - See :new-page:`Build field extractions with the field extractor ` - - * - Ingest actions - - See :new-page:`Use ingest actions to improve the data input process ` - - * - .conf configuration - - See :new-page:`Overview of event processing `. - - * - Edge Processor - - See :new-page:`About the Edge Processor solution ` - - * - Data Stream Processor - - See :new-page:`Use the Data Stream Processor `. - - * - Ingest Processor - - See :new-page:`About Ingest Processor `. - - -Prepackaged processing rules -============================================================================= - -Prepackaged processing rules appear at the beginning of the list of processing rules, and have a lock icon. These prepackaged processing rules always execute before any processing rules you define. You can't modify or reorder prepackaged processing rules. - -One example of a prepackaged processing rule is the ``Level`` to ``severity`` attributed remapper. - -Splunk Observability Cloud includes prepackaged processing rule for Kubernetes and Cassandra. - -Splunk Observability Cloud provides three types of log processing rules: - -* :ref:`Field extraction processors ` create a subset of log data by extracting fields and values. -* :ref:`Field copy processors ` create a set of log data by moving field values from one field - in the log record to a different field name in a new record. -* :ref:`Field redaction processors ` redact data to mask personally identifiable information. - - -Order of execution of logs pipeline rules -============================================================================= -Logs pipeline rules execute in the following order: - -1. All log processing rules (field extraction, field copy, and field redaction processors) - -2. All log metricization rules - -3. All infinite logging rules - -Because log processing rules execute first, you can create field extraction rules, then use the resulting fields in log metricization rules or infinite logging rules or both. For more information, see :ref:`logs-pipeline-sequence`. - - -.. _field-extraction-processors: - -Field extraction processors -================================================================================ -Field extraction lets you find an existing field in your incoming logs and -create a processor based on the format of the field's value. - -Field extraction helps you do the following tasks: - -* Filter logs based on the extracted fields. To learn more about filtering, see :ref:`logs-keyword`. -* Aggregate on extracted fields. To learn more, see :ref:`logs-aggregations`. - -Consider the following raw log record - -`10.4.93.105 - - [04/Feb/2021:16:57:05 +0000] "GET /metrics HTTP/1.1" 200 73810 "-" "Go-http-client/1.1" 23` - -If you have not defined any processors in your logs pipeline, you can only do a keyword search on the sample log, -which searches the ``_raw`` field. The following table shows how you can extract fields to define processing rules: - -.. list-table:: - :header-rows: 1 - :widths: 50 50 - - * - :strong:`Example of value to extract` - - :strong:`Processor definition to use` - - * - IP address (10.4.93.105) - - IP - - * - 04/Feb/2021:16:57:05 +0000 - - time - - * - GET - - method - - * - /metrics - - path - -Creating Regex and event time field extractions allows you to filter and aggregate on the fields: -IP, time, method, and path. 
This enables you to create the query "Display a Visual Analysis of the number of -requests from {IP} broken down by {method}". - -Additionally, the extracted fields begin appearing in the fields summary panel along with their -top values and other statistics. - -There are three types of field extraction. These are: - -* Regex processors -* JSON processors -* Event time processors -* KV parser processors - -To start creating a field extraction, follow these steps: - -#. From the navigation menu, go to :guilabel:`Data Configuration > Logs Pipeline Management`. - A list of existing processors is displayed with the prepackaged processors displaying first. - -#. Click :guilabel:`New Processing Rule`. - - Alternatively, you can launch the processor wizard from Log Observer. - To do this, click into a log in the Logs table. The :guilabel:`Log Details` panel - appears on the right. Click a field value then select :menuselection:`Extract field`. - This takes you to :guilabel:`Define Processor`, the second step of the processor wizard. - Skip to step 7. - -#. Select :menuselection:`Field Extraction` as the processor type, then click :guilabel:`Continue`. - This takes you to :menuselection:`Select sample`, the first step in the processor wizard. - -#. To narrow your search for a log that contains the field you want to extract, you can select a time from the time picker or click :guilabel:`Add Filter` and add keywords or fields. - -#. Click the log containing the field you want. A list of fields and values - appears below the log line. - -#. Click :guilabel:`Use as sample` next to the field you want to extract, then click :guilabel:`Next`. - This takes you to :guilabel:`Define Processor`, the second step of the processor wizard. - -#. Select the extraction processor type that you want to use. - -#. From here, follow the steps to create the extraction processor type you selected: - - * :ref:`Regex processor ` - * :ref:`JSON processor ` - * :ref:`Event time processor ` - * :ref:`KV parser processor` - -.. _regex-processor: - -Create a Regex processor --------------------------------------------------------------------------------- -The regular expression workspace lets you to extract fields from your data -and then create a new processor using regex. Pipeline Management makes -suggestions to help you write the appropriate regex for your processor. -You can modify the regex within the processor wizard. - -To create a regex processor, follow these steps: - -#. Highlight the value of the field you want to extract in your sample and select :menuselection:`Extract field` from the drop-down menu. -#. Click into the field name box and enter a name for the field you selected. The default name is ``Field1``. Results display in a table. -#. Click `Edit regex` below the field name box if you want to modify the regex that the processor has automatically generated to create this rule based on your field name and value. -#. Preview your rule in the table to ensure that the correct fields are extracted. -#. To apply your new rule to only a subset of incoming logs, add filters to the content control bar. - The new rule will apply only to logs matching this filter. -#. In step 3 of the processor wizard entitled :guilabel:`Name, Save, and Review`, give your new rule a name and description. -#. Review your configuration choices, then click :guilabel:`Save`. Your processor defaults to :guilabel:`Active` and immediately begins processing incoming logs. -#. 
To see your new processor, go to :guilabel:`Data Configuration > Logs Pipeline Management`, expand the :guilabel:`Processing Rules` section, and find it in the list. You can reorder, edit, or delete all processors except those that are prepackaged (shown with a lock). To disable your processor, click :guilabel:`Inactive`. - -.. _json-processor: - -Create a JSON processor --------------------------------------------------------------------------------- -To create a JSON processor, follow these steps: - -#. To apply your new rule to only a subset of incoming logs, click :guilabel:`Add Filter` and add a keyword or field. The new rule will apply only to logs matching this filter. Pipeline Management only applies the new processor to log events that match this filter. -#. Preview your rule to ensure that Pipeline Management is extracting the correct field values. -#. If you see the correct field values in the results table, click :guilabel:`Next`. Otherwise, adjust your filter. -#. Add a name and description for your new rule, then click :guilabel:`Save`. Your processor defaults to :guilabel:`Active` and immediately begins processing incoming logs. -#. To see your new processor, go to :guilabel:`Data Configuration > Logs Pipeline Management`, expand the :guilabel:`Processing Rules` section, and find it in the list. You can reorder, edit, or delete all processors except those that are prepackaged (shown with a lock). To disable your processor, click :guilabel:`Inactive`. - -.. _event-time-processor: - -Create an event time processor --------------------------------------------------------------------------------- -To create an event time processor, follow these steps: - -#. Select a time format from the drop-down list. The wizard looks for the selected format within your sample. -#. From the matches you see, select the time when the sample event occurred, then click :guilabel:`Next`. -#. Add filters to the content control bar to define a matching condition, then click :guilabel:`Next`. - Pipeline Management only applies the new processor to log events that match this filter. -#. Give your new rule a name and description. -#. Review your configuration choices, then click :guilabel:`Save`. Your processor defaults to :guilabel:`Active` and immediately begins processing incoming logs. -#. To see your new processor, go to :guilabel:`Data Configuration > Logs Pipeline Management`, expand the :guilabel:`Processing Rules` section, and find it in the list. You can reorder, edit, or delete all processors except those that are prepackaged (shown with a lock). To disable your processor, click :guilabel:`Inactive`. - -.. _kv-processor: - -Create a KV parser processor --------------------------------------------------------------------------------- -A KV parser processor is a rule that parses key-value (KV) pairs. To create a KV parser processor, follow these steps: - -#. To apply your new rule to only a subset of incoming logs, click :guilabel:`Add Filter` then add a keyword or field. The new rule will apply only to logs matching this filter. -#. Preview your rule to ensure that Pipeline Management is extracting the correct field values. -#. If you see the correct field values in the results table, click :guilabel:`Next`. Otherwise, adjust your filter. -#. Add a name and description for your new rule, then click :guilabel:`Save`. Your processor defaults to :guilabel:`Active` and immediately begins processing incoming logs. -#. 
To see your new processor, go to :guilabel:`Data Configuration > Logs Pipeline Management`, expand the :guilabel:`Processing Rules` section, and find it in the list. You can reorder, edit, or delete all processors except those that are prepackaged (shown with a lock). To disable your processor, click :guilabel:`Inactive`. - - -.. _field-copy-processors: - -Field copy processors -================================================================================ -Field copy processors let you define a new relationship between new or existing fields. One way to use Field Copy Processors is to use OpenTelemetry mappings to help power your :ref:`Related Content ` suggestions. - -To create a field copy processor, follow these steps: - -#. From the navigation menu, go to :menuselection:`Data Configuration > Logs Pipeline Management`. -#. Click :guilabel:`New Processing Rule`. -#. Select :menuselection:`Field Copy`, then click :guilabel:`Continue`. -#. Enter a target field in the first text box. - You can choose from available extracted fields in the drop-down list. -#. In the second text box, choose a field to which you want to map your target field. - The drop-down list options suggest OpenTelemetry mappings, - which help power your Related Content suggestions. -#. If you want to create multiple mappings, click :guilabel:`+ Add another field copying rule` and repeat steps 4 and 5; otherwise, click :guilabel:`Next`. -#. To apply your new rule to only a subset of incoming logs, add filters to the content control bar. - The new rule is applied only to logs matching this filter. If you do not add a filter, - the rule is applied to all incoming log events. -#. Preview your rule to ensure that Pipeline Management is extracting the correct field values, then click :guilabel:`Next`. -#. Give your new rule a name and description, then click :guilabel:`Save`. Your processor defaults to :guilabel:`Active` and immediately begins processing incoming logs. -#. To see your new processor, go to :guilabel:`Data Configuration > Logs Pipeline Management`, expand the :guilabel:`Processing Rules` section, and find it in the list. You can reorder, edit, or delete all processors except those that are prepackaged (shown with a lock). To disable your processor, click :guilabel:`Inactive`. - -.. _field-redaction-processors: - -Field redaction processors -================================================================================ -Field redaction lets you mask data, including personally identifiable information. - -To create a field redaction processor, follow these steps: - -#. From the navigation menu, go to :menuselection:`Data Configuration > Logs Pipeline Management`. -#. Click :guilabel:`New Processing Rule`. -#. Select :menuselection:`Field Redaction`, then click :guilabel:`Continue`. This takes you to the first step in the processor wizard, Select :guilabel:`Sample`. -#. To find a log that contains the field you want to redact, add filters to the content control bar until the Logs table displays a log with the desired field. -#. Click the log containing the field you want. A list of fields and values appears below the log line. -#. Click :guilabel:`Use as sample` next to the field you want to redact, then click :guilabel:`Next`. This takes you to :guilabel:`Define Processor`, the second step of the processor wizard. -#. Select if you want to redact an entire field value or a partial field value. If you want to redact a partial field value, highlight the portion you want to redact. You can edit the regex here. 
-#. Define a matching condition. To apply your new rule to only a subset of incoming logs, add filters to the content control bar. The new rule will apply only to logs matching this filter. -#. Give your new rule a name and description. -#. Review your configuration choices, then click :guilabel:`Save`. Your processor defaults to :guilabel:`Active` and immediately begins processing incoming logs. -#. To see your new processor, go to :guilabel:`Data Configuration > Logs Pipeline Management`, expand the :guilabel:`Processing Rules` section, and find it in the list. You can reorder, edit, or delete all processors except those that are prepackaged (shown with a lock). To disable your processor, click :guilabel:`Inactive`. - -.. note:: If the field you redacted also appears in ``_raw``, it is still available in ``_raw``. Redact the field in ``_raw`` in addition to redacting the field itself. - -Log processing rules limits -================================================================================ -An organization can create a total of 128 log processing rules. The 128 rule limit includes the combined sum of field extraction processors, field copy processors, and field redaction processors. \ No newline at end of file diff --git a/logs/queries.rst b/logs/queries.rst index 3e7165edd..4e7701d76 100644 --- a/logs/queries.rst +++ b/logs/queries.rst @@ -1,13 +1,12 @@ .. _logs-queries: ***************************************************************************** -Query logs in Log Observer +Query logs in Log Observer Connect ***************************************************************************** .. meta:: :description: Overview of the various ways you can query logs in Log Observer. Browse, search by keyword, filter, extract fields, or aggregate logs. -.. include:: /_includes/log-observer-transition.rst You can search Splunk Observability Cloud logs if your Splunk Observability Cloud instance ingests logs. Many Splunk platform (Splunk Cloud Platform and Splunk Enterprise) users can access their Splunk platform logs in Splunk Observability Cloud because their organization has integrated its Splunk platform and Splunk Observability Cloud instances. If you are using the integration, you can only access Splunk platform logs in Splunk Observability Cloud if your Splunk platform role has permissions to see that log's index in Splunk platform. Your Splunk platform admin controls your permissions to see Splunk platform logs in Splunk Observability Cloud. @@ -17,7 +16,7 @@ Click any of the following documents to learn more about each way you can explor * :ref:`logs-keyword` -* :ref:`logs-individual-log` +* :ref:`logs-individual-log-connect` * :ref:`logs-aggregations` diff --git a/logs/raw-logs-display.rst b/logs/raw-logs-display.rst index a217a7e5f..7406c31b8 100644 --- a/logs/raw-logs-display.rst +++ b/logs/raw-logs-display.rst @@ -7,7 +7,6 @@ Browse logs in the logs table .. meta:: :description: Browse logs in the logs table as they come into Log Observer or Log Observer Connect. Customize the logs table display by field. See a count of new log events. -.. include:: /_includes/log-observer-transition.rst At the center of the Log Observer display is the logs table, which displays log records as they come in. 
The most recent logs appear at the diff --git a/logs/scp.rst b/logs/scp.rst index 1ecf072b3..9d5df862c 100644 --- a/logs/scp.rst +++ b/logs/scp.rst @@ -9,7 +9,7 @@ Set up Log Observer Connect for Splunk Cloud Platform Set up Log Observer Connect by integrating Log Observer with Splunk Cloud Platform. If you are in a Splunk Enterprise environment and want to set up Log Observer Connect, see :ref:`logs-set-up-logconnect`. -When you set up Log Observer Connect, your logs data remains in your Splunk Cloud Platform instance and is accessible only to Log Observer Connect. Log Observer Connect does not store or index your logs data. There is no additional charge for Log Observer Connect. +When you set up Log Observer Connect, your logs remain in your Splunk Cloud Platform instance and are accessible only to Log Observer Connect. Log Observer Connect does not store or index your logs data. There is no additional charge for Log Observer Connect. .. note:: You can collect data using both the Splunk Distribution of the OpenTelemetry Collector and the Universal Forwarder without submitting any duplicated telemetry data. See :ref:`collector-with-the-uf` to learn how. diff --git a/logs/search-time-rules.rst b/logs/search-time-rules.rst deleted file mode 100644 index 753d96ce2..000000000 --- a/logs/search-time-rules.rst +++ /dev/null @@ -1,97 +0,0 @@ -.. _logs-search-time-rules: - -***************************************************************** -Apply processing rules across historical data -***************************************************************** - -.. meta:: - :description: Transform your data with a log processing rule, then apply the rule to logs that came in before the rule existed. Learn about search-time vs. index-time rules. - -.. include:: /_includes/log-observer-transition.rst - -Only customers with a Splunk Log Observer entitlement in Splunk Observability Cloud can apply processing rules across historical data using search-time rules. Those customers must transition to Log observer Connect. - -After the transition to Log Observer Connect -============================================================================= -You cannot use search-time processing rules in the Log Observer Connect UI. - -Going forward, you can utilize the following methods for processing data at search time in the Splunk platform: - -.. list-table:: - :header-rows: 1 - :widths: 30, 40 - - * - :strong:`Search-time processing method` - - :strong:`Documentation` - - * - Field extractor - - See :new-page:`Build field extractions with the field extractor ` - - * - Field aliases - - See :new-page:`Create field aliases in Splunk Web ` - - -What are search-time rules? -============================================================================= - -Search-time rules are the application of log processing rules across historical data. Log processing rules can occur at index time or at search time. Index-time rules can only be applied to data that streams in after the index-time rule was created. To learn more about index-time rules, see :ref:`logs-processors`. It can be helpful to apply an index-time rule to data that streamed in before the index-time rule existed. To do so, create a search-time rule. - -The following table compares search-time rules and index-time rules. - -.. 
list-table:: - :header-rows: 1 - :widths: 50 50 - - * - :strong:`Search-time rule` - - :strong:`Index-time rule` - - * - Transforms your data or a subset of your data - - Transforms your data or a subset of your data - - * - Apply to data from any time period - - Apply only to data that streamed in after the rule was created and activated - - * - Is part of a query - - Is part of the logs pipeline - - * - Activate or deactivate in :guilabel:`Saved Queries` or :guilabel:`Active search-time rules` in Log Observer - - Activate or deactivate in :guilabel:`Data Configuration > Logs Pipeline Management` - - -Do not activate search-time rules except when you are intentionally applying index-time rules to historical data. Applying search-time rules does not impact the subscription usage, but does impact performance. Search-time rules are transformations that increase the time it takes to complete a search. Applying index-time rules can impact index subscription usage, but does not impact performance. - - -Use case for applying search-time rules -============================================================================= - -You can apply search-time rules when you discover a problem after the fact. For example, suppose an error occurred between 2 am and 5 am last night and no one was on duty to track down the cause. This morning at 9 am, you discover the error occurred and try to figure out what went wrong. You create field extractions to define a few fields to make filtering easier. The new fields, which were created with index-time rules, can only be applied to logs that stream in after you created the fields at 9 am. To apply your newly created fields to logs that streamed in between 2 am and 5 am, create a search-time rule based on the index-time rule you created at 9 am, then activate it as a search-time rule and apply it to logs that came in between 2 am and 5 am. - - -Create and activate a search-time rule -============================================================================= - -To create a search-time rule, follow these steps: - -1. Create an index-time rule from an individual log or in Logs Pipeline Management. See the :guilabel:`Field extraction processors` section of :ref:`logs-processors` to learn how. :guilabel:`Note`: You can apply only Regex processing rules at search time. -2. Click :guilabel:`Active Search-time rules` in Log Observer. A :guilabel:`Search-time rules` panel appears. -3. On the :guilabel:`Search-time rules` panel, click the :guilabel:`Index-time rules` tab. -4. Find and select your index-time rule in the list to activate it at search time, then click :guilabel:`Apply 1 rule at search time`. -5. Click the :guilabel:`Search-time rules` tab. -6. Drag the active search-time rules to obtain the order in which you want to apply the rules. -7. Adjust the time in the Log Observer time picker to apply the rule to the historical data you want. - - -Deactivate a search-time rule -============================================================================= - -To deactivate a search-time rule, follow these steps: - -1. In Log Observer, click :guilabel:`Active search-time rules`. -2. On the :guilabel:`Search-time rules` panel, click the :guilabel:`Active search-time rules` tab. -3. Find and select the rule you want to deactivate, then click :guilabel:`Deactivate 1 rule`. - - -Save a search-time rule --------------------------------------------------------------------------------- - -When you create a search-time rule, it automatically becomes part of the current query. 
To save the rule, save the query. See :ref:`logs-save-share` to learn how. diff --git a/logs/timeline.rst b/logs/timeline.rst index 5f0d55415..300423824 100644 --- a/logs/timeline.rst +++ b/logs/timeline.rst @@ -7,7 +7,6 @@ View overall system health using the timeline .. meta:: :description: The Log Observer timeline displays a histogram chart of logged events over time, grouped by values of the “message” field. See the spread of error severity levels. -.. include:: /_includes/log-observer-transition.rst The Log Observer timeline displays a histogram of logged events over time, grouped by values of the message field ``severity``. Note that Log Observer Connect has no default aggregation. You can change Log Observer's default aggregation by changing the value in the :strong:`Group by` field. To learn more, see :new-page-ref:`logs-aggregations`. @@ -23,9 +22,7 @@ These features help you use the Timeline to review the health of your systems: To adjust the duration of each histogram bucket, use the time picker. - * The Live Tail option doesn't display a histogram. Use filtering or keyword highlighting to - review incoming log records. To learn more, see :new-page-ref:`logs-live-tail`. - * Other options display histograms over a previous time period. Log Observer calculates the time intervals for each + * Histograms display over a previous time period. Log Observer calculates the time intervals for each histogram bucket. The duration of each interval appears in the control bar. * To display a histogram for a specific time period, use the :menuselection:`Custom Time` option. * By default, the time period for the histogram is :menuselection:`Last 5 minutes`, which displays buckets for diff --git a/logs/timestamp.rst b/logs/timestamp.rst index e9dd8ad8e..1d90aa59a 100644 --- a/logs/timestamp.rst +++ b/logs/timestamp.rst @@ -7,15 +7,14 @@ Where does a log's logical time come from? .. meta:: :description: Log Observer determines a log's time and assigns it to _time. Time comes from event time processor, HEC protocol timestamp, or entrance into Splunk Observability Cloud. -.. include:: /_includes/log-observer-transition.rst A log's logical time can come from different places, depending on what data is available for the log. Your logs may have fields, such as ``timestamp`` or ``Time``, that sound like the log's logical time. However, Log Observer determines the log's logical time and assigns it to the field, ``_time``. If your logs already contain the field ``_time``, Log Observer overwrites it. -Log Observer applies the following three rules, in priority order, to determine each log's logical time: +Log Observer applies the following two rules, in priority order, to determine each log's logical time: -1. The time matched and parsed by any rule you created using an event time processor, a log processing rule (See :ref:`event-time-processor` for more information.) -2. The timestamp sent as part of the HTTP Event Collector (HEC) protocol as the event time -3. The time when the log event hits Splunk Observability Cloud +* The timestamp sent as part of the HTTP Event Collector (HEC) protocol as the event time + +* The time when the log event hits Splunk Observability Cloud First, Log Observer checks for a matching event time processor, rule 1 in the preceding list. If there is a match, it is used as the logical time. Log Observer prioritizes an event time processor rule first because it was a rule you created to determine your logs' logical time. 
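
For example, the HTTP Event Collector (HEC) protocol carries the event time in the ``time`` field of the event payload, expressed as Unix epoch seconds. The following is a minimal sketch of such a request, not an excerpt from the product documentation: the host, token, and event values are placeholders. When an event arrives with a HEC ``time`` value like this, Log Observer uses that value as ``_time``; when no HEC timestamp is present, Log Observer uses the time the event reaches Splunk Observability Cloud.

.. code-block:: bash

   # Send one event to a Splunk HEC endpoint with an explicit event time.
   # <HEC_HOST> and <HEC_TOKEN> are placeholders for your HEC host and token.
   curl -k "https://<HEC_HOST>:8088/services/collector/event" \
     -H "Authorization: Splunk <HEC_TOKEN>" \
     -d '{
           "time": 1727784000,
           "sourcetype": "my_app",
           "event": "Payment failed for order 1234",
           "fields": {"severity": "ERROR"}
         }'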
diff --git a/metrics-and-metadata/relatedcontent.rst b/metrics-and-metadata/relatedcontent.rst index 248f93a1b..039fd03d9 100644 --- a/metrics-and-metadata/relatedcontent.rst +++ b/metrics-and-metadata/relatedcontent.rst @@ -201,18 +201,12 @@ The following table describes the four methods for remapping log fields: * - :strong:`Remapping Method` - :strong:`Instructions` - * - Splunk Observability Cloud Logs Pipeline Management - - Create and apply a field copy processor. See the :strong:`Field copy processors` section in :ref:`logs-processors` to learn how. - Note: Only customers with a Splunk Log Observer entitlement in Splunk Observability Cloud can use this method. If you are using Log Observer Connect, use one of other methods in this table. - * - Log Field Aliasing - Create and activate a field alias. See :ref:`logs-alias` to learn how. Learn when to use Log Field Aliasing in the next section. * - Client-side - Configure your app to remap the necessary fields. - * - Collector-side - - Use a Fluentd or FluentBit configuration. See :ref:`Configure Fluentd to send logs ` to learn how. When to use Log Field Aliasing ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ diff --git a/references/glossary.rst b/references/glossary.rst index 4ee2d9855..a5b641e27 100644 --- a/references/glossary.rst +++ b/references/glossary.rst @@ -23,6 +23,12 @@ A analytics Analytics are the mathematical functions that can be applied to a collection of data points. For a full list of analytics that can be applied in Splunk Infrastructure Monitoring, see the :ref:`analytics-ref`. + automatic discovery + Automatic discovery is a feature of the Splunk Distribution of the OpenTelemetry Collector that identifies the services, such as third-party databases and web servers, running in your environment and sends telemetry data from them to Splunk Application Performance Monitoring (APM) and Infrastructure Monitoring. The Collector configures service-specific receivers that collect data from an endpoint exposed on each service. For more information, see :ref:`discovery_mode`. + + automatic instrumentation + Automatic instrumentation allows you to instrument your applications and export telemetry data without having to modify the application source files. The language-specific instrumentation agent configures the source application to export data in a supported format to an OTLP endpoint, on either an OTLP receiver or the Splunk Observability Cloud back end. Automatic instrumentation is available for applications written in Java, Node.js, .NET, Go, Python, Ruby, and PHP and automatically collects telemetry data for code written using supported libraries in each language. For more information, see :ref:`get-started-application`. + C == diff --git a/release-notes/2024-10-01-rn.rst b/release-notes/2024-10-01-rn.rst new file mode 100644 index 000000000..d502ccdf2 --- /dev/null +++ b/release-notes/2024-10-01-rn.rst @@ -0,0 +1,69 @@ +.. _2024-10-01-rn: + +*************** +October 1, 2024 +*************** + +Splunk Observability Cloud released the following new features and enhancements on October 1, 2024. This is not an exhaustive list of changes in the observability ecosystem. For a detailed breakdown of changes in versioned components, see the :ref:`list of changelogs `. + +.. _loc-2024-10-01: + +Log Observer Connect +==================== + +.. 
list-table:: + :header-rows: 1 + :widths: 1 2 + :width: 100% + + * - New feature or enhancement + - Description + * - Splunk virtual compute (SVC) optimization + - You can optimize SVC, resulting in performance improvements and cost savings, by using new :guilabel:`Play`, :guilabel:`Pause`, and :guilabel:`Run` search buttons in the UI. The default limit is 150,000 logs. For more information, see :ref:`logs-keyword`. + +.. _ingest-2024-20-01: + +Data ingest +=========== + +.. list-table:: + :header-rows: 1 + :widths: 1 2 + :width: 100% + + * - New feature or enhancement + - Description + * - Kubernetes control plane metrics + - In a continued effort to replace Smart Agent monitors with OpenTelemetry Collector receivers, a collection of Kubernetes control plane metrics are available using OpenTelemetry Prometheus receivers that target Prometheus endpoints. For more information see :ref:`kubernetes-control-plane-prometheus`. + +.. _data-mngt-2024-10-01: + +Data management +=============== + +.. list-table:: + :header-rows: 1 + :widths: 1 2 + :width: 100% + + * - New feature or enhancement + - Description + * - Data retention for archived metrics extended from 8 to 31 days + - To facilitate long-term data and historical trend analysis, you can store archived metrics for up to 31 days. You can also customize your restoration time window when creating exception rules. + * - Terraform implementation + - You can use Terraform to archive metrics and create exception rules, such as routing a subset of metrics to the real-time tier rather than the archival tier. + +.. _slo-2024-10-01: + +Service level objective (SLO) +============================= + +.. list-table:: + :header-rows: 1 + :widths: 1 2 + :width: 100% + + * - New feature or enhancement + - Description + * - SignalFlow editor for custom metrics SLO + - You can use SignalFlow to define metrics and filters when creating a custom metric SLO. For more information, see :ref:`create-slo`. The feature released on October 2, 2024. \ No newline at end of file diff --git a/release-notes/release-notes-overview.rst b/release-notes/release-notes-overview.rst new file mode 100644 index 000000000..954d85540 --- /dev/null +++ b/release-notes/release-notes-overview.rst @@ -0,0 +1,74 @@ +.. _release-notes-overview: + +********************** +Release notes overview +********************** + +.. meta:: + :description: The Splunk Observability Cloud release notes overview page, which lists all the products and components that have release notes. + +.. toctree:: + :hidden: + + 2024-10-01-rn + +Keep up to date with the latest new features and enhancements to Splunk Observability Cloud products and components. Splunk Observability Cloud comprises both SaaS products which release on a rolling basis and downloadable versioned components. Presented here are new feature and enhancement announcements for both SaaS and versioned offerings as well as links to detailed changelogs for versioned components. + +.. raw:: html + +

      <h2>What's new</h2>

+ +Each release date includes new features and enhancements for SaaS and versioned products and components. + +.. list-table:: + :widths: 1 2 + :width: 100% + :header-rows: 1 + + * - Release + - Changes by product or component + * - :ref:`October 1, 2024 <2024-10-01-rn>` + - * :ref:`Log Observer Connect ` + * :ref:`Data ingest ` + * :ref:`Data management ` + * :ref:`Service level objective ` + +.. _changelogs: + +.. raw:: html + +

      <h2>Changelogs</h2>

+ +For a detailed breakdown of changes in versioned components, see the following table: + +.. list-table:: + :widths: 1 2 + :width: 100% + :header-rows: 1 + + * - Component + - Changelog + * - Splunk OpenTelemetry Collector + - :new-page:`https://github.com/signalfx/splunk-otel-collector/blob/main/CHANGELOG.md` + * - Splunk OpenTelemetry Collector for Kubernetes + - :new-page:`https://github.com/signalfx/splunk-otel-collector-chart/blob/main/CHANGELOG.md` + * - Splunk OpenTelemetry Java instrumentation + - :new-page:`https://github.com/signalfx/splunk-otel-java/blob/main/CHANGELOG.md` + * - Splunk OpenTelemetry .NET instrumentation + - :new-page:`https://github.com/signalfx/splunk-otel-dotnet/blob/main/CHANGELOG.md` + * - Splunk OpenTelemetry Python instrumentation + - :new-page:`https://github.com/signalfx/splunk-otel-python/blob/main/CHANGELOG.md` + * - Splunk OpenTelemetry Node.JS instrumentation + - :new-page:`https://github.com/signalfx/splunk-otel-js/blob/main/CHANGELOG.md` + * - Splunk OpenTelemetry Go instrumentation + - :new-page:`https://github.com/signalfx/splunk-otel-go/blob/main/CHANGELOG.md` + * - Splunk OpenTelemetry Android instrumentation + - :new-page:`https://github.com/signalfx/splunk-otel-android/blob/main/CHANGELOG.md` + * - Splunk OpenTelemetry Lambda instrumentation + - :new-page:`https://github.com/signalfx/splunk-otel-lambda/blob/main/CHANGELOG.md` + * - Splunk OpenTelemetry JavaScript for Web instrumentation + - :new-page:`https://github.com/signalfx/splunk-otel-js-web/blob/main/CHANGELOG.md` + * - Splunk OpenTelemetry iOS instrumentation + - :new-page:`https://github.com/signalfx/splunk-otel-ios/blob/main/CHANGELOG.md` + * - Splunk OpenTelemetry React Native instrumentation + - :new-page:`https://github.com/signalfx/splunk-otel-react-native/blob/main/CHANGELOG.md` \ No newline at end of file diff --git a/rum/intro-to-rum.rst b/rum/intro-to-rum.rst index b04b15a8f..401538cdf 100644 --- a/rum/intro-to-rum.rst +++ b/rum/intro-to-rum.rst @@ -20,6 +20,7 @@ With Splunk Real User Monitoring (RUM), you can gain insight about the performan * - Splunk RUM for Mobile - Splunk Real User Monitoring (RUM) for Mobile provides visibility into every user session of your native iOS and Android mobile applications by equipping you with comprehensive performance monitoring, directed troubleshooting, and full-stack observability. +To keep up to date with changes in RUM, see the Splunk Observability Cloud :ref:`release notes `. .. _wcidw-rum: diff --git a/scenarios-tutorials/scenario.rst b/scenarios-tutorials/scenario.rst index a79d71224..028e64c63 100644 --- a/scenarios-tutorials/scenario.rst +++ b/scenarios-tutorials/scenario.rst @@ -229,8 +229,6 @@ Consulting with Deepu, the :strong:`paymentservice` owner, they agreed that the Learn more #################### -* For details about creating metrics from logs and displaying them in a chart, see :ref:`logs-metricization`. - * For details about creating detectors to issue alerts based on charts or metrics, see :ref:`create-detectors`. * For details about setting up detectors and alerts, see :ref:`get-started-detectoralert`. @@ -246,5 +244,3 @@ Learn more * For details about using the Kubernetes navigator and other navigators, see :ref:`use-navigators-imm`. * For details about using Tag Spotlight, see :ref:`apm-tag-spotlight`. - -* For details about using Splunk Log Observer Live Tail view, see :ref:`logs-live-tail`. 
\ No newline at end of file diff --git a/splunkplatform/practice-reliability/incident-response.rst b/splunkplatform/practice-reliability/incident-response.rst index 169f00a09..7807de200 100644 --- a/splunkplatform/practice-reliability/incident-response.rst +++ b/splunkplatform/practice-reliability/incident-response.rst @@ -85,7 +85,7 @@ With Log Observer Connect, you can aggregate logs to group by interesting fields * :ref:`logs-keyword` -* :ref:`logs-individual-log` +* :ref:`logs-individual-log-connect` * :ref:`logs-alias` diff --git a/synthetics/intro-synthetics.rst b/synthetics/intro-synthetics.rst index 28160a82b..78c31a512 100644 --- a/synthetics/intro-synthetics.rst +++ b/synthetics/intro-synthetics.rst @@ -9,6 +9,8 @@ Introduction to Splunk Synthetic Monitoring Create detailed tests to proactively monitor the speed and reliability of websites, web apps, and resources over time, at any stage in the development cycle. +To keep up to date with changes in Synthetic Monitoring, see the Splunk Observability Cloud :ref:`release notes `. + How does Splunk Synthetic Monitoring work? ============================================= Synthetic tests are the primary mechanism of application monitoring in Splunk Synthetic Monitoring. You can set up Browser tests and Uptime tests to monitor various aspects of your site or application. You can set up these tests to run at your preferred frequency from the devices and locations of your choosing.