diff --git a/_images/images-metrics/usage-analytics-example-profile.png b/_images/images-metrics/usage-analytics-example-profile.png new file mode 100644 index 000000000..db3974297 Binary files /dev/null and b/_images/images-metrics/usage-analytics-example-profile.png differ diff --git a/_images/images-metrics/usage-analytics-home-page.png b/_images/images-metrics/usage-analytics-home-page.png new file mode 100644 index 000000000..233907d60 Binary files /dev/null and b/_images/images-metrics/usage-analytics-home-page.png differ diff --git a/_images/synthetics/Synth-built-in-dashboards.png b/_images/synthetics/Synth-built-in-dashboards.png new file mode 100644 index 000000000..f8c904f75 Binary files /dev/null and b/_images/synthetics/Synth-built-in-dashboards.png differ diff --git a/_images/synthetics/ootb-dashboard-modal.png b/_images/synthetics/ootb-dashboard-modal.png new file mode 100644 index 000000000..ee57267c9 Binary files /dev/null and b/_images/synthetics/ootb-dashboard-modal.png differ diff --git a/_includes/metric-categories.rst b/_includes/metric-categories.rst deleted file mode 100644 index a7c897db5..000000000 --- a/_includes/metric-categories.rst +++ /dev/null @@ -1,77 +0,0 @@ -.. list-table:: - :header-rows: 1 - :widths: 20 80 - :width: 100% - - * - :strong:`Category type` - - :strong:`Description` - - * - 0 - - | No information about the category type of the metric. - | Note: Category type information for metrics is only available after 03/16/2023. Any metrics created before that date has category type ``0``. - - * - 1 - - Host - - * - 2 - - Container - - * - 3 - - | Custom - | Metrics reported to Splunk Observability Cloud outside of those reported by default, such as host, container, or bundled metrics. Custom metrics might result in increased data ingest costs. - - * - 4 - - Hi-resolution - - * - 5 - - Internal - - * - 6 - - Tracing metrics - - * - 7 - - | Bundled - | In host-based subscription plans, additional metrics sent through Infrastructure Monitoring public cloud integrations that are not attributed to specific hosts or containers. - - * - 8 - - APM hosts - - * - 9 - - APM container - - * - 10 - - APM identity - - * - 11 - - APM bundled metrics - - * - 12 - - | APM Troubleshooting MetricSets - | This category is not part of the report. - - * - 13 - - APM Monitoring MetricSets - - * - 14 - - Infrastructure Monitoring function - - * - 15 - - APM function - - * - 16 - - | RUM Troubleshooting MetricSets - | This category is not part of the report. - - * - 17 - - RUM Monitoring MetricSets - - * - 18 - - Network Explorer metrics - - * - 19 - - Runtime metrics - - * - 20 - - Synthetics metrics - -.. note:: In subscription plans based on metric time series (MTS), all metrics are categorized as custom metrics and billed accordingly. \ No newline at end of file diff --git a/_includes/metric-classes.rst b/_includes/metric-classes.rst new file mode 100644 index 000000000..57766eccf --- /dev/null +++ b/_includes/metric-classes.rst @@ -0,0 +1,28 @@ +.. list-table:: + :header-rows: 1 + :widths: 20 80 + :width: 100% + + * - :strong:`Billing class` + - :strong:`Metrics included` + * - Custom metrics + - Metrics reported to Splunk Observability Cloud outside of those reported by default, such as host, container, or bundled metrics. Custom metrics might result in increased data ingest costs. + * - APM Monitoring MetricSets + - Includes metrics from APM Monitoring MetricSets. See :ref:`apm-metricsets` for more information. 
+ * - RUM Monitoring MetricSets + - Includes metrics from RUM Monitoring MetricSets. See :ref:`rum-custom-indexed-tags` for more information. + * - Default/bundled metrics (Infrastructure) + - * Host + * Container + * Bundled + * Additional metrics sent through infrastructure monitoring public cloud integrations that aren't attributed to specific hosts or containers. + * - Default/bundled metrics (APM) + - * Host + * Container + * Identity + * Bundled + * Tracing + * Runtime + * Synthetics + * - Other metrics + - Internal metrics \ No newline at end of file diff --git a/_includes/synthetics/chrome-flags.rst b/_includes/synthetics/chrome-flags.rst new file mode 100644 index 000000000..733e3fcfc --- /dev/null +++ b/_includes/synthetics/chrome-flags.rst @@ -0,0 +1,22 @@ +.. list-table:: + :header-rows: 1 + :widths: 40 60 + :width: 100% + + * - :strong:`Chrome flag` + - :strong:`Description` + * - ``--disable-http2`` + - Requests are made using ``http/1.1`` instead of ``http/2.0``. This HTTP version is viewable in the HAR file. + * - ``--disable-quic`` + - Deactivates QUIC, which also deactivates HTTP3. + * - ``--disable-web-security`` + - Deactivates enforcement of the same-origin policy. + * - ``--unsafely-treat-insecure-origin-as-secure=http://a.test,http://b.test`` + - Treats the given insecure origins as secure. Multiple origins can be supplied in a comma-separated list. + * - ``--proxy-bypass-list="*.google.com;*foo.com;127.0.0.1:8080"`` + - Bypasses the specified proxy for the given semicolon-separated list of hosts. This flag must be used with ``--proxy-server``. + * - ``--proxy-server="foopy:8080"`` + - Uses a specified proxy server to override default settings. + * - ``--no-proxy-server`` + - Don't use a proxy server; always make direct connections. This flag can be used to override any other proxy server flags that you may have set up in a private location. + diff --git a/_includes/zero-code-info.rst b/_includes/zero-code-info.rst new file mode 100644 index 000000000..fccaa7774 --- /dev/null +++ b/_includes/zero-code-info.rst @@ -0,0 +1,2 @@ +.. note:: Due to changes in the upstream OpenTelemetry documentation, "automatic instrumentation" has been changed to "zero-code instrumentation". For more information, see :ref:`zero-code-overview`. + diff --git a/alerts-detectors-notifications/alerts-and-detectors/create-detectors-for-alerts.rst b/alerts-detectors-notifications/alerts-and-detectors/create-detectors-for-alerts.rst index 27b626a32..419bdbab2 100644 --- a/alerts-detectors-notifications/alerts-and-detectors/create-detectors-for-alerts.rst +++ b/alerts-detectors-notifications/alerts-and-detectors/create-detectors-for-alerts.rst @@ -115,11 +115,15 @@ Select alert signals On the :strong:`Alert signal` tab, define the signal to monitor by entering a metric and corresponding analytics. -If you are creating a detector from scratch, you have to first select the signals you want to monitor. Selecting a signal for a detector is similar to selecting a signal in a chart in the Chart Builder. Enter a metric and select the metric you want to monitor from the list. Add filters or analytics. To learn more, see :ref:`specify-signal`. +* If you are creating a detector from scratch, you have to first select the signals you want to monitor. Selecting a signal for a detector is similar to selecting a signal in a chart in the Chart Builder. Enter a metric and select the metric you want to monitor from the list. Add filters or analytics. 
To add more signals, select :guilabel:`Add Metric or Event` or :guilabel:`Add Formula`. You can add events to be displayed on the chart, but you cannot select an event as the signal to be monitored. To learn more, see :ref:`specify-signal`. -If you want to add more signals, select :guilabel:`Add Metric or Event` or :guilabel:`Add Formula`. Note that you can add events to be displayed on the chart, but you cannot select an event as the signal to be monitored. -.. note:: If you are creating a detector :ref:`from a chart` or by :ref:`cloning a detector`, you might not need to add new signals. However, if you do add new signals to the detector, the signals you add are not added to the original chart or detector. + + .. note:: When you select an archived metric as a signal in your detector, the archived metric can't report data to your detector and will cause the detector to misfire alerts or stop working. To include an archived metric in detectors, route it to real-time or create exception rules to make it available. For more information, see the :ref:`mpm-rule-routing-exception` section. + +* If you are creating a detector :ref:`from a chart` or by :ref:`cloning a detector`, you might not need to add new signals. However, if you do add new signals to the detector, the signals you add are not added to the original chart or detector. + +* You can add events to be displayed on the chart, but you can't select an event as the signal to be monitored. + .. _compound-conditions: diff --git a/apm/span-tags/tag-spotlight.rst b/apm/span-tags/tag-spotlight.rst index 084666be9..34e9caae3 100644 --- a/apm/span-tags/tag-spotlight.rst +++ b/apm/span-tags/tag-spotlight.rst @@ -31,6 +31,14 @@ To view service performance broken down by your indexed span tags, follow these #. View the distribution of all indexed span tags. The tag bar charts display either request and error distributions or latency distribution. Use the :guilabel:`Cards display` menu to select the data you want to display in the bars. #. Select the menu on the top left of the bar chart section to select which metrics to display in each tag panel. You can also use this menu to select whether to display tags with no values. +Customize tags display on Tag Spotlight +---------------------------------------------------------------------- +To configure the layout of the cards on the Tag Spotlight page, follow these steps: + +#. From the menu on the top left of the bar chart, select :guilabel:`Customize card display order`. +#. Drag each of the span tags to arrange the order in which cards are displayed on the page. Arrange the tags by priority, order of importance, or other criteria. +#. Select :guilabel:`Save`. + Explore the distribution of span tags and values to find trends ---------------------------------------------------------------------- diff --git a/data-visualization/charts/chart-builder.rst b/data-visualization/charts/chart-builder.rst index 55c2f0a21..1f84f6730 100644 --- a/data-visualization/charts/chart-builder.rst +++ b/data-visualization/charts/chart-builder.rst @@ -19,7 +19,7 @@ If you are editing an existing chart, you might want to start by configuring plo Specify a signal for a plot line ============================================================================= -A signal is the :term:`metric` or :ref:`histogram metric ` you want to plot on the chart, to which you might add filters and apply analytics. Plot lines, or plots, are the building blocks of charts. 
A chart has one or more plots, and each plot is composed of the :term:`metric time series` or histogram metric represented by the signal and its properties and dimensions, any filters, and any analytics applied. +A signal is the :term:`metric` or :ref:`histogram metric ` you want to plot on the chart, to which you might add filters and apply analytics. Plot lines, or plots, are the building blocks of charts. A chart has one or more plots, and each plot is composed of the :term:`metric time series` or histogram metric represented by the signal and its properties and dimensions, any filters, and any analytics applied. .. note:: Instead of a metric, you can also enter a :ref:`time series expression` to create a composite or derived metric, specify an :ref:`event` to be displayed on the chart, or :ref:`link a detector to a chart` to display its alert status on the chart. @@ -114,6 +114,17 @@ In this case, if you want to plot a metric as histogram, do the following steps For more information on histogram function and supported methods, see :new-page:`histogram() ` in the SignalFlow reference documentation. +.. _archived-metrics-charts: + +Use archived metrics in charts +-------------------------------------- + +When you select an archived metric as a signal in your chart, the archived metric can't be plotted. + +To include an archived metric in a chart, route the archived metric to real-time or create exception rules to make it available. For more information, see the :ref:`mpm-rule-routing-exception` section. + +To learn more about MPM, see :ref:`metrics-pipeline-intro`. + .. _filter-signal: Filter the signal diff --git a/gdi/get-data-in/application/application.rst b/gdi/get-data-in/application/application.rst index bf259ca60..1703ea454 100644 --- a/gdi/get-data-in/application/application.rst +++ b/gdi/get-data-in/application/application.rst @@ -19,6 +19,7 @@ Instrument back-end applications to send spans to Splunk APM Instrument a PHP application TOGGLE Instrument a C++ application TOGGLE Send spans from the Istio service mesh + Instrumentation methods You can instrument your back-end services and applications to send metrics and traces to Splunk Observability Cloud. diff --git a/gdi/get-data-in/application/go/instrumentation/instrument-go-application.rst b/gdi/get-data-in/application/go/instrumentation/instrument-go-application.rst index 44ec74e74..8f0343af4 100644 --- a/gdi/get-data-in/application/go/instrumentation/instrument-go-application.rst +++ b/gdi/get-data-in/application/go/instrumentation/instrument-go-application.rst @@ -7,6 +7,8 @@ Instrument your Go application for Splunk Observability Cloud .. meta:: :description: The Splunk Distribution of OpenTelemetry Go can instrument your Go application or service. Follow these steps to get started. +.. include:: /_includes/zero-code-info.rst + The Splunk Distribution of OpenTelemetry Go can instrument your Go application or service. To get started, use the guided setup or follow the instructions manually. 
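As a quick orientation before you pick either path, the following sketch shows one way a Go service instrumented with the Splunk Distribution of OpenTelemetry Go might be configured and started. The service name, environment, and endpoint are placeholder values, and the sketch assumes you have already initialized the distribution in your code (for example, with ``distro.Run()``) as described in the installation steps that follow:

.. code-block:: shell

   # Example only: replace the placeholder values with your own.
   # Assumes the application already initializes the Splunk Distribution of OpenTelemetry Go,
   # for example by calling distro.Run() in main().
   export OTEL_SERVICE_NAME=my-go-service
   export OTEL_RESOURCE_ATTRIBUTES='deployment.environment=dev'
   # Send data to a locally running Splunk Distribution of OpenTelemetry Collector over OTLP.
   export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317

   # Build and run the instrumented application
   go build -o my-go-service .
   ./my-go-service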
Generate customized instructions using the guided setup diff --git a/gdi/get-data-in/application/java/instrumentation/instrument-java-application.rst b/gdi/get-data-in/application/java/instrumentation/instrument-java-application.rst index 9750f3b95..014db81da 100644 --- a/gdi/get-data-in/application/java/instrumentation/instrument-java-application.rst +++ b/gdi/get-data-in/application/java/instrumentation/instrument-java-application.rst @@ -9,6 +9,8 @@ Instrument your Java application for Splunk Observability Cloud The Java agent from the Splunk Distribution of OpenTelemetry Java can automatically instrument your Java application by injecting instrumentation to Java classes. To get started, use the guided setup or follow the instructions manually. +.. include:: /_includes/zero-code-info.rst + Generate customized instructions using the guided setup ==================================================================== diff --git a/gdi/get-data-in/application/nodejs/instrumentation/instrument-nodejs-application.rst b/gdi/get-data-in/application/nodejs/instrumentation/instrument-nodejs-application.rst index 2a5d88209..284004aa3 100644 --- a/gdi/get-data-in/application/nodejs/instrumentation/instrument-nodejs-application.rst +++ b/gdi/get-data-in/application/nodejs/instrumentation/instrument-nodejs-application.rst @@ -7,6 +7,8 @@ Instrument your Node.js application for Splunk Observability Cloud .. meta:: :description: The Splunk Distribution of OpenTelemetry Node.js can automatically instrument your Node.js application or service. Follow these steps to get started. +.. include:: /_includes/zero-code-info.rst + The Splunk Distribution of OpenTelemetry JS can automatically instrument your Node.js application and many of the popular node.js libraries your application uses. To get started, use the guided setup or follow the instructions manually. diff --git a/gdi/get-data-in/application/otel-dotnet/get-started.rst b/gdi/get-data-in/application/otel-dotnet/get-started.rst index 3944c49c0..6b9e5ca64 100644 --- a/gdi/get-data-in/application/otel-dotnet/get-started.rst +++ b/gdi/get-data-in/application/otel-dotnet/get-started.rst @@ -25,7 +25,7 @@ Instrument .NET applications for Splunk Observability Cloud (OpenTelemetry) SignalFx Instrumentation for .NET (Deprecated) TOGGLE Migrate from SignalFx Instrumentation for .NET -The Splunk Distribution of OpenTelemetry .NET provides automatic instrumentation for popular .NET libraries and frameworks to collect and send telemetry to Splunk Observability Cloud. +The Splunk Distribution of OpenTelemetry .NET provides zero-code instrumentation for popular .NET libraries and frameworks to collect and send telemetry to Splunk Observability Cloud. .. raw:: html diff --git a/gdi/get-data-in/application/otel-dotnet/instrumentation/dotnet-pre-checks.rst b/gdi/get-data-in/application/otel-dotnet/instrumentation/dotnet-pre-checks.rst index e6ac7496d..797fd960a 100644 --- a/gdi/get-data-in/application/otel-dotnet/instrumentation/dotnet-pre-checks.rst +++ b/gdi/get-data-in/application/otel-dotnet/instrumentation/dotnet-pre-checks.rst @@ -5,9 +5,9 @@ Pre-checks ********** .. meta:: - :description: A list of pre-checks for the user to complete before installing the .NET automatic instrumentation. + :description: A list of pre-checks for the user to complete before installing the .NET zero-code instrumentation agent. -Before installing the .NET automatic instrumentation, complete the following pre-checks. 
+Before installing the .NET zero-code instrumentation agent, complete the following pre-checks. Verify platform compatibility ============================= @@ -57,7 +57,7 @@ Review core dependencies Make sure that your application's dependencies are compatible with the .NET instrumentation. -#. Verify whether your target applications have the same dependencies as the automatic instrumentation. See :new-page:`OpenTelemetry.AutoInstrumentation ` and :new-page:`OpenTelemetry.AutoInstrumentation.AdditionalDeps `. If there are conflicts, consider installing using the NuGet packages. Otherwise, you must resolve all the dependencies before manually installing the instrumentation. +#. Verify whether your target applications have the same dependencies as the zero-code instrumentation. See :new-page:`OpenTelemetry.AutoInstrumentation ` and :new-page:`OpenTelemetry.AutoInstrumentation.AdditionalDeps `. If there are conflicts, consider installing using the NuGet packages. Otherwise, you must resolve all the dependencies before manually installing the instrumentation. #. Verify whether your target applications have the same dependencies as the NuGet packages. See the :new-page:`NuGet dependencies ` in the NuGet documentation. If there are conflicts, you must resolve them before installing the instrumentation using the NuGet packages. diff --git a/gdi/get-data-in/application/otel-dotnet/instrumentation/instrument-dotnet-application.rst b/gdi/get-data-in/application/otel-dotnet/instrumentation/instrument-dotnet-application.rst index 56e0b6016..b01543db1 100644 --- a/gdi/get-data-in/application/otel-dotnet/instrumentation/instrument-dotnet-application.rst +++ b/gdi/get-data-in/application/otel-dotnet/instrumentation/instrument-dotnet-application.rst @@ -7,6 +7,8 @@ Instrument your .NET application for Splunk Observability Cloud (OpenTelemetry) .. meta:: :description: The Splunk Distribution of OpenTelemetry .NET automatically instruments .NET applications, Windows services running .NET applications, and ASP.NET applications deployed on IIS. Follow these steps to get started. +.. include:: /_includes/zero-code-info.rst + The Splunk Distribution of OpenTelemetry .NET automatically instruments .NET applications, Windows services running .NET applications, and ASP.NET applications deployed on IIS. You can install the .NET instrumentation manually or using the NuGet packages. The manual instructions include the option to use a guided setup. The NuGet packages are the best method for avoiding dependency version conflicts, but are not well-suited for instrumenting multiple applications running on the same machine. Review the :ref:`pre-checks ` and the various installation procedures on this page to identify the best installation method for your application environment. @@ -25,8 +27,8 @@ The following scenarios are ideal for using the NuGet packages: * You control the application build but not the machine or container where the application is running. * You're instrumenting a self-contained application. See :new-page:`Publish self-contained ` in the .NET documentation. -* You want to facilitate developer experimentation with automatic instrumentation through NuGet packages. -* You need to solve version conflicts between the dependencies used by the application and the automatic instrumentation. +* You want to facilitate developer experimentation with zero-code instrumentation through NuGet packages. 
+* You need to solve version conflicts between the dependencies used by the application and the zero-code instrumentation. Don't use the NuGet packages if any of the following apply to your environment: @@ -38,7 +40,7 @@ If your scenario isn't compatible with NuGet package installation, install the d .. note:: - For advanced configuration of the .NET automatic instrumentation, such as changing trace propagation formats or changing the endpoint URLs, see :ref:`advanced-dotnet-otel-configuration`. + For advanced configuration of the .NET zero-code instrumentation, such as changing trace propagation formats or changing the endpoint URLs, see :ref:`advanced-dotnet-otel-configuration`. Instrument your application using the NuGet packages ---------------------------------------------------- @@ -69,7 +71,7 @@ Alternatively, you can set the ``SkippedInstrumentation`` property from the term To distribute the appropriate native runtime components with your .NET application, specify a Runtime Identifier (RID) to build the application using ``dotnet build`` or ``dotnet publish``. For more information, see :new-page:`.NET RID Catalog ` in the .NET documentation. -Both self-contained and framework-dependent applications are compatible with automatic instrumentation. See :new-page:`.NET application publishing overview ` in the .NET documentation for more information. +Both self-contained and framework-dependent applications are compatible with zero-code instrumentation. See :new-page:`.NET application publishing overview ` in the .NET documentation for more information. Run the instrumented application -------------------------------- @@ -116,14 +118,14 @@ Consider using the NuGet packages if any of the following apply to your environm * You control the application build but not the machine or container where the application is running. * You're instrumenting a self-contained application. See :new-page:`Publish self-contained ` in the .NET documentation. -* You want to facilitate developer experimentation with automatic instrumentation through NuGet packages. -* You need to solve version conflicts between the dependencies used by the application and the automatic instrumentation. +* You want to facilitate developer experimentation with zero-code instrumentation through NuGet packages. +* You need to solve version conflicts between the dependencies used by the application and the zero-code instrumentation. To install the distribution using the official NuGet packages, see :ref:`otel-dotnet-nuget-pkg`. .. note:: - For advanced configuration of the .NET automatic instrumentation, such as changing trace propagation formats or changing the endpoint URLs, see :ref:`advanced-dotnet-otel-configuration`. + For advanced configuration of the .NET zero-code instrumentation, such as changing trace propagation formats or changing the endpoint URLs, see :ref:`advanced-dotnet-otel-configuration`. Generate customized instructions using the guided setup ------------------------------------------------------- @@ -300,11 +302,11 @@ Linux # Install the distribution sh ./splunk-otel-dotnet-install.sh -#. Activate the automatic instrumentation: +#. Activate the zero-code instrumentation: .. code-block:: shell - # Activate the automatic instrumentation + # Activate the zero-code instrumentation . $HOME/.splunk-otel-dotnet/instrument.sh #. Set the environment and service version resource attributes: @@ -335,7 +337,7 @@ See :ref:`get-data-in-profiling` for more information. 
For more settings, see :r Configure the instrumentation --------------------------------------------- -For advanced configuration of the .NET automatic instrumentation, like changing trace propagation formats or changing the endpoint URLs, see :ref:`advanced-dotnet-otel-configuration`. +For advanced configuration of the .NET zero-code instrumentation, like changing trace propagation formats or changing the endpoint URLs, see :ref:`advanced-dotnet-otel-configuration`. Database Query Performance settings ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -362,7 +364,7 @@ To instrument applications or services running on Azure Web Apps, see :ref:`inst Offline installation for Windows ---------------------------------------------- -To install the .NET automatic instrumentation on Windows hosts that are offline, follow these steps: +To install the .NET zero-code instrumentation on Windows hosts that are offline, follow these steps: #. Download the following files from the :new-page:`Releases page on GitHub ` and copy them to the offline server: diff --git a/gdi/get-data-in/application/otel-dotnet/instrumentation/manual-dotnet-instrumentation.rst b/gdi/get-data-in/application/otel-dotnet/instrumentation/manual-dotnet-instrumentation.rst index 9ec431779..76961318f 100644 --- a/gdi/get-data-in/application/otel-dotnet/instrumentation/manual-dotnet-instrumentation.rst +++ b/gdi/get-data-in/application/otel-dotnet/instrumentation/manual-dotnet-instrumentation.rst @@ -7,8 +7,8 @@ Manually instrument .NET applications for Splunk Observability Cloud .. meta:: :description: Manually instrument your .NET application to add custom attributes to spans or manually generate spans. Keep reading to learn how to manually instrument your .NET application for Splunk Observability Cloud. -The Splunk Distribution of OpenTelemetry .NET automatic instrumentation provides a base you can build on by adding -your own manual instrumentation. By using both automatic and manual instrumentation, you can better instrument the logic and functionality of your applications, clients, and frameworks. +The Splunk Distribution of OpenTelemetry .NET zero-code instrumentation provides a base you can build on by adding +your own manual instrumentation. By using both zero-code and manual instrumentation, you can better instrument the logic and functionality of your applications, clients, and frameworks. .. _custom-traces-otel-dotnet: diff --git a/gdi/get-data-in/application/otel-dotnet/sfx/sfx-instrumentation.rst b/gdi/get-data-in/application/otel-dotnet/sfx/sfx-instrumentation.rst index e9e0b17bc..d165da17c 100644 --- a/gdi/get-data-in/application/otel-dotnet/sfx/sfx-instrumentation.rst +++ b/gdi/get-data-in/application/otel-dotnet/sfx/sfx-instrumentation.rst @@ -24,7 +24,7 @@ SignalFx Instrumentation for .NET (Deprecated) Manual instrumentation Troubleshoot the .NET instrumentation -The SignalFx Instrumentation for .NET provides automatic instrumentation for popular .NET libraries and frameworks to collect and send telemetry data to Splunk Observability Cloud. +The SignalFx Instrumentation for .NET provides zero-code instrumentation for popular .NET libraries and frameworks to collect and send telemetry data to Splunk Observability Cloud. .. raw:: html @@ -54,4 +54,4 @@ To instrument your .NET application, follow these steps: #. Instrument your .NET application. See :ref:`instrument-dotnet-applications`. #. Configure your instrumentation. See :ref:`advanced-dotnet-configuration`. 
-You can also automatically instrument your .NET applications along with the Splunk Distribution of OpenTelemetry Collector installation. Automatic instrumentation removes the need to install and configure the .NET library separately. See :ref:`windows-backend-auto-discovery` for the installation instructions. +You can also automatically instrument your .NET applications along with the Splunk Distribution of OpenTelemetry Collector installation. Zero-code instrumentation removes the need to install and configure the .NET library separately. See :ref:`windows-backend-auto-discovery` for the installation instructions. diff --git a/gdi/get-data-in/application/php/get-started.rst b/gdi/get-data-in/application/php/get-started.rst index 5158b3551..da368ea81 100644 --- a/gdi/get-data-in/application/php/get-started.rst +++ b/gdi/get-data-in/application/php/get-started.rst @@ -16,7 +16,7 @@ Instrument PHP applications for Splunk Observability Cloud SignalFx Tracing Library (Deprecated) Migrate from the SignalFx PHP library -You can send application traces and metrics from your PHP applications to Splunk Observability Cloud using the OpenTelemetry automatic instrumentation for PHP. +You can send application traces and metrics from your PHP applications to Splunk Observability Cloud using the OpenTelemetry zero-code instrumentation for PHP. To instrument your PHP application, follow these steps: @@ -24,4 +24,4 @@ To instrument your PHP application, follow these steps: #. Instrument your PHP application. See :ref:`instrument-php-otel-applications`. #. Add custom instrumentation. See :ref:`manual-php-otel-instrumentation`. -.. note:: The SignalFx Tracing Library for PHP is deprecated. See :ref:`php-migration-guide` to migrate to the OpenTelemetry automatic instrumentation for PHP. +.. note:: The SignalFx Tracing Library for PHP is deprecated. See :ref:`php-migration-guide` to migrate to the OpenTelemetry zero-code instrumentation for PHP. diff --git a/gdi/get-data-in/application/php/instrument-php-application.rst b/gdi/get-data-in/application/php/instrument-php-application.rst index ece3a81c8..7ebb98e29 100644 --- a/gdi/get-data-in/application/php/instrument-php-application.rst +++ b/gdi/get-data-in/application/php/instrument-php-application.rst @@ -7,6 +7,8 @@ Instrument your PHP application for Splunk Observability Cloud .. meta:: :description: The OpenTelemetry PHP extensions automatically instruments PHP applications using a PHP extension and available instrumentation libraries. Follow these steps to get started. +.. include:: /_includes/zero-code-info.rst + The OpenTelemetry PHP extension automatically instruments PHP applications using a PHP extension and available instrumentation libraries. You can send telemetry to the Splunk Distribution of OpenTelemetry Collector or directly to the Splunk Observability Cloud ingest endpoint. To get started, use the guided setup or follow the instructions to install manually. diff --git a/gdi/get-data-in/application/php/sfx/sfx-instrumentation.rst b/gdi/get-data-in/application/php/sfx/sfx-instrumentation.rst index 023cc9d70..0fbe410e0 100644 --- a/gdi/get-data-in/application/php/sfx/sfx-instrumentation.rst +++ b/gdi/get-data-in/application/php/sfx/sfx-instrumentation.rst @@ -21,7 +21,7 @@ SignalFx Tracing Library for PHP (deprecated) Configure the PHP instrumentation Manual instrumentation -The SignalFx Tracing Library for PHP provides automatic instrumentations for many popular PHP libraries and frameworks. 
The library is a native extension that supports PHP versions 7.0 or higher running on the Zend Engine. +The SignalFx Tracing Library for PHP provides zero-code instrumentation for many popular PHP libraries and frameworks. The library is a native extension that supports PHP versions 7.0 or higher running on the Zend Engine. To instrument your PHP application, follow these steps: diff --git a/gdi/get-data-in/application/python/instrumentation/instrument-python-application.rst b/gdi/get-data-in/application/python/instrumentation/instrument-python-application.rst index 1e8defedc..7ca4c4313 100644 --- a/gdi/get-data-in/application/python/instrumentation/instrument-python-application.rst +++ b/gdi/get-data-in/application/python/instrumentation/instrument-python-application.rst @@ -7,6 +7,8 @@ Instrument your Python application for Splunk Observability Cloud .. meta:: :description: The Splunk OpenTelemetry Python agent can automatically instrument your Python application or service. Follow these steps to get started. +.. include:: /_includes/zero-code-info.rst + The Python agent from the Splunk Distribution of OpenTelemetry Python can automatically instrument your Python application by dynamically patching supported libraries. To get started, use the guided setup or follow the instructions manually. diff --git a/gdi/get-data-in/application/python/instrumentation/instrument-python-frameworks.rst b/gdi/get-data-in/application/python/instrumentation/instrument-python-frameworks.rst index 069a52aeb..6ab8636d4 100644 --- a/gdi/get-data-in/application/python/instrumentation/instrument-python-frameworks.rst +++ b/gdi/get-data-in/application/python/instrumentation/instrument-python-frameworks.rst @@ -5,7 +5,7 @@ Instrument Python frameworks for Splunk Observability Cloud *************************************************************** .. meta:: - :description: If you're instrumenting a Python app that uses Django or uWSGI, perform these additional steps after you've followed the common procedure for automatic instrumentation. + :description: If you're instrumenting a Python app that uses Django or uWSGI, perform these additional steps after you've followed the common procedure for zero-code instrumentation. If you're instrumenting a Python application or service that uses Django or uWSGI, follow these additional steps after you've followed all the steps in :ref:`instrument-python-applications`. diff --git a/gdi/get-data-in/application/ruby/instrument-ruby.rst b/gdi/get-data-in/application/ruby/instrument-ruby.rst index 7ee5a531e..04de56003 100644 --- a/gdi/get-data-in/application/ruby/instrument-ruby.rst +++ b/gdi/get-data-in/application/ruby/instrument-ruby.rst @@ -7,6 +7,8 @@ Instrument your Ruby application for Splunk Observability Cloud .. meta:: :description: Instrument your Ruby application using the OpenTelemetry instrumentation for Ruby and get your data into Splunk Observability Cloud. +.. include:: /_includes/zero-code-info.rst + You can use the OpenTelemetry Collector to send traces from Ruby applications to Splunk APM. .. _ruby-prereqs: diff --git a/gdi/get-data-in/application/zero-code-overview.rst b/gdi/get-data-in/application/zero-code-overview.rst new file mode 100644 index 000000000..e39da1ecf --- /dev/null +++ b/gdi/get-data-in/application/zero-code-overview.rst @@ -0,0 +1,69 @@ +.. 
_zero-code-overview: + +********************************************************************** +Instrumentation methods for Splunk Observability Cloud +********************************************************************** + +.. meta:: + :description: Learn about zero-code instrumentation (formerly automatic instrumentation) for back-end applications. + +To stay consistent with the terminology in the upstream OpenTelemetry docs, automatic instrumentation has been changed to zero-code instrumentation, and manual instrumentation has been changed to code-based instrumentation. + +See the upstream OpenTelemetry documentation for more information: :new-page:`https://opentelemetry.io/docs/concepts/instrumentation/zero-code/`. + +This change is only a terminology update and doesn't require you to install or update the OpenTelemetry Collector or any Splunk instrumentation agents. + +.. _zero-code-info: + +Zero-code instrumentation +========================================= + +Zero-code instrumentation allows you to instrument your applications and export telemetry data without having to modify the application source files. + +The language-specific instrumentation agent configures the source application to export data in a supported format to an OTLP endpoint, on either an OTLP receiver or the Splunk Observability Cloud back end. + +Zero-code instrumentation is available for applications written in Java, Node.js, .NET, Go, Python, Ruby, and PHP, and automatically collects telemetry data for code written using supported libraries in each language. + +How does zero-code instrumentation differ from automatic discovery and configuration? +----------------------------------------------------------------------------------------- + +Automatic discovery and zero-code instrumentation have similar capabilities but are separate features. Both automatic discovery and zero-code instrumentation collect telemetry data and send it to Splunk Observability Cloud, but they differ in several key details. + +See the following table for key differences between automatic discovery and zero-code instrumentation: + +.. list-table:: + :header-rows: 1 + + * - Capability + - Zero-code instrumentation + - Automatic discovery + * - Deployment + - Deployed as a language-specific instrumentation agent, for example, the Splunk OpenTelemetry Java agent. + - Deployed with the Splunk Distribution of OpenTelemetry Collector as an optional add-on. + * - Applications instrumented + - Instruments only back-end applications, for example, Python, Java, and Node.js applications. + - Collects telemetry data from third-party services such as databases and web servers. + * - Languages instrumented + - Agents are language-specific. For example, the Node.js agent only instruments Node.js applications. Zero-code instrumentation supports applications written in Java, Node.js, .NET, Go, Python, Ruby, and PHP. + - Automatic discovery itself does not instrument language runtimes, but can be used to deploy zero-code instrumentation for applications written in Java, Node.js, and .NET. + +.. _code-based-info: + +Code-based instrumentation +======================================= + +Code-based instrumentation allows you to instrument your applications and export telemetry data to Splunk Observability Cloud by editing your application's source code. + +Unlike zero-code instrumentation, code-based instrumentation requires editing your application's source code. 
Modifying the application's source code allows it to send telemetry data to a local running instance of the OpenTelemetry Collector, which then processes and forwards the data to Splunk Observability Cloud. + +Code-based instrumentation supports applications written in Java, Node.js, .NET, Python, PHP, Go, Ruby, and C++. C++ only supports code-based instrumentation. + +Learn more +=========================== + +* To learn more about automatic discovery and configuration, see :ref:`discovery_mode`. +* For more information about important terms in Splunk Observability Cloud, see :ref:`get-started-glossary`. + + + + diff --git a/gdi/get-data-in/connect/aws/aws-connect-polling.rst b/gdi/get-data-in/connect/aws/aws-connect-polling.rst index 854ee5826..fe85288c9 100644 --- a/gdi/get-data-in/connect/aws/aws-connect-polling.rst +++ b/gdi/get-data-in/connect/aws/aws-connect-polling.rst @@ -101,7 +101,10 @@ After creating an AWS IAM policy and assigning it to a particular role through t Modify the scope of data collection -------------------------------------------------- -By default, Splunk Observability Cloud brings in data from all supported AWS services associated with your account, with :ref:`certain limitations `, but only imports certain :ref:`recommended stats ` from each service. +By default, Splunk Observability Cloud brings in: + +* Data from all supported AWS services associated with your account, with :ref:`certain limitations `. +* 5 default stats per service: ``sum``, ``min``, ``max``, ``count``, and ``avg``. Use the check box options in the guided setup to limit the scope of your data collection. These are the available options: @@ -110,6 +113,8 @@ Use the check box options in the guided setup to limit the scope of your data co * Select which :ref:`AWS regions ` to fetch data from. * Select which AWS services to fetch data from. +.. note:: You can also choose to import recommended stats. Learn more at :ref:`aws-recommended-stats`. + To limit data collection, you can also: - Manage the amount of data to import. See :ref:`aws-infra-import`. diff --git a/gdi/get-data-in/connect/aws/aws-recommended-stats.rst b/gdi/get-data-in/connect/aws/aws-recommended-stats.rst index 2ee7ef397..573bd3f88 100644 --- a/gdi/get-data-in/connect/aws/aws-recommended-stats.rst +++ b/gdi/get-data-in/connect/aws/aws-recommended-stats.rst @@ -7,7 +7,12 @@ AWS recommended stats (polling only) .. meta:: :description: List of recommended stats used in the AWS integration. -If you're polling data, by default Splunk Observability Cloud only imports certain stats, which are based on AWS' own recommended stats and vary with service. You can look for your services' AWS recommended stats in the official AWS docs, for example :new-page:`CloudWatch metrics for your Classic Load Balancer ` or :new-page:`S3 monitoring with Amazon CloudWatch `. +If you're polling data, by default Splunk Observability Cloud only polls these 5 statistics: SampleCount (``count`` in Splunk Observability Cloud), Average (``mean``), Sum (``sum``), Minimum (``lower``), and Maximum (``upper``). + +If you choose to import recommended stats, Splunk Observability Cloud imports a set of recommended stats instead of the default stats. These recommended stats are based on AWS' own recommendations and vary by service. 
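If you want to check how an existing AWS integration is set up, one option is to query the Splunk Observability Cloud REST API and review the integration object. The following sketch is an illustration only: it assumes the ``us0`` realm and a valid API access token, and the exact endpoint, parameters, and response fields are documented in the Splunk developer portal:

.. code-block:: shell

   # Illustration only: replace the realm and token with your own values.
   # Lists AWS CloudWatch integrations so you can review their polling configuration.
   curl --request GET \
     --url "https://api.us0.signalfx.com/v2/integration?type=AWSCloudWatch" \
     --header "X-SF-TOKEN: <YOUR_API_ACCESS_TOKEN>"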
+ +List of recommended stats +================================================== Splunk Observability Cloud uses the following recommended stats: diff --git a/gdi/get-data-in/connect/gcp/gcp-connect.rst b/gdi/get-data-in/connect/gcp/gcp-connect.rst index 3b70b652d..c7f0b6a46 100644 --- a/gdi/get-data-in/connect/gcp/gcp-connect.rst +++ b/gdi/get-data-in/connect/gcp/gcp-connect.rst @@ -7,145 +7,58 @@ Connect to Google Cloud Platform: Guided setup and other options .. meta:: :description: Connect your Google Cloud Platform / GCP account to Splunk Observability Cloud. +You can connect your GCP account and send data to Splunk Observability Cloud with the following methods: + +* :ref:`gcp-connect-ui` +* :ref:`gcp-api` +* :ref:`gcp-terraform` + +.. note:: Before you connect, make sure to read :ref:`gcp-prereqs`. + +.. _gcp-connect-ui: + Connect to GCP using the guided setup ============================================ Follow these steps to connect to GCP: -#. :ref:`gcp-one` -#. :ref:`gcp-two` -#. :ref:`gcp-three` +* :ref:`gcp-one` +* :ref:`gcp-two` +* :ref:`gcp-three` .. _gcp-one: -1. Select a role for your GCP service account +1. Define a role for your GCP service account -------------------------------------------------------------------------------------- -You can use GCP's :strong:`Viewer` role as it comes with the permissions you need for most scenarios. - -Alternatively you can create a more restrictive role using the permissions in the table: - -.. list-table:: - :header-rows: 1 - :widths: 35 45 20 - - * - :strong:`Permission` - - :strong:`Required?` - - :strong:`Included in GCP's Viewer role?` - - * - ``compute.instances.list`` - - Yes, if the Compute Engine service is activated - - Yes - - * - ``compute.machineTypes.list`` - - Yes, if the Compute Engine service is activated - - Yes - - * - ``container.clusters.list`` - - Yes, if the Kubernetes (GKE) service is activated - - Yes - - * - ``container.nodes.list`` - - Yes, if the Kubernetes (GKE) service is activated - - Yes - - * - ``container.pods.list`` - - Yes, if the Kubernetes (GKE) service is activated - - Yes - - * - ``monitoring.metricDescriptors.get`` - - Yes - - Yes - - * - ``monitoring.metricDescriptors.list`` - - Yes - - Yes - - * - ``monitoring.timeSeries.list`` - - Yes - - Yes - - * - ``resourcemanager.projects.get`` - - Yes, if you want to sync project metadata (such as labels) - - Yes - - * - ``serviceusage.services.use`` - - Yes, if you either want to activate the use of a quota from the project where metrics are stored or sync cloud sql metadata - - No, but included in ``roles/serviceusage.serviceUsageConsumer`` - - * - ``spanner.instances.list`` - - Yes, if the Spanner service is activated - - Yes - - * - ``storage.buckets.list`` - - Yes, if the Spanner service is activated - - Yes - - * - ``cloudsql.databases.list`` - - Yes, if the cloud sql service is activated - - Yes - - * - ``cloudsql.instances.list`` - - Yes, if the cloud sql service is activated - - Yes - - * - ``pubsub.topics.list`` - - Yes, if the pub/sub service is activated - - Yes - - * - ``pubsub.subscriptions.list`` - - Yes, if the pub/sub service is activated - - Yes - - * - ``run.jobs.list`` - - Yes, if the cloud run service is activated - - Yes - - * - ``run.revisions.list`` - - Yes, if the cloud run service is activated - - Yes - - * - ``cloudasset.assets.searchAllResources`` - - Yes, if the cloud run service is activated - - Yes - - * - ``cloudfunctions.functions.list`` - - Yes, if the cloud functions service is activated - - Yes +Use GCP's 
:strong:`Viewer` role as it comes with the permissions you need for most scenarios. +To customize the permissions for your role refer to :ref:`gcp-prereqs-role-permissions`. .. _gcp-two: 2. Configure GCP -------------------------------------------------------------------------------------- -To configure your GCP service, follow these steps: +To configure your GCP service: -#. In a new window or tab, go to the Google Cloud Platform website, and log into your GCP account. -#. Open the GCP web console, and select a project you want to monitor. -#. From the sidebar, select :menuselection:`IAM & admin`, then :menuselection:`Service Accounts`. -#. Go to :guilabel:`Create Service Account` at the top of the screen, and complete the following fields: +#. Log into your GCP account and select the project you want to monitor in the GCP web console. - .. list-table:: - :header-rows: 1 - :widths: 40 60 +#. From the sidebar, select :menuselection:`IAM & admin`, then :menuselection:`Service Accounts`. - * - :strong:`Field` - - :strong:`Description` +#. Go to :guilabel:`Create Service Account` at the top of the screen, complete the following fields, and select :guilabel:`CREATE`. - * - Service account name - - Enter ``Splunk``. + * **Service account name**. Enter ``Splunk``. - * - Service account ID - - This field autofills after you enter ``Splunk`` for Service account name. + * **Service account ID**. This field autofills after you enter ``Splunk`` for Service account name. - * - Service account description - - Enter the description for your service account. + * **Service account description**. Enter the description for your service account. -#. Select :guilabel:`CREATE`. #. (Optional) Select a role to grant this Service account access to the selected project, then select :guilabel:`CONTINUE`. -#. Activate Key type :guilabel:`JSON`, and select :guilabel:`CREATE`. A new service account key JSON file is then downloaded to your computer. -#. In a new window or tab, go to :new-page:`Cloud Resource Manager API `, and activate the Cloud Resource Manager API. You need to activate this API so Splunk Infrastructure Monitoring can use it to validate permissions on the service account keys. + +#. Activate Key type :guilabel:`JSON`, and select :guilabel:`CREATE`. A new service account key JSON file is then downloaded to your computer. You will need this key to authenticate in Splunk Observability Cloud. + +#. In a new window or tab, go to :new-page:`Cloud Resource Manager API `, and activate the Cloud Resource Manager API. You need to activate this API so Splunk Observability Cloud can use it to validate permissions on the service account keys. .. _gcp-projects: @@ -153,15 +66,14 @@ To configure your GCP service, follow these steps: .. _gcp-three: -3. Start the integration +3. Connect to Splunk Observability Cloud and start the integration -------------------------------------------------------------------------------------- -By default, all supported services are monitored, and any new services added later are also monitored. When you set integration parameters, you can choose to import metrics from a subset of the available services. +By default, Splunk Observability Cloud monitors all supported services, and any new services added later are also monitored. When you set integration parameters, you can choose to import metrics from a subset of the available services. -#. Log in to Splunk Observability Cloud. -#. Open the :new-page:`Google Cloud Platform guided setup `. 
Optionally, you can navigate to the guided setup on your own: +#. Log in to Splunk Observability Cloud and open the :new-page:`Google Cloud Platform guided setup `. Optionally, you can navigate to the guided setup on your own: - #. In the navigation menu, select :menuselection:`Data Management`. + #. In the left navigation menu, select :menuselection:`Data Management`. #. Go to the :guilabel:`Available integrations` tab, or select :guilabel:`Add Integration` in the :guilabel:`Deployed integrations` tab. @@ -169,31 +81,37 @@ By default, all supported services are monitored, and any new services added lat #. In the :guilabel:`Cloud Integrations` section, select the :guilabel:`Google Cloud Platform` tile to open the Google Cloud Platform guided setup. - #. Go to :guilabel:`New Integration`. +#. In the GCP guided setup enter a name for your new GCP integration, then :guilabel:`Add Project`. -#. Enter a name for the new GCP integration, then :guilabel:`Add Project`. #. Next, select :guilabel:`Import Service Account Key`, and select one or more of the JSON key files that you downloaded from GCP in :ref:`Configure GCP `. + #. Select :guilabel:`Open`. You can then see the project IDs corresponding to the service account keys you selected. + #. To import :ref:`metrics ` from only some of the available services, follow these steps: - Go to :guilabel:`All Services` to display a list of the services you can monitor. - Select the services you want to monitor, and then :guilabel:`Apply`. -#. Select the rate (in seconds) at which you want Splunk Observability Cloud to poll GCP for metric data, with 1 minute as the minimum unit, and 10 minutes as the maximum unit. For example, a value of 300 polls metrics once every 5 minutes. -#. Optional: +#. Select the rate (in seconds) at which you want Splunk Observability Cloud to poll GCP for metric data, with 1 minute as the minimum unit, and 10 minutes as the maximum unit. For example, a value of 300 polls metrics once every 5 minutes. - - List any additional GCP service domain names that you want to monitor, using commas to separate domain names in the :strong:`Custom Metric Type Domains` field. - - - For example, to obtain Apigee metrics, add ``apigee.googleapis.com``. - - To learn about custom metric type domain syntax, see :new-page:`Custom metric type domain examples ` in the Splunk developer documentation. +Your GCP integration is now complete. - - If you select Compute Engine as one of the services to monitor, you can enter a comma-separated list of Compute Engine Instance metadata keys to send as properties. These metadata keys are sent as properties named ``gcp_metadata_``. +.. note:: Splunk is not responsible for data availability, and it can take up to several minutes (or longer, depending on your configuration) from the time you connect until you start seeing valid data from your account. - - Select :strong:`Use quota from the project where metrics are stored` to use a quota from the project where metrics are stored. The service account provided for the project needs either the ``serviceusage.services.use`` permission, or the `Service Usage Consumer` role. +Options +++++++++ -Your GCP integration is now complete. +Optionally you can: -.. note:: Splunk is not responsible for data availability, and it can take up to several minutes (or longer, depending on your configuration) from the time you connect until you start seeing valid data from your account. 
+* To list any additional GCP service domain names that you want to monitor, use commas to separate domain names in the :strong:`Custom Metric Type Domains` field. For example, to obtain Apigee metrics, add ``apigee.googleapis.com``. + + - For information on the available GCP metric domains refer to the official GCP docs at :new-page:`Google Cloud metrics `. + + - To learn about custom metric type domain syntax, see :new-page:`Custom metric type domain examples ` in the Splunk developer documentation. + +* If you select Compute Engine as one of the services to monitor, you can enter a comma-separated list of Compute Engine Instance metadata keys to send as properties. These metadata keys are sent as properties named ``gcp_metadata_``. + +* Select :strong:`Use quota from the project where metrics are stored` to use a quota from the project where metrics are stored. The service account provided for the project needs either the ``serviceusage.services.use`` permission, or the `Service Usage Consumer` role. Alternatives to connect to GCP ============================================ @@ -203,7 +121,9 @@ Alternatives to connect to GCP Integrate GCP using the API -------------------------------------------------------------------------------------- -You can also integrate GCP with Splunk Observability Cloud using the GCP API. See :new-page:`Integrate Google Cloud Platform Monitoring with Splunk Observability Cloud ` in our developer portal for details. +You can also integrate GCP with Splunk Observability Cloud using the GCP API. + +See :new-page:`Integrate Google Cloud Platform Monitoring with Splunk Observability Cloud ` in our developer portal for details. .. _gcp-terraform: diff --git a/gdi/get-data-in/connect/gcp/gcp-prereqs.rst b/gdi/get-data-in/connect/gcp/gcp-prereqs.rst index e025c59dd..7c1922d31 100644 --- a/gdi/get-data-in/connect/gcp/gcp-prereqs.rst +++ b/gdi/get-data-in/connect/gcp/gcp-prereqs.rst @@ -1,22 +1,136 @@ -.. _gcp-prerequisites: .. _gcp-prereqs: ******************************************************** -GCP authentication, permissions, and supported regions +GCP authentication, permissions and supported regions ******************************************************** .. meta:: :description: Connect your Google Cloud Platform / GCP account to Splunk Observability Cloud. -The following pre-requisites apply: +.. _gcp-prerequisites: -* You must be an administrator of your Splunk Observability Cloud organization to create a GCP connection. -* Splunk Observability Cloud supports all GCP regions. +Prerequisites +============================================ -Account permissions +You must be an administrator of your Splunk Observability Cloud organization to create a GCP connection. + +Authenticate your Google account ============================================ -Starting in March 2024, GCP disables service account key creation by setting ``iam.disableServiceAccountKeyCreation`` to ``false`` by default. When this constraint is set, you cannot create user-managed credentials for service accounts in projects affected by the constraint. Check the restrictions on your organization's account keys before connecting to Splunk Observability Cloud. +You need your service account keys to be able to integrate your GCP services with Splunk Observability Cloud. Check the restrictions on your organization's account keys before connecting to Splunk Observability Cloud. 
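If key creation is allowed in your organization, you can create the key from the GCP console as part of the connection steps, or with the ``gcloud`` CLI. The following sketch uses placeholder key file, service account, and project names:

.. code-block:: shell

   # Example only: replace the key file name, service account, and project ID with your own values.
   gcloud iam service-accounts keys create splunk-gcp-key.json \
     --iam-account=splunk@<YOUR_PROJECT_ID>.iam.gserviceaccount.com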
+ +For more information, refer to: + +* GCP's docs on :new-page:`Service account keys ` +* Google's official announcement on the new permission policies at :new-page:`Introducing stronger default Org Policies for our customers ` + +Authenticate using Workload Identity Federation +-------------------------------------------------------------------------------------- + +Alternatively, if you're connecting to Splunk Observability Cloud using the API you can use :new-page:`GCP's Workload Identity Federation (WIF) ` to access your Google Cloud resources and authenticate them. It's safer, and with WIF you won't have to export and rotate service account keys. + +See how to authenticate with WIF in the Splunk Observability Cloud developer documentation at :new-page:`Integrate GCP `. + +.. _gcp-prereqs-role-permissions: + +GCP role permissions +============================================ + +You can use GCP's :strong:`Viewer` role as it comes with the permissions you need for most scenarios. + +Alternatively you can create a more restrictive role using the permissions in the table: + +.. list-table:: + :header-rows: 1 + :widths: 35 45 20 + + * - :strong:`Permission` + - :strong:`Required?` + - :strong:`Included in GCP's Viewer role?` + + * - ``compute.instances.list`` + - Yes, if the Compute Engine service is activated + - Yes + + * - ``compute.machineTypes.list`` + - Yes, if the Compute Engine service is activated + - Yes + + * - ``container.clusters.list`` + - Yes, if the Kubernetes (GKE) service is activated + - Yes -For more information, refer to Google's official announcement :new-page:`Introducing stronger default Org Policies for our customers `. + * - ``container.nodes.list`` + - Yes, if the Kubernetes (GKE) service is activated + - Yes + + * - ``container.pods.list`` + - Yes, if the Kubernetes (GKE) service is activated + - Yes + + * - ``monitoring.metricDescriptors.get`` + - Yes + - Yes + + * - ``monitoring.metricDescriptors.list`` + - Yes + - Yes + + * - ``monitoring.timeSeries.list`` + - Yes + - Yes + + * - ``resourcemanager.projects.get`` + - Yes, if you want to sync project metadata (such as labels) + - Yes + + * - ``serviceusage.services.use`` + - Yes, if you either want to activate the use of a quota from the project where metrics are stored or sync cloud sql metadata + - No, but included in ``roles/serviceusage.serviceUsageConsumer`` + + * - ``spanner.instances.list`` + - Yes, if the Spanner service is activated + - Yes + + * - ``storage.buckets.list`` + - Yes, if the Spanner service is activated + - Yes + + * - ``cloudsql.databases.list`` + - Yes, if the cloud sql service is activated + - Yes + + * - ``cloudsql.instances.list`` + - Yes, if the cloud sql service is activated + - Yes + + * - ``pubsub.topics.list`` + - Yes, if the pub/sub service is activated + - Yes + + * - ``pubsub.subscriptions.list`` + - Yes, if the pub/sub service is activated + - Yes + + * - ``run.jobs.list`` + - Yes, if the cloud run service is activated + - Yes + + * - ``run.revisions.list`` + - Yes, if the cloud run service is activated + - Yes + + * - ``cloudasset.assets.searchAllResources`` + - Yes, if the cloud run service is activated + - Yes + + * - ``cloudfunctions.functions.list`` + - Yes, if the cloud functions service is activated + - Yes + +.. _gcp-prereqs-regions: + +Supported regions +============================================ +Splunk Observability Cloud supports all GCP regions. 
\ No newline at end of file diff --git a/gdi/get-data-in/connect/gcp/gcp.rst b/gdi/get-data-in/connect/gcp/gcp.rst index 1b1b62e98..28991b6b2 100644 --- a/gdi/get-data-in/connect/gcp/gcp.rst +++ b/gdi/get-data-in/connect/gcp/gcp.rst @@ -10,7 +10,7 @@ Connect to Google Cloud Platform .. toctree:: :hidden: - GCP prerequisites + Authentication, permission and regions Supported GCP services Connect to GCP GCP metrics and metadata diff --git a/gdi/get-data-in/rum/browser/browser-rum-instrumentations.rst b/gdi/get-data-in/rum/browser/browser-rum-instrumentations.rst index 3e5683982..d9e63ac29 100644 --- a/gdi/get-data-in/rum/browser/browser-rum-instrumentations.rst +++ b/gdi/get-data-in/rum/browser/browser-rum-instrumentations.rst @@ -8,7 +8,7 @@ Instrumentation-specific data for Browser RUM .. meta:: :description: Splunk Observability Cloud real user monitoring / RUM for Browser collects the following data through automatic instrumentations. -Splunk RUM for Browser collects the following data through automatic instrumentations. To activate or deactivate instrumentations, see :ref:`browser-rum-instrumentation-settings`. +Splunk RUM for Browser collects the following data through instrumentation. To activate or deactivate instrumentations, see :ref:`browser-rum-instrumentation-settings`. .. _browser-rum-data-doc-load: diff --git a/gdi/opentelemetry/automatic-discovery/k8s/k8s-java-traces-tutorial/deploy-collector-k8s-java.rst b/gdi/opentelemetry/automatic-discovery/k8s/k8s-java-traces-tutorial/deploy-collector-k8s-java.rst index 7e604987c..cad4f8207 100644 --- a/gdi/opentelemetry/automatic-discovery/k8s/k8s-java-traces-tutorial/deploy-collector-k8s-java.rst +++ b/gdi/opentelemetry/automatic-discovery/k8s/k8s-java-traces-tutorial/deploy-collector-k8s-java.rst @@ -74,7 +74,7 @@ Deploy the Spring Petclinic Java application in your Kubernetes cluster: - Image for the Spring Petclinic application * - ``spec.template.metadata.annotations`` - ``instrumentation.opentelemetry.io/inject-java: "true"`` - - Activates Splunk OpenTelemetry automatic instrumentation for the Java application + - Activates Splunk OpenTelemetry zero-code instrumentation for the Java application After adding these keys and values, your petclinic-spec.yaml file looks like the following example: @@ -94,7 +94,7 @@ Deploy the Spring Petclinic Java application in your Kubernetes cluster: labels: app: spring-petclinic annotations: - # Activates automatic instrumentation for the Java application + # Activates zero-code instrumentation for the Java application instrumentation.opentelemetry.io/inject-java: "true" spec: containers: diff --git a/gdi/opentelemetry/automatic-discovery/windows/windows-backend.rst b/gdi/opentelemetry/automatic-discovery/windows/windows-backend.rst index c17f0ac8b..2ba2c96b6 100644 --- a/gdi/opentelemetry/automatic-discovery/windows/windows-backend.rst +++ b/gdi/opentelemetry/automatic-discovery/windows/windows-backend.rst @@ -9,7 +9,7 @@ Automatic discovery for back-end applications in Windows Automatic discovery can detect the following types of applications in your Windows environment: -Automatic discovery and configuration for OpenTelemetry .NET activates automatic instrumentation for .NET applications running on Windows. By default, automatic instrumentation is only turned on for IIS applications. To activate other application and service types, see :ref:`otel-dotnet-manual-install`. After installing the package, you must start or restart any .NET applications that you want to instrument. 
+Automatic discovery and configuration for OpenTelemetry .NET activates zero-code instrumentation for .NET applications running on Windows. By default, zero-code instrumentation is only turned on for IIS applications. To activate other application and service types, see :ref:`otel-dotnet-manual-install`. After installing the package, you must start or restart any .NET applications that you want to instrument. .. note:: The SignalFx instrumentation for .NET is deprecated and will reach end of support on February 21, 2025. To learn how to migrate from SignalFx .NET to OpenTelemetry .NET, see :ref:`migrate-signalfx-dotnet-to-dotnet-otel`. diff --git a/gdi/opentelemetry/troubleshoot-logs.rst b/gdi/opentelemetry/troubleshoot-logs.rst index 4f7ca8788..3782764a3 100644 --- a/gdi/opentelemetry/troubleshoot-logs.rst +++ b/gdi/opentelemetry/troubleshoot-logs.rst @@ -1,21 +1,20 @@ .. _tshoot-logs: **************************************************************** -Troubleshoot Collector logs +Troubleshoot log collection **************************************************************** .. meta:: - :description: Describes known issues when collecting logs with the Splunk Distribution of OpenTelemetry Collector. + :description: Describes known issues when collecting logs with the Splunk Distribution of the OpenTelemetry Collector. +This document describes common issues related to log collection with the Collector. -.. note:: See also the :new-page:`OpenTelemetry Project troublehooting docs ` for more information about debugging. +To troubleshoot the health and performance of the Collector see the :new-page:`OpenTelemetry Project troublehooting docs `. It includes information about troubleshooting tools and debugging. -Here are some common issues related to log collection on the Collector. - -Source isn't generating logs +My source isn't generating logs ========================================= -If using Linux, run the following commands to check if the source is generating Collector logs: +If using Linux, run the following commands to check if the source is generating logs: .. code-block:: bash @@ -23,7 +22,7 @@ If using Linux, run the following commands to check if the source is generating journalctl -u my-service.service -f -If using Windows, run the following command to check if the source is generating Collector logs: +If using Windows, run the following command to check if the source is generating logs: .. code-block:: shell @@ -44,11 +43,11 @@ Do the following to check the Fluentd configuration: While every attempt is made to properly configure permissions, it is possible that td-agent does not have the permission required to collect logs. Debug logging should indicate this issue. -It is possible that the ```` section configuration does not match the log events. +It's possible that the ```` section configuration does not match the log events. If you see a message such as "2021-03-17 02:14:44 +0000 [debug]: #0 connect new socket", Fluentd is working as expected. You need to activate debug logging to see this message. -Collector isn't configured properly +The Collector isn't configured properly ========================================= .. note:: Fluentd is part of the Splunk Distribution of OpenTelemetry Collector, but deactivated by default for Linux and Windows. To activate it, use the ``--with-fluentd`` option when installing the Collector for Linux, or the ``with_fluentd = 1`` option when installing the Collector for Windows. 
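+
+As a minimal sketch of the Linux case, the following commands download the installer script and activate Fluentd during installation. The realm and access token values are placeholders, and the download URL reflects the standard installer script location, so verify both against the Collector installation docs for your version.
+
+.. code-block:: bash
+
+   # Download the Collector installer script.
+   curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh
+
+   # Install the Collector with Fluentd log collection activated.
+   sudo sh /tmp/splunk-otel-collector.sh --realm <SPLUNK_REALM> --with-fluentd -- <SPLUNK_ACCESS_TOKEN>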
@@ -90,8 +89,7 @@ Depending on its configuration, the Splunk Distribution of the OpenTelemetry Col To turn off logs colletion, see :ref:`exclude-log-data` for more information. - -Send logs from the Collector to Splunk Cloud Platform or Enterprise +Send logs to Splunk Cloud Platform or Enterprise using the Collector ================================================================================== To send logs from the Collector to Splunk Cloud Platform or Splunk Enterprise, see :ref:`send_logs_to_splunk`. diff --git a/index.rst b/index.rst index 164b71202..e4235e118 100644 --- a/index.rst +++ b/index.rst @@ -338,6 +338,11 @@ To keep up to date with changes in the products, see the Splunk Observability Cl .. toctree:: :maxdepth: 3 + Centralized user and role management + +.. toctree:: + :maxdepth: 3 + Scenarios .. toctree:: @@ -730,6 +735,11 @@ To keep up to date with changes in the products, see the Splunk Observability Cl synthetics/key-concepts +.. toctree:: + :maxdepth: 3 + + synthetics/syn-ottb-dashboards + .. toctree:: :maxdepth: 3 @@ -743,18 +753,23 @@ To keep up to date with changes in the products, see the Splunk Observability Cl .. toctree:: :maxdepth: 3 - Use a Browser test to test a webpage TOGGLE + Browser tests for webpages TOGGLE .. toctree:: :maxdepth: 3 - Use an Uptime test to test port or HTTP uptime TOGGLE + Uptime Tests for port and HTTP TOGGLE .. toctree:: :maxdepth: 3 Use an API test to test an endpoint TOGGLE +.. toctree:: + :maxdepth: 3 + + synthetics/test-kpis/test-kpis + .. toctree:: :maxdepth: 3 diff --git a/infrastructure/metrics-pipeline/metrics-pipeline-intro.rst b/infrastructure/metrics-pipeline/metrics-pipeline-intro.rst index 5e76719e0..d1f7c2837 100644 --- a/infrastructure/metrics-pipeline/metrics-pipeline-intro.rst +++ b/infrastructure/metrics-pipeline/metrics-pipeline-intro.rst @@ -46,7 +46,7 @@ Use metric pipeline management to control your data volume For each metric you send to Splunk Observability Cloud, MPM can help you configure how to ingest, keep, and manage the metric's data volume and cardinality. -For example, you can decide to route your low-value metrics to archived metrics, a low-cost data tier, or even entirely drop them. Meanwhile, your high-value metrics continue to be routed to the real-time tier for alerting and monitoring. To learn more, see :ref:`mpm-rule-routing`. +For example, you can decide to route your low-value metrics to archived metrics, a low-cost data tier, or even entirely drop them. Meanwhile, your high-value metrics continue to be routed to the real-time tier for alerting and monitoring. To learn more, see :ref:`mpm-rule-routing`. You can also convert a high-cardinality metric into a low-cardinality metric by aggregating away the dimensions that are not needed. To learn more, see :ref:`mpm-rule-routing-exception`. diff --git a/infrastructure/metrics-pipeline/metrics-usage-report.rst b/infrastructure/metrics-pipeline/metrics-usage-report.rst index 8abdb763d..b9d309a89 100644 --- a/infrastructure/metrics-pipeline/metrics-usage-report.rst +++ b/infrastructure/metrics-pipeline/metrics-usage-report.rst @@ -42,7 +42,7 @@ Metric identifiers The following table has an overview of metric category types. To learn more about metric categories, see :ref:`metrics-category`. -.. include:: /_includes/metric-categories.rst +.. 
include:: /_includes/metric-classes.rst
 
 Usage statistics
 --------------------------------
diff --git a/metrics-and-metadata/metric-categories.rst b/metrics-and-metadata/metric-categories.rst
index 1e8103ec6..3b497427a 100644
--- a/metrics-and-metadata/metric-categories.rst
+++ b/metrics-and-metadata/metric-categories.rst
@@ -10,7 +10,7 @@ Metric categories
 
 These are the available categories for metrics in Splunk Observability Cloud:
 
-.. include:: /_includes/metric-categories.rst
+.. include:: /_includes/metric-classes.rst
 
 Identify and track the category of a metric
 ====================================================
diff --git a/metrics-and-metadata/metrics-landing.rst b/metrics-and-metadata/metrics-landing.rst
index 1e7bc7e70..84f12ec37 100644
--- a/metrics-and-metadata/metrics-landing.rst
+++ b/metrics-and-metadata/metrics-landing.rst
@@ -17,6 +17,7 @@ Metrics in Splunk Observability Cloud
     Histogram metrics
     Get histogram data in
     Metadata: Dimensions, properties, tags, attributes
+    metrics-usage-analytics
     Naming conventions
     Events
diff --git a/metrics-and-metadata/metrics-usage-analytics.rst b/metrics-and-metadata/metrics-usage-analytics.rst
new file mode 100644
index 000000000..19fd7d982
--- /dev/null
+++ b/metrics-and-metadata/metrics-usage-analytics.rst
@@ -0,0 +1,177 @@
+.. _metrics-usage-analytics-intro:
+
+********************************************************************
+Analyze your metric usage in Splunk Observability Cloud
+********************************************************************
+
+.. meta::
+  :description: Use usage analytics to determine the usage of your metrics in Splunk Observability Cloud.
+
+Usage analytics gives you in-depth visualizations of your metric usage in Splunk Observability Cloud. Usage analytics can help you make informed decisions about your metrics, for example, if you're deciding whether to aggregate, archive, or drop certain metrics.
+
+To learn how to use usage analytics, see :ref:`mua-understand-metrics`.
+
+For guidance on using usage analytics to manage and reduce your overall metric usage, see :ref:`mua-reduce-usage`.
+
+To learn more about metric usage and billing, see :ref:`subscription-overview`.
+
+Benefits of usage analytics
+================================================
+
+With usage analytics, you can quickly find and visualize which metrics your organization is using and how these metrics contribute to your overall monthly usage. With this information, you can accurately decide how to manage individual metrics to reduce your overall usage.
+
+Usage analytics can help you with the following example scenarios:
+
+* You want to view high-cardinality custom metrics that are taking up a large share of your metric usage plan.
+* You want to identify what metrics your team is producing so you can assess their usefulness.
+* You want to find the source and ownership of a certain metric so that you can modify or adjust it.
+
+.. _mua-understand-metrics:
+
+View and understand your metric usage
+====================================================
+
+Usage analytics displays several charts and visualizations that help you determine your metric usage relative to your usage plan.
+
+With usage analytics, you can also find more details about individual metrics, such as which dimensions the metric uses, which tokens the metric is associated with, and which charts the metric appears in.
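+
+If you prefer to inspect a metric programmatically while you review its usage, you can query the metrics metadata API with a standard HTTP client. The following request is a sketch only: the endpoint path follows the Splunk Observability Cloud metrics metadata API as described in the developer documentation, and the realm, metric name, and token values are placeholders, so confirm the exact path and parameters in the developer documentation before relying on it.
+
+.. code-block:: bash
+
+   # Retrieve metadata for a single metric, such as its description and type.
+   # Replace the realm, the metric name, and the token with values from your organization.
+   curl --request GET \
+       "https://api.<REALM>.signalfx.com/v2/metric/<METRIC_NAME>" \
+       --header "X-SF-TOKEN: <YOUR_ORG_ACCESS_TOKEN>"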
+
+Access usage analytics
+------------------------------------------------
+
+To access usage analytics in Splunk Observability Cloud, follow these steps:
+
+#. In Splunk Observability Cloud, select :guilabel:`Settings`.
+#. Under :guilabel:`Data Configuration`, select :guilabel:`Metrics Management`.
+#. Select the :guilabel:`Usage analytics` tab.
+
+The usage analytics home page contains the following visualizations:
+
+* A card displaying the average number of metric time series (MTS) per hour for your selected time frame.
+* A chart displaying the average number of MTS per half hour over the selected time frame.
+* The metrics table, displaying each of your metrics and their usage. See :ref:`mua-metrics-table` to interpret these values.
+
+.. image:: /_images/images-metrics/usage-analytics-home-page.png
+  :alt: The usage analytics home page, which displays the total MTS count, trends for hourly MTS count, and metrics with the highest utilization.
+
+.. _mua-metrics-table:
+
+Understand metric usage with the metrics table
+-------------------------------------------------
+
+The metric usage table displays the following fields:
+
+.. list-table::
+  :header-rows: 1
+
+  * - Field
+    - Description
+  * - Metric name
+    - The name of the metric.
+  * - Billing class
+    - The metric's class for billing purposes, such as custom or default/bundled. To learn more about billing classes, see :ref:`metrics-category`.
+  * - Utilization
+    - Whether the metric is used. "Unused" indicates that the metric is producing MTS, but these values aren't utilized in Splunk Observability Cloud.
+  * - Utility score
+    - Indicates how much the metric is used. A high utility score means higher usage.
+  * - Metric time series (MTS)
+    - The average number of MTS associated with this metric, measured per hour.
+  * - Percentage of total
+    - How much of your total usage plan this metric uses.
+
+You can use the options at the top of the page to filter metrics by time, billing class, utilization, and token.
+
+For example, if you only want to see metrics that are unused, follow these steps:
+
+#. Select the box with :guilabel:`Utilization: Any`.
+#. In the menu, select :guilabel:`Unused`.
+#. Select :guilabel:`Run search`.
+
+After running the search, the usage analytics page displays only metrics that are unused. To revert the search, select :guilabel:`Reset`.
+
+.. note:: Running searches with filters that yield more results, such as searching for metrics from the previous 30 days instead of the previous 24 hours, might cause the search to run more slowly.
+
+View dimensions, tokens, and charts with metric profiles
+---------------------------------------------------------
+
+Usage analytics includes metric profiles for each of your metrics. To access a metric profile, select one of the metrics in your metric usage table.
+
+Metric profiles provide the following tables with additional information about the metric:
+
+.. list-table::
+  :header-rows: 1
+  :widths: 20, 40, 40
+
+  * - Table
+    - Description
+    - Notes
+  * - Dimensions
+    - Displays the dimension names of the metric sorted by average hourly MTS count. High-cardinality dimensions appear at the top of the list.
+    - Displays up to 5000 dimensions.
+  * - Tokens
+    - Displays the token name and ID for each metric, sorted by the number of metric time series associated with the token.
+    - Displays up to 5000 tokens.
+  * - Charts
+    - Displays the charts and dashboards associated with each of your metrics, as well as the user who last updated the chart and the time they updated it.
+ - None + * - Detectors + - Displays the detectors associated with each of your metrics, as well as the user who last updated the detector and the time they updated it. + - None + +For example, the following metric profile displays information about the CPUUtilization metric, including the metric's dimensions: + +.. image:: /_images/images-metrics/usage-analytics-example-profile.png + :alt: Information about the CPUUtilization metric, including the total MTS, the percentage of total MTS, and related tokens, dimensions, charts, and detectors. + +.. _mua-reduce-usage: + +Manage and reduce your metric usage +================================================ + +This section contains tips for identifying metrics that you can aggregate, archive, or drop for the purpose of reducing your metric usage. + +Archive or drop unused metrics +----------------------------------------------- + +Using the metrics table, you can find metrics that aren't used. If you have any unused metrics, you can archive them so they take up less of your usage plan. + +Archived metrics go to an archival route in Splunk Observability Cloud, where they remain unused and have a lower billing cost. You can bring them out of the archival route whenever you need to use them again. + +To learn more about archiving metrics, see :ref:`archived-metrics-intro`. + +If you aren't using these metrics and don't plan on using them in the future, consider dropping them to save usage space. To learn more about dropping metrics, see :ref:`mpm-rule-routing`. + +Find metrics with low utility scores and aggregate them +------------------------------------------------------------- + +If you have metrics with low utility scores, consider aggregating them to reduce the total number of metrics. + +To help decide whether to aggregate these metrics, follow these steps: + +#. Select the metric you're considering aggregating to open the metric profile. +#. Select the :guilabel:`Detectors` tab to check whether the metric appears in any detectors. +#. If the metric doesn't appear in detectors, check the :guilabel:`Charts` tab to see which charts use it. +#. Consider whether the metric is important to keep in the respective charts. If not, then aggregate the metric with other dimensions to reduce usage. + +To learn more about how to aggregate metrics, see :ref:`mpm-rule-agreggation`. + +Reduce the cardinality of your metrics +-------------------------------------------------------------- + +If you have metrics with high cardinality, consider using a routing exception rule to reroute specific MTS. For example, you can archive or drop MTS with dimensions that you aren't using. + +To learn more about using routing exception rules, see :ref:`mpm-rule-routing-exception`. + + + + + + + + + + + + + + + diff --git a/metrics-and-metadata/search.rst b/metrics-and-metadata/search.rst index 6427c89a1..ce21c8f7e 100644 --- a/metrics-and-metadata/search.rst +++ b/metrics-and-metadata/search.rst @@ -14,8 +14,6 @@ Prerequisites Search only shows results for Splunk APM if your organization has access to Splunk APM. -Search is currently limited to Splunk APM, dashboards, charts, Infrastructure Monitoring navigators, and docs results. - .. 
_prefix: Supported search prefixes @@ -26,7 +24,7 @@ Narrow your search results to specific types of objects by using one of the supp Supported search prefixes include: - metric search -- dashboard +- dashboards - chart - team - metric @@ -41,6 +39,12 @@ Supported search prefixes include: - trace (APM trace) - service (APM service) - business workflow (APM workflow) +- application, app (RUM application) +- session (RUM session ID) +- test (Synthetics test) +- private location (Synthetics private location) +- saved query (Log Observer saved query) +- connection (Log Observer connection) .. - index (Log index) PI2 .. - saved query (Log saved query) @@ -50,8 +54,7 @@ Use the prefix in a 'key value pair' format to narrow your search. For example, You can also search using only the prefix to search for all objects of that type. - -How to use observability search +How to use Observability search ===================================== You can either search a specific term, or define what type of object you're looking for by using one of the supported prefixes to narrow the search to specific result types. This allows you to search for a specific object, if you know the type and name. Or, you can search by prefix type if you're unsure of the name. diff --git a/references/glossary.rst b/references/glossary.rst index a5b641e27..30e42f92f 100644 --- a/references/glossary.rst +++ b/references/glossary.rst @@ -26,9 +26,6 @@ A automatic discovery Automatic discovery is a feature of the Splunk Distribution of the OpenTelemetry Collector that identifies the services, such as third-party databases and web servers, running in your environment and sends telemetry data from them to Splunk Application Performance Monitoring (APM) and Infrastructure Monitoring. The Collector configures service-specific receivers that collect data from an endpoint exposed on each service. For more information, see :ref:`discovery_mode`. - automatic instrumentation - Automatic instrumentation allows you to instrument your applications and export telemetry data without having to modify the application source files. The language-specific instrumentation agent configures the source application to export data in a supported format to an OTLP endpoint, on either an OTLP receiver or the Splunk Observability Cloud back end. Automatic instrumentation is available for applications written in Java, Node.js, .NET, Go, Python, Ruby, and PHP and automatically collects telemetry data for code written using supported libraries in each language. For more information, see :ref:`get-started-application`. - C == @@ -192,3 +189,10 @@ T trace A trace is a collection of operations that represents a unique transaction handled by an application and its constituent services. Traces are made of spans, which are calls that microservices make to each other. +Z +== + +.. glossary:: + + zero-code instrumentation + Zero-code instrumentation allows you to instrument your applications and export telemetry data without having to modify the application source files. The language-specific instrumentation agent configures the source application to export data in a supported format to an OTLP endpoint, on either an OTLP receiver or the Splunk Observability Cloud back end. Zero-code instrumentation is available for applications written in Java, Node.js, .NET, Go, Python, Ruby, and PHP and automatically collects telemetry data for code written using supported libraries in each language. For more information, see :ref:`get-started-application`. 
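+
+As an illustration of the glossary entry above, zero-code instrumentation for a Java service usually amounts to attaching the language agent at startup rather than editing application code. The agent file name and environment variable in this sketch follow common Splunk and OpenTelemetry conventions and are assumptions, so check the Java instrumentation docs for the exact values for your distribution.
+
+.. code-block:: bash
+
+   # Attach the Java agent at startup; no application source changes are needed.
+   # OTEL_SERVICE_NAME labels the telemetry that this service emits.
+   export OTEL_SERVICE_NAME=checkout-service
+   java -javaagent:./splunk-otel-javaagent.jar -jar checkout-service.jar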
diff --git a/splunkplatform/centralized-rbac.rst b/splunkplatform/centralized-rbac.rst
new file mode 100644
index 000000000..a036a1c1d
--- /dev/null
+++ b/splunkplatform/centralized-rbac.rst
@@ -0,0 +1,166 @@
+
+
+.. _centralized-rbac:
+
+*************************************************************************************************
+Centralized user and role management
+*************************************************************************************************
+
+.. meta::
+  :description: This page describes how Splunk Cloud Platform admins can manage Splunk Observability Cloud roles from Splunk Cloud Platform.
+
+Administrators can now centrally manage users and roles for both Splunk Cloud Platform and Splunk Observability Cloud in Splunk Cloud Platform. Splunk Cloud Platform becomes the role-based access control (RBAC) store for Splunk Observability Cloud.
+
+Who can access centralized user and role management?
+=================================================================================================
+
+All customers who have Unified Identity can access centralized user and role management in Splunk Cloud Platform. Unified Identity is available to Splunk Cloud Platform and Splunk Observability Cloud customers co-located in the same AWS region.
+
+Prerequisites
+=================================================================================================
+
+Customers who meet the following criteria can access centralized user and role management:
+
+* Splunk Cloud Platform version is 9.3.2408 or higher.
+
+* Unified Identity is set up. See :ref:`unified-id-unified-identity` for more information.
+
+* Your Splunk Cloud Platform and Splunk Observability Cloud organizations are co-located in the same AWS region. See the following table:
+
+.. list-table::
+  :header-rows: 1
+  :width: 100%
+
+  * - :strong:`Splunk Observability Cloud realm`
+    - :strong:`AWS Region`
+  * - us0
+    - AWS US East Virginia (us-east-1)
+  * - us1
+    - AWS US West Oregon (us-west-2)
+  * - eu0
+    - AWS EU Dublin (eu-west-1)
+  * - eu1
+    - AWS EU Frankfurt (eu-central-1)
+  * - eu2
+    - AWS EU London (eu-west-2)
+  * - au0
+    - AWS AP Sydney (ap-southeast-2)
+  * - jp0
+    - AWS AP Tokyo (ap-northeast-1)
+
+
+How to set up centralized user and role management
+=================================================================================================
+
+You can set up centralized user and role management whether or not you already have Splunk Observability Cloud. If you want to set up centralized user and role management but you do not have Splunk Observability Cloud yet, see the next section, :ref:`rbac-new-o11y-customers`. If you already have Splunk Observability Cloud, follow the instructions in :ref:`rbac-existing-o11y-customers` to set up centralized user and role management.
+
+.. _rbac-new-o11y-customers:
+
+New Splunk Observability Cloud customers
+-------------------------------------------------------------------------------------------------
+
+If you do not yet have Splunk Observability Cloud, inform your Splunk sales representative that you want to purchase Splunk Observability Cloud or start a trial. The sales representative initiates a Splunk Observability Cloud trial that is already integrated with your Splunk Cloud Platform instance and has centralized user and role management already configured.
+
+.. _rbac-existing-o11y-customers:
+
+Existing Splunk Observability Cloud customers
+-------------------------------------------------------------------------------------------------
+
+Once you have configured Unified Identity, you can use the Admin Config Service (ACS) to set up centralized user and role management. If you haven't installed the ACS command-line tool and want to use it, see :new-page:`Administer Splunk Cloud Platform using the ACS CLI `.
+
+To set up centralized user and role management, follow these steps:
+
+1. Confirm that your organization has set up Unified Identity. If not, run the following Admin Config Service (ACS) command to set up Unified Identity:
+
+   .. code-block:: bash
+
+      acs observability pair --o11y-access-token ""
+
+   Replace ```` in the example with the user API access token that you retrieved from Splunk Observability Cloud.
+
+2. Run the following ACS command to add prepackaged Splunk Observability Cloud roles to your Splunk Cloud Platform instance:
+
+   .. code-block:: bash
+
+      acs observability enable-capabilities
+
+3. Give all users who should have access to Splunk Observability Cloud the ``o11y_access`` role.
+
+4. Log in to Splunk Cloud Platform as an administrator and go to :guilabel:`Settings`, then :guilabel:`Users and Authentication`, then :guilabel:`Roles`. Assign Splunk Observability Cloud roles to users. The following Splunk Observability Cloud roles (with the ``o11y_*`` prefix) are now visible on the Splunk Cloud Platform role management page:
+
+   * o11y_admin
+
+   * o11y_power
+
+   * o11y_read_only
+
+   * o11y_usage
+
+   See :ref:`roles-table-phase` to learn precisely what each role can do.
+
+5. If you want users to have access to real-time Splunk Observability Cloud metrics in Splunk Cloud Platform, give them the ``read_o11y_content`` and ``write_o11y_content`` capabilities.
+
+6. Allow your Splunk Observability Cloud organization to start using Splunk Cloud Platform as the source of role-based access control (RBAC) by enabling centralized RBAC.
+
+   .. note:: When you run the command to enable centralized RBAC, Splunk Cloud Platform becomes the RBAC store for all Splunk Observability Cloud users who authenticate using their Splunk Cloud Platform credentials. Therefore, you must assign a Splunk Observability Cloud role to each affected user in Splunk Cloud Platform before running the command to enable centralized RBAC. If not, the user will be locked out of Splunk Observability Cloud because they won't have a role.
+
+   Run the following ACS command to enable centralized RBAC:
+
+   .. code-block:: bash
+
+      acs observability enable-centralized-rbac --o11y-access-token
+
+How centralized user and role management works
+=================================================================================================
+
+After setting up centralized user and role management, Splunk Cloud Platform is the source of role-based access control (RBAC) for Splunk Observability Cloud users. Splunk Observability Cloud roles are now visible in Splunk Cloud Platform and assignable to Splunk users. See :ref:`roles-table-phase` to learn exactly what each role can do.
+
+When a user logs in to Splunk Observability Cloud with their Splunk Cloud Platform credentials, Splunk Cloud Platform becomes the RBAC store, or source of truth for roles. Their role is the role assigned to their user in Splunk Cloud Platform. Their role is visible only in Splunk Cloud Platform, and is no longer visible in the Splunk Observability Cloud UI.
An administrator must make updates to roles in Splunk Cloud Platform. + +Conversely, when a user logs in to Splunk Observability Cloud locally or through a third party identity provider and not with Splunk Cloud Platform credentials, then Splunk Observability Cloud remains the source of truth and displays their role in the UI. In this case, an administrator can see and update their role in the Splunk Observability Cloud UI. + +Whenever you create a new user in Splunk Observability Cloud using Unified Identity, you still need to give that user the ``o11y_access`` role. + +If you want a Splunk Cloud Platform user who is not a Splunk Observability Cloud user to access Real Time Metrics in Splunk Cloud, you must give them the ``read_o11y_content`` and ``write_o11y_content`` capabilities. + +Troubleshooting +================================================================================================= + +Following are known issues along with their solutions. + +No access issue +------------------------------------------------------------------------------------------------- +The user can’t log in to Splunk Observability Cloud after configuring centralized user and role management. The user sees error message, “You do not have access to Splunk Observability Cloud…” + +Cause +------------------------------------------------------------------------------------------------- +The user's Splunk Cloud Platform stack might be undergoing maintenance. Alternatively, the administrator who configured centralized user and role management might have forgotten to give the user the ``o11y_access`` role. + +Solution +------------------------------------------------------------------------------------------------- + +First, confirm that the Splunk Cloud Platform instance is available and not undergoing maintenance. + +Next, confirm that the user with login problems has both of the following roles in Splunk Cloud Platform: + +* the ``o11y_access`` role + +* one of the ``o11y_*`` roles (See the complete step 3 in the previous section.) + + +Lastly, check the signalboost-rest skynet logs, searching for errors containing the keyword ``SplunkCloudPlatformAuthManager``. + +Multiple errors issue +------------------------------------------------------------------------------------------------- +After an administrator has set up centralized user and role management, the user sees errors across the UI after logging in. + +Cause +------------------------------------------------------------------------------------------------- +The user's Splunk Cloud Platform stack might be undergoing maintenance. Another cause might be that token authentication is not active on the Splunk Cloud Platform instance. + +Solution +------------------------------------------------------------------------------------------------- +First, confirm that the paired Splunk search head or search head cluster is available and not undergoing maintenance. + +Next, check that token authentication is active on the Splunk Cloud Platform instance. + diff --git a/splunkplatform/unified-id/unified-identity.rst b/splunkplatform/unified-id/unified-identity.rst index 91390b7f0..62cf9e497 100644 --- a/splunkplatform/unified-id/unified-identity.rst +++ b/splunkplatform/unified-id/unified-identity.rst @@ -86,6 +86,7 @@ Splunk Cloud Platform customers who want to purchase Splunk Observability Cloud 2. Turn on token authentication to allow Splunk Observability Cloud to view your Splunk Cloud Platform logs. 
See :new-page:`Enable or disable token authentication ` to learn how. +.. _existing-setup-unified-identity: Set up Unified Identity for existing Splunk Observability Cloud customers ------------------------------------------------------------------------------------------ diff --git a/synthetics/api-test/api-test-results.rst b/synthetics/api-test/api-test-results.rst index 9e44cab5d..e06d9d5c5 100644 --- a/synthetics/api-test/api-test-results.rst +++ b/synthetics/api-test/api-test-results.rst @@ -27,50 +27,7 @@ On the :guilabel:`Test History` page, view a customizable summary of recent run Customize the Performance KPIs chart -------------------------------------------------- -The :guilabel:`Performance KPIs` chart offers a customizable visualization of your recent test results. Use these steps to customize the visualization: - -In the :guilabel:`Performance KPIs` chart, use the selectors to adjust the following settings: - - .. list-table:: - :header-rows: 1 - :widths: 20 20 60 - - * - :strong:`Option` - - :strong:`Default` - - :strong:`Description` - - * - Time - - Last 8 hours - - Choose the amount of time shown in the chart. - - * - Interval - - Run level - - | Interval between each pair of data points. - | - | When you choose :strong:`Run level`, each data point on the chart corresponds to an actual run of the test; choosing larger intervals shows an aggregation of results over that time interval. - | - | If you choose a level higher than :strong:`Run level`, the data points you see are aggregations of multiple runs. You can select an aggregate data point in the chart to zoom in and view the data at a per-run level. - - * - Scale - - Linear - - Choose whether the y-axis has a linear or logarithmic scale. - - * - Segment by - - Location - - | Choose whether the data points are segmented by run location or no segmentation: - | - | - Choose :strong:`No segmentation` to view data points aggregated from across all locations in your test. - | - Choose :strong:`Location` to compare performance across multiple test locations. - | - | Toggle between these options to see your test data sliced in various ways. - - * - Filter - - All options selected - - If you have enabled segmentation, choose the run locations, pages, or transactions you want to display on the chart. - - * - Metrics - - Duration - - By default, the chart displays the :guilabel:`Duration` metric. Use the drop-down list to choose the metrics you want to view in the chart. +See :ref:`test-kpis`. View results for a specific run diff --git a/synthetics/browser-test/browser-test-results.rst b/synthetics/browser-test/browser-test-results.rst index 64492fb8f..f12dcd351 100644 --- a/synthetics/browser-test/browser-test-results.rst +++ b/synthetics/browser-test/browser-test-results.rst @@ -29,53 +29,7 @@ On the :guilabel:`Test History` page, view a customizable summary of recent run Customize the Performance KPIs chart -------------------------------------------------- -The :guilabel:`Performance KPIs` chart offers a customizable visualization of your recent test results. Use these steps to customize the visualization: - -In the :guilabel:`Performance KPIs` chart, use the selectors to adjust the following settings: - - .. list-table:: - :header-rows: 1 - :widths: 20 20 60 - - * - :strong:`Option` - - :strong:`Default` - - :strong:`Description` - - * - Time - - Last 8 hours - - Choose the amount of time shown in the chart. - - * - Interval - - Run level - - | Interval between each pair of data points. 
- | - | When you choose :strong:`Run level`, each data point on the chart corresponds to an actual run of the test; choosing larger intervals shows an aggregation of results over that time interval. - | - | If you choose a level higher than :strong:`Run level`, the data points you see are aggregations of multiple runs. You can select an aggregate data point in the chart to zoom in and view the data at a per-run level. - - * - Scale - - Linear - - Choose whether the y-axis has a linear or logarithmic scale. - - * - Segment by - - Location - - | Choose whether the data points are segmented by run location, test page, synthetic transaction, or no segmentation: - | - | - Choose :strong:`No segmentation` to view data points aggregated from across all locations, pages, and synthetic transactions in your test. - | - Choose :strong:`Location` to compare performance across multiple test locations. - | - Choose :strong:`Page` if your test includes multiple pages and you want to compare performance across pages. - | - Choose :strong:`Synthetic transaction` to compare performance across multiple synthetic transactions in your test. - | - | Toggle between these options to see your test data sliced in various ways. - - * - Filter - - All options selected - - If you have enabled segmentation, choose the run locations, pages, or transactions you want to display on the chart. - - * - Metrics - - Duration - - By default, the chart displays the :guilabel:`Duration` metric. Use the drop-down list to choose the metrics you want to view in the chart. - +See :ref:`test-kpis`. View results for a specific run --------------------------------- diff --git a/synthetics/browser-test/set-up-browser-test.rst b/synthetics/browser-test/set-up-browser-test.rst index 9c8d14566..a33dee426 100644 --- a/synthetics/browser-test/set-up-browser-test.rst +++ b/synthetics/browser-test/set-up-browser-test.rst @@ -392,7 +392,6 @@ Auto-retry Run a test again automatically if it fails without any user intervention. It's a best practice to turn on auto-retry to reduce unnecessary failures from temporary interruptions like network issues, timeouts, or intermittent issues on your site. Auto-retry runs do not impact subscription usage, only the completed run result counts towards your subscription usage. Auto-retry requires at least runner version 0.9.29. -.. Security .. _browser-validation: @@ -420,7 +419,6 @@ When executing the browser test, the Chrome browser is configured with the crede More details on Chrome authentication are available :new-page:`here list `. -.. Custom content .. _browser-headers: @@ -510,6 +508,19 @@ Here are the limits for each type of wait time. The maximum limit for a run is 3 +Chrome flags +---------------- +Google Chrome flags are a helpful tool for troubleshooting. Activate browser features that are not available by default to test custom browser configurations and specialized use cases, like a proxy server. + +For more, see +:new-page:`What are Chrome flags? ` in the Google Chrome Developer guide. + +Note: Global variables are incompatible with Chrome flags. + +These are the flags available: + + +.. include:: /_includes/synthetics/chrome-flags.rst diff --git a/synthetics/syn-ottb-dashboards.rst b/synthetics/syn-ottb-dashboards.rst new file mode 100644 index 000000000..00f316a62 --- /dev/null +++ b/synthetics/syn-ottb-dashboards.rst @@ -0,0 +1,56 @@ +.. 
_syn-ottb-dashboards:
+
+********************************************************
+Synthetics built-in dashboards
+********************************************************
+
+.. meta::
+  :description: Learn how to use the built-in dashboards for Splunk Synthetic Monitoring tests.
+
+The built-in dashboards show helpful metrics on your subscription usage and trends in your test data, and let you filter test metrics by organization. These built-in dashboards are a convenient way to find information such as:
+
+* The percentage of failed runs and run success rates by test
+* Usage per organization, based on the volume of tests you run each month
+* Total run counts
+* Performance metrics and web vitals for Browser tests
+
+Go to Synthetics built-in dashboards
+====================================
+To find these dashboards, follow these steps:
+
+#. Select :guilabel:`Dashboards`.
+#. Type in ``Synthetic Monitoring``.
+#. Choose the dashboard from the list that best suits your situation.
+
+Here is the list of all the available dashboards:
+
+.. image:: /_images/synthetics/Synth-built-in-dashboards.png
+  :width: 60%
+  :alt: Screenshot showing the main navigation menu, which consists of the product offerings APM, Infrastructure, Log Observer, RUM, and Synthetics. The Dashboards view is selected, with the pointer hovering over the list of nine built-in dashboards available for Synthetics.
+
+
+Troubleshoot an issue from a built-in dashboard
+========================================================================
+
+If you want to do additional troubleshooting and explore data from a built-in dashboard, select the settings symbol in any tile, then select :guilabel:`Troubleshoot from this time window` to open the data in Splunk APM or Splunk RUM.
+
+.. image:: /_images/synthetics/ootb-dashboard-modal.png
+  :width: 40%
+  :alt: Screenshot showing the troubleshooting tab for a tile in the dashboard with an option to open the data in RUM or APM.
+
+
+Dashboards for alerts and detectors
+==================================================
+
+To create charts and dashboards for your Synthetics alerts and detectors, see:
+
+* :ref:`Link detectors to charts ` in Alerts & Detectors.
+
+* :ref:`Dashboards in Splunk Observability Cloud ` in Dashboards and Charts.
+
+
+Learn more
+==============
+
+* :ref:`Track service performance using dashboards in Splunk APM`
+* :ref:`Create and customize dashboards`
\ No newline at end of file
diff --git a/synthetics/test-kpis/test-kpis.rst b/synthetics/test-kpis/test-kpis.rst
new file mode 100644
index 000000000..dc15e193a
--- /dev/null
+++ b/synthetics/test-kpis/test-kpis.rst
@@ -0,0 +1,56 @@
+.. _test-kpis:
+
+***************************************************
+Test performance KPIs
+***************************************************
+
+.. meta::
+  :description: Learn how to use the performance KPIs chart to analyze Splunk Synthetic Monitoring test results.
+
+
+KPIs measure how well your tests perform in a variety of circumstances. The test details page includes two tabs: availability and performance KPIs. The availability tab shows when the test was up and running versus failing, and whether an auto-retry run occurred.
+
+Here are some ways you can troubleshoot issues in the performance KPI chart:
+
+* Zoom in on a range of time to isolate an issue.
+* Play or pause windows of time during troubleshooting, and open run results, screen captures, and charts in context with the selected data.
+* Data density adjusts automatically for zoomed-in views and summaries of larger time ranges.
+* View up to 90 days of historical data for related run results.
+ + +Performance KPI chart settings +-------------------------------------------------- +The :guilabel:`Performance KPIs` chart offers a customizable visualization of your recent test results. + + .. list-table:: + :header-rows: 1 + :widths: 20 20 60 + + * - :strong:`Option` + - :strong:`Default` + - :strong:`Description` + + * - Time + - Last 8 hours + - Choose the amount of time shown in the chart. + + * - Segment by + - Location + - | Choose whether the data points are segmented by run location or no segmentation: + | + | - Choose :strong:`No segmentation` to view data points aggregated from across all locations, pages, and synthetic transactions in your test. + | - Choose :strong:`Location` to compare performance across multiple test locations. + | + + * - Locations + - All locations selected + - Choose the run locations you want to display on the chart. + + * - Filter + - All locations selected + - If you have enabled segmentation by location, choose the run locations you want to display on the chart. + + * - Metrics + - Run duration + - By default, the chart displays the :guilabel:`Duration` metric. Use the drop-down list to choose the metrics you want to view in the chart. + diff --git a/synthetics/uptime-test/uptime-test-results.rst b/synthetics/uptime-test/uptime-test-results.rst index 9f6abcbbd..af0605bc6 100644 --- a/synthetics/uptime-test/uptime-test-results.rst +++ b/synthetics/uptime-test/uptime-test-results.rst @@ -33,54 +33,7 @@ On the :guilabel:`Test History` page, view a customizable summary of recent run Customize the Performance KPIs chart -------------------------------------------------- -The :guilabel:`Performance KPIs` chart offers a customizable visualization of your recent test results. Use these steps to customize the visualization: - -In the :guilabel:`Performance KPIs` chart, use the selectors to adjust the following settings: - - .. list-table:: - :header-rows: 1 - :widths: 20 20 60 - - * - :strong:`Option` - - :strong:`Default` - - :strong:`Description` - - * - Time - - Last 8 hours - - Choose the amount of time shown in the chart. - - * - Interval - - Run level - - | Interval between each pair of data points. - | - | When you choose :strong:`Run level`, each data point on the chart corresponds to an actual run of the test; choosing larger intervals shows an aggregation of results over that time interval. - | - | If you choose a level higher than :strong:`Run level`, the data points you see are aggregations of multiple runs. You can select an aggregate data point in the chart to zoom in and view the data at a per-run level. - - * - Scale - - Linear - - Choose whether the y-axis has a linear or logarithmic scale. - - * - Segment by - - Location - - | Choose whether the data points are segmented by run location or no segmentation: - | - | - Choose :strong:`No segmentation` to view data points aggregated from across all locations, pages, and synthetic transactions in your test. - | - Choose :strong:`Location` to compare performance across multiple test locations. - | - - * - Locations - - All locations selected - - Choose the run locations you want to display on the chart. - - * - Filter - - All locations selected - - If you have enabled segmentation by location, choose the run locations you want to display on the chart. - - * - Metrics - - Run duration - - By default, the chart displays the :guilabel:`Duration` metric. Use the drop-down list to choose the metrics you want to view in the chart. - +See :ref:`test-kpis`. 
View results for a specific run ---------------------------------