diff --git a/_images/get-started/onboarding-guide-2point0-flowonly.svg b/_images/get-started/onboarding-guide-2point0-flowonly.svg new file mode 100644 index 000000000..418d2431f --- /dev/null +++ b/_images/get-started/onboarding-guide-2point0-flowonly.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/_images/get-started/onboarding-guide-2point0-initial.svg b/_images/get-started/onboarding-guide-2point0-initial.svg new file mode 100644 index 000000000..2c50cc1d2 --- /dev/null +++ b/_images/get-started/onboarding-guide-2point0-initial.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/_images/get-started/onboarding-guide-2point0-readiness.svg b/_images/get-started/onboarding-guide-2point0-readiness.svg new file mode 100644 index 000000000..b7ee89661 --- /dev/null +++ b/_images/get-started/onboarding-guide-2point0-readiness.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/_images/get-started/onboarding-guide-2point0-scaled.svg b/_images/get-started/onboarding-guide-2point0-scaled.svg new file mode 100644 index 000000000..eaa255241 --- /dev/null +++ b/_images/get-started/onboarding-guide-2point0-scaled.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/_static/signalfx-alabaster.css b/_static/signalfx-alabaster.css index c8810d662..fa61b636c 100644 --- a/_static/signalfx-alabaster.css +++ b/_static/signalfx-alabaster.css @@ -948,7 +948,7 @@ a.image-reference:hover margin-top: -35px; } -#welcome .newparawithicon [class^="icon-"], #welcome .newparawithicon [class*=" icon-"]{ +#welcome .newparawithicon [class^="icon-"], #welcome .newparawithicon [class*=" icon-"], #get-started-with-splunk-observability-cloud .newparawithicon [class^="icon-"], #get-started-with-splunk-observability-cloud .newparawithicon [class*=" icon-"]{ font-size: 28px; position: absolute; overflow: hidden; diff --git a/_static/signalfx-includes.css b/_static/signalfx-includes.css index 8e94eb1a3..cd23fbfd6 100755 --- a/_static/signalfx-includes.css +++ b/_static/signalfx-includes.css @@ -742,7 +742,7 @@ border-left: 1px solid #eee; margin: 0 0 10px 0px; } - #welcome .section .section{ + .section .section{ width:100%; } @@ -853,7 +853,7 @@ div.sphinxsidebar p { margin:17px 0 5px; } -#welcome { +#welcome, #get-started-with-splunk-observability-cloud { width:100%; } diff --git a/admin/admin-onboarding/admin-onboarding-guide.rst b/admin/admin-onboarding/admin-onboarding-guide.rst deleted file mode 100644 index d2db7ec31..000000000 --- a/admin/admin-onboarding/admin-onboarding-guide.rst +++ /dev/null @@ -1,55 +0,0 @@ -.. _admin-onboarding-guide: - -Admin guide for onboarding Splunk Observability Cloud -****************************************************** - -.. meta:: - :description: Guide for existing admins in a Splunk Observability Cloud organization to onboard Splunk Observability Cloud in their organization. - -.. toctree:: - :hidden: - :maxdepth: 3 - - Phase 1: Onboarding - Phase 2: Pilot rollout - Phase 3: Expansion and optimization - -Follow these steps to onboard Splunk Observability Cloud in your organization. To complete the onboarding process, ensure you have the admin role in your Splunk Observability Cloud organization. There are 3 phases to the onboarding journey for Splunk Observability Cloud: - -.. list-table:: - :header-rows: 1 - :widths: 33 33 33 - :width: 100% - - * - 1. Onboarding phase - - 2. Pilot rollout phase - - 3. Expansion and optimization phase - - * - Perform onboarding activities that connect Splunk Observability Cloud to your existing software framework. 
- - Part 1: Configure your user and team administration - - Part 2: Design your architecture and get data in - - See :ref:`phase1`. - - - Set up standards and procedures for end users, like development teams and site reliability engineers. - - Part 1: Plan your pilot rollout - - Part 2: Initial pilot rollout for Splunk Infrastructure Monitoring - - Part 3: Initial pilot rollout for Splunk Application Performance Monitoring - - See :ref:`phase2`. - - - Carry forward best practices and frameworks established during the pilot rollout to your infrastructure, applications, and teams. - - Part 1: Expand and optimize Splunk Infrastructure Monitoring - - Part 2: Expand and optimize Splunk Application Performance Monitoring - - See :ref:`phase3`. - - - diff --git a/admin/admin-onboarding/phase1/phase1-arch-gdi.rst b/admin/admin-onboarding/phase1/phase1-arch-gdi.rst deleted file mode 100644 index 3926dffe0..000000000 --- a/admin/admin-onboarding/phase1/phase1-arch-gdi.rst +++ /dev/null @@ -1,168 +0,0 @@ -.. _phase1-arch-gdi: - -Onboarding part 2: Design your architecture and get data in -********************************************************************* - -After completing :ref:`phase1-team-user-admin`, you are ready for the second part of the onboarding phase. In this part of the onboarding phase, you get familiar with important concepts, gather requirements, and begin integrating Splunk Observability Cloud into your existing environment. To design your architecture and get data in, complete the following tasks: - -.. meta:: - :description: - -#. :ref:`phase1-otel` -#. :ref:`phase1-arch-proto` -#. :ref:`phase1-network` -#. :ref:`phase1-metrics` -#. :ref:`phase1-host-k8s` -#. :ref:`phase1-3rd-party` -#. :ref:`phase1-apm` -#. :ref:`phase1-logs` -#. :ref:`phase1-dashboards-detectors` - -.. note:: - Work closely with your Splunk Sales Engineer or Splunk Customer Success Manager throughout your onboarding process. They can help you fine tune your Splunk Observability Cloud journey and provide best practices, training, and workshop advice. - -.. _phase1-otel: - -Get familiar with OpenTelemetry concepts -========================================================== - -Spend some time to understand the concepts of the OpenTelemetry Collector. Pay special attention to configuring receivers, processors, exporters, and connectors since most OpenTelemetry configurations have each of these pipeline components. - -See :new-page:`https://opentelemetry.io/docs/concepts/`. - -.. _phase1-arch-proto: - -Create an architecture prototype -========================================================== - -Create a prototype architecture solution for Splunk Observability Cloud in your organization. Complete the following tasks to create a prototype: - -#. Get familiar with setting up and connecting applications to Splunk Observability Cloud. Set up an initial OpenTelemetry Collector on a commonly used platform, such as a virtual machine instance or a Kubernetes cluster. - - See :ref:`infrastructure-infrastructure` and :ref:`otel-intro` for more information. -#. In most cases, you also need to connect Splunk Observability Cloud to your cloud provider. To ingest data from cloud providers, such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP), you need to set up cloud integrations. - - See :ref:`supported-data-sources` for supported integrations. -#. Determine the OTel deployment mode you want to use: host (agent) and data forwarding (gateway). Host (agent) mode is the default mode. 
- See :ref:`otel-deployment-mode` for more information. -#. When deploying OpenTelemetry in a large organization, it's critical to define a standardized naming convention for tagging and a governance process to ensure the convention is adhered to. Standardized naming also makes it easier to find metrics and identify usage. See :ref:`metric-dimension-names` and :new-page:`Naming conventions for tagging with OpenTelemetry and Splunk`. - - There are a few cases where incorrect naming affects in-product usage data: - - * If your organization uses host-based Splunk Observability Cloud licensing, your OpenTelemetry naming convention must use the OpenTelemetry host semantic convention to track usage and telemetry correctly. See :new-page:`the OpenTelemetry semantic conventions for hosts`. - * You must use the Kubernetes attributes processor for Kubernetes pods to ensure standard naming and accurate usage counting for host-based organizations. See :ref:`kubernetes-attributes-processor`. - - See :ref:`metric-dimension-names`. -#. Select at least 1 application or service to collect metrics from as part of your prototype. This helps you see the corresponding dashboards and detectors created when Splunk Observability Cloud receives your metrics. For example, you can use OpenTelemetry receivers to include services like an Nginx server, an Apache web server, or a database such as MySQL. - - See :ref:`nginx`, :ref:`apache-httpserver`, or :ref:`mysql`. -#. Get familiar with the Splunk Observability Cloud receivers for various applications and services. Each receiver has corresponding dashboards and detectors that are automatically created for each integration after the receiver reaches over 50,000 data points. - - See :ref:`monitor-data-sources`, :ref:`built-in-dashboards`, and :ref:`autodetect`. - -.. _phase1-network: - -Analyze your required network communication -============================================= - -Analyze your required network communication by determining which ports need to be open, which protocols to use, and proxy considerations. - -* See :ref:`otel-exposed-endpoints` to determine which ports you need to open in the firewall and which protocols you need to turn on or off in the Collector. -* If your organization requires a proxy, see :ref:`allow-services`. - -.. _phase1-metrics: - -Analyze how to collect metrics from cloud providers -========================================================================== - -To monitor a cloud-based host, install the Splunk OTel Collector on each host to send host metrics to Splunk Observability Cloud. Use the cloud providers' filters to refine what data you bring in to Splunk Observability Cloud. You can limit the host metrics you send by excluding specific metrics that you don't need to monitor from the cloud provider. Excluding metrics offers the following advantages: - -* You can control which hosts you monitor, instead of monitoring all hosts. -* You can retrieve advanced metrics without incurring extra cost. -* You can send metrics at a higher frequency without incurring extra cost, such as every 10 seconds by default instead of every 5 minutes or more, which is the typical default for cloud providers. - -See :ref:`get-started-connect` and :ref:`otel_deployments`.
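One way to keep host metric volume in check is to drop metrics you don't need in the Collector itself. The following is a minimal sketch that uses the OpenTelemetry filter processor; the metric names are placeholders, and the exact filter syntax can vary by Collector version:

.. code-block:: yaml

   # Sketch only: drop host metrics you don't need before they are exported.
   processors:
     filter/drop-unneeded-host-metrics:
       metrics:
         exclude:
           match_type: strict
           metric_names:
             - system.paging.faults    # placeholder metric names
             - system.disk.merged

   service:
     pipelines:
       metrics:
         # Add the filter to your existing metrics pipeline, alongside the
         # processors you already run (receivers and exporters not shown).
         processors: [memory_limiter, filter/drop-unneeded-host-metrics, batch]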
.. _phase1-host-k8s: - -Configure and implement host and Kubernetes metrics -========================================================== - -The OpenTelemetry Collector automatically reads and detects different types of host or Kubernetes metadata from operating systems or from the cloud providers. See :ref:`host-metrics-receiver` or :ref:`otel-kubernetes-config` for more information about host or Kubernetes metadata. - -The OpenTelemetry Collector adds dimensions, metric tags, and span attributes, which are known as tags. The most common metadata entry is the name of the host, which can come from different sources with different names. See :ref:`metrics-dimensions-mts` for details on the metadata the Collector adds. - -To retrieve and modify your metadata, use the resource detection processor in the pipeline section of the OpenTelemetry agent configuration. Before installing the OpenTelemetry Collector on a host, verify that the resource detection module in the configuration file of the OpenTelemetry Collector matches the preferred metadata source. The order determines which sources are used. See :ref:`resourcedetection-processor`. - -.. _phase1-3rd-party: - -Collect data from third-party metrics providers -========================================================== - -When using the Splunk Distribution of OpenTelemetry Collector, you can use receivers to collect metrics data from third-party providers. For example, you can use the Prometheus receiver to scrape metrics data from any application that exposes a Prometheus endpoint. See :ref:`prometheus-receiver`. - -See :ref:`monitor-data-sources` for a list of receivers. - -.. _phase1-apm: - -Bring data in for use in Splunk APM -====================================== - -Splunk Application Performance Monitoring (APM) provides end-to-end visibility to help identify issues such as errors and latency across all tags of a service. Splunk APM produces infinite cardinality metrics and full-fidelity traces. Splunk APM also measures request, error, and duration (RED) metrics. See :ref:`apm-orientation`. - -To familiarize yourself with the key concepts of Splunk APM, see :ref:`apm-key-concepts`. - -.. _phase1-auto-instrument: - -Add an auto instrumentation library to a service to send traces to Splunk APM --------------------------------------------------------------------------------- - -To send traces to Splunk APM, you need to deploy an auto instrumentation agent for each programming language or language runtime. To deploy an auto instrumentation agent, see :ref:`instrument-applications`. - -.. _phase1-discovery-mode: - -(Optional) Use automatic discovery to instrument your applications ------------------------------------------------------------------------------------------- - -If you are deploying many similar services written in Java, .NET, or Node.js, deploy the OpenTelemetry Collector with automatic discovery. Use automatic discovery if you don't have access to the source code or the ability to change the deployment. - -See :ref:`discovery_mode`. - -.. _phase1-profiling: - -(Optional) Turn on AlwaysOn Profiling to collect stack traces ----------------------------------------------------------------- - -Use AlwaysOn Profiling for deeper analysis of the behavior of select applications. Code profiling collects snapshots of the CPU call stacks and memory usage.
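How you turn on AlwaysOn Profiling depends on the instrumentation agent. As a minimal sketch, for a Java service instrumented with the Splunk OTel Java agent and running in Kubernetes, the container spec might include environment variables like the following (the service name is a placeholder):

.. code-block:: yaml

   # Sketch: container env fragment for a Java service using the Splunk OTel Java agent.
   env:
     - name: OTEL_SERVICE_NAME
       value: checkout-service               # placeholder service name
     - name: SPLUNK_PROFILER_ENABLED         # turn on CPU (call stack) profiling
       value: "true"
     - name: SPLUNK_PROFILER_MEMORY_ENABLED  # optionally add memory allocation profiling
       value: "true"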
After you get profiling data into Splunk Observability Cloud, you can explore stack traces directly from APM and visualize the performance and memory allocation of each component using the flame graph. - -Use this profiling data to gain insights into your code behavior to troubleshoot performance issues. For example, you can identify bottlenecks and memory leaks for potential optimization. - -See :ref:`profiling-intro`. - -.. _phase1-logs: - -Set up Log Observer Connect for the Splunk Platform -================================================================================================ - -If your organization has an entitlement for Splunk Log Observer Connect, Splunk Observability Cloud can automatically relate logs to infrastructure and trace data. - -See :ref:`logs-set-up-logconnect` or :ref:`logs-scp`. - -.. _phase1-dashboards-detectors: - -Review the default dashboards and detectors -========================================================== - -Splunk Observability Cloud automatically adds built-in-dashboards for each integration you use after it ingests 50,000 data points. Review these built-in dashboards when they are available. See :ref:`dashboards`. - -Splunk Observability Cloud also automatically adds the AutoDetect detectors that correspond to the integrations you are using. You can copy the AutoDetect detectors and customize them. See :ref:`autodetect`. - -Next step -=============== - -Next, prepare for a pilot rollout of Splunk Infrastructure Monitoring and Splunk Application Performance Monitoring. See :ref:`phase2`. - - - diff --git a/admin/admin-onboarding/phase1/phase1-team-user-admin.rst b/admin/admin-onboarding/phase1/phase1-team-user-admin.rst deleted file mode 100644 index 34e917c39..000000000 --- a/admin/admin-onboarding/phase1/phase1-team-user-admin.rst +++ /dev/null @@ -1,111 +0,0 @@ -.. _phase1-team-user-admin: - -Onboarding part 1: Configure your user and team administration -********************************************************************** - -.. meta:: - :description: - -In the first part of the onboarding phase, you make foundational decisions about your organization in Splunk Observability Cloud, including user access management, team structure, and token management. To configure your users and teams, complete the following tasks: - -#. :ref:`phase1-create-trial` -#. :ref:`phase1-user-access` -#. :ref:`phase1-custom-URL` -#. :ref:`phase1-teams-tokens` -#. :ref:`phase1-parent-child` - -.. note:: - Work closely with your Splunk Sales Engineer or Splunk Customer Success Manager throughout your onboarding process. They can help you fine tune your Splunk Observability Cloud journey and provide best practices, training, and workshop advice. - -.. _phase1-create-trial: - -Create a trial for your organization -======================================== - -If you have a Splunk technical contact, they can create a Splunk Observability Cloud trial for your organization and provide you with the link to log in to your trial organization. Alternatively, you can sign up for a trial. See :ref:`o11y-trial`. - -.. _phase1-custom-URL: - -(Optional) Request a custom URL for your organization -========================================================= - -Create a Splunk support request to request a custom URL for your organization, for example, acme.signalfx.com. See :ref:`support` for support contact options. - -.. _phase1-user-access: - -Decide how to manage user access -======================================== - -Choose from these 3 options for managing user access: - -#. 
Use an external Lightweight Directory Access Protocol (LDAP) and control access through Single Sign-On (SSO). See :ref:`sso-label` for more information. -#. Use Splunk Observability Cloud user management to allow access using a username and password. See :ref:`user-managment-intro`. -#. Use Splunk Cloud Platform as the unified identity provider. See :ref:`unified-id-unified-identity` for more information. - -.. _phase1-teams-tokens: - -Plan your teams structure and token management to control access -===================================================================================== - -If you plan to roll out Splunk Observability Cloud across your organization you likely have multiple internal customers with different access requirements for the various features in Splunk Observability Cloud. To manage these internal customers, you can use the teams feature to organize users together in a team and manage them as a unit. - -Define team and token naming conventions ------------------------------------------- - -Before creating teams and tokens, determine your naming convention. This helps you to track token assignments and control data ingest limits. Aligning team and token names also helps you to identify token owners when viewing the usage reports. For example, you can align team and token names in the following way: - -* Team name: FRONTEND_DEV_TEAM -* Token names: FRONTEND_DEV_TEAM_INGEST, FRONTEND_DEV_TEAM_API, FRONTEND_DEV_TEAM_RUM - -See :ref:`admin-manage-usage`. - -Plan your team structure ---------------------------- - -A user with an admin role can manage teams, which includes adding and removing regular users and assigning a team admin. - -By default, users can join or leave teams at will. For larger organizations, you might want enhanced team security. Enhanced team security is useful if the teams are assigned a certain amount of usage rights with their associated tokens. See :ref:`enhanced-team-security`. - -You can also assign team-specific notifications for alerts triggered by the detectors that you set up. Team-specific notifications give your teams different escalation methods for their alerts. See :ref:`admin-team-notifications`. - -Manage your tokens --------------------- - -Use tokens to secure data ingest and API calls to Splunk Observability Cloud. Tokens are valid for 1 year and can be extended for another 60 days. Your organization has a default token that is automatically generated when the organization is created. - -With the admin role, you can deactivate tokens that are no longer needed. Create a plan to regularly deactivate and rotate tokens. - -You can also set limits for data ingestion for your tokens. Use limits to control how many metrics are ingested per token. Limits protect against unexpected data ingestion overage by ensuring teams can't over consume. - -See :ref:`admin-tokens` for more information about tokens. - -.. _phase1-parent-child: - -(Optional) Separate your teams with a parent-child setup -===================================================================================== - -If you want to create separate environments, you can use parent-child organizations. Perhaps you want a development environment and a production environment, or you want to make sure Team A is fully separated from Team B. Parent-child organizations are 2 or more separate organizations, where your original organization is the parent organization which includes your original usage entitlement. 
You can then have 1 or more organizations as child organizations within the parent organization. The organizations are fully separated, including users and data. - -You can request a parent-child organization setup by creating a support case. See :ref:`support` for support contact options. - -Next step -=============== - -Next, design your architecture and begin bringing data in to Splunk Observability Cloud. See :ref:`phase1-arch-gdi`. - - - - - - - - - - - - - - - - - diff --git a/admin/admin-onboarding/phase1/phase1.rst b/admin/admin-onboarding/phase1/phase1.rst deleted file mode 100644 index f0633b43d..000000000 --- a/admin/admin-onboarding/phase1/phase1.rst +++ /dev/null @@ -1,21 +0,0 @@ -.. _phase1: - -Admin onboarding guide phase 1: Onboarding -**************************************************************** - -.. meta:: - :description: - -.. toctree:: - :hidden: - :maxdepth: 3 - - Part 1: Configure your user and team administration - Part 2: Design your architecture and get data in - -Your goal in the onboarding phase is to understand the platform and make sure your onboarding team is ready to support the rest of the organization. During this phase, your main focus is to make sure you and any staff that are responsible for administering Splunk Observability Cloud are ready to manage Splunk Observability Cloud within your organization. - -For this phase, complete the following topics: - -#. :ref:`Onboarding part 1: Configure your user and team administration`. -#. :ref:`Onboarding part 2: Design your architecture and get data in`. diff --git a/admin/admin-onboarding/phase2/phase2-apm.rst b/admin/admin-onboarding/phase2/phase2-apm.rst deleted file mode 100644 index bbb908412..000000000 --- a/admin/admin-onboarding/phase2/phase2-apm.rst +++ /dev/null @@ -1,118 +0,0 @@ -.. _phase2-apm: - -Pilot phase part 3: Initial pilot rollout for Splunk Application Performance Monitoring -***************************************************************************************** - -After completing :ref:`phase2-im`, you are ready for pilot rollout phase part 3. As with the Splunk Infrastructure Monitoring pilot rollout, your initial pilot rollout for Application Performance Monitoring (APM) focuses on bringing initial pilot teams with many microservices or connections to services into APM. - -To onboard APM, complete these tasks: - -#. :ref:`customize-APM-exp` -#. :ref:`deployment-environments` -#. :ref:`service-perf-dashboards` -#. :ref:`service-map-dependencies` -#. :ref:`inferred-services` -#. :ref:`error-spans` -#. :ref:`use-metricsets` -#. :ref:`tag-spotlight-values` -#. :ref:`apm-detectors` -#. :ref:`alwayson-profiling` - -.. note:: - Work closely with your Splunk Sales Engineer or Splunk Customer Success Manager throughout your onboarding process. They can help you fine tune your Splunk Observability Cloud journey and provide best practices, training, and workshop advice. - -.. _customize-APM-exp: - -Customize Splunk APM for your organization -============================================= - -Get familiar with the options to customize Splunk APM to accommodate your organization. For example, here are some common customizations: - -* The most common customization is to index specific span tags that are important to your organization. As a Splunk APM administrator, you can index additional span tags to generate custom request, error, and duration (RED) metrics for tag values within a service.
Indexed span tags are used throughout APM as filter values and values to use to break down views like the service map. -* You can turn on Database Query Performance to pinpoint whether a database is slowing your applications. Database Query Performance finds trends in aggregate database queries without needing database instrumentation. This helps service owners determine whether an increase in the latency or error rate of a service is related to a database. You can then use Database Query Performance to identify which database and query is contributing to the latency. -* You can use Business Workflows to group traces based on their initiating operation or another tag or endpoint. With Business Workflows, you can monitor end-to-end key performance indicators (KPIs) and find root causes and bottlenecks. - -See :ref:`customize-apm` for an overview of customization options for APM. - -.. _deployment-environments: - -Set up deployment environments -=================================== - -You likely want to set up various deployment environments. A deployment environment is a distinct deployment of your system or application that allows you to set up configurations that don’t overlap with configurations in other deployments of the same application. You can use separate deployment environments for different stages of the development process, such as development, staging, and production. For this pilot rollout, you might choose to start with only 1 deployment environment, for example a development or staging environment that facilitates testing. - -For details about setting up a deployment environment, see :ref:`apm-environments`. - -.. _service-perf-dashboards: - -Use dashboards to track service performance -============================================================= - -Get familiar with the Splunk APM built-in dashboards so you can use them to troubleshoot issues related to services, endpoints, and business workflows. For details about troubleshooting issues related to services, endpoints, and more, see :ref:`apm-dashboards`. - -.. _service-map-dependencies: - -Understand dependencies among your services in the service map -====================================================================== - -In a distributed environment, there is considerable complexity in how services are stitched together. Use the Splunk APM service map to understand how different services in your distributed environment interact with each other. Get familiar with the detailed breakdowns within the service map to understand how to accelerate troubleshooting services and dependencies. - -See :ref:`apm-service-map` for details about the service map. - -.. _inferred-services: - -Get familiar with how Splunk APM infers services -===================================================== - -If you have remote services that you can't instrument or have yet to instrument, Splunk APM infers the presence of these remote services. See :ref:`apm-inferred-services` to learn more. - -.. _error-spans: - -Learn how to analyze error spans -========================================== - -Get familiar with how to identify errors in a span through metadata tags. See :ref:`apm-errors` for more details. - -.. _use-metricsets: - -Learn how to use MetricSets -======================================= - -You can use 2 types of MetricSets in Splunk APM: - -* Monitoring MetricSets (MMS) are used for real-time monitoring and alerting. MMS are created by default for services, endpoints, and workflows. 
Each Monitoring MetricSet contains the following metrics: request rate, error rate, and latency. MMS are stored for 13 months by default. -* Troubleshooting MetricSets (TMS) are used for high-cardinality troubleshooting, filtering the service map, breaking down service level indicators (SLIs), and historical comparison for spans and workflows. Troubleshooting MetricSets are created by default for services, endpoints, workflows, edges, and operations. Each TMS contains the following metrics: request rate, error rate, and latency. TMS data is stored for 8 days by default. - -See :ref:`apm-metricsets`. - -.. _tag-spotlight-values: - -Learn how to use Tag Spotlight to analyze services -=========================================================================================================== - -Use Tag Spotlight to quickly discover granular trends across different user categories, environments, and so on that might be contributing to latency or errors on a service. Home in on the latency and error rate peaks by drilling into top tags or specific tags and values. From Tag Spotlight, you can jump into a representative trace when you are ready to dive deeper. - -See :ref:`apm-tag-spotlight` to learn more. - -.. _apm-detectors: - -Set up APM detectors -=========================== - -Splunk APM automatically captures request, error, and duration (RED) metrics for each service in your application. Use these metrics to create dynamic alerts based on sudden change or historical anomalies. - -See :ref:`apm-alerts`. - -.. _alwayson-profiling: - -Learn how to troubleshoot using AlwaysOn Profiling -============================================================== - -If you enable AlwaysOn Profiling, you can perform deeper analysis of the behavior of select applications. Code profiling collects snapshots of the CPU call stacks and of memory usage. - -See :ref:`profiling-intro` to learn more about troubleshooting with AlwaysOn Profiling. - -Next step -=============== - -Next, begin expanding and optimizing Splunk Observability Cloud in your organization. See :ref:`phase3`. \ No newline at end of file diff --git a/admin/admin-onboarding/phase2/phase2-im.rst b/admin/admin-onboarding/phase2/phase2-im.rst deleted file mode 100644 index 6dd528f8c..000000000 --- a/admin/admin-onboarding/phase2/phase2-im.rst +++ /dev/null @@ -1,159 +0,0 @@ -.. _phase2-im: - - -Pilot rollout phase part 2: Initial pilot rollout for Splunk Infrastructure Monitoring -*************************************************************************************** - -After completing :ref:`phase2-rollout-plan`, you are ready for pilot rollout phase part 2. During this part of the pilot, focus on onboarding your pilot teams to Splunk Infrastructure Monitoring. This part of the implementation prepares you to monitor critical solutions and brings business value based on custom metrics. - -To onboard Infrastructure Monitoring, complete the following tasks: - -#. :ref:`onboard-imm-apps` -#. :ref:`otel-reqs` -#. :ref:`Advanced configuration using the OTel Collector (for example, token as a secret, Kubernetes distribution) ` -#. :ref:`custom-dash-charts-metrics` -#. :ref:`detect-alert-config` -#. :ref:`plan-dimensions` -#. :ref:`ci-cd` -#. :ref:`templates-detect` -#. :ref:`automation-api` -#. :ref:`automation-terraform` -#. :ref:`customer-framework` - -.. note:: - Work closely with your Splunk Sales Engineer or Splunk Customer Success Manager throughout your onboarding process.
They can help you fine tune your Splunk Observability Cloud journey and provide best practices, training, and workshop advice. - -.. _onboard-imm-apps: - -Launch Infrastructure Monitoring applications -======================================================================================= - -#. For each of the participating teams, identify which services you want to ingest data from. -#. Install the OpenTelemetry (OTel) agent. -#. Configure the receivers and pipeline for these services. This creates the default dashboards and detectors for services such as databases, message buses, and the OS platform. - -After you set up these dashboards and detectors, the pilot teams can observe their application data in the built-in dashboards and create their own custom dashboards. - -* See :ref:`built-in-dashboards`. -* See :ref:`dashboard-create-customize`. - -.. _otel-reqs: - -Understand OTel sizing requirements -========================================== - -Before you start scaling up the use of the OTel agents, consider the OTel sizing guidelines. For details about the sizing guidelines, see :ref:`otel-sizing`. This is especially important on platforms such as Kubernetes, where autoscaling services can cause sudden growth. Ensure that the OTel agents can allocate sufficient memory and CPU for a smooth rollout. - -.. _adv-conf-otel: - -Complete advanced configurations for the collector -======================================================= - -As you get ready to roll out your first pilot teams, further secure the Splunk OpenTelemetry Collector. For details, see :ref:`otel-security`. You can store your token as a secret or use different methods to securely store tokens and credentials outside the configuration.yaml file for the OTel agent. A minimal sketch of the secret-based approach appears later in this topic, after the detector configuration task. - -* For details on storing the token as a secret, see :new-page:`Splunk OpenTelemetry Collector for Kubernetes` on GitHub. -* For details on other methods, see :ref:`otel-other-configuration-sources`. - - -.. _custom-dash-charts-metrics: - -Create custom dashboards using charts based on ingested metrics -==================================================================================== - -As the metrics data is sent to Splunk Observability Cloud, start creating custom dashboards by combining metrics from different tools and services. See the following resources: - -* See :ref:`dashboards-best-practices`. -* For Splunk Observability Cloud training, see :new-page:`Free training `. -* Coordinate with your Splunk Sales Engineer to register for the Splunk Observability Cloud workshop. See :new-page:`Splunk Observability Cloud Workshops` - - -.. _detect-alert-config: - -Configure detectors and alerts for specific metric conditions -====================================================================== - -As with the custom dashboards, onboard the pilot team with the prepackaged autodetect detectors. Ensure that your teams understand how to develop their own sets of detectors according to each of their use cases, such as by adapting existing detectors or creating their own. See the following resources: - -* See :ref:`autodetect-intro`. -* For Splunk Observability Cloud training, see :new-page:`Free training `. -* Coordinate with your Splunk Sales Engineer to register for the Splunk Observability Cloud workshop. See :new-page:`Splunk Observability Cloud Workshops`
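The following is the minimal sketch of the secret-based token storage mentioned in the advanced configuration task earlier in this topic. It assumes a Kubernetes deployment; the secret name and key are placeholders, and how you reference the secret depends on your Helm chart or Collector configuration version:

.. code-block:: yaml

   # Sketch: keep the ingest token out of the Collector configuration file
   # by storing it in a Kubernetes Secret (names are placeholders).
   apiVersion: v1
   kind: Secret
   metadata:
     name: splunk-otel-collector-token
   type: Opaque
   stringData:
     splunk_observability_access_token: <your-ingest-token>

You can then reference this existing secret from your Helm values or Collector configuration instead of placing the token directly in those files.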
.. _plan-dimensions: - -Review metric names and ingested data -========================================================= - -After your initial onboarding of metrics data, review the names and the amount of metrics each team is ingesting. Make sure the ingested data matches the agreed naming convention for dimensions and properties. If needed, address the name and type of dimensions required to ingest into Splunk Infrastructure Monitoring. - -Ensure the teams follow the naming convention set up for metrics, so that you can speed up the development of charts and alerts and create alerts that can detect across a whole range of hosts and nodes. - -* For details about dimensions, see :ref:`metadata-dimension`. -* For details about properties, see :ref:`custom-properties`. -* For details about naming conventions for metrics, see :ref:`metric-dimension-names`. - -.. _ci-cd: - -Add Splunk Observability Cloud to your CI/CD pipeline -========================================================= - -You should have already deployed exporters and pipelines for the OpenTelemetry agents. At this point, you are ready to add services into your pipeline. For teams that are familiar with tools such as Ansible, Chef, or Puppet, use the exporter and pipeline templates for the OpenTelemetry agents. - -You can also use the upstream OpenTelemetry Collector Contrib project, send data using the REST APIs, and send metrics using client libraries. - -* For details about adding receivers for a database, see :ref:`databases`. -* For information about using the upstream Collector, see :ref:`using-upstream-otel`. -* For details on the Splunk Observability Cloud REST APIs, see :ref:`rest-api-ingest`. -* For details on sending metrics using client libraries, see :new-page:`SignalFlow client libraries `. - - -.. _templates-detect: - -Create custom templates for detectors or alerts -============================================================== - -Create custom templates for detectors and alerts to unify the various detectors created by users across your teams. Templates prevent duplicate work for detectors with similar alerting requirements. You can also deploy templates using Terraform. For more information about the signalfx_detector resource for Terraform, see :new-page:`https://registry.terraform.io/providers/splunk-terraform/signalfx/latest/docs/resources/detector` on the Terraform Registry. - - - -.. _automation-api: - -Prepare for automation using the REST API -================================================================================================================== - -Familiarize yourself with the REST API functions available for Splunk Observability Cloud. For example, you can use the REST API to extract charts, dashboards, or detectors from Splunk Observability Cloud. Most commonly, you can use the REST API to send historical metric time series (MTS) data to Splunk Observability Cloud to correct previously ingested MTS data. - -As a best practice, build the templates necessary to onboard the remaining teams. - -* For details about the Splunk Observability Cloud REST API, see :new-page:`Observability API Reference`. -* For details about using the Splunk Observability Cloud API to extract charts, see :new-page:`Charts API`. -* For details about using the Splunk Observability Cloud API to extract dashboards, see :new-page:`Dashboards API`. -* For details about using the Splunk Observability Cloud API to extract detectors, see :new-page:`Detectors API`. - - -..
_automation-terraform: - -Automate using Terraform -========================================================= - -You can automate a large number of deployments using Terraform. The Terraform provider uses the Splunk Observability Cloud REST API. - -Use Terraform to help set up integrations with cloud providers, dashboards, and alerts. You can also use Terraform to add customized charts and alerts for newly onboarded teams. - -To migrate existing dashboard groups, dashboards, and detectors to Terraform, you can use a Python script. See :new-page:`Export dashboards script` on GitHub. - -* For details about the Terraform provider, see :new-page:`https://registry.terraform.io/providers/splunk-terraform/signalfx/latest` on the Terraform Registry. -* For information on using Terraform, see :ref:`terraform-config`. - - -.. _customer-framework: - -Finalize framework and adoption protocol -=============================================================================== - -As you onboard more teams with Splunk Observability Cloud, maintain review sessions to incorporate what you learned from previous onboardings. Review the feedback from the initial onboarded teams, and engage with your Splunk Observability Cloud Sales Engineer or Professional Services resources. These resources can help you with best practices and a faster rollout. - -Next step -=============== - -Next, begin your initial pilot rollout for Splunk Application Performance Monitoring. See :ref:`phase2-apm`. \ No newline at end of file diff --git a/admin/admin-onboarding/phase2/phase2-rollout-plan.rst b/admin/admin-onboarding/phase2/phase2-rollout-plan.rst deleted file mode 100644 index b7730a055..000000000 --- a/admin/admin-onboarding/phase2/phase2-rollout-plan.rst +++ /dev/null @@ -1,101 +0,0 @@ -.. _phase2-rollout-plan: - -Pilot rollout phase part 1: Plan your pilot rollout -**************************************************************** - -After completing :ref:`phase1`, you are ready for phase 2, pilot rollout. - -Use the following information to guide your implementation of Splunk Infrastructure Monitoring and Splunk Application Performance Monitoring. - -- :ref:`pilots` -- :ref:`framework` -- :ref:`enable_integrations` -- :ref:`convention-deploy` -- :ref:`best-practices` -- :ref:`get-trained` - -.. note:: - Work closely with your Splunk Sales Engineer or Splunk Customer Success Manager throughout your onboarding process. They can help you fine tune your Splunk Observability Cloud journey and provide best practices, training, and workshop advice. - -.. _pilots: - -Identify pilot teams and projects -===================================== - -Start planning the initial rollout to your organization's pilot teams. Identify your pilot teams and projects with approximate timelines and capacity requirements. - -There are 2 types of pilot teams to consider: - -* A set of teams that are ready or have started a new project and are using common technologies. -* A set of teams that have been using a non-standard technology. - -To avoid duplicating effort, create a single service even if multiple teams use it. - -.. _framework: - -Set up an application framework -======================================= - -Once you know which teams are participating in the pilot and have collected their requirements, complete the following: - -#.
:ref:`Identify initial metric, trace, and log integrations ` and enable them in Splunk Observability Cloud. -#. :ref:`Identify a naming convention ` for the deployment environments for Splunk Application Performance Monitoring (APM). -#. :ref:`Establish best practices for Splunk Observability Cloud `. - -.. _enable_integrations: - -Identify and enable initial metric, trace, and log integrations ------------------------------------------------------------------------- - -Identify application tools that are used as part of the services that the pilot team supports, such as databases, message buses, and so on. Verify that the development languages used are supported by OpenTelemetry. For details, see :new-page:`https://opentelemetry.io/docs/instrumentation/`. - -Define the list of libraries required to support your applications and which of them are supported by OpenTelemetry to determine which applications require automatic or manual instrumentation. For a list of languages supported by OpenTelemetry, see :new-page:`https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/docs/supported-libraries.md`. - -Next, build your development pipeline: - -* Use automatic discovery on your hosts or Kubernetes cluster. For details, see :ref:`discovery_mode`. -* Use automatic instrumentation for containers or virtual machines. For details, see :ref:`apm-gdi`. -* Identify the environment variables according to specific use cases. Each development language has its own settings, for example: - - * For Java information, see :ref:`advanced-java-otel-configuration`. - * For Node.js information, see :ref:`instrument-nodejs-applications`. - * For .NET information, see :ref:`instrument-otel-dotnet-applications`. - -.. _convention-deploy: - -Identify a naming convention for the deployment environments ------------------------------------------------------------------- - -To avoid overlapping configurations across other deployments of the same application, use defined deployment environments. For details about defining deployment environments, see :ref:`apm-environments`. - -You can also filter Splunk Application Performance Monitoring (APM) data further by defining teams, functions, and other tags, such as database names or frontend application names. - -To define these tags, you can use the standard method to add attributes to a trace or span using the OpenTelemetry environment variables. For more information on how to add context to span tags, see :ref:`apm-add-context-trace-span`. - -.. _best-practices: - -Establish best practices for Splunk Observability Cloud -------------------------------------------------------------------------------- - -At this point, you have some experience with configuring the OpenTelemetry agents and auto instrumentation. You can now create guides for the teams that you want to onboard. - -Include the following items in your guide: - -* Which environment variables and command line parameters to set. For more information, see :ref:`advanced-java-otel-configuration` and :ref:`otel-install-linux-manual`. -* How to enable :ref:`AlwaysOn Profiling `. -* How to configure logs to add tracing information, depending on language. For a Java example, see :ref:`correlate-traces-with-logs-java`. -* Naming conventions for metrics and environments. For details on metric naming conventions, see :ref:`metric-dimension-names`. For naming environments, you can set the deployment environment as a span tag, which lets you filter your APM data by the environments of interest, as shown in the sketch after this list. See :ref:`apm-environments` to learn more.
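The following is a minimal sketch of setting the deployment environment on an instrumented service. It assumes a Kubernetes container spec and the standard OpenTelemetry environment variables; the service, environment, and team names are placeholders:

.. code-block:: yaml

   # Sketch: standard OpenTelemetry environment variables on an instrumented service.
   env:
     - name: OTEL_SERVICE_NAME
       value: checkout-service                                  # placeholder service name
     - name: OTEL_RESOURCE_ATTRIBUTES
       # deployment.environment becomes the environment filter in Splunk APM;
       # extra key=value pairs, such as team, become additional tags.
       value: deployment.environment=staging,team=frontend-dev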
.. _get-trained: - -Set up training plans for pilot users -=============================================== - -Splunk has training available to help you with your onboarding journey and best practices. For a list of free Splunk Observability Cloud courses, see :new-page:`Free training`. - -If your organization requires building a center of excellence, the following certification path is available for :new-page:`Splunk O11y Cloud Certified Metrics Users `. - -Next step -=============== - -Next, begin your initial pilot rollout for Splunk Infrastructure Monitoring. See :ref:`phase2-im`. \ No newline at end of file diff --git a/admin/admin-onboarding/phase2/phase2.rst b/admin/admin-onboarding/phase2/phase2.rst deleted file mode 100644 index c3380376e..000000000 --- a/admin/admin-onboarding/phase2/phase2.rst +++ /dev/null @@ -1,24 +0,0 @@ -.. _phase2: - - -Admin onboarding guide phase 2: Pilot rollout phase -**************************************************************** - -.. meta:: - :description: - -.. toctree:: - :hidden: - :maxdepth: 3 - - Part 1: Plan your pilot rollout - Part 2: Initial pilot rollout for Splunk Infrastructure Monitoring - Part 3: Initial pilot rollout for Splunk Application Performance Monitoring - -After completing :ref:`phase1`, you are ready for phase 2, pilot rollout. In the pilot rollout phase, your focus is to onboard your internal teams to Splunk Observability Cloud. These teams represent use cases that can show the power and benefit of Splunk Observability Cloud to the rest of the organization. Your goal in the pilot rollout phase is to roll out initial pilots of Splunk Infrastructure Monitoring and Splunk Application Performance Monitoring. - -For this phase, complete the following topics: - -#. :ref:`Part 1: Plan your pilot rollout `. -#. :ref:`Part 2: Initial pilot rollout for Splunk Infrastructure Monitoring `. -#. :ref:`Part 3: Initial pilot rollout for Splunk Application Performance Monitoring `. diff --git a/admin/admin-onboarding/phase3/phase3-apm.rst b/admin/admin-onboarding/phase3/phase3-apm.rst deleted file mode 100644 index 8d58cfb03..000000000 --- a/admin/admin-onboarding/phase3/phase3-apm.rst +++ /dev/null @@ -1,69 +0,0 @@ -.. _phase3-apm: - -Expansion and optimization part 2: Splunk Application Performance Monitoring -************************************************************************************* - -To expand and optimize Splunk Application Performance Monitoring, complete the following tasks: - -1. :ref:`optimize-data` - -2. :ref:`bottlenecks` - -3. :ref:`data-links-apm` - -4. :ref:`onboard-apps` - -.. note:: - Work closely with your Splunk Sales Engineer or Splunk Customer Success Manager throughout your onboarding process. They can help you fine tune your Splunk Observability Cloud journey and provide best practices, training, and workshop advice. - -.. _optimize-data: - -Optimize data usage -================================================================================================================ -Indexed tags are used to produce Troubleshooting MetricSets (TMS) and give visual insights through breakdowns for nodes and edges. Use Tag Spotlight to filter Service Level Indicators (SLIs) to specific tag values and to filter the service map. Indexed tags can include endpoints and operations, and they automatically generate SLIs and breakdowns.
It is important to understand the cardinality contribution when indexing a span tag. - -To learn more, see the following topics: - -- :ref:`apm-metricsets` - -- :ref:`apm-span-tags` - -- :ref:`apm-index-tag-tips` - - - -.. _bottlenecks: - -Identify and address bottlenecks in code and architecture using AlwaysOn Profiling -================================================================================================================ -Using AlwaysOn Profiling in development environments helps identify bottlenecks in the code before turning on AlwaysOn Profiling in production environments. If you have an application or service using Java, Node.js, or .NET, turn on CPU profiling to get intra-service visibility to identify code issues that lead to a slow service. This also helps identify inefficiencies to reduce infrastructure footprint and spending. - -To learn more, see the following topics: - -- :ref:`profiling-intro` - -- :ref:`profiling-scenario-landingpage` - - -.. _data-links-apm: - -Use Data Links to connect APM properties to relevant resources -================================================================================================================ -After fully deploying Splunk APM, make sure you understand how to create global data links to link Splunk APM to outside resources such as Splunk Infrastructure Monitoring dashboards, Splunk Cloud Platform logs, Kibana logs, or custom URLs. - -To learn more, see the following topics: - -- :ref:`link-metadata-to-content` - -- :ref:`apm-create-data-links` - -- :ref:`apm-use-data-links` - - -.. _onboard-apps: - -Onboard all production applications -================================================================================================================ -During the expansion and optimization phase, you can automate most processes and add new services into Splunk Observability Cloud. You can continue expanding the OpenTelemetry agent configuration library for all production applications, which populates all the necessary metrics to build the desired charts, dashboards, and detectors. Continue to onboard all production applications. - -Congratulations on completing all 3 phases of onboarding Splunk Observability Cloud. Use this experience and any notes you might have to build a center of excellence that will grow as you expand your coverage and usage of Splunk Observability Cloud. \ No newline at end of file diff --git a/admin/admin-onboarding/phase3/phase3-im.rst b/admin/admin-onboarding/phase3/phase3-im.rst deleted file mode 100644 index 237ccfa6d..000000000 --- a/admin/admin-onboarding/phase3/phase3-im.rst +++ /dev/null @@ -1,256 +0,0 @@ -.. _phase3-im: - - -Expansion and optimization part 1: Splunk Infrastructure Monitoring -******************************************************************************* - -To expand and optimize Splunk Infrastructure Monitoring, complete the following tasks: - -1. :ref:`dashboards-charts` - -2. :ref:`advanced-detectors` - -3. :ref:`token-rotation` - -4. :ref:`mttr` - -5. :ref:`mpm` - -6. :ref:`network-exp` - -7. :ref:`usage-limits` - -8. :ref:`automate-workflows` - -9. :ref:`custom-use-cases` - -10. :ref:`prod-apps` - -11. :ref:`onboard-all` - -.. note:: - Work closely with your Splunk Sales Engineer or Splunk Customer Success Manager throughout your onboarding process. They can help you fine tune your Splunk Observability Cloud journey and provide best practices, training, and workshop advice. - -.. 
_dashboards-charts: - -Build advanced dashboards and charts -================================================================================================================ -As part of the expansion process, ensure you familiarize teams with creating and customizing dashboards. Make sure your teams can complete these tasks: - -* Mirror and modify dashboards. -* Use dashboard filters and dashboard variables. -* Add text notes and event feeds to the dashboards. -* Use data links to dynamically link a dashboard to another dashboard, an external system, Splunk Application Performance Monitoring (APM), or Splunk Cloud Platform. -* Link metadata to related resources. - -For comprehensive documentation on these tasks, see the following topics: - -- :ref:`dashboards` - -- :ref:`data-visualization-charts` - -- :ref:`link-metadata-to-content` - -.. _advanced-detectors: - -Build advanced detectors -================================================================================================================ -Maximize your use of Splunk Infrastructure Monitoring by familiarizing your teams with advanced detectors. Advanced detectors enhance the basic list of alert conditions to take into account the different types of functions, such as additional firing, alert clearing conditions, or comparing 2 main functions using population_comparison. - -To learn more, see the following topics: - -- :ref:`get-started-detectoralert` - -- :ref:`scenarios-alerts-detectors` - -- :ref:`autodetect` - -- :ref:`create-detectors` - -- :ref:`linking-detectors` - -- :ref:`auto-clearing-alerts` - - -.. _token-rotation: - -Automate the token rotation process -================================================================================================================ -Because tokens expire after 1 year, you can automate token rotation by using an API call. For a given token, when the API runs to create a new token, the old token continues to work until the time you specified in the grace period. Wherever the old token is in use, use the API call to automate token rotation within the grace period. - -For example, you can use the API to rotate a token that a Kubernetes cluster uses to ingest metrics and trace data. The API generates a new token that you can store directly in the secret in the Kubernetes cluster as part of the automation so that the application retrieves the new token. - -To learn more, see the following topics: - -- :ref:`admin-tokens` - -- :ref:`admin-api-access-tokens` - -- :ref:`admin-tokens` - -- :ref:`admin-org-tokens` - - -.. _mttr: - -Identify and review mean time to resolution (MTTR) -================================================================================================================ - -When you use Splunk Observability Cloud, you can reduce the mean time to resolution (MTTR), of an issue. A long MTTR can be the result of many factors. - -.. 
list-table:: - :header-rows: 1 - :widths: 50, 50 - - * - :strong:`Cause of long MTTR` - - :strong:`Outcome` - - * - Appropriate people aren’t involved when an issue begins - - More time is spent finding the right people to fix the issue and approve the remediation - - * - Lack of insight into the effects on other systems - - More time is spent to analyze possible effects of a remediation procedure - - * - Teams use manual remediation procedures - - Because teams are too busy investigating and responding to incidents, they don’t have time to build automation and improve systems - - * - Teams don’t have time to update runbooks - - Without proper incident analysis and reporting, incident remediation runbooks often do not include critical information for resolving incidents - - -One factor might be the correct people aren't involved when an issue begins. After identifying the root cause, you must have the appropriate people to actually fix the issue, as well as the appropriate people to approve the remediation. - -Another factor causing a long MTTR can be a lack of insight into the effects on other systems. Without proper insight into how infrastructure and applications interconnect, it takes time to analyze the possible effects of a remediation procedure. - -A third cause of long MTTR can be that teams are using manual remediation procedures. Often teams don't have time to build automation and improve systems because they are too busy investigating and responding to incidents. - -A fourth factor can be that teams don't have time to update runbooks. Without proper incident analysis and reporting, incident remediation runbooks often do not include critical information for resolving incidents. - -With Splunk Infrastructure Monitoring, the following scenario typically results in a total latency of less than 4 minutes between deployment and rollback: - -1. A deployment happens. - -2. The deployment causes an incident. - -3. The incident triggers an alert. - -4. The alert triggers a rollback. - -After this process completes, requests are back to normal. See :ref:`practice-reliability-incident-response`. - -.. _mpm: - -Use metrics pipeline management tools to reduce cardinality of metric time series (MTS) -================================================================================================================ - -As metrics data usage, or cardinality, grows in Splunk Infrastructure Monitoring, the cost increases. - - -You can reduce overall monitoring cost and optimize your return on investment by storing less critical metrics data at a much lower cost. To do this, use metrics pipeline management (MPM) tools within Splunk Infrastructure Monitoring. With MPM, you can make the following optimizations: - -* Streamline storage and processing to evolve the metric analytics platform into a multitier platform. - -* Analyze reports to identify where to optimize usage. - -* Reduce metric time series (MTS) volume with rule-based metrics aggregation and filtering on dimensions. - -* Drop dimensions that are not needed. - -You can configure dimensions through the user interface, the API, and Terraform. - -For comprehensive documentation on MPM, see :ref:`metrics-pipeline-intro`. - - -.. _network-exp: - -Set up Network Explorer to monitor network environment -================================================================================================================ -Use the Splunk Distribution of OpenTelemetry Collector Helm chart to configure Network Explorer. 
Network Explorer inspects packets to capture network performance data with extended Berkeley Packet Filter (eBPF) technology, which runs in the Linux kernel. eBPF allows programs to run in the operating system when the following kernel events occur:

- When a TCP handshake is complete

- When TCP receives an acknowledgement for a packet

Network Explorer captures network data that is passed on to the reducer and then to the Splunk OTel Collector.

For the Splunk OTel Collector to work with Network Explorer, you must install the Collector in gateway mode. After installation, the Network Explorer navigator displays on the :guilabel:`Infrastructure` tab in Splunk Infrastructure Monitoring.

For comprehensive documentation on Network Explorer, see :ref:`network-explorer`.


.. _usage-limits:

Analyze and troubleshoot usage, limits, and throttles
================================================================================================================
To view Splunk Observability Cloud Subscription Usage data within your organization, you must have the admin role.

To analyze and troubleshoot usage, make sure you know how to complete the following tasks:

* Understand the difference between host-based and MTS-based subscription usage
* Read available reports, such as monthly usage reports, hourly usage reports, dimension reports, and custom metric reports

To learn more, see the following topics:

- :ref:`sys-limits`

- :ref:`data-o11y`


.. _automate-workflows:

Automate key workflows to accelerate onboarding and standardize consistent practices
================================================================================================================

In this expansion and optimization phase, you can start to automate the onboarding workflow. For example, consider automating team creation, ingest token creation, HEC token setup for Log Observer Connect, and token rotation. Also consider creating prescriptive onboarding guides for instrumentation, such as automatic discovery and configuration with the Splunk Distribution of OpenTelemetry Collector or separate instrumentation agents, including predefined required environment variables.

Use the Splunk Observability Cloud REST APIs to automatically assign default dashboards and detectors to new teams.

To learn more, see the following topics:

- :ref:`discovery_mode`

- :ref:`dashboards-best-practices`


.. _custom-use-cases:

Identify complex and customized use cases to enhance value and return on investment
================================================================================================================
During the expansion and optimization phase, start identifying your teams' primary use cases and develop a plan to address their needs. Here are some examples of needs that teams might want to solve:

- Handling large volumes of infrastructure data

- Increasing developer efficiency to solve problems during deployment

- Using Splunk Observability Cloud to monitor and control Kubernetes consumption rates

- Improving return on investment (ROI)

- Improving mean time to resolution (MTTR)

- Ensuring and improving customer experience

.. _prod-apps:

Onboard all production applications
================================================================================================================
During this phase, you can automate most processes and add new services into Splunk Observability Cloud.
You can continue expanding the OTel agent configuration library for all production applications. Populate all the necessary metrics to build the desired charts, dashboards, and detectors. Continue to onboard all production applications. - - -.. _onboard-all: - -Onboard all users and teams -================================================================================================================ -During this phase, you can onboard all users and teams into Splunk Observability Cloud. Turn on enhanced team security to identify team managers and users. Use enhanced security within teams to control who can view and who can modify each dashboard and detector. - -To learn more, see the following topics: - -- :ref:`user-managment-intro` - -- :ref:`enhanced-team-security` - - -Next step -=============== - -Next, see :ref:`phase3-apm`. \ No newline at end of file diff --git a/admin/admin-onboarding/phase3/phase3.rst b/admin/admin-onboarding/phase3/phase3.rst deleted file mode 100644 index 4e2806579..000000000 --- a/admin/admin-onboarding/phase3/phase3.rst +++ /dev/null @@ -1,22 +0,0 @@ -.. _phase3: - -Admin onboarding guide phase 3: Expansion and optimization -******************************************************************************* - -.. toctree:: - :hidden: - :maxdepth: 3 - - Part 1: Expand and optimize Splunk Infrastructure Monitoring - Part 2: Expand and optimize Splunk Application Performance Monitoring - -After completing :ref:`phase1` and :ref:`phase2`, you are ready for phase 3, expansion and optimization. -In phase 3, you solidify the best practices and frameworks from the pilot rollout phase and apply them to a wider pool of infrastructure, applications, and teams. You begin by expanding and optimizing Splunk Application Performance Monitoring and Splunk Infrastructure Monitoring. - -For this phase, complete the following topics: - -1. :ref:`Expansion and optimization part 1: Splunk Infrastructure Monitoring `. - -2. :ref:`Expansion and optimization part 2: Splunk Application Performance Monitoring `. - - diff --git a/admin/admin.rst b/admin/admin.rst index 164525413..036b5e819 100644 --- a/admin/admin.rst +++ b/admin/admin.rst @@ -9,9 +9,9 @@ Set up your Splunk Observability Cloud organization The first step in getting started with Splunk Observability Cloud is setting up your organization. In Splunk Observability Cloud, an organization, or account, is the highest-level security grouping. Other organizations and their users can't access the data in your organization. -To set up your organization, create and carry out a plan for addressing the tasks described in this topic. See the :ref:`admin-onboarding-guide` for prescriptive guidance for setting up your organization and other onboarding tasks. +To set up your organization, create and carry out a plan for addressing the tasks described in this topic. See the :ref:`get-started-guide` for prescriptive guidance for setting up your organization and other tasks for getting started. -Many of these tasks require the admin role in Splunk Observability Cloud. If you choose to use Splunk Cloud Platform as your identity provider, you also need the sc_admin role in Splunk Cloud Platform. +Many of these tasks require the admin role in Splunk Observability Cloud. If you opt to use Splunk Cloud Platform as your identity provider, you also need the sc_admin role in Splunk Cloud Platform. 
The following table shows you aspects of your Splunk Observability Cloud organization that you can plan for and set up: @@ -23,7 +23,7 @@ The following table shows you aspects of your Splunk Observability Cloud organiz - :strong:`Link to documentation` - :strong:`Role required` - * - Choose from these 3 options for managing user access: + * - Select from these 3 options for managing user access: #. Use Splunk Cloud Platform as the unified identity provider. #. Use an external Lightweight Directory Access Protocol (LDAP) and control access through Single Sign-On (SSO). @@ -32,7 +32,7 @@ The following table shows you aspects of your Splunk Observability Cloud organiz See :ref:`sso-label` to control access through Single Sign-On (SSO). - See :ref:`user-managment-intro` to use Splunk Observability Cloud user management. + See :ref:`user-management-intro` to use Splunk Observability Cloud user management. - admin * - Allow Splunk Observability Cloud services in your network diff --git a/admin/user-management/user-management-intro.rst b/admin/user-management/user-management-intro.rst index f9dec1d45..1901e6be7 100644 --- a/admin/user-management/user-management-intro.rst +++ b/admin/user-management/user-management-intro.rst @@ -1,4 +1,4 @@ -.. _user-managment-intro: +.. _user-management-intro: ******************************************************************************** Manage users and teams diff --git a/alerts-detectors-notifications/alert-condition-reference/custom-threshold.rst b/alerts-detectors-notifications/alerts-and-detectors/alert-condition-reference/custom-threshold.rst similarity index 100% rename from alerts-detectors-notifications/alert-condition-reference/custom-threshold.rst rename to alerts-detectors-notifications/alerts-and-detectors/alert-condition-reference/custom-threshold.rst diff --git a/alerts-detectors-notifications/alert-condition-reference/heartbeat-check.rst b/alerts-detectors-notifications/alerts-and-detectors/alert-condition-reference/heartbeat-check.rst similarity index 100% rename from alerts-detectors-notifications/alert-condition-reference/heartbeat-check.rst rename to alerts-detectors-notifications/alerts-and-detectors/alert-condition-reference/heartbeat-check.rst diff --git a/alerts-detectors-notifications/alert-condition-reference/hist-anomaly.rst b/alerts-detectors-notifications/alerts-and-detectors/alert-condition-reference/hist-anomaly.rst similarity index 100% rename from alerts-detectors-notifications/alert-condition-reference/hist-anomaly.rst rename to alerts-detectors-notifications/alerts-and-detectors/alert-condition-reference/hist-anomaly.rst diff --git a/alerts-detectors-notifications/alert-condition-reference/index.rst b/alerts-detectors-notifications/alerts-and-detectors/alert-condition-reference/index.rst similarity index 100% rename from alerts-detectors-notifications/alert-condition-reference/index.rst rename to alerts-detectors-notifications/alerts-and-detectors/alert-condition-reference/index.rst diff --git a/alerts-detectors-notifications/alert-condition-reference/outlier-detection.rst b/alerts-detectors-notifications/alerts-and-detectors/alert-condition-reference/outlier-detection.rst similarity index 100% rename from alerts-detectors-notifications/alert-condition-reference/outlier-detection.rst rename to alerts-detectors-notifications/alerts-and-detectors/alert-condition-reference/outlier-detection.rst diff --git a/alerts-detectors-notifications/alert-condition-reference/resource-running-out.rst 
b/alerts-detectors-notifications/alerts-and-detectors/alert-condition-reference/resource-running-out.rst similarity index 100% rename from alerts-detectors-notifications/alert-condition-reference/resource-running-out.rst rename to alerts-detectors-notifications/alerts-and-detectors/alert-condition-reference/resource-running-out.rst diff --git a/alerts-detectors-notifications/alert-condition-reference/static-threshold.rst b/alerts-detectors-notifications/alerts-and-detectors/alert-condition-reference/static-threshold.rst similarity index 100% rename from alerts-detectors-notifications/alert-condition-reference/static-threshold.rst rename to alerts-detectors-notifications/alerts-and-detectors/alert-condition-reference/static-threshold.rst diff --git a/alerts-detectors-notifications/alert-condition-reference/sudden-change.rst b/alerts-detectors-notifications/alerts-and-detectors/alert-condition-reference/sudden-change.rst similarity index 100% rename from alerts-detectors-notifications/alert-condition-reference/sudden-change.rst rename to alerts-detectors-notifications/alerts-and-detectors/alert-condition-reference/sudden-change.rst diff --git a/alerts-detectors-notifications/alert-message-variables-reference.rst b/alerts-detectors-notifications/alerts-and-detectors/alert-message-variables-reference.rst similarity index 100% rename from alerts-detectors-notifications/alert-message-variables-reference.rst rename to alerts-detectors-notifications/alerts-and-detectors/alert-message-variables-reference.rst diff --git a/alerts-detectors-notifications/alerts-detectors-notifications.rst b/alerts-detectors-notifications/alerts-and-detectors/alerts-detectors-notifications.rst similarity index 75% rename from alerts-detectors-notifications/alerts-detectors-notifications.rst rename to alerts-detectors-notifications/alerts-and-detectors/alerts-detectors-notifications.rst index 39364d054..13de73aa9 100644 --- a/alerts-detectors-notifications/alerts-detectors-notifications.rst +++ b/alerts-detectors-notifications/alerts-and-detectors/alerts-detectors-notifications.rst @@ -4,26 +4,48 @@ Introduction to alerts and detectors in Splunk Observability Cloud ************************************************************************** - +.. toctree:: + :maxdepth: 3 + :hidden: + + Best practices for creating detectors + Alerts and detectors scenario library TOGGLE + Use and customize AutoDetect alerts and detectors TOGGLE + create-detectors-for-alerts + detector-manage-permissions + link-detectors-to-charts + manage-notifications + preview-detector-alerts + View alerts + View detectors + detector-options + mute-notifications + auto-clearing-alerts + Troubleshoot detectors + Built-in alert conditions TOGGLE + alert-message-variables-reference .. meta:: :description: Splunk Observability Cloud uses detectors, events, alerts, and notifications to keep you informed when certain criteria are met. When a detector condition is met, the detector generates an event, triggers an alert, and can send one or more notifications. -Splunk Observability Cloud uses :strong:`detectors`, :strong:`events`, :strong:`alerts`, and :strong:`notifications` to keep you informed when certain criteria are met. +Splunk Observability Cloud uses detectors, events, alerts, and notifications to keep you informed when certain criteria are met. Active alerts and existing detectors can be found in tabs on the :strong:`Alerts` page, and events can be found in the :strong:`Events` sidebar, available from within any dashboard. 
-Sample scenarios of alerts and detectors -========================================== +.. raw:: html + + +

Example scenarios of alerts and detectors

+ - You want a message sent to a Slack channel or to an email address for the Ops team when CPU Utilization has reached the 95th percentile. - You want to be notified when the number of concurrent users is approaching a limit that might require you to spin up an additional AWS instance. -Active alerts and existing detectors can be found in tabs on the :strong:`Alerts` page, and events can be found in the :strong:`Events` sidebar, available from within any dashboard. - - -.. _detectors-definition: +For more example scenarios, see :ref:`scenarios-alerts-detectors`. -Detectors -================== +.. raw:: html + + +

Detectors

+ A :term:`detector` monitors signals on a plot line, as on a chart, and triggers alert events and clear events based on conditions you define in rules. Conceptually, you can think of a detector as a chart that can trigger alerts when a signal value crosses specified thresholds defined in alert rules. @@ -31,8 +53,11 @@ Rules trigger an alert when the conditions in those rules are met. Individual ru Detectors also evaluate streams against a specific condition over a period of time. When you apply analytics to a metric time series (MTS), it produces a stream, an object of SignalFlow query language. The MTS can contain raw data or the output of an analytics function. -Metadata in detectors --------------------------- +.. raw:: html + + +

Metadata in detectors

+ The metadata associated with MTS can be used to make detector definition simpler, more compact, and more resilient. @@ -42,26 +67,33 @@ If you want to track whether the CPU utilization remains below 80 for each of th If the population changes because the cluster has grown to 40 virtual machines, you can make a cluster- or service-level detector. If you include the :code:`service:kafka` dimension for the newly-added virtual machines, the existing detector's query includes all new virtual machines in the cluster in the threshold evaluation. -Dynamic threshold conditions ------------------------------------ +.. raw:: html + + +

Dynamic threshold conditions

+ + Setting static values for detector conditions can lead to noisy alerting because the appropriate value for one service or for a particular time of day might not be suitable for another service or a different time of day. For example, if your applications or services contain an elastic infrastructure, like Docker containers or EC2 autoscaling, the values for your alerts might vary by time of day. You can define dynamic thresholds to account for changes in streaming data. For example, if your metric exhibits cyclical behavior, you can define a threshold that is a one-week timeshifted version of the same metric. Suppose the relevant basis of comparison for your data is the behavior of a population, such as a clustered service. In that case, you can define your threshold as a value that reflects that behavior. For example, the 90th percentile for the metric across the entire cluster over a moving 15-minute window. To learn more, see :ref:`condition-reference`. +.. raw:: html + + +

Alerts

+ -Alerts -=========== When data in an input MTS matches a condition, the detector generates a trigger event and an alert that has a specific severity level. You can configure an alert to send a notification using Splunk On-Call. For more information, see the :new-page:`Splunk On-Call ` documentation. Alert rules use settings you specify for built-in alert conditions to define thresholds that trigger alerts. When a detector determines that the conditions for a rule are met, it triggers an alert, creates an event, and sends notifications (if specified). Detectors can send notifications via email, as well as via other systems, such as Slack, or via a webhook. - -.. _detector-dashboard: - -Interaction between detectors, events, alerts, and notifications -=================================================================== +.. raw:: html + + +

Interaction between detectors, events, alerts, and notifications

+ The interaction between detectors, events, alerts, and notifications is as follows: @@ -100,12 +132,14 @@ The boxes represent objects relating to the detector, and the diamonds represent D -.-> F["Notifications (optional)"] -What you can do with alerts and detectors -================================================== +.. raw:: html + + +

What you can do with alerts and detectors

+ The following table shows you what you can do with detectors, events, alerts, and notifications: - .. list-table:: :header-rows: 1 :widths: 50 50 diff --git a/alerts-detectors-notifications/auto-clearing-alerts.rst b/alerts-detectors-notifications/alerts-and-detectors/auto-clearing-alerts.rst similarity index 100% rename from alerts-detectors-notifications/auto-clearing-alerts.rst rename to alerts-detectors-notifications/alerts-and-detectors/auto-clearing-alerts.rst diff --git a/alerts-detectors-notifications/autodetect/autodetect-customize.rst b/alerts-detectors-notifications/alerts-and-detectors/autodetect/autodetect-customize.rst similarity index 100% rename from alerts-detectors-notifications/autodetect/autodetect-customize.rst rename to alerts-detectors-notifications/alerts-and-detectors/autodetect/autodetect-customize.rst diff --git a/alerts-detectors-notifications/autodetect/autodetect-intro.rst b/alerts-detectors-notifications/alerts-and-detectors/autodetect/autodetect-intro.rst similarity index 100% rename from alerts-detectors-notifications/autodetect/autodetect-intro.rst rename to alerts-detectors-notifications/alerts-and-detectors/autodetect/autodetect-intro.rst diff --git a/alerts-detectors-notifications/autodetect/autodetect-list.rst b/alerts-detectors-notifications/alerts-and-detectors/autodetect/autodetect-list.rst similarity index 100% rename from alerts-detectors-notifications/autodetect/autodetect-list.rst rename to alerts-detectors-notifications/alerts-and-detectors/autodetect/autodetect-list.rst diff --git a/alerts-detectors-notifications/autodetect/autodetect-subscribe-mute-turn-off.rst b/alerts-detectors-notifications/alerts-and-detectors/autodetect/autodetect-subscribe-mute-turn-off.rst similarity index 100% rename from alerts-detectors-notifications/autodetect/autodetect-subscribe-mute-turn-off.rst rename to alerts-detectors-notifications/alerts-and-detectors/autodetect/autodetect-subscribe-mute-turn-off.rst diff --git a/alerts-detectors-notifications/autodetect/autodetect-view.rst b/alerts-detectors-notifications/alerts-and-detectors/autodetect/autodetect-view.rst similarity index 100% rename from alerts-detectors-notifications/autodetect/autodetect-view.rst rename to alerts-detectors-notifications/alerts-and-detectors/autodetect/autodetect-view.rst diff --git a/alerts-detectors-notifications/autodetect/autodetect.rst b/alerts-detectors-notifications/alerts-and-detectors/autodetect/autodetect.rst similarity index 100% rename from alerts-detectors-notifications/autodetect/autodetect.rst rename to alerts-detectors-notifications/alerts-and-detectors/autodetect/autodetect.rst diff --git a/alerts-detectors-notifications/create-detectors-for-alerts.rst b/alerts-detectors-notifications/alerts-and-detectors/create-detectors-for-alerts.rst similarity index 100% rename from alerts-detectors-notifications/create-detectors-for-alerts.rst rename to alerts-detectors-notifications/alerts-and-detectors/create-detectors-for-alerts.rst diff --git a/alerts-detectors-notifications/detector-manage-permissions.rst b/alerts-detectors-notifications/alerts-and-detectors/detector-manage-permissions.rst similarity index 100% rename from alerts-detectors-notifications/detector-manage-permissions.rst rename to alerts-detectors-notifications/alerts-and-detectors/detector-manage-permissions.rst diff --git a/alerts-detectors-notifications/detector-options.rst b/alerts-detectors-notifications/alerts-and-detectors/detector-options.rst similarity index 100% rename from 
alerts-detectors-notifications/detector-options.rst rename to alerts-detectors-notifications/alerts-and-detectors/detector-options.rst diff --git a/alerts-detectors-notifications/detectors-best-practices.rst b/alerts-detectors-notifications/alerts-and-detectors/detectors-best-practices.rst similarity index 100% rename from alerts-detectors-notifications/detectors-best-practices.rst rename to alerts-detectors-notifications/alerts-and-detectors/detectors-best-practices.rst diff --git a/alerts-detectors-notifications/link-detectors-to-charts.rst b/alerts-detectors-notifications/alerts-and-detectors/link-detectors-to-charts.rst similarity index 100% rename from alerts-detectors-notifications/link-detectors-to-charts.rst rename to alerts-detectors-notifications/alerts-and-detectors/link-detectors-to-charts.rst diff --git a/alerts-detectors-notifications/manage-notifications.rst b/alerts-detectors-notifications/alerts-and-detectors/manage-notifications.rst similarity index 100% rename from alerts-detectors-notifications/manage-notifications.rst rename to alerts-detectors-notifications/alerts-and-detectors/manage-notifications.rst diff --git a/alerts-detectors-notifications/mute-notifications.rst b/alerts-detectors-notifications/alerts-and-detectors/mute-notifications.rst similarity index 100% rename from alerts-detectors-notifications/mute-notifications.rst rename to alerts-detectors-notifications/alerts-and-detectors/mute-notifications.rst diff --git a/alerts-detectors-notifications/preview-detector-alerts.rst b/alerts-detectors-notifications/alerts-and-detectors/preview-detector-alerts.rst similarity index 100% rename from alerts-detectors-notifications/preview-detector-alerts.rst rename to alerts-detectors-notifications/alerts-and-detectors/preview-detector-alerts.rst diff --git a/alerts-detectors-notifications/scenarios-detectors-alerts/delay-detectors.rst b/alerts-detectors-notifications/alerts-and-detectors/scenarios-detectors-alerts/delay-detectors.rst similarity index 100% rename from alerts-detectors-notifications/scenarios-detectors-alerts/delay-detectors.rst rename to alerts-detectors-notifications/alerts-and-detectors/scenarios-detectors-alerts/delay-detectors.rst diff --git a/alerts-detectors-notifications/scenarios-detectors-alerts/find-detectors.rst b/alerts-detectors-notifications/alerts-and-detectors/scenarios-detectors-alerts/find-detectors.rst similarity index 100% rename from alerts-detectors-notifications/scenarios-detectors-alerts/find-detectors.rst rename to alerts-detectors-notifications/alerts-and-detectors/scenarios-detectors-alerts/find-detectors.rst diff --git a/alerts-detectors-notifications/scenarios-detectors-alerts/max-delay-detectors.rst b/alerts-detectors-notifications/alerts-and-detectors/scenarios-detectors-alerts/max-delay-detectors.rst similarity index 100% rename from alerts-detectors-notifications/scenarios-detectors-alerts/max-delay-detectors.rst rename to alerts-detectors-notifications/alerts-and-detectors/scenarios-detectors-alerts/max-delay-detectors.rst diff --git a/alerts-detectors-notifications/scenarios-detectors-alerts/monitor-autodetect.rst b/alerts-detectors-notifications/alerts-and-detectors/scenarios-detectors-alerts/monitor-autodetect.rst similarity index 100% rename from alerts-detectors-notifications/scenarios-detectors-alerts/monitor-autodetect.rst rename to alerts-detectors-notifications/alerts-and-detectors/scenarios-detectors-alerts/monitor-autodetect.rst diff --git 
a/alerts-detectors-notifications/scenarios-detectors-alerts/monitor-server-latency.rst b/alerts-detectors-notifications/alerts-and-detectors/scenarios-detectors-alerts/monitor-server-latency.rst similarity index 100% rename from alerts-detectors-notifications/scenarios-detectors-alerts/monitor-server-latency.rst rename to alerts-detectors-notifications/alerts-and-detectors/scenarios-detectors-alerts/monitor-server-latency.rst diff --git a/alerts-detectors-notifications/scenarios-detectors-alerts/scenarios-intro.rst b/alerts-detectors-notifications/alerts-and-detectors/scenarios-detectors-alerts/scenarios-intro.rst similarity index 100% rename from alerts-detectors-notifications/scenarios-detectors-alerts/scenarios-intro.rst rename to alerts-detectors-notifications/alerts-and-detectors/scenarios-detectors-alerts/scenarios-intro.rst diff --git a/alerts-detectors-notifications/scenarios-detectors-alerts/troubleshoot-noisy-detectors.rst b/alerts-detectors-notifications/alerts-and-detectors/scenarios-detectors-alerts/troubleshoot-noisy-detectors.rst similarity index 100% rename from alerts-detectors-notifications/scenarios-detectors-alerts/troubleshoot-noisy-detectors.rst rename to alerts-detectors-notifications/alerts-and-detectors/scenarios-detectors-alerts/troubleshoot-noisy-detectors.rst diff --git a/alerts-detectors-notifications/troubleshoot-detectors.rst b/alerts-detectors-notifications/alerts-and-detectors/troubleshoot-detectors.rst similarity index 100% rename from alerts-detectors-notifications/troubleshoot-detectors.rst rename to alerts-detectors-notifications/alerts-and-detectors/troubleshoot-detectors.rst diff --git a/alerts-detectors-notifications/view-alerts.rst b/alerts-detectors-notifications/alerts-and-detectors/view-alerts.rst similarity index 100% rename from alerts-detectors-notifications/view-alerts.rst rename to alerts-detectors-notifications/alerts-and-detectors/view-alerts.rst diff --git a/alerts-detectors-notifications/view-detectors.rst b/alerts-detectors-notifications/alerts-and-detectors/view-detectors.rst similarity index 100% rename from alerts-detectors-notifications/view-detectors.rst rename to alerts-detectors-notifications/alerts-and-detectors/view-detectors.rst diff --git a/gdi/get-data-in/connect/aws/aws-prereqs.rst b/gdi/get-data-in/connect/aws/aws-prereqs.rst index 56b451569..f1e2928d9 100644 --- a/gdi/get-data-in/connect/aws/aws-prereqs.rst +++ b/gdi/get-data-in/connect/aws/aws-prereqs.rst @@ -262,8 +262,8 @@ These are these permissions to allow Splunk Observability Cloud to collect AWS t - ``"kinesis:ListShards"`` - ``"kinesis:ListStreams"`` - ``"kinesis:ListTagsForStream"`` -- ``“kinesisanalytics:DescribeApplication”`` -- ``“kinesisanalytics:ListApplications”`` +- ``"kinesisanalytics:DescribeApplication"`` +- ``"kinesisanalytics:ListApplications"`` - ``"kinesisanalytics:ListTagsForResource"`` - ``"lambda:GetAlias"`` - ``"lambda:ListFunctions"`` diff --git a/gdi/get-data-in/connect/aws/aws-tutorial/tutorial-aws-use.rst b/gdi/get-data-in/connect/aws/aws-tutorial/tutorial-aws-use.rst index 24930001a..5bf45c513 100644 --- a/gdi/get-data-in/connect/aws/aws-tutorial/tutorial-aws-use.rst +++ b/gdi/get-data-in/connect/aws/aws-tutorial/tutorial-aws-use.rst @@ -113,4 +113,4 @@ Learn more * To learn how to jump between components of Splunk Observability Cloud by selecting related data, see :ref:`get-started-relatedcontent`. * To learn about additional data sources that you can monitor using Splunk Observability Cloud, see :ref:`supported-data-sources`. 
* To learn how to coordinate team efforts in Splunk Observability Cloud using team alerts and dashboards, see :ref:`admin-manage-teams` -* To learn more about the concepts used in this tutorial and Splunk Observability Cloud in general, see :ref:`welcome`. \ No newline at end of file + * To learn more about the concepts used in this tutorial and Splunk Observability Cloud in general, see :ref:`overview`. \ No newline at end of file diff --git a/gdi/get-data-in/gdi-guide/additional-resources.rst b/gdi/get-data-in/gdi-guide/additional-resources.rst index 8ce2f0920..9ed71086c 100644 --- a/gdi/get-data-in/gdi-guide/additional-resources.rst +++ b/gdi/get-data-in/gdi-guide/additional-resources.rst @@ -12,7 +12,7 @@ Now that you've set up your Splunk Observability Cloud components, learn more ab Coordinate team work around your data ------------------------------------------------------------------- -You can create and manage users and teams to collaborate in Splunk Observability Cloud. See :ref:`admin-onboarding-guide` to begin integrating Splunk Observability Cloud with your organization. +You can create and manage users and teams to collaborate in Splunk Observability Cloud. See :ref:`get-started-guide` to begin integrating Splunk Observability Cloud with your organization. Create dashboards and charts to monitor your data ------------------------------------------------------------------- diff --git a/gdi/get-data-in/get-data-in.rst b/gdi/get-data-in/get-data-in.rst index e36fb20ef..55de84bad 100644 --- a/gdi/get-data-in/get-data-in.rst +++ b/gdi/get-data-in/get-data-in.rst @@ -16,20 +16,21 @@ Get data into Splunk Observability Cloud gdi-guide/api-onboarding.rst gdi-guide/additional-resources.rst -Use Splunk Observability Cloud to achieve full-stack observability of all your data sources, including your infrastructure, applications, and user interfaces. Splunk Observability Cloud includes the following products: +Use Splunk Observability Cloud to achieve full-stack observability of all your data sources, including your infrastructure, applications, and user interfaces. Splunk Observability Cloud includes the following solutions: - :ref:`Splunk Infrastructure Monitoring ` - :ref:`Splunk Application Performance Monitoring (APM) ` - :ref:`Splunk Real User Monitoring (RUM) ` - :ref:`Splunk Log Observer Connect ` +- :ref:`Splunk Synthetic Monitoring ` - Splunk Synthetic Monitoring does not have a data import component -This guide provides four chapters that guide you through the process of setting up each component of Splunk Observability Cloud. +This guide provides 4 chapters that guide you through the process of setting up each component of Splunk Observability Cloud. .. raw:: html

How to use this guide

-You can set up each of Splunk's products, or you can choose individual components to set up.
+You can set up each solution, or you can opt to set up individual components.

If you're setting up all components, follow each part of each chapter in order. Otherwise, select the chapter or part you'd like to follow.
@@ -47,4 +48,4 @@ If you're setting up all components, follow each part of each chapter in order.
 * :ref:`rum-onboarding`
 * :ref:`api-onboarding`

- * :ref:`additional-resources`
+ * :ref:`additional-resources`
\ No newline at end of file
diff --git a/gdi/opentelemetry/collector-kubernetes/k8s-infrastructure-tutorial/k8s-activate-detector.rst b/gdi/opentelemetry/collector-kubernetes/k8s-infrastructure-tutorial/k8s-activate-detector.rst
index caab9c88a..95bfa340a 100644
--- a/gdi/opentelemetry/collector-kubernetes/k8s-infrastructure-tutorial/k8s-activate-detector.rst
+++ b/gdi/opentelemetry/collector-kubernetes/k8s-infrastructure-tutorial/k8s-activate-detector.rst
@@ -38,4 +38,4 @@ Learn more
 ----------
 * For more details about alerts and detectors, see :ref:`Introduction to alerts and detectors in Splunk Observability Cloud `.
-* To learn more about the concepts in this tutorial, such as managing dashboards and teams, see :ref:`welcome`.
\ No newline at end of file
+* To learn more about the concepts in this tutorial, such as managing dashboards and teams, see :ref:`overview`.
\ No newline at end of file
diff --git a/gdi/opentelemetry/components/chrony-receiver.rst b/gdi/opentelemetry/components/chrony-receiver.rst
index e61e86c09..8a607da1d 100644
--- a/gdi/opentelemetry/components/chrony-receiver.rst
+++ b/gdi/opentelemetry/components/chrony-receiver.rst
@@ -7,6 +7,113 @@ Chrony receiver
 .. meta::
   :description: Go implementation of the command chronyc tracking to allow for portability across systems and platforms.

-The Splunk Distribution of the OpenTelemetry Collector supports the Chrony receiver. Documentation is planned for a future release.
+The Chrony receiver is a pure Go implementation of the command ``chronyc tracking``, which allows portability across systems and platforms. The receiver produces all of the metrics that would typically be captured by the tracking command.

-To find information about this component in the meantime, see :new-page:`Chrony receiver ` on GitHub.
+For more information about Chrony, see :new-page:`Red Hat's Chrony suite documentation `.
+
+Get started
+======================
+
+Follow these steps to configure and activate the component:
+
+1. Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform:
+
+   - :ref:`otel-install-linux`
+   - :ref:`otel-install-windows`
+   - :ref:`otel-install-k8s`
+
+2. Configure the Chrony receiver as described in the next section.
+3. Restart the Collector.
+
+Default configuration
+--------------------------------
+
+To activate the receiver, add ``chrony`` to the ``receivers`` section of your configuration file:
+
+.. code:: yaml
+
+  receivers:
+    chrony:
+      endpoint: unix:///var/run/chrony/chronyd.sock # The default Unix domain socket that chronyd exposes for command access
+      timeout: 10s # Allow at least 10s for chronyd to respond before giving up
+
+Next, include the receiver in the ``metrics`` pipeline of the ``service`` section of your configuration file:
+
+.. code:: yaml
+
+  service:
+    pipelines:
+      metrics:
+        receivers:
+          - chrony
+
+Advanced configuration
+-----------------------------------------------
+
+You can use the following settings:
+
+* ``endpoint``. Required. The address the receiver uses to communicate with chronyd. Allowed formats are:
+
+  * udp://hostname:port
+
+  * unixgram:///path/to/chrony/sock
+
+  * unix:///path/to/chrony.sock. Note the triple slash. The unix scheme is converted to unixgram.
+
+* ``timeout``. Optional. The total amount of time allowed to read and process the data from chronyd. Use at least 1 second.
+
+* ``collection_interval``. Optional. Determines how often to query Chrony.
+
+* ``initial_delay``. Optional. ``1s`` by default. Defines how long this receiver waits before starting.
+
+
+* ``metrics``. Optional. Metrics to export. See the :new-page:`metric documentation in GitHub `.
+
+Configuration example
+-----------------------------------------------
+
+See the following configuration example:
+
+.. code:: yaml
+
+  receivers:
+    chrony:
+      endpoint: unix:///var/run/chrony/chronyd.sock
+      timeout: 10s
+      collection_interval: 30s
+      metrics:
+        ntp.skew:
+          enabled: true
+        ntp.stratum:
+          enabled: true
+
+.. _chrony-receiver-settings:
+
+Settings
+======================
+
+The following table shows the configuration options for the Chrony receiver:
+
+.. raw:: html
+
+ +.. _metrics-receiver-settings: + +Metrics +======================= + +The following metrics, resource attributes, and attributes, are available. + +.. raw:: html + +
+ +See also the :new-page:`metric documentation in GitHub `. + +.. include:: /_includes/activate-deactivate-native-metrics.rst + +Troubleshooting +====================== + +.. include:: /_includes/troubleshooting-components.rst diff --git a/gdi/opentelemetry/data-processing.rst b/gdi/opentelemetry/data-processing.rst index 0a3553092..bb3c0fa76 100644 --- a/gdi/opentelemetry/data-processing.rst +++ b/gdi/opentelemetry/data-processing.rst @@ -7,7 +7,7 @@ Process your data with pipelines .. meta:: :description: Learn how to process data collected with the Splunk Distribution of the OpenTelemetry Collector. -Use pipelines in your Collector's config file to define the path you want your ingested data to follow. Specify which components you want to use, starting from data reception using :ref:`receivers `, then data processing or modification with :ref:`processors `, until data finally exits the Collector through :ref:`exporters `. For an overview of all available components and theire behavior refer to :ref:`otel-components`. +Use pipelines in your Collector's config file to define the path you want your ingested data to follow. Specify which components you want to use, starting from data reception using :ref:`receivers `, then data processing or modification with :ref:`processors `, until data finally exits the Collector through :ref:`exporters `. For an overview of all available components and their behavior refer to :ref:`otel-components`. Pipelines operate on three data types: logs, traces, and metrics. To learn more about data in Splunk Observability Cloud, see :ref:`data-model`. diff --git a/get-started/contribute.rst b/get-started/contribute.rst index d692193cb..5e490d0c0 100644 --- a/get-started/contribute.rst +++ b/get-started/contribute.rst @@ -17,15 +17,21 @@ You can update the Splunk Observability Cloud documentation to fix typos or othe .. note:: If you're unsure about a change or have a different question about the docs, use the feedback form at the bottom of every page to send questions or comments to the Splunk Observability Cloud documentation team. -Prerequisites -============================== +.. raw:: html + + +

Prerequisites

+ To update the Splunk Observability Cloud documentation you need a GitHub account. -You can use an existing GitHub account or create one for free in the GitHub website. +You can use an existing GitHub account or create a new account for free in the GitHub website. -Edit this page link -============================== +.. raw:: html + + +

Edit this page link

+ On every page of the Splunk Observability Cloud documentation you can find an :guilabel:`Edit this page` link. Select the link to load the source of the document in a GitHub preview and start editing. @@ -35,9 +41,12 @@ On every page of the Splunk Observability Cloud documentation you can find an :g After you've completed your edit, GitHub prompts you to open a pull request and fill out the description of the changes using a template. -Within three days, the Splunk Observability Cloud documentation team reviews your pull request and might ask you to make some edits. If the changes are approved, the pull requests is approved and merged. +Within 3 days, the Splunk Observability Cloud documentation team reviews your pull request and might ask you to make some edits. If the changes are approved, the pull requests is approved and merged. -Contribution guidelines -============================== +.. raw:: html + + +

Contribution guidelines

+ You can learn more about how to build and test the docs locally, as well as review criteria, in the :new-page:`CONTRIBUTING.md ` file. diff --git a/get-started/get-started-guide/get-started-guide.rst b/get-started/get-started-guide/get-started-guide.rst new file mode 100644 index 000000000..256518307 --- /dev/null +++ b/get-started/get-started-guide/get-started-guide.rst @@ -0,0 +1,102 @@ +.. _get-started-guide: + +Get started guide for Splunk Observability Cloud admins +********************************************************* + +.. toctree:: + :hidden: + :maxdepth: 3 + + Phase 1: Onboarding readiness + Phase 2: Initial rollout + Phase 3: Scaled rollout + +The journey for getting started with Splunk Observability Cloud has 3 phases: onboarding readiness, initial rollout, and scaled rollout. In the onboarding readiness phase, you set up users, teams, and access controls using roles and token management and lay the groundwork for connectivity. Next, in the initial rollout phase, you get your data into Splunk Observability Cloud and set up the Splunk Observability Cloud products for your initial project team use cases. In the final scaled rollout phase, you establish repeatable observability practices using automation, data management, detectors, and dashboards. + +.. raw:: html + + +

How to use this guide

+ + + +* Use the following table to get a high-level overview of the primary setup steps involved in each phase. +* Use the links for each step to go directly to the detailed instructions or go to the phase topic to view all phase steps in detail. +* In the table, you can also reference optional and advanced configurations that you can make to your setup as part of each phase of your journey. +* Use the links to education resources for each phase to ensure you have the foundational knowledge and skills to successfully set up Splunk Observability Cloud. + +.. note:: This guide is for Splunk Observability Cloud users with the admin role. + +.. image:: /_images/get-started/onboarding-guide-2point0-flowonly.svg + :width: 100% + :alt: . + +.. list-table:: + :header-rows: 1 + :widths: 10 30 30 30 + :width: 100% + + * - :strong:`Information type` + - :strong:`Phase 1: Onboarding readiness` + - :strong:`Phase 2: Initial rollout` + - :strong:`Phase 3: Scaled rollout` + + * - :strong:`Phase description` + - Set up users, teams, and access controls through roles and token management and lay the groundwork for connectivity + - Bring data in and set up the Splunk Observability Cloud products for your initial project team use cases + - Increase usage across all user teams and establish repeatable observability practices through automation, data management, detectors, and dashboards + + * - :strong:`Primary setup steps` + - #. :ref:`phase1-create-trial` + #. :ref:`phase1-network` + #. :ref:`phase1-user-access` + #. :ref:`phase1-teams-tokens` + + See :ref:`get-started-guide-onboarding-readiness` for detailed steps. + + - #. :ref:`phase2-initial-environment` + #. :ref:`phase2-infra-mon` + #. :ref:`phase2-apm` + #. :ref:`phase2-rum` + #. :ref:`phase2-synthetics` + + See :ref:`get-started-guide-initial-rollout` for detailed steps. + + - #. :ref:`phase3-pipeline` + #. :ref:`phase3-rotate-token` + #. :ref:`phase3-mpm` + #. :ref:`phase3-names-data` + #. :ref:`phase3-dash-detect` + #. :ref:`phase3-onboard-all` + + See :ref:`get-started-guide-scaled-rollout` for detailed steps. + + * - :strong:`Optional and advanced configurations` + - * :ref:`advanced-config-custom-URL` + * :ref:`advanced-config-parent-child` + * :ref:`advanced-config-logs` + * :ref:`advanced-config-3rd-party` + + See :ref:`Phase 1 optional and advanced configurations `. + + - * :ref:`phase3-network-exp` + * :ref:`phase2-profiling` + * :ref:`phase2-related-content` + + See :ref:`Phase 2 optional and advanced configurations `. + + - * :ref:`phase3-data-links` + * :ref:`phase3-usage-limits` + + See :ref:`Phase 3 optional and advanced configurations `. + + * - :strong:`Education resources` + - * :new-page:`Free Splunk Observability Cloud courses` + * :new-page:`Full course catalog for Splunk Observability Cloud ` + * See the :new-page:`Curated started track for Splunk Observability Cloud ` to determine what courses to prioritize. + * :new-page:`Splunk Observability Cloud metrics user certification ` + - * :new-page:`Get familiar with OpenTelemetry concepts ` + * To learn more about the data model for Splunk Observability Cloud, see :ref:`data-model` + - * See :ref:`otel-sizing` to learn about OpenTelemetry sizing requirements. 
+ * :new-page:`Splunk Observability Cloud Workshops` + * :new-page:`Curated training curriculum for Splunk Observability Cloud end users` \ No newline at end of file diff --git a/get-started/get-started-guide/initial-rollout.rst b/get-started/get-started-guide/initial-rollout.rst new file mode 100644 index 000000000..453f64635 --- /dev/null +++ b/get-started/get-started-guide/initial-rollout.rst @@ -0,0 +1,140 @@ +.. _get-started-guide-initial-rollout: + +Get started guide phase 2: Initial rollout +********************************************************* + +After completing the :ref:`get-started-guide-onboarding-readiness`, you are ready for phase 2, initial rollout. In the initial rollout phase, you get your data into Splunk Observability Cloud and set up the Splunk Observability Cloud products that apply to your organization. These products include Infrastructure Monitoring, Application Performance Monitoring (APM), Real User Monitoring (RUM), and Synthetics. + +To get a high-level overview of the entire getting started journey for Splunk Observability Cloud, see :ref:`get-started-guide`. + +.. note:: This guide is for Splunk Observability Cloud users with the admin role. + +.. image:: /_images/get-started/onboarding-guide-2point0-initial.svg + :width: 100% + :alt: + +To configure Splunk Observability Cloud solutions for initial rollout, complete the following tasks if they are relevant to your organization: + +#. :ref:`phase2-initial-environment` +#. :ref:`phase2-infra-mon` +#. :ref:`phase2-apm` +#. :ref:`phase2-rum` +#. :ref:`phase2-synthetics` + +.. note:: + Work closely with your Splunk Sales Engineer or Splunk Customer Success Manager as you get started. They can help you fine tune your Splunk Observability Cloud journey and provide best practices, training, and workshop advice. + +.. _phase2-initial-environment: + +Select an initial rollout environment to get data in +======================================================== + +To get started with Splunk Observability Cloud, select an environment that supports the use of automatic discovery or the prepackaged integrations with cloud providers including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). See :ref:`discovery_mode` and :ref:`get-started-connect` for detailed setup steps. + +If you do not have an environment that supports the use of automatic discovery or the cloud service provider integrations, the following sections include additional options for getting data in for specific use cases. You can also get an overview in the :ref:`get-started-get-data-in` guide. + +.. _phase2-infra-mon: + +Set up Splunk Infrastructure Monitoring +========================================= + +Use Splunk Infrastructure Monitoring to get insight into and run analytics on your infrastructure and resources for physical and virtual components across hybrid and multicloud environments. Infrastructure Monitoring offers support for a broad range of integrations for collecting full-fidelity data, from system metrics for infrastructure components to custom data from your applications. + +To set up Splunk Infrastructure Monitoring, complete the following steps: + +#. Use the integrations for AWS, Azure, and GCP to collect infrastructure metrics for applications hosted on cloud service providers. See :ref:`get-started-connect`. +#. Use the integrations for Kubernetes, Linux, and Windows to collect higher-resolution infrastructure metrics and logs. + * For the most rapid deployment, use automatic discovery and configuration. 
See :ref:`discovery_mode`. + * If automatic discovery does not support your use case, install the Collector for your data source. See :ref:`get-started-k8s`, :ref:`get-started-linux`, or :ref:`get-started-windows`. + +.. _phase2-apm: + +Set up Splunk Application Performance Monitoring (APM) +======================================================== + +Use Splunk APM to monitor and troubleshoot microservices-based applications. Splunk APM monitors applications by collecting distributed traces, which are a collection of spans or actions that complete a transaction. After you instrument your applications, Splunk APM collects and analyzes every trace and span and provides full-fidelity, infinite-cardinality exploration of trace data. Use Splunk APM trace data to break down and analyze application performance across any dimension. + + +To set up Splunk APM, complete the following steps: + +#. If you used automatic discovery and configuration to instrument your infrastructure, you're already capturing APM data for supported technologies. See :ref:`discovery_mode`. + + To send APM trace data for technologies not supported by automatic discovery, deploy the Splunk Distribution of the OpenTelemetry Collector. Follow the guided setup steps for the Collector for Kubernetes, Linux, and Windows. See :ref:`get-started-k8s`, :ref:`get-started-linux`, or :ref:`get-started-windows`. +#. To instrument your applications, you can export spans to a Collector running on the host or in the Kubernetes cluster that you deployed in the previous step. The Collector endpoint varies depending on the language you are instrumenting. Use the specific guided setups for each language. See :ref:`get-started-application`. + +.. _phase2-rum: + +Set up Splunk Real User Monitoring (RUM) +========================================== + +Use Splunk RUM to get visibility into the experience of your end users across device types, web browsers, and geographies. Splunk RUM connects transactions from the web browser through back-end services, so your on-call engineers can spot slowness or errors, regardless of where a problem originates across a distributed system. + +To set up Splunk RUM, complete the following steps: + +#. To turn on RUM data capture, you need to create an access token. You can use an access token for either browser RUM or mobile RUM. Mobile RUM is available for both Android and iOS devices. See :ref:`rum-setup` for steps to set up an access token. +#. Use the guided setup to create the required code snippets to use to instrument your webpages. The JavaScript resources can be self-hosted, CDN-hosted, or deployed as an NPM package for single-page web applications. + * Go to the :new-page:`guided setup for browser instrumentation `. + * See :ref:`browser-rum-install` for detailed manual installation instructions. +#. Use the guided setup for iOS and Android mobile device monitoring. + * See :ref:`rum-mobile-android` for guided setup steps for Android. + * See :ref:`rum-mobile-ios` for guided setup steps for iOS. +#. To create a complete end-to-end view of every transaction from the end user interaction, through micro services, and ultimately database calls or other transaction termination points, link your RUM and APM data. You can link RUM and APM data as part of the instrumentation parameters. See :ref:`rum-apm-connection`. + +.. 
_phase2-synthetics:
+
+Set up Splunk Synthetic Monitoring
+======================================
+
+Use Splunk Synthetic Monitoring to monitor and alert across critical endpoints, APIs, and business transactions, and to proactively find and fix functionality or performance issues. Your engineering teams can embed automatic pass/fail tests of new code based on performance budgets and standards into CI/CD processes. You can use Splunk Synthetic Monitoring to improve W3C metrics and the Lighthouse Performance Score, on which Google bases its search rankings.
+
+To get started with Splunk Synthetic Monitoring, create 1 of the 3 available tests: browser, uptime, or API. See :ref:`set-up-synthetics`.
+
+.. _phase2-advanced-config:
+
+Optional and advanced configurations
+======================================================================
+
+Consider these optional and advanced configurations to customize your setup as they apply to your organization.
+
+.. _phase3-network-exp:
+
+Set up Network Explorer to monitor network environment
+----------------------------------------------------------
+Use the Splunk Distribution of OpenTelemetry Collector Helm chart to configure Network Explorer. Network Explorer inspects packets to capture network performance data with extended Berkeley Packet Filter (eBPF) technology, which runs in the Linux kernel. eBPF allows programs to run in the operating system when the following kernel events occur:
+
+- When a TCP handshake is complete
+
+- When TCP receives an acknowledgement for a packet
+
+Network Explorer captures network data that is passed on to the reducer and then to the Splunk OTel Collector.
+
+For the Splunk OTel Collector to work with Network Explorer, you must install the Collector in gateway mode. After installation, the Network Explorer navigator displays on the :guilabel:`Infrastructure` tab in Splunk Infrastructure Monitoring.
+
+For comprehensive documentation on Network Explorer, see :ref:`network-explorer`.
+
+.. _phase2-profiling:
+
+Turn on AlwaysOn Profiling to collect stack traces
+-----------------------------------------------------------------
+
+Use AlwaysOn Profiling for deeper analysis of the behavior of select applications. Code profiling collects snapshots of the CPU call stacks and memory usage. After you get profiling data into Splunk Observability Cloud, you can explore stack traces directly from APM and visualize the performance and memory allocation of each component using the flame graph.
+
+Use this profiling data to gain insights into your code behavior to troubleshoot performance issues. For example, you can identify bottlenecks and memory leaks for potential optimization.
+
+.. _phase2-related-content:
+
+Turn on Related Content
+-----------------------------
+
+Turn on Related Content as part of your data integration setup so you can navigate between APM, Log Observer Connect, and Infrastructure Monitoring in Splunk Observability Cloud with your selected filters and context automatically applied to each view. See :ref:`get-started-relatedcontent`.
+
+Education resources
+=====================
+
+* Get familiar with OpenTelemetry concepts, including the configuration of pipeline components such as receivers, processors, exporters, and connectors. See :new-page:`https://opentelemetry.io/docs/concepts/`.
+* To learn more about the data model for Splunk Observability Cloud, see :ref:`data-model`.
+
+Next step
+===============
+
+Next, prepare to scale your rollout of Splunk Observability Cloud.
See :ref:`get-started-guide-scaled-rollout`. diff --git a/get-started/get-started-guide/onboarding-readiness.rst b/get-started/get-started-guide/onboarding-readiness.rst new file mode 100644 index 000000000..81b940307 --- /dev/null +++ b/get-started/get-started-guide/onboarding-readiness.rst @@ -0,0 +1,155 @@ +.. _get-started-guide-onboarding-readiness: + +Get started guide phase 1: Onboarding readiness +********************************************************* + +In the onboarding readiness phase of the getting started journey for Splunk Observability Cloud, you set up users, teams, and access controls using roles and token management. The following sections cover the primary setup steps for the onboarding readiness phase. + +To get a high-level overview of the entire getting started journey, see :ref:`get-started-guide`. + +.. note:: This guide is for Splunk Observability Cloud users with the admin role. + + +.. image:: /_images/get-started/onboarding-guide-2point0-readiness.svg + :width: 100% + :alt: + +To configure your users, teams, and tokens, complete the following primary tasks: + +#. :ref:`phase1-create-trial` +#. :ref:`phase1-network` +#. :ref:`phase1-user-access` +#. :ref:`phase1-teams-tokens` + +.. note:: + Work closely with your Splunk Sales Engineer or Splunk Customer Success Manager as you get started. They can help you fine tune your Splunk Observability Cloud journey and provide best practices, training, and workshop advice. + +.. _phase1-create-trial: + +Create a trial for your organization +======================================== + +If you have a Splunk technical contact, they can create a Splunk Observability Cloud trial for your organization and provide you with the link to log in to your trial organization. Alternatively, you can sign up for a trial. See :ref:`o11y-trial`. + +.. _phase1-network: + +Analyze your network communication and access requirements +============================================================ + +Before you begin bringing data into Splunk Observability Cloud from your infrastructure and applications, analyze your required network communications and access requirements. + +#. Validate that network connections between your environment and Splunk Observability Cloud are allowed. See :ref:`otel-exposed-endpoints` to determine which ports you need to open in the firewall and what protocols you need to turn on or off in the Collector. +#. If your organization requires a proxy, see :ref:`allow-services`. +#. For Kubernetes, you need administrator access to monitored hosts of Kubernetes clusters to install the Splunk Distribution of the OpenTelemetry Collector. +#. Whether you use a guided setup for data management or an advanced installation method, you use the Splunk Distribution of the OpenTelemetry Collector to ingest, process, and export metric, trace, logs, and metadata into Splunk Observability Cloud. You can run the Splunk Distribution of the OpenTelemetry Collector as a custom user, not a root or admin user. For the majority of use cases, the collector doesn't require privileged access to function. + #. Collector components might require privileged access. Use care when allowing privilege access for components. For example, a receiver might require the Collector to run in a privileged mode, which might be a security concern. Receivers and exporters might expose buffer, queue, payload, and worker settings in configuration parameters. Setting these parameters might expose the Collector to additional attack vectors including resource exhaustion. 
+ #. Collector components might also require external permissions including network access or role-based access. + + See :ref:`otel-security` for more details about managing your architecture security. + +.. _phase1-user-access: + +Decide how to manage user access +======================================== + +Select from these 3 options for managing user access: + +#. Use Splunk Cloud Platform as the unified identity provider. See :ref:`unified-id-unified-identity` for more information. +#. Use an external Lightweight Directory Access Protocol (LDAP) and control access through Single Sign-On (SSO). See :ref:`sso-label` for more information. +#. Use Splunk Observability Cloud user management to allow access using a username and password. See :ref:`user-management-intro`. + +.. _phase1-teams-tokens: + +Plan your team structure and token management strategy to control access +===================================================================================== + +If you plan to roll out Splunk Observability Cloud across your organization, you likely have multiple internal customers with different access requirements for the various features in Splunk Observability Cloud. Complete the following steps to create a consistent team structure and corresponding token management strategy. + +#. :ref:`team-token-names` +#. :ref:`team-structure` +#. :ref:`token-mgmt` + +.. _team-token-names: + +Define team and token naming conventions +------------------------------------------ + +Before creating teams and tokens, determine your naming convention. A naming convention helps you to track token assignments and control data-ingestion limits. Aligning team and token names also helps you to identify token owners when viewing the usage reports. For example, you can align team and token names in the following way: + +* Team name: FRONTEND_DEV_TEAM +* Token names: FRONTEND_DEV_TEAM_INGEST, FRONTEND_DEV_TEAM_API, FRONTEND_DEV_TEAM_RUM + +.. _team-structure: + +Plan your team structure +--------------------------- + +Create a plan for your team structure and user roles within teams. A user with an admin role can manage teams, which includes adding and removing users and assigning a team manager. For an overview of the various team roles and permissions, see :ref:`about-team-roles`. + +By default, every user can join any team in your organization. If you want to restrict users from being able to join any team, you can turn on the enhanced team security setting. Use enhanced team security to assign usage rights to each team and their associated tokens. See :ref:`enhanced-team-security`. + +.. _token-mgmt: + +Manage your tokens +-------------------- + +Use tokens to secure data ingestion and API calls in Splunk Observability Cloud. Tokens are valid for 1 year and you can extend them for another 60 days. Your organization has a default token that is automatically generated when the organization is created. + +To learn more about token management, see the following topics: + +* See :ref:`admin-tokens`. +* See :ref:`admin-manage-usage`. + +.. _phase1-advanced-config: + +Optional and advanced configurations +====================================================================== + +Consider these optional and advanced configurations to customize your setup as they apply to your organization. + +.. 
_advanced-config-custom-url: + +Request a custom URL for your organization +-------------------------------------------------------------- + +Create a Splunk support request to request a custom URL for your organization, for example, acme.signalfx.com. See :ref:`support` for support contact options. + +.. _advanced-config-parent-child: + +Separate your teams with a parent-child setup +-------------------------------------------------------------- + +If you want to create separate environments, you can use parent-child organizations. Perhaps you want a development environment and a production environment, or you want to make sure Team A is fully separated from Team B. Parent-child organizations are 2 or more separate organizations, where your original organization is the parent organization which includes your original usage entitlement. You can then have 1 or more organizations as child organizations within the parent organization. The organizations are fully separated, including users and data. + +You can request a parent-child organization setup by creating a support case. See :ref:`support` for support contact options. + +.. _advanced-config-logs: + +Set up Log Observer Connect for the Splunk Platform +-------------------------------------------------------------- + +If your organization has an entitlement for Splunk Log Observer Connect, Splunk Observability Cloud can automatically relate logs to infrastructure and trace data. + +See :ref:`logs-set-up-logconnect` or :ref:`logs-scp`. + +.. _advanced-config-3rd-party: + +Collect data from third-party metrics providers +-------------------------------------------------------------- + +When using the Splunk Distribution of OpenTelemetry Collector, you can use receivers to collect metrics data from third-party providers. For example, you can use the Prometheus receiver to scrape metrics data from any application that exposes a Prometheus endpoint. See :ref:`prometheus-receiver`. + +See :ref:`monitor-data-sources` for a list of receivers. + +Education resources +===================== + +* For a list of free Splunk Observability Cloud courses, see :new-page:`Free training`. +* For the full course catalog for Splunk Observability Cloud, see :new-page:`Full course catalog for Splunk Observability Cloud `. + * See the :new-page:`Curated track for Splunk Observability Cloud ` to determine what courses to prioritize. +* Follow the Splunk Observability Cloud metrics user certification if you want to build a center of excellence for observability in your organization. See :new-page:`Splunk Observability Cloud metrics user certification `. + +Next step +=============== + +Next, prepare for an initial rollout of the Splunk Observability Cloud products that are relevant to your organization. See :ref:`get-started-guide-initial-rollout`. \ No newline at end of file diff --git a/get-started/get-started-guide/scaled-rollout.rst b/get-started/get-started-guide/scaled-rollout.rst new file mode 100644 index 000000000..1e45fbd7c --- /dev/null +++ b/get-started/get-started-guide/scaled-rollout.rst @@ -0,0 +1,164 @@ +.. _get-started-guide-scaled-rollout: + +Get started guide phase 3: Scaled rollout +********************************************************* + +After completing the :ref:`get-started-guide-initial-rollout`, you are ready for phase 3, scaled rollout. In the final scaled rollout phase, you establish repeatable observability practices using automation, data management, detectors, and dashboards. 
The following sections cover the primary setup steps for the scaled rollout phase. + +To get a high-level overview of the entire getting started journey for Splunk Observability Cloud, see :ref:`get-started-guide`. + +.. note:: This guide is for Splunk Observability Cloud users with the admin role. + + +.. image:: /_images/get-started/onboarding-guide-2point0-scaled.svg + :width: 100% + :alt: + +To increase usage across all user teams and establish repeatable observability practices through automation, data management, detectors, and dashboards, complete the following tasks: + +#. :ref:`phase3-pipeline` +#. :ref:`phase3-rotate-token` +#. :ref:`phase3-mpm` +#. :ref:`phase3-names-data` +#. :ref:`phase3-dash-detect` +#. :ref:`phase3-onboard-all` + +.. note:: + Work closely with your Splunk Sales Engineer or Splunk Customer Success Manager as you get started. They can help you fine tune your Splunk Observability Cloud journey and provide best practices, training, and workshop advice. + +.. _phase3-pipeline: + +Add Splunk Observability Cloud to your deployment pipeline +============================================================ + +After completing the initial rollout phase, you have deployed a Collector instance with limited configuration. For the scaled rollout, you can expand your Collector pipelines with more components and services. + +* See :ref:`otel-configuration` for an overview of the available options to install, configure, and use the Splunk Distribution of the OpenTelemetry Collector. +* See :ref:`otel-data-processing` to learn how data is processed in Collector pipelines. +* See the :ref:`otel-components` documentation to see the available components you can add to the Collector configuration. + +You can also use other ingestion methods, like the following: + +* To send data using the Splunk Observability Cloud REST APIs, see :ref:`rest-api-ingest`. +* To send metrics using client libraries, see :new-page:`SignalFlow client libraries `. +* For information about using the upstream Collector, see :ref:`using-upstream-otel`. + +.. _phase3-rotate-token: + +Automate the token rotation process +====================================== + +Because tokens expire after 1 year, you need to automate the rotation of tokens using an API call. For a given token, when the API creates a new token, the old token continues to work until the time you specified in the grace period. Wherever the old token is in use, use the API call to automatically rotate the token within the grace period. + +For example, you can use the API to rotate the token that a Kubernetes cluster uses to ingest metrics and trace data. When you use the API to generate a new token, you can store the new token directly in the secret in the Kubernetes cluster as part of the automation. + +To learn more, see the following topics: + +- :ref:`admin-org-tokens` +- :new-page:`Org tokens API endpoint documentation` + +.. _phase3-mpm: + +Use metrics pipeline management tools to reduce cardinality of metric time series (MTS) +========================================================================================= + +As metrics data usage and cardinality grows in Splunk Infrastructure Monitoring, your cost increases. Use metrics pipeline management (MPM) tools within Splunk Infrastructure Monitoring to streamline storage and processing to reduce overall monitoring cost. With MPM, you can make the following optimizations: + +* Streamline storage and processing to create a multitier metric analytics platform. 
+ +* Analyze reports to identify where to optimize usage. + +* Use rule-based metrics aggregation and filtering on dimensions to reduce MTS volume. + +* Drop dimensions that are not needed. + +You can configure dimensions through the user interface, the API, and Terraform. + +For comprehensive documentation on MPM, see :ref:`metrics-pipeline-intro`. + +.. _phase3-names-data: + +Review metric names and ingested data +========================================================================================= + +To prepare for a successful scaled deployment, consider your naming conventions for tokens and custom metrics in Splunk Observability Cloud. A consistent, hierarchical naming convention for metrics makes it easier to find metrics, identify usage, and create charts and alerts across a range of hosts and nodes. + +#. See :ref:`metric-dimension-names` for guidance on creating a naming convention for your organization. +#. After bringing in metrics data, review the name and the metrics volume each team is ingesting. Make sure the ingest data matches the naming convention for dimensions and properties. + +.. _phase3-dash-detect: + +Build custom dashboards and detectors +========================================================================================= + +Dashboards are groupings of charts that visualize metrics. Use dashboards to provide your team with actionable insight into your system at a glance. Use detectors to monitor your streaming data against a specific condition that you specify to keep users informed when certain criteria are met. + +Build custom dashboards +----------------------------- + +#. Splunk Observability Cloud automatically adds built-in-dashboards for each integration you use after it ingests 50,000 data points. Review these built-in dashboards when they are available. See :ref:`view-dashboards` and :ref:`dashboards-list-imm`. +#. Learn how to create and customize dashboards. Make sure your teams can complete these tasks: + #. Clone, share, and mirror dashboards. + #. Use dashboard filters and dashboard variables. + #. Add text notes and event feeds to your dashboards. + #. Use data links to dynamically link a dashboard to another dashboard or external system such as Splunk APM, the Splunk platform, or a custom URL. + + For comprehensive documentation on these tasks, see :ref:`dashboards`. + +Build custom detectors +----------------------------- + +#. Splunk Observability Cloud also automatically adds the AutoDetect detectors that correspond to the integrations you are using. You can copy the AutoDetect detectors and customize them. See :ref:`autodetect`. +#. Create custom detectors to trigger alerts that address your use cases. See :ref:`get-started-detectoralert`. +#. You can create advanced detectors to enhance the basic list of alert conditions to take into account the different types of functions, such as additional firing, alert clearing conditions, or comparing 2 functions using the population_comparison function. + * See the :new-page:`library of SignalFlow for detectors ` on GitHub. + * To get started with SignalFlow, see :new-page:`Analyze data using SignalFlow ` in the developer guide. + +.. _phase3-onboard-all: + +Onboard all users and teams +================================================================================================================ + +Your final step of the scaled rollout phase is to onboard all users and teams and configure who can view and modify various aspects of Splunk Observability Cloud. + +#. 
See :ref:`user-management-intro` to get started managing users, teams, and roles. +#. If you haven't already done so, turn on enhanced security to identify team managers and control who can view and modify dashboards and detectors. See :ref:`enhanced-team-security`. +#. Assign team-specific notifications for alerts triggered by the detectors that you set up. Team-specific notifications give your teams different escalation methods for their alerts. See :ref:`admin-team-notifications`. + +.. _phase3-advanced-config: + +Optional and advanced configurations +====================================================================== + +Consider these optional and advanced configurations to customize your setup as they apply to your organization. + +.. _phase3-data-links: + +Use global data links to link properties to relevant resources +--------------------------------------------------------------- + +Create global data links to link Splunk Observability Cloud dashboards to other dashboards, external systems, custom URLs, or Splunk Cloud Platform logs. To learn more, see :ref:`link-metadata-to-content`. + +.. _phase3-usage-limits: + +Analyze and troubleshoot usage, limits, and throttles +--------------------------------------------------------------- + +To analyze and troubleshoot usage, make sure you know how to complete the following tasks: + +* Understand the difference between host-based and MTS-based subscriptions in Infrastructure Monitoring. +* Understand the difference between host-based and trace-analyzed-per-minute (TAPM) subscriptions in APM. +* Understand per-product system limits. +* Read available reports, such as monthly and hourly usage reports, dimension reports, and custom metric reports. + +To learn more, see the following topics: + +* :ref:`per-product-limits` +* :ref:`subscription-overview` + +Education resources +==================== + +* Before you start scaling up the use of the OpenTelemetry agents, consider the OpenTelemetry sizing guidelines. This is especially important on platforms such as Kubernetes where there can be a sudden growth from various autoscaling services. For details about the sizing guidelines, see :ref:`otel-sizing`. +* Coordinate with your Splunk Sales Engineer to register for the Splunk Observability Cloud workshop. See :new-page:`Splunk Observability Cloud Workshops`. +* To begin creating a training curriculum for your Splunk Observability Cloud end users see the :new-page:`Curated training for end users`. diff --git a/get-started/get-started.rst b/get-started/get-started.rst new file mode 100644 index 000000000..aac5ac7a1 --- /dev/null +++ b/get-started/get-started.rst @@ -0,0 +1,84 @@ +.. _get-started: + +Get started with Splunk Observability Cloud +****************************************************** + +.. meta:: + :description: Learn how to get started with Splunk Observability Cloud. + +Everything you need to know to get started with Splunk Observability Cloud. + +.. role:: icon-info +.. rst-class:: newparawithicon + +:icon-info:`.` :strong:`Introduction to Splunk Observability Cloud` + +.. rst-class:: newcard + +:strong:`Overview` +Splunk Observability Cloud overview :ref:`overview` + +.. rst-class:: newcard + +:strong:`Architecture` +Splunk Observability Cloud architecture :ref:`architecture` + +.. rst-class:: newcard + +:strong:`Service description` +Benefits and service terms :ref:`o11y-service-description` + +.. role:: icon-cogs +.. 
rst-class:: newparawithicon + +:icon-cogs:`.` :strong:`Get started for Splunk Observability Cloud admins` + +.. rst-class:: newcard + +:strong:`Get started guide for admins` +Get started guide for Splunk Observability Cloud admins :ref:`get-started-guide` + +.. rst-class:: newcard + +:strong:`Get your data in` +Guide for getting your data into Splunk Observability Cloud :ref:`get-started-get-data-in` + +.. rst-class:: newcard + +:strong:`Get started with the collector` +Get started: Understand and use the collector :ref:`otel-understand-use` + +.. role:: icon-info +.. rst-class:: newparawithicon + +:icon-info:`.` :strong:`Scenarios and tutorials` + +.. rst-class:: newcard + +:strong:`Scenarios` +Goal-based scenarios for using Splunk Observability Cloud :ref:`scenario-landing` + +.. rst-class:: newcard + +:strong:`Tutorials` +Task-based tutorials to accomplish a task in Splunk Observability Cloud :ref:`tutorial-landing` + +.. role:: icon-users +.. rst-class:: newparawithicon + +:icon-users:`.` :strong:`Education and community resources` + +.. rst-class:: newcard + +:strong:`Course offerings` +Splunk Observability Cloud course offerings :new-page:`https://www.splunk.com/en_us/training/course-catalog.html?sort=Newest&filters=filterGroup4SplunkObservabilityCloud%2CfilterGroup4SplunkSyntheticMonitoring%2CfilterGroup4SplunkInfrastructureMonitoring%2CfilterGroup4SplunkITSI%2CfilterGroup4SplunkAPM%2CfilterGroup4SplunkOnCall%2CfilterGroup4SplunkRUM%2CfilterGroup4SplunkLogObserver%2CfilterGroup4SplunkInsights` + +.. rst-class:: newcard + +:strong:`Community blog` +Get the latest updates from the Splunk community :new-page:`https://community.splunk.com/t5/Community-Blog/bg-p/Community-Blog` + +.. rst-class:: newcard + +:strong:`Join the community` +Get the latest updates from the Splunk community :new-page:`https://community.splunk.com/t5/Welcome/bd-p/gs-welcome` \ No newline at end of file diff --git a/get-started/o11y-trial.rst b/get-started/o11y-trial.rst index ad4fa745c..4ae5e711e 100644 --- a/get-started/o11y-trial.rst +++ b/get-started/o11y-trial.rst @@ -1,76 +1,83 @@ .. _o11y-trial: -****************************************************** -Free trial of Splunk Observability Cloud -****************************************************** +Splunk Observability Cloud free trial and guided onboarding +************************************************************ .. meta:: :description: About the free trial available for Splunk Observability Cloud. -The trial install will guide you through the steps to create your Splunk Observability Cloud trial environment. As part of the trial, Hipster shop - the Splunk Observability Cloud trial shop - will be deployed to a local minikube cluster as a set of Docker containers that will provide metrics and traces. To set up your minikube cluster and OpenTelemetry collector you'll also need Helm and gsed installed for the automation to configure the cluster. +The trial guides you through the steps to create your Splunk Observability Cloud trial environment. As part of the trial, Hipster shop - the Splunk Observability Cloud trial shop - is deployed to a local minikube cluster as a set of Docker containers that provide metrics and traces. To set up your minikube cluster and OpenTelemetry collector you also need Helm and the gnu-sed editor installed for the automation to configure the cluster. -You can try out Splunk Observability Cloud for 14 days, absolutely free. 
You can explore the trial in two ways: +You can try out Splunk Observability Cloud for 14 days, absolutely free. You can explore the trial in 2 ways: * Use the sample data in a pre-instrumented environment (Hipster shop). * Use your own data by instrumenting your applications with OpenTelemetry. -For an introduction to Splunk Observability Cloud products, see :ref:`welcome`. +For an introduction to Splunk Observability Cloud products, see :ref:`overview`. For information about how to use these products together to address real-life scenarios, see :ref:`get-started-scenario`. -Sign up for the trial -============================ +.. raw:: html + + +

Sign up for the trial

+ If this is your first experience with Splunk Observability Cloud, here's how you can sign up for your free trial. -#. Navigate to one of the following URLs: +#. Navigate to 1 of the following URLs: * For AWS regions, see :new-page:`https://www.splunk.com/en_us/download/o11y-cloud-free-trial.html`. * For GCP regions, see :new-page:`https://www.splunk.com/en_us/download/observability-for-google-cloud-environments.html` #. In the free trial sign-up window, select the location closest to the region you are in. Options include: United States, Europe, Asia Pacific (Australia), Asia Pacific (Japan). Select :guilabel:`Next`. #. Enter your contact information. Note: - The name and email address is used to create the first user on the system and is granted the admin role automatically. - - The company name is used to name the organization. Select a name which describes your account as well as its function. For example, ACME Dev platform. + - The company name is used to name the organization. Select a name which describes your account as well as its function. For example, Acme Dev platform. #. Agree to the terms and conditions and select :guilabel:`Start Free Trial`. -#. You will receive an email with a link to sign in to your org. If this takes longer than ten minutes, check your spam folder. +#. You will receive an email with a link to log in to your org. If this takes longer than ten minutes, check your spam folder. #. In the email, select :guilabel:`Verify` or paste the link into your browser. #. Create your password and select :guilabel:`Sign in Now`. -What you'll see when you sign in -==================================== - +.. raw:: html + + +

What you'll see when you sign in

+ .. image:: /_images/get-started/trial-exp.png :width: 80% :alt: Free trial first sign-in view -When you first sign in, you see your Home page. You can show onboarding content by selecting the action menu (|more|) in the upper right-hand corner. This will display helpful videos and links on most pages to help you get started. +When you first log in, you see your Home page. You can show onboarding content by selecting the action menu (|more|) in the upper right-hand corner. This displays helpful videos and links on most pages to help you get started. -You can also expand the left-hand navigation menu to show the full names of the sections instead of the icons only, by selecting the double angle brackets in the bottom left-hand corner. +You can also expand the navigation menu to show the full names of the sections instead of the icons only, by selecting the double angle brackets in the bottom corner. .. image:: /_images/get-started/trial1.png :width: 80% - :alt: The right-angle brackets in the bottom, left corner of the UI expands the navigation menu. + :alt: The double angle brackets in the bottom corner of the UI expand the navigation menu. +.. raw:: html + + +

Guided onboarding

+ - -Guided onboarding -========================= - -There are five steps to the guided onboarding. The UI guides you through each of the steps, providing the commands and links you require. +There are 5 steps to the guided onboarding. The UI guides you through each of the steps, providing the commands and links you require. #. Preparing the prerequisites. #. Install OpenTelemetry. #. Install the Hipster Shop into your local cluster. -#. Create traffic by exploring the Hipster Shop. Clicking around the Hipster Shop site will generate traces and metrics for you to view in Splunk Observability Cloud. +#. Create traffic by exploring the Hipster Shop. Clicking around the Hipster Shop site generates traces and metrics for you to view in Splunk Observability Cloud. #. Explore the results in Application Performance Monitoring (APM). +.. raw:: html + + +

Prerequisites

+ -Pre-Requisites ---------------------- - -The first step is to set up some pre-requistes for the demo enviornmnet. The trial UI will guide you through this and link to the resources you need. +The first step is to set up some prerequisites for the demo environment. The trial UI guides you through this and links to the resources you need. To run the demo environment, install and have functioning versions of: @@ -80,17 +87,24 @@ To run the demo environment, install and have functioning versions of: - GSED: GNU implementations of the stream editor. gnu-sed is used in the configuration script for the kubernetes manifests. - See :new-page:`https://formulae.brew.sh/formula/gnu-sed`. - The Hipster Shop cluster requires a minimum 4 GB of memory. -Install the OpenTelemetry collector ------------------------------------------------ -To install the OpenTelemetry collector, you'll need to know: +.. raw:: html + + +

Install the OpenTelemetry collector

+ + +To install the OpenTelemetry collector, you need to know: - Your Splunk Observability Cloud realm. To locate your realm, see :new-page:`View your realm and org info `. - Your Splunk Observability Cloud access token. For details, see :ref:`admin-org-tokens`. -Install the Hipster Shop -------------------------------------- +.. raw:: html + + +

Install the Hipster Shop

+ -The Hipster Shop allows you to generate sample data. To install the Hipster shop demo locally, you'll need your Real User Management (RUM) token. For instructions, see :ref:`rum-access-token`. +Use the Hipster Shop to generate sample data. To install the Hipster shop demo locally, you need your Real User Management (RUM) token. For instructions, see :ref:`rum-access-token`. Once you have installed and configured the Hipster Shop environment, you can generate traffic and explore the results in your Splunk Observability Cloud trial organization. diff --git a/get-started/o11y.rst b/get-started/o11y.rst deleted file mode 100644 index df1a4e901..000000000 --- a/get-started/o11y.rst +++ /dev/null @@ -1,113 +0,0 @@ -.. _get-started-o11y: - -****************************************************** -Get started with Splunk Observability Cloud -****************************************************** - -.. meta:: - :description: Learn how to get started with Splunk Observability Cloud in five steps. - -This topic covers five high-level steps you can follow to get started with Splunk Observability Cloud and its products, which include Splunk Infrastructure Monitoring, Splunk Application Performance Monitoring (APM), Splunk Real User Monitoring (RUM), and Splunk Log Observer Connect. - -For an introduction to Splunk Observability Cloud products, see :ref:`welcome`. - -For information about how these products can be used together to address real-life scenarios, see :ref:`get-started-scenario`. - -Follow these steps to set up and make the most of Splunk Observability Cloud: - -.. list-table:: - :header-rows: 1 - :widths: 60, 40 - - * - :strong:`Configuration stages` - - :strong:`Task overview` - - * - :ref:`get-started-plan` with: - - Single sign-on, Access tokens, Admins and users, Teams, Notification service integrations (Jira, PagerDuty, and more) - - * - :ref:`get-started-gdi` from your: - - Cloud services, Servers, Server applications, Clusters, Applications, Serverless functions, User interfaces - - * - :ref:`get-started-explore` using: - - Infrastructure Monitoring, Real User Monitoring, Log Observer Connect, Application Performance Monitoring, Related Content - - * - :ref:`get-started-customize`: - - Detectors and alerts, Custom dashboards, Span tags, Business workflows, Logs pipeline, Custom data - - * - :ref:`get-started-datalinks` from dashboards and alerts to: - - Splunk Observability Cloud dashboards, Splunk Cloud Platform, Splunk Enterprise, Custom URLs, Kibana logs - - - - -.. _get-started-plan: - - -1. Create a plan and set up your organization -================================================= - -Before you start, create a plan for how you want to set up your Splunk Observability Cloud organization. For information about how to plan for and set up your Splunk Observability Cloud organization, see :ref:`admin-admin`. - - -.. _get-started-gdi: - -2. Get data into Splunk Observability Cloud -============================================== - -Gather all the data from your environment in Splunk Observability Cloud to achieve full-stack observability. For information about how to get data in, see :ref:`get-started-get-data-in`. - -As a part of getting data in, make sure to consider bringing in data in a way that allows :ref:`get-started-relatedcontent`, a feature that automatically correlates data between different views within Splunk Observability Cloud. 
When turned on, the Related Content bar displays automatically when you select a relevant element and lets you take a data-driven investigative approach. - -To learn more about Splunk Observability Cloud's data model, refer to :ref:`data-model`. - -.. _get-started-explore: - -3. Explore and analyze your data -======================================================== - -Once you have data coming into Splunk Observability Cloud, it's time to do some exploring. For example, you can: - -- Use :ref:`Infrastructure Monitoring ` to analyze the performance of cloud services, hosts, and containers, or view the health of your infrastructure at a glance, and view outlier conditions in your hybrid infrastructure. - -- Use :ref:`APM ` to analyze the performance of applications down to the microservice level, investigate latencies in your application requests, and monitor inbound and outbound dependencies for each service. - -- Use :ref:`RUM ` to analyze the performance of web and mobile applications and keep track of how users are interacting with your front-end services, including page load times and responsiveness. - -- Use :ref:`Log Observer Connect ` to pinpoint interesting log events and troubleshoot issues with your infrastructure and cloud services. - -- As described in step :ref:`get-started-gdi`, if you turned on :ref:`get-started-relatedcontent` when setting up your data integrations, you can select options in the Related Content bar to seamlessly navigate between APM, Log Observer Connect, and Infrastructure Monitoring with your selected filters and context automatically applied to each view. - -- Use the :ref:`mobile app ` to check system critical metrics in Splunk Observability Cloud on the go, access real-time alerts with visualizations, and view mobile-friendly dashboards. - - -.. _get-started-customize: - -4. Set up alerts and customize your experience -======================================================== - -Now that you've explored and familiarized yourself with the data you have coming into Splunk Observability Cloud, set up detectors to issue alerts about your data and customize your Splunk Observability Cloud experience. - -- Set up :ref:`detectors ` to send alerts when your incoming data contains conditions you want to know about. - -- In addition to exploring your data using Infrastructure Monitoring navigators and built-in :ref:`dashboards `, you can also create new dashboards and customize existing ones. - -- In addition to the built-in data you already have coming into Splunk Observability Cloud, you can also bring in custom data. For more information, see :ref:`Configure and instrument applications to send custom data ` and :ref:`Use the Splunk Observability Cloud API to send custom data `. - -- Customize your APM experience by setting up business workflows and creating span tags that add metadata to traces sent to APM. For more information, see :ref:`apm-workflows` and :ref:`apm-add-context-trace-span`. - - -.. _get-started-datalinks: - -5. Create global data links -======================================================== - -Now that you've customized your Splunk Observability Cloud experience, create global data links to further enrich the user experience. 
- -Global data links provide convenient access to related resources, such as Splunk Observability Cloud dashboards, Splunk Cloud Platform and Splunk Enterprise, custom URLs, and Kibana logs in the context of the following locations in Splunk Observability Cloud: - -- Dashboards -- Alerts -- APM -- Infrastructure Monitoring navigators - -For more information, see :ref:`link-metadata-to-content`. diff --git a/get-started/welcome.rst b/get-started/overview.rst similarity index 58% rename from get-started/welcome.rst rename to get-started/overview.rst index 535c38628..25d41208f 100644 --- a/get-started/welcome.rst +++ b/get-started/overview.rst @@ -1,4 +1,4 @@ -.. _welcome: +.. _overview: ************************************* Splunk Observability Cloud overview @@ -7,7 +7,7 @@ Splunk Observability Cloud overview .. meta:: :description: This page provides an overview of the products and features provided by Splunk Observability Cloud -Splunk Observability Cloud provides full-fidelity monitoring and troubleshooting across infrastructure, applications, and user interfaces, in real-time and at any scale, to help you: +Splunk Observability Cloud provides full-fidelity monitoring and troubleshooting across infrastructure, applications, and user interfaces, in real time and at any scale, to help you: - Keep your services reliable @@ -15,11 +15,11 @@ Splunk Observability Cloud provides full-fidelity monitoring and troubleshooting - Innovate faster -Choose from :ref:`over 100 supported open standards-based integrations ` with common data sources to get data from your on-premise and cloud infrastructure, applications and services, and user interfaces into Splunk Observability Cloud. +Select from :ref:`over 100 supported open standards-based integrations ` with common data sources to get data from your on-premises and cloud infrastructure, applications and services, and user interfaces into Splunk Observability Cloud. -When you send data from each layer of your full-stack environment to Splunk Observability Cloud, it transforms raw metrics, traces, and logs into actionable insights in the form of dashboards, visualizations, alerts, and more. To learn more about Splunk Observability Cloud's data model, refer to :ref:`data-model`. +When you send data from each layer of your full-stack environment to Splunk Observability Cloud, it transforms raw metrics, traces, and logs into actionable insights in the form of dashboards, visualizations, alerts, and more. To learn more about the data model for Splunk Observability Cloud, refer to :ref:`data-model`. -Splunk Observability Cloud's suite of products and features allow you to quickly and intelligently respond to outages and identify root causes, while also giving you the data-driven guidance you need to optimize performance and productivity going forward. Use Splunk Observability Cloud search to quickly locate the service, traceID, dashboard, chart, or metrics-based content you are interested in. For details, see :ref:`gsearch`. +The Splunk Observability Cloud suite of products and features allow you to quickly and intelligently respond to outages and identify root causes, while also giving you the data-driven guidance you need to optimize performance and productivity going forward. Use Splunk Observability Cloud search to quickly locate the service, traceID, dashboard, chart, or metrics-based content you are interested in. For details, see :ref:`gsearch`. 
The following diagram provides a high-level view of how each Splunk Observability Cloud product plays its part to provide you with full-stack observability: @@ -27,56 +27,39 @@ The following diagram provides a high-level view of how each Splunk Observabilit :width: 70% :alt: This screenshot shows how Splunk Observability Cloud products serve the different layers and processes in an organization's environment. -For information about how these products can be used together to address real-life scenarios, see :ref:`get-started-scenario`. To get started with Splunk Observability Cloud, see :ref:`get-started-o11y`. +For information about how these products can be used together to address real-life scenarios, see :ref:`get-started-scenario`. For information about Splunk Observability Cloud packaging and pricing, see :new-page:`Pricing - Observability `. Start learning about how the following Splunk Observability Cloud products work to provide you with unified, end-to-end observability of your environment: -- :ref:`welcome-imm` - -- :ref:`welcome-apm` (APM) - -- :ref:`welcome-rum` (RUM) - -- :ref:`welcome-synthmon` - -- :ref:`welcome-logobs` - -- :ref:`welcome-oncall` - -- :ref:`welcome-mobile` - -- :ref:`welcome-it` - -- :ref:`welcome-content-packs` - .. note:: For a list of benefits and service terms of Splunk Observability Cloud, see :ref:`o11y-service-description`. -.. _welcome-gdi: - -Get data in using supported integrations to hundreds of common data sources -================================================================================ +.. raw:: html + + +

Get data in using supported integrations to hundreds of common data sources

+ -The first step toward full-stack observability is getting data from your environment into Splunk Observability Cloud. Get data in using any of our over 100 supported integrations to common data sources. +The first step toward full-stack observability is getting data from your environment into Splunk Observability Cloud. Get data in using over 100 supported integrations to common data sources. For more information about getting data into Splunk Observability Cloud, see :ref:`get-started-get-data-in`. +.. raw:: html + + +

Splunk Infrastructure Monitoring

+ -.. _welcome-imm: - -Splunk Infrastructure Monitoring -================================ - -Gain insights into and perform powerful, capable analytics on your infrastructure and resources across hybrid and multi-cloud environments with Splunk Infrastructure Monitoring. Infrastructure Monitoring offers support for a broad range of integrations for collecting all kinds of data, from system metrics for infrastructure components to custom data from your applications. +Gain insights into and perform powerful, capable analytics on your infrastructure and resources across hybrid and multicloud environments with Splunk Infrastructure Monitoring. Infrastructure Monitoring offers support for a broad range of integrations for collecting all kinds of data, from system metrics for infrastructure components to custom data from your applications. For more information, see :ref:`wcidw-imm` - -.. _welcome-apm: - -Splunk Application Performance Monitoring -========================================= +.. raw:: html + + +

Splunk Application Performance Monitoring

+ Collect traces and spans to monitor your distributed applications with Splunk APM. A trace is a collection of actions, or spans, that occur to complete a transaction. Splunk APM collects and analyzes every span and trace from each of the services that you have connected to Splunk Observability Cloud to give you full-fidelity access to all of your application data. @@ -84,60 +67,61 @@ For more information, see :ref:`get-started-apm` For information about how APM can be used to address real-life scenarios, see :ref:`apm-scenarios-intro`. - -.. _welcome-rum: - -Splunk Real User Monitoring -=========================== +.. raw:: html + + +

Splunk Real User Monitoring

+ Splunk Real User Monitoring provides insights about the performance and health of the front-end user experience of your application. Splunk RUM collects performance metrics, web vitals, errors, and other forms of data to allow you to detect and troubleshoot problems in your application, measure the health of your application, and assess the performance of your user experience. For more information, see :ref:`get-started-rum`. - -.. _welcome-synthmon: - -Splunk Synthetic Monitoring -====================================== +.. raw:: html + + +

Splunk Synthetic Monitoring

+ Splunk Synthetics Monitoring is a platform to synthetically measure performance of your web-based properties. It offers features that provide insights that allow you to optimize uptime and performance of APIs, service endpoints, and end user experiences and prevent web performance issues. For more information, see the :ref:`intro-synthetics`. - -.. _welcome-logobs: - -Splunk Log Observer Connect -====================================== +.. raw:: html + + +

Splunk Log Observer Connect

+ Troubleshoot your application and infrastructure behavior using high-context logs in Splunk Observability Cloud. With Splunk Log Observer Connect, you can perform codeless queries on logs to detect the source of problems in your systems. -For more information, see :ref:`LogObserverFeatures`. - - -.. _welcome-oncall: +For more information, see :ref:`logs-intro-logconnect`. -Splunk On-Call -========================= +.. raw:: html + + +

Splunk On-Call

+ Splunk On-Call incident response software aligns log management, monitoring, chat tools, and more, for a single-pane of glass into system health. Splunk On-Call automates delivery of alerts to get the right alert, to the right person, at the right time. For more information, see the :new-page:`Splunk On-Call documentation `. - -.. _welcome-mobile: - -Splunk Observability Cloud for Mobile -====================================== +.. raw:: html + + +

Splunk Observability Cloud for Mobile

+ Splunk Observability Cloud for Mobile is an iOS and Android companion mobile app to Splunk Observability Cloud. You can use Splunk Observability Cloud for Mobile to check system critical metrics in Splunk Observability Cloud on the go, access real-time alerts with visualizations, and view mobile-friendly dashboards. For more information, see :ref:`intro-to-mobile`. -.. _welcome-it: - -Splunk IT Essentials Work and Splunk IT Service Intelligence -=================================================================== +.. raw:: html + + +

Splunk IT Essentials Work and Splunk IT Service Intelligence

+ Splunk IT Essentials Work (ITE Work) is a free application that helps you get started with monitoring and analyzing your IT infrastructure. @@ -145,11 +129,22 @@ Splunk IT Service Intelligence (ITSI) is a premium IT operations solution that p For more information about these applications, see the :new-page:`IT operations product overview `. -.. _welcome-content-packs: - -Splunk App for Content Packs -====================================== +.. raw:: html + + +

Splunk App for Content Packs

+ Quickly set up your IT Service Intelligence (ITSI) or IT Essentials Work (ITE Work) environment using prepackaged content such as KPI base searches, service templates, saved glass tables, and other knowledge objects. For more information, see the :new-page:`Overview of the Splunk App for Content Packs `. + +.. raw:: html + + +

Learn more

+ + +For information about how these products can be used together to address real-life scenarios, see :ref:`get-started-scenario`. + +For information about Splunk Observability Cloud packaging and pricing, see :new-page:`Pricing - Observability `. \ No newline at end of file diff --git a/get-started/service-description.rst b/get-started/service-description.rst index 0d0abd65b..8c1b9dc58 100644 --- a/get-started/service-description.rst +++ b/get-started/service-description.rst @@ -25,11 +25,11 @@ The following sections describe the features, capabilities, limitations, and con .. note:: For the service description of Splunk Cloud Platform see :new-page:`Splunk Cloud Platform Service Details `. - -.. _sd-terms-policies: - -Service term and policies -=========================================================== +.. raw:: html + + +

Service term and policies

+ The following links access important terms and policies documents that pertain to Splunk Observability Cloud. Be sure to read these documents to have a clear understanding of the service. If you have any questions, contact your Splunk sales representative. @@ -40,92 +40,118 @@ The following links access important terms and policies documents that pertain t - :new-page:`Splunk Data Security and Privacy ` - :new-page:`Splunk Observability Cloud - Security Addendum ` - -.. _sd-data: - -Data ingestion and retention -=========================================================== +.. raw:: html + + +

Data ingestion and retention

+ Splunk Observability Cloud provides software and APIs that allow you to ingest data from your on-premises infrastructure, applications, user interfaces, cloud services, servers, network devices, and more. Splunk Observability Cloud provides guided setups that help you install and configure OpenTelemetry instrumentation. See :ref:`get-started-get-data-in` for more information. .. note:: All editions of Splunk Observability Cloud include Log Observer Connect, which lets you analyze logs you've ingested to Splunk Cloud Platform and Splunk Enterprise at no additional cost. See :ref:`lo-connect-landing` for more information. -Splunk OpenTelemetry Collector ----------------------------------------------------------- +.. raw:: html + + +

Splunk OpenTelemetry Collector

+ The Splunk Distribution of OpenTelemetry Collector is an open-source software agent capable of collecting traces, metrics, and logs from a wide variety of hosts, containers, and services. You are responsible for installing, configuring, transforming, sending data, and managing your Collector instances, including maintaining version compatibility and installing, configuring, and managing Collector components. See :ref:`otel-intro` for more information. Splunk provides support for the Splunk Distribution of OpenTelemetry Collector. See :ref:`using-upstream-otel` for more information. - -Integration with cloud service providers ------------------------------------------------------------- +.. raw:: html + + +

Integration with cloud service providers

+ You can configure Splunk Observability Cloud to connect to services in AWS, Azure, and Google Cloud Platform to retrieve metrics and logs. See :ref:`get-started-connect` for more information. Splunk instrumentation can help you instrument serverless applications to bring traces and application metrics to Splunk Observability Cloud. See :ref:`instrument-serverless-functions`. -Splunk distributions of OpenTelemetry instrumentation ------------------------------------------------------------ +.. raw:: html + + +

Splunk distributions of OpenTelemetry instrumentation

+ The Splunk distributions of OpenTelemetry instrumentation are open-source software agents and libraries that can instrument back-end applications and front-end experiences for Splunk APM and Splunk RUM. Setup, configuration, transformation, and sending data from the instrumentation agents and libraries is your responsibility, including maintaining version compatibility and installing, configuring, and managing automatic and manual instrumentations. See :ref:`get-started-application` and :ref:`rum-gdi` for more information. Splunk officially supports the Splunk distributions of OpenTelemetry instrumentation, including manual instrumentation. - -Ingest API endpoints -------------------------------------------------------------- +.. raw:: html + + +

Ingest API endpoints

+ You can use the REST API to send telemetry directly to Splunk Observability Cloud. This might be useful when you can't use the Splunk Distribution of OpenTelemetry Collector or when you have specific networking or security requirements. See :ref:`rest-api-ingest` for more information. If your organization has stringent networking security policies that apply to sending data to third parties, see :ref:`allow-services`. - -Private connectivity -------------------------------------------------------------- +.. raw:: html + + +

Private connectivity

+ If you prefer not to send data to Splunk public endpoints using HTTPS, you can use AWS PrivateLink to ingest data from sources deployed on AWS. See :ref:`aws-privatelink` for more information. - -Data retention ------------------------------------------------------------- +.. raw:: html + + +

Data retention

+ When you send data to Splunk Observability Cloud, it is ingested and stored for a period of time that varies depending on the product and type of contract. See :ref:`data-o11y` for more information. You can monitor subscription usage for each product depending on the type of subscription. See :ref:`subscription-overview` for more information. -.. _sd-subscriptions: - -Subscription types, expansions, renewals, and terminations -=========================================================== +.. raw:: html + + +

Subscription types, expansions, renewals, and terminations

+ Your subscription to Splunk Observability Cloud depends on the Splunk product: host-based or usage-based for Splunk IM and Splunk APM, web sessions for Splunk RUM, or synthetic checks for Splunk Synthetic Monitoring. -Host-based subscriptions ------------------------------------------------------------ +.. raw:: html + + +

Host-based subscriptions

+ Host-based subscriptions base billing on the total number of unique hosts reporting data to Splunk Observability Cloud on an hourly basis, then calculate the average of those hourly measurements across each billing month. The calculation is done for each host, container, custom metric, and high resolution metric. A host is a physical, non-virtualized environment or a virtual instance in a virtualized or public cloud environment that reports metric data to Splunk Observability Cloud. You can increase the number of hosts or containers per host if needed. -Usage-based subscription --------------------------------------------- +.. raw:: html + + +

Usage-based subscriptions

+ Usage-based pricing is suited for custom metrics, containerized environments, and monitoring serverless environments or cloud services that don't provide a view of underlying hosts. Usage is calculated depending on the product or feature. For example, Splunk Infrastructure Monitoring usage-based pricing relies on metric time series (MTS), whereas Splunk Real User Monitoring calculates usage from the number of web sessions. For more information on subscription usage and monitoring in Splunk Observability Cloud, see :ref:`subscription-overview`. -Overages ----------------------------------------------- +.. raw:: html + + +

Overages

+ Splunk Observability Cloud overages are based on usage measured over a month. Overages are incurred if the monthly usage is higher than your paid subscription. Splunk Observability Cloud provides transparent usage data with granular, detailed daily reports on all monitored hosts, containers, and metrics. You can also turn on alerts or set up tokens to manage your usage. See :ref:`subscription-overview` for more information. -.. _sd-suites: - -Suite offerings ----------------------------------------------- +.. raw:: html + + +

Suite offerings

+ Splunk Observability Cloud is also available in different suites, including Splunk Observability Cloud Enterprise Edition and Splunk Observability Cloud Commercial Edition. See :new-page:`Suites ` on Splunk.com for more information. -.. _sd-subscription: - -Subscription updates, renewals, and terminations ---------------------------------------------------- +.. raw:: html + + +

Subscription updates, renewals, and terminations

+ You can update or expand your Splunk Observability Cloud subscription any time during the term of the subscription to meet your business needs. For example, you can: @@ -146,16 +172,19 @@ For additional information, see: - :new-page:`Splunk Offerings Purchase Capacity and Limitations ` on Splunk.com - :new-page:`Splunk Success Plans ` on Splunk.com - -.. _sd-regions: - -Available regions or realms -=========================================================== +.. raw:: html + + +

Available regions or realms

+ Splunk Observability Cloud is available in the following global regions. Each Cloud provider region is mapped to a Splunk Observability Cloud realm, which determines access URLs and endpoints. -Realm to region equivalence ----------------------------------------------------------- +.. raw:: html + + +

Realm to region equivalence

+ The following table shows which cloud regions correspond to each realm in Splunk Observability Cloud. @@ -191,9 +220,11 @@ The following table shows which cloud regions correspond to each realm in Splunk - AWS AP Tokyo (ap-northeast-1) - - -Available components per region or realm ----------------------------------------------------------- +.. raw:: html + + +

Available components per region or realm

+ The following components are available for each global region. Each Cloud provider region is mapped to a Splunk Observability Cloud realm, which determines access URLs and endpoints. @@ -269,10 +300,11 @@ For additional information, see: - :ref:`Note about realms` - :new-page:`Observability for Google Cloud Environments ` -.. _sd-compliance: - -Compliance and certifications -=========================================================== +.. raw:: html + + +

Compliance and certifications

+ Splunk has attained a number of compliance attestations and certifications from industry-leading auditors as part of our commitment to adhere to industry standards worldwide and part of our efforts to safeguard customer data. The following compliance attestations/certifications are available: @@ -282,46 +314,82 @@ Splunk has attained a number of compliance attestations and certifications from - :strong:`Cloud Security Alliance (CSA) Security, Trust, & Assurance Registry (STAR)`: Splunk Observability Cloud participates in the voluntary CSA STAR Level 1 Self Assessment to document compliance with CSA- published best practices. We submit our security and privacy self-assessments using the :new-page:`Cloud Controls Matrix ` and :new-page:`GDPR Code of Conduct ` based on the CSA Consensus Assessment Initiative Questionnaire (CAIQ). -For information regarding the availability of service components between the AWS and Google Cloud regions, see :ref:`sd-regions`. +.. raw:: html + + +

For information regarding the availability of service components between the AWS and Google Cloud regions, see Available regions or realms.

+ For additional information, see: - :new-page:`Compliance at Splunk ` - -.. _sd-security: - -Security -=========================================================== +.. raw:: html + + +

Security

+ The security and privacy of your data is key to you and your organization, and Splunk makes this a top priority. Splunk Observability Cloud is designed and delivered using key security controls described in the following sections. -Data encryption ------------------------------------------------------------ +.. raw:: html + + +

Data encryption

+
 All data in transit to and from Splunk Observability Cloud is TLS 1.2+ encrypted. Splunk Observability Cloud uses AES 256-bit encryption by default. Encryption key management processes are in place to help ensure the secure generation, storage, distribution and destruction of encryption keys.
 
-Data handling
-----------------------------------------------------------
-
-Your data is stored securely in a Splunk Observability Cloud realm that corresponds to a cloud service provider's region. See :ref:`sd-regions` for more information on regions and realms.
-
-Splunk retains Customer Content stored in its cloud computing services for at least thirty days after the expiration or termination of the subscription. See :ref:`sd-subscription` for more information.
-
-For information on data retention, see :ref:`sd-data`.
-
-Security controls and compliance
------------------------------------------------------
-
-Splunk has attained a number of compliance attestations and certifications from industry-leading auditors. See :ref:`sd-compliance` for information on compliance certifications.
-
-Realm security
------------------------------------------------------------
+.. raw:: html
+
+  <h3>Data handling</h3>
+
+
+
+.. raw:: html
+
+  <p>Your data is stored securely in a Splunk Observability Cloud realm that corresponds to a cloud service provider's region. See Available regions or realms for more information on regions and realms.</p>
+
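Because section labels such as ``.. _sd-regions:`` are removed in this change, cross-references that used them now have to name the target section in plain text, as in the paragraph above. For context, a minimal sketch of how an RST label and a ``:ref:`` cross-reference pair up::

   .. _sd-regions:

   Available regions or realms
   ===========================================================

   .. elsewhere in the docs:

   See :ref:`sd-regions` for the list of supported realms.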
+
+
+.. raw:: html
+
+  <p>Splunk retains Customer Content stored in its cloud computing services for at least thirty days after the expiration or termination of the subscription. See Subscription types, expansions, renewals, and terminations for more information.</p>
+
+
+
+.. raw:: html
+
+  <p>For information on data retention, see Data ingestion and retention.</p>
+
+
+
+.. raw:: html
+
+  <h3>Security controls and compliance</h3>
+
+
+
+.. raw:: html
+
+  <p>Splunk has attained a number of compliance attestations and certifications from industry-leading auditors. See Compliance and certifications for information on compliance certifications.</p>
+
+
+
+.. raw:: html
+
+  <h3>Realm security</h3>
+
+
 Every realm in Splunk Observability Cloud runs in a secured environment on a stable operating system and in a network that is hardened to industry standards. Realms are scanned for threats on a regular basis.
 
-User authentication and access
------------------------------------------------------
+.. raw:: html
+
+  <h3>User authentication and access</h3>
+
+
+
 You can configure authentication using Single-sign on (SSO) integrations implementing SAML 2.0, such as Ping, Okta, or AzureAD. See :ref:`sso-about` for more information.
@@ -333,24 +401,27 @@ For additional information, see:
 - :new-page:`Splunk Data Privacy & Security `
 - :new-page:`Splunk Observability Cloud Security Addendum `
 
-.. _sd-slas:
-
-Service level agreements
-===========================================================
+.. raw:: html
+
+  <h2>Service level agreements</h2>
+
+
 The :new-page:`Splunk Observability Cloud Service Level Schedule ` document describes the uptime SLA and exclusions. You may claim service credits in the event of SLA failures, as set forth in the Splunk SLA schedule.
 
+.. raw:: html
+
+  <h3>Status page</h3>
+
+
-Status page
--------------------------------------------
-
-You can check the current status of Splunk Observability Cloud realms through the :new-page:`https://status.signalfx.com ` status page. Each status page lets you subscribe to updates.
+You can check the current status of Splunk Observability Cloud realms through the :new-page:`https://status.signalfx.com ` status page. You can subscribe to updates on the status pages.
 
-
-.. _sd-compatibility:
-
-Supported browsers
-===========================================================
+.. raw:: html
+
+  <h2>Supported browsers</h2>
+
+
 Splunk Observability Cloud works as expected when using the latest and next-to-latest official releases of the following browsers:
@@ -361,21 +432,21 @@ Splunk Observability Cloud works as expected when using the latest and next-to-l
 
 See :ref:`supported-browsers` for more information.
 
-
-.. _sd-limits:
-
-System limits per product
-===========================================================
+.. raw:: html
+
+  <h2>System limits per product</h2>
+
+
 Splunk Observability Cloud service limits are described in :ref:`per-product-limits`. Service limits may vary based on your Splunk Observability Cloud subscription. Some limits depend on a combination of configuration, system load, performance, and available resources. Unless noted, the service limit is identical for all regions. Contact Splunk if your requirements are different or exceed what is recommended in :ref:`per-product-limits`.
 
-
-.. _sd-support:
-
-Technical support
-===========================================================
+.. raw:: html
+
+  <h2>Technical support</h2>
+
+
 Splunk Observability Cloud subscriptions include technical support. For more information regarding support terms and program options, see :new-page:`Splunk Support Programs `. Also note the following:
@@ -384,11 +455,11 @@ Splunk Observability Cloud subscriptions include technical support. For more inf
 
 For additional information, see :ref:`support`.
 
-
-.. _sd-auth:
-
-Users and authentication
-===========================================================
+.. raw:: html
+
+  <h2>Users and authentication</h2>
+
+
 You are responsible for creating and administering your users' accounts, the roles and capabilities assigned to them, the authentication method, and global password policies. To control what your Splunk Observability Cloud users can do, you assign them roles that have a defined set of specific capabilities. You can assign roles using Splunk Observability Cloud in the browser or through the REST API. See :ref:`users-assign-roles-ph3`.
@@ -396,7 +467,10 @@ Roles give Splunk Observability Cloud users access to features and permission to
 
 You can configure Splunk Observability Cloud to use SAML authentication for single sign-on (SSO). To use multifactor authentication, you must use a SAML 2.0 identity provider that supports multifactor authentication. Only SHA-256 signatures in the SAML message between your IdP and Splunk Observability Cloud are supported. You are responsible for the SAML configuration of your IdP including the use of SHA-256 signatures. See :ref:`sso-about`.
 
-Unified identity
----------------------------------------------------
+.. raw:: html
+
+  <h3>Unified identity</h3>
+
+
 When Splunk Cloud Platform customers purchase or start a trial of Splunk Observability Cloud, users can access both platforms using a single identity. A user's role-based access to Splunk Cloud Platform indexes carries over to Splunk Observability Cloud. Administrators can set up all users in a central location, Splunk Cloud Platform. Users can log into Splunk Observability Cloud with SSO using their Splunk Cloud Platform credentials. Users can examine logs from the Splunk Cloud Platform instance in Log Observer Connect upon provisioning with no additional setup. See :ref:`unified-id-unified-identity` for more information.
diff --git a/get-started/support.rst b/get-started/support.rst
index 81037dfaf..2be992bf7 100644
--- a/get-started/support.rst
+++ b/get-started/support.rst
@@ -11,7 +11,7 @@ Splunk Observability Cloud provides multiple ways to get help with the product.
 - Submit a case in the :new-page:`Splunk Support Portal `
 
   - Available to Splunk Observability Cloud customers
 
-  - For more information, see :ref:`support-portal`
+  - For more information, see the following section
 
 - Call :new-page:`Splunk Customer Support `
 
   - Available to Splunk Observability Cloud customers
@@ -26,11 +26,11 @@ Splunk Observability Cloud provides multiple ways to get help with the product.
 To learn about even more support options, see :new-page:`Splunk Customer Success `.
 
-
-.. _support-portal:
-
-Use the Splunk Support Portal
-===================================
+.. raw:: html
+
+  <h2>Use the Splunk Support Portal</h2>
+
+
 On March 17, 2022, the Splunk Observability Cloud (SignalFx) Support site joined the :new-page:`Splunk Support Portal `, where you can create new cases, update your open cases, and search the knowledge base. You'll continue to receive the same world-class support from your Splunk Observability Cloud support engineers.
@@ -44,15 +44,15 @@ If you have an existing Splunk Observability Cloud (SignalFx) Support site accou
 
 For example, if you select the :guilabel:`Splunk Support Portal` link on the Splunk Observability Cloud application home page or navigate to :menuselection:`Settings` then :menuselection:`Support` in the application, you aren't automatically logged in to the Splunk Support Portal.
 
-To log in to the Splunk Support Portal, you must :ref:`create a Splunk account `, if you don't already have one. You use your Splunk account credentials to log in to the Splunk Support Portal.
+To log in to the Splunk Support Portal, you must create a Splunk account if you don't already have one. You use your Splunk account credentials to log in to the Splunk Support Portal.
 
 Not sure if you have a Splunk account or can't remember your password or username? Use the :guilabel:`Forgot your password or username?` functionality on the :new-page:`Splunk Account Login page `.
 
-
-.. _create-splunk-account:
-
-Create a Splunk account
--------------------------------
+.. raw:: html
+
+  <h3>Create a Splunk account</h3>
+
+
 1. Go to the :new-page:`Create Your Account page ` and complete the form to register for a Splunk account. Make sure to sign up using your business email address.
@@ -74,6 +74,11 @@ Create a Splunk account
 
 After your Splunk Support Portal entitlements have been set, you can submit and update cases for your products.
 
+.. raw:: html
+
+  <h3>Submit a Splunk Support Portal case</h3>
+
+
 .. _submit-support-case:
diff --git a/index.rst b/index.rst
index 59f27aca4..9c819d21a 100644
--- a/index.rst
+++ b/index.rst
@@ -21,7 +21,7 @@ Learn about the basic elements of Splunk Observability Cloud and all it can do f
 .. rst-class:: newcard
 
 :strong:`Overview`
-Splunk Observability Cloud overview :ref:`welcome`
+Splunk Observability Cloud overview :ref:`overview`
 
 .. rst-class:: newcard
 
@@ -35,8 +35,8 @@ A collection of task-based tutorials to achieve a goal in Splunk Observability C
 .. rst-class:: newcard
 
-:strong:`Admin onboarding guide`
-Admin guide for onboarding Splunk Observability Cloud :ref:`admin-onboarding-guide`
+:strong:`Get started guide for admins`
+Get started guide for Splunk Observability Cloud admins :ref:`get-started-guide`
 
 .. rst-class:: newcard
 
@@ -114,12 +114,12 @@ Query logs to identify root causes :ref:`logs-intro-logconnect`
 .. rst-class:: newcard
 
 :strong:`Synthetic Monitoring`
-Proactively monitor the performance of web resources :ref:`welcome-synthmon`
+Proactively monitor the performance of web resources :ref:`intro-synthetics`
 
 .. rst-class:: newcard
 
 :strong:`All products`
-Learn more about all Splunk Observability Cloud products :ref:`welcome`
+Learn more about all Splunk Observability Cloud products :ref:`overview`
 
 .. role:: icon-wrench
 .. rst-class:: newparawithicon
 
@@ -271,42 +271,42 @@ To keep up to date with changes in the products, see the Splunk Observability Cl
    :caption: Get started
    :maxdepth: 2
 
-   get-started/welcome
+   get-started/get-started
 
 .. toctree::
-   :maxdepth: 3
+   :maxdepth: 3
 
-   Service description
+   Overview
 
 .. toctree::
-   :maxdepth: 3
+   :maxdepth: 3
 
-   Get started
+   Architecture
 
 .. toctree::
-   :maxdepth: 3
+   :maxdepth: 3
 
-   About Mobile TOGGLE
+   Service description
 
 .. toctree::
-   :maxdepth: 3
+   :maxdepth: 3
 
-   Splunk Observability Cloud architecture
+   Get started guide for admins TOGGLE
 
 .. toctree::
    :maxdepth: 3
 
-   Contribute to our documentation
+   Free and paid courses
 
 .. toctree::
    :maxdepth: 3
 
-   Free and paid courses
+   Free trial and guided onboarding
 
 .. toctree::
    :maxdepth: 3
 
-   Free trial experience
+   About Mobile TOGGLE
 
 .. toctree::
    :maxdepth: 3
 
@@ -347,11 +347,6 @@ To keep up to date with changes in the products, see the Splunk Observability Cl
    :caption: Administer Splunk Observability Cloud
    :maxdepth: 3
 
-   Admin onboarding guide TOGGLE
-
-.. toctree::
-   :maxdepth: 3
-
    admin/admin
 
 .. toctree::
 
@@ -503,87 +498,7 @@ To keep up to date with changes in the products, see the Splunk Observability Cl
    :caption: Alerts, detectors, and SLOs
    :maxdepth: 3
 
-   Introduction to alerts and detectors
-
-.. toctree::
-   :maxdepth: 3
-
-   Best practices for detectors
-
-.. toctree::
-   :maxdepth: 3
-
-   Alerts and detectors scenario library TOGGLE
-
-.. toctree::
-   :maxdepth: 3
-
-   Use and customize AutoDetect alerts and detectors TOGGLE
-
-.. toctree::
-   :maxdepth: 3
-
-   Create detectors to trigger alerts
-
-.. toctree::
-   :maxdepth: 3
-
-   alerts-detectors-notifications/detector-manage-permissions
-
-.. toctree::
-   :maxdepth: 3
-
-   Link detectors to charts
-
-.. toctree::
-   :maxdepth: 3
-
-   Manage notification subscribers
-
-.. toctree::
-   :maxdepth: 3
-
-   Preview detector alerts
-
-.. toctree::
-   :maxdepth: 3
-
-   View alerts
-
-.. toctree::
-   :maxdepth: 3
-
-   View detectors
-
-.. toctree::
-   :maxdepth: 3
-
-   Mute alert notifications
-
-.. toctree::
-   :maxdepth: 3
-
-   Auto-clear alerts
-
-.. toctree::
-   :maxdepth: 3
-
-   Troubleshoot detectors
-
-.. toctree::
-   :maxdepth: 3
-
-   Detector options
-
-.. toctree::
-   :maxdepth: 3
-
-   Built-in alert conditions TOGGLE
-
-.. toctree::
-   :maxdepth: 3
-
-   alerts-detectors-notifications/alert-message-variables-reference
+   Alerts and detectors TOGGLE
 
 .. toctree::
    :maxdepth: 3
 
@@ -927,3 +842,8 @@ To keep up to date with changes in the products, see the Splunk Observability Cl
    :maxdepth: 3
 
    Glossary
+
+.. toctree::
+   :maxdepth: 3
+
+   Contribute to our documentation
diff --git a/infrastructure/metrics-pipeline/aggregate-drop-use-case.rst b/infrastructure/metrics-pipeline/aggregate-drop-use-case.rst
index df25ed3d7..43e8b864c 100644
--- a/infrastructure/metrics-pipeline/aggregate-drop-use-case.rst
+++ b/infrastructure/metrics-pipeline/aggregate-drop-use-case.rst
@@ -10,7 +10,11 @@ Scenario: Combine aggregation and dropping rules to control your metric cardinal
 
 |hr|
 
-:strong:`Available in Enterprise Edition`. For more information, see :ref:`sd-subscriptions`.
+.. raw:: html
+
+  <p><strong>Available in Enterprise Edition.</strong> For more information, see Subscription types, expansions, renewals, and terminations.</p>
+
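The index.rst changes above rename several toctree entries, but the document targets in angle brackets are not preserved in this view. A minimal sketch of a toctree entry that pairs a display title with a document path, with the path shown here only as a placeholder::

   .. toctree::
      :maxdepth: 3

      Overview <get-started/overview>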
+
 |hr|
diff --git a/infrastructure/metrics-pipeline/data-dropping-impact.rst b/infrastructure/metrics-pipeline/data-dropping-impact.rst
index 79446bb30..3671c2656 100644
--- a/infrastructure/metrics-pipeline/data-dropping-impact.rst
+++ b/infrastructure/metrics-pipeline/data-dropping-impact.rst
@@ -11,7 +11,11 @@ Impact and benefits of archiving or dropping data
 
 |hr|
 
-:strong:`Available in Enterprise Edition`. For more information, see :ref:`sd-subscriptions`.
+.. raw:: html
+
+  <p><strong>Available in Enterprise Edition.</strong> For more information, see Subscription types, expansions, renewals, and terminations.</p>
+
+
 |hr|
diff --git a/infrastructure/metrics-pipeline/metrics-pipeline-intro.rst b/infrastructure/metrics-pipeline/metrics-pipeline-intro.rst
index a7233b2f7..5e76719e0 100644
--- a/infrastructure/metrics-pipeline/metrics-pipeline-intro.rst
+++ b/infrastructure/metrics-pipeline/metrics-pipeline-intro.rst
@@ -10,7 +10,11 @@ Introduction to metrics pipeline management
 
 |hr|
 
-:strong:`Available in Enterprise Edition`. For more information, see :ref:`sd-subscriptions`.
+.. raw:: html
+
+  <p><strong>Available in Enterprise Edition.</strong> For more information, see Subscription types, expansions, renewals, and terminations.</p>
+
+
 |hr|
diff --git a/infrastructure/metrics-pipeline/metrics-pipeline.rst b/infrastructure/metrics-pipeline/metrics-pipeline.rst
index c18c329e4..aa8b4005a 100644
--- a/infrastructure/metrics-pipeline/metrics-pipeline.rst
+++ b/infrastructure/metrics-pipeline/metrics-pipeline.rst
@@ -10,7 +10,11 @@ Metrics pipeline management in Splunk Infrastructure Monitoring
 
 |hr|
 
-:strong:`Available in Enterprise Edition`. For more information, see :ref:`sd-subscriptions`.
+.. raw:: html
+
+  <p><strong>Available in Enterprise Edition.</strong> For more information, see Subscription types, expansions, renewals, and terminations.</p>
+
+
 |hr|
diff --git a/infrastructure/metrics-pipeline/mpm-rule-agreggation.rst b/infrastructure/metrics-pipeline/mpm-rule-agreggation.rst
index c4633954e..7fb221982 100644
--- a/infrastructure/metrics-pipeline/mpm-rule-agreggation.rst
+++ b/infrastructure/metrics-pipeline/mpm-rule-agreggation.rst
@@ -9,7 +9,11 @@ Use aggregation rules to control your data volume
 
 |hr|
 
-:strong:`Available in Enterprise Edition`. For more information, see :ref:`sd-subscriptions`.
+.. raw:: html
+
+  <p><strong>Available in Enterprise Edition.</strong> For more information, see Subscription types, expansions, renewals, and terminations.</p>
+
+
 |hr|
diff --git a/infrastructure/metrics-pipeline/mpm-rule-routing.rst b/infrastructure/metrics-pipeline/mpm-rule-routing.rst
index 8500a5b68..77f4eb824 100644
--- a/infrastructure/metrics-pipeline/mpm-rule-routing.rst
+++ b/infrastructure/metrics-pipeline/mpm-rule-routing.rst
@@ -9,8 +9,12 @@ Use data routing to keep, archive, or discard your metrics
 
 |hr|
 
-:strong:`Available in Enterprise Edition`. For more information, see :ref:`sd-subscriptions`.
-
+.. raw:: html
+
+  <p><strong>Available in Enterprise Edition.</strong> For more information, see Subscription types, expansions, renewals, and terminations.</p>
+
+
+
 
 |hr|
 
 Use data routing to choose how to ingest and store all the metric time series (MTS) that have the same metric. Routing options include to keep metrics in real-time, archive them, or drop them altogether.
diff --git a/infrastructure/metrics-pipeline/use-case-archive.rst b/infrastructure/metrics-pipeline/use-case-archive.rst
index d03ed9a00..21c5fcabc 100644
--- a/infrastructure/metrics-pipeline/use-case-archive.rst
+++ b/infrastructure/metrics-pipeline/use-case-archive.rst
@@ -10,7 +10,11 @@ Scenario: Improve storage use and costs by routing and archiving your data
 
 |hr|
 
-:strong:`Available in Enterprise Edition`. For more information, see :ref:`sd-subscriptions`.
+.. raw:: html
+
+  <p><strong>Available in Enterprise Edition.</strong> For more information, see Subscription types, expansions, renewals, and terminations.</p>
+
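These availability notes are framed by the ``|hr|`` substitution. Its definition is not part of this change, so the following is only an assumed example of how such a substitution is commonly defined, for instance in ``rst_prolog`` or a shared include::

   .. |hr| raw:: html

      <hr>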
+
 |hr|
diff --git a/metrics-and-metadata/data-model.rst b/metrics-and-metadata/data-model.rst
index 19d911d71..02c8c0d6e 100644
--- a/metrics-and-metadata/data-model.rst
+++ b/metrics-and-metadata/data-model.rst
@@ -7,7 +7,7 @@ Data types in Splunk Observability Cloud
 .. meta::
   :description: Learn about the data types available in Splunk Observability Cloud: metrics, events, traces, and logs.
 
-The :ref:`Splunk Observability Cloud platform ` provides you with the tools to collect, manage, and visualize the following data types: metrics, events, logs, and traces.
+Splunk Observability Cloud provides you with the tools to collect, manage, and visualize the following data types: metrics, events, logs, and traces.
 
 With Splunk Observability Cloud's features, you'll be able to build charts and dashboards, and set up alerts and other system notification methods. This will help you better understand the performance of your systems and services, detect anomalies, or plan deployments and enhancements.
diff --git a/scenarios-tutorials/scenario-landing.rst b/scenarios-tutorials/scenario-landing.rst
index 7b11cc2fc..d5ce89f77 100644
--- a/scenarios-tutorials/scenario-landing.rst
+++ b/scenarios-tutorials/scenario-landing.rst
@@ -23,7 +23,7 @@ This is the collection of scenarios available for Splunk Observability Cloud. Us
    * - :strong:`Category`
      - :strong:`Scenario`
-   * - :ref:`Splunk Observability Cloud `
+   * - :ref:`Splunk Observability Cloud `
      - :ref:`scenario-security`
    * - :ref:`OpenTelemetry `
      - :ref:`otel-collector-scenario`
diff --git a/scenarios-tutorials/tutorial-landing.rst b/scenarios-tutorials/tutorial-landing.rst
index 80e652486..647602f8c 100644
--- a/scenarios-tutorials/tutorial-landing.rst
+++ b/scenarios-tutorials/tutorial-landing.rst
@@ -18,4 +18,4 @@ This is the collection of tutorials available for Splunk Observability Cloud. Us
 
 For specific scenarios and use cases, see :ref:`scenario-landing`.
 
-For an overview of Splunk Observability Cloud and how to send your data in, go to :ref:`welcome` and :ref:`get-started-get-data-in`.
+For an overview of Splunk Observability Cloud and how to send your data in, go to :ref:`overview` and :ref:`get-started-get-data-in`.
diff --git a/synthetics/set-up-synthetics/set-up-synthetics.rst b/synthetics/set-up-synthetics/set-up-synthetics.rst
index 5f5521428..758d63453 100644
--- a/synthetics/set-up-synthetics/set-up-synthetics.rst
+++ b/synthetics/set-up-synthetics/set-up-synthetics.rst
@@ -72,6 +72,8 @@ The following table outlines which test might work for the scenario you want to
 
    * Multiple step API transactions
 
+.. _setup-first-test:
+
 Set up your first test
 ==============================
 
 After you choose which type of test you want to use, follow these steps to set up your test:
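The ``.. _setup-first-test:`` anchor added above lets other pages link directly to this procedure. A minimal sketch of such a cross-reference, with the link text chosen here only as an illustration::

   For a walkthrough, see :ref:`Set up your first test <setup-first-test>`.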