diff --git a/_includes/gdi/troubleshoot-zeroconfig-k8s.rst b/_includes/gdi/troubleshoot-zeroconfig-k8s.rst
index b714ce6f8..ca83e31ff 100644
--- a/_includes/gdi/troubleshoot-zeroconfig-k8s.rst
+++ b/_includes/gdi/troubleshoot-zeroconfig-k8s.rst
@@ -21,17 +21,6 @@ Examine logs to make sure that the operator and cert manager are working.
 
 * ``kubectl logs -l app=cainjector``
 * ``kubectl logs -l app=webhook``
 
-Resolve certificate manager issues
-----------------------------------------
-
-A hanging operator can indicate issues with the certificate manager.
-
-* Check the logs of your cert-manager pods.
-* Restart the cert-manager pods.
-* Ensure that your cluster has only one instance of cert-manager. This includes ``certmanager``, ``certmanager-cainjector``, and ``certmanager-webhook``.
-
-See the official cert manager troubleshooting guide for more information: :new-page:`https://cert-manager.io/docs/troubleshooting/`.
-
 Validate certificates
 ---------------------------
diff --git a/_includes/requirements/collector-linux.rst b/_includes/requirements/collector-linux.rst
index 25bd31cb3..8883feb27 100644
--- a/_includes/requirements/collector-linux.rst
+++ b/_includes/requirements/collector-linux.rst
@@ -2,7 +2,7 @@ The Collector supports the following Linux distributions and versions:
 
 * Amazon Linux: 2, 2023. Log collection with Fluentd is not currently supported for Amazon Linux 2023.
 * CentOS, Red Hat, or Oracle: 7, 8, 9
-* Debian: 9, 10, 11
+* Debian: 11, 12
 * SUSE: 12, 15 for version 0.34.0 or higher. Log collection with Fluentd is not currently supported.
 * Ubuntu: 16.04, 18.04, 20.04, 22.04, and 24.04
 * Rocky Linux: 8, 9
diff --git a/admin/authentication/SSO/sso.rst b/admin/authentication/SSO/sso.rst
index 1c82f70c9..1f65cd731 100644
--- a/admin/authentication/SSO/sso.rst
+++ b/admin/authentication/SSO/sso.rst
@@ -84,7 +84,9 @@ for an Okta login service integration.
 
-When you set up SSO, the default role for a user signing in to Splunk Observability Cloud through SSO is the :guilabel:`power` role. You can change the default SSO role to any of the available roles in Splunk Observability Cloud.
+When you set up SSO, the default role for a user signing in to Splunk Observability Cloud through SSO is the :guilabel:`power` role. You can change the default SSO role to any of the available roles in Splunk Observability Cloud. These are :guilabel:`admin`, :guilabel:`power`, :guilabel:`usage`, and :guilabel:`read_only`. To learn more about roles, see :ref:`roles-and-capabilities`.
+
+.. note:: Changing the default SSO role affects only new SSO users. If a user already has an existing role defined by the previous default SSO role, you must change it manually. To change a user's role, see :ref:`assign-role-existing`.
 
 To change the default SSO role, do the following:
diff --git a/admin/user-management/roles/users-assign-roles.rst b/admin/user-management/roles/users-assign-roles.rst
index dff5fac1e..c44a3920b 100644
--- a/admin/user-management/roles/users-assign-roles.rst
+++ b/admin/user-management/roles/users-assign-roles.rst
@@ -39,6 +39,8 @@ To assign roles when inviting new users, follow these steps:
 
 #. Select :guilabel:`Send Invitation` to confirm.
 
+.. _assign-role-existing:
+
 Assign roles to an existing user
 =====================================
 
@@ -47,7 +49,7 @@ To assign roles to a user that's already a member of your organization, follow t
 
 #. From the left navigation menu, select :menuselection:`Settings` then :menuselection:`Users`.
 #. Find the name of the user.
 #. Select the :guilabel:`Actions` (|verticaldots|) menu icon next to the username, then select :menuselection:`Manage Roles`.
-#. In the :guilabel:`Manage Roles` dialog box, select one or more of the available roles, then select the right-pointing arrow to move the roles to the :guilabel:`Selected Roles` panel.
+#. In the :guilabel:`Manage Roles` dialog box, select one or more of the available roles. Make sure to deselect any roles, including the default SSO role, :guilabel:`power`, if you no longer want the user to have that role. Select the right-pointing arrow to move the roles to the :guilabel:`Selected Roles` panel.
 #. Select :guilabel:`Assign Roles` to confirm.
 
 .. note:: You can use the :guilabel:`Add All` link to add all available roles to a user.
diff --git a/gdi/opentelemetry/automatic-discovery/k8s/k8s-backend.rst b/gdi/opentelemetry/automatic-discovery/k8s/k8s-backend.rst
index 95c1db9ef..fdfa49ee0 100644
--- a/gdi/opentelemetry/automatic-discovery/k8s/k8s-backend.rst
+++ b/gdi/opentelemetry/automatic-discovery/k8s/k8s-backend.rst
@@ -87,25 +87,14 @@ Populate values.yaml with the following fields and values:
 
   operatorcrds:
     install: true
 
-You might need to populate the file with additional values depending on your environment. See :ref:`k8s-auto-discovery-add-certificates` and :ref:`k8s-auto-discovery-setup-traces` for more information.
+You might need to populate the file with additional values depending on your environment. See :ref:`k8s-auto-discovery-add-crds` and :ref:`k8s-auto-discovery-setup-traces` for more information.
 
-.. _k8s-auto-discovery-add-certificates:
+.. _k8s-auto-discovery-add-crds:
 
-Add certificates and OpenTelemetry CRDs
+Add OpenTelemetry CRDs
 ------------------------------------------
 
-The Operator requires certain TLS certificates to work. Use the following command to check whether a certificate manager is available:
-
-.. code-block:: yaml
-
-   # Check if cert-manager is already installed, don't deploy a second cert-manager.
-   kubectl get pods -l app=certmanager --all-namespaces
-
-If a certificate manager isn't available in the cluster, add ``certmanager.enabled=true`` to your values.yaml file.
-
-The Operator for Kubernetes also requires you to install OpenTelemetry Custom Resource Definitions (CRDs). To do this, add ``operatorcrds.install=true`` to your values.yaml file.
-
-The following example YAML includes ``certmanager.enabled=true`` and ``operatorcrds.install=true``:
+The Operator for Kubernetes requires you to install OpenTelemetry Custom Resource Definitions (CRDs). To do this, add ``operatorcrds.install=true`` to your values.yaml file:
 
 .. code-block:: yaml
    :emphasize-lines: 7,8
 
@@ -115,9 +104,7 @@ The following example YAML includes ``certmanager.enabled=true`` and ``operatorc
 
   splunkObservability:
     realm:
     accessToken:
-
-  certmanager:
-    enabled: true
+
   operator:
     enabled: true
   operatorcrds:
@@ -248,7 +235,7 @@ Verify all the OpenTelemetry resources are deployed successfully
 
 Resources include the Collector, the Operator, webhook, and instrumentation. Run the following commands to verify the resources are deployed correctly.
 
-The pods running in the collector namespace must include the following:
+The pods running in the Collector namespace must include the following:
 
 .. code-block:: yaml
 
@@ -256,19 +243,15 @@ The pods running in the collector namespace must include the following:
   # NAME READY
   # NAMESPACE NAME READY STATUS
   # monitoring splunk-otel-collector-agent-lfthw 2/2 Running
-  # monitoring splunk-otel-collector-cert-manager-6b9fb8b95f-2lmv4 1/1 Running
-  # monitoring splunk-otel-collector-cert-manager-cainjector-6d65b6d4c-khcrc 1/1 Running
-  # monitoring splunk-otel-collector-cert-manager-webhook-87b7ffffc-xp4sr 1/1 Running
   # monitoring splunk-otel-collector-k8s-cluster-receiver-856f5fbcf9-pqkwg 1/1 Running
   # monitoring splunk-otel-collector-opentelemetry-operator-56c4ddb4db-zcjgh 2/2 Running
 
-The webhooks in the collector namespace must include the following:
+The webhooks in the Collector namespace must include the following:
 
 .. code-block:: yaml
 
   kubectl get mutatingwebhookconfiguration.admissionregistration.k8s.io
   # NAME WEBHOOKS AGE
-  # splunk-otel-collector-cert-manager-webhook 1 14m
   # splunk-otel-collector-opentelemetry-operator-mutation 3 14m
 
 The instrumentation in the collector namespace must include the following:
diff --git a/gdi/opentelemetry/automatic-discovery/k8s/k8s-java-traces-tutorial/config-k8s-for-java.rst b/gdi/opentelemetry/automatic-discovery/k8s/k8s-java-traces-tutorial/config-k8s-for-java.rst
index 6bf202970..0e9e13811 100644
--- a/gdi/opentelemetry/automatic-discovery/k8s/k8s-java-traces-tutorial/config-k8s-for-java.rst
+++ b/gdi/opentelemetry/automatic-discovery/k8s/k8s-java-traces-tutorial/config-k8s-for-java.rst
@@ -53,9 +53,6 @@ Now, you need to configure Helm to correctly install the Splunk Distribution of
 
    * - ``environment``
      - ``prd`` or your desired environment name
      - Tags data that the application sends to Splunk Observability Cloud, allowing you to see the data in Splunk APM
-   * - ``certmanager.enabled``
-     - ``true``
-     - Activates the certification manager for Helm
    * - ``operatorcrds.install``
      - ``true``
      - Installs the CRDs used by the OpenTelemetry Kubernetes Operator
diff --git a/gdi/opentelemetry/collector-kubernetes/kubernetes-config.rst b/gdi/opentelemetry/collector-kubernetes/kubernetes-config.rst
index f63102429..02d222e82 100644
--- a/gdi/opentelemetry/collector-kubernetes/kubernetes-config.rst
+++ b/gdi/opentelemetry/collector-kubernetes/kubernetes-config.rst
@@ -255,13 +255,6 @@ For the Operator:
 
   operator:
     enabled: true
 
-Additionally, deploy the cert-manager for the Operator if it hasn't been already.
-
-.. code-block:: yaml
-
-   certmanager:
-     enabled: true
-
 With the above configuration:
 
 * The Collector is set up to receive profiling data.
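Taken together, the hunks above mean users no longer set ``certmanager.enabled`` at all: the only Operator-related keys left in values.yaml are ``operator.enabled`` and ``operatorcrds.install``. A minimal values.yaml sketch of the resulting configuration, for reviewers to sanity-check the docs against — the realm, token, and cluster name are placeholders, and the exact key layout may vary by chart version:

```yaml
# Hypothetical minimal values.yaml after the cert-manager block is removed.
# <realm>, <access_token>, and <cluster_name> are placeholders, not real values.
clusterName: <cluster_name>
splunkObservability:
  realm: <realm>
  accessToken: <access_token>

# The Operator and its CRDs are now the only opt-in pieces;
# no certmanager: block is required.
operator:
  enabled: true
operatorcrds:
  install: true
```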
diff --git a/gdi/opentelemetry/collector-kubernetes/kubernetes-helm-releases.rst b/gdi/opentelemetry/collector-kubernetes/kubernetes-helm-releases.rst
index 0f30064b5..ac13711bf 100644
--- a/gdi/opentelemetry/collector-kubernetes/kubernetes-helm-releases.rst
+++ b/gdi/opentelemetry/collector-kubernetes/kubernetes-helm-releases.rst
@@ -26,8 +26,6 @@ Optional releases for subcharts
 
 * :new-page:`https://github.com/open-telemetry/opentelemetry-helm-charts/releases`
 
-* :new-page:`https://github.com/cert-manager/cert-manager/releases`
-
 .. _helm-chart-images:
 
 Helm chart images
@@ -50,11 +48,6 @@ Optional add-on feature images
 
 * :new-page:`https://quay.io/signalfx/splunk-otel-collector-windows`
 * :new-page:`https://registry.access.redhat.com`
 * :new-page:`https://ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator`
-* :new-page:`https://quay.io/jetstack/cert-manager-controller`
-* :new-page:`https://quay.io/jetstack/cert-manager-acmesolver`
-* :new-page:`https://quay.io/jetstack/cert-manager-webhook`
-* :new-page:`https://quay.io/jetstack/cert-manager-cainjector`
-* :new-page:`https://quay.io/jetstack/cert-manager-ctl`
 * :new-page:`https://ghcr.io/signalfx/splunk-otel-java/splunk-otel-java`
 * :new-page:`https://ghcr.io/signalfx/splunk-otel-js/splunk-otel-js`
 * :new-page:`https://ghcr.io/signalfx/splunk-otel-dotnet/splunk-otel-dotnet`
diff --git a/gdi/opentelemetry/collector-kubernetes/metrics-ootb-k8s.rst b/gdi/opentelemetry/collector-kubernetes/metrics-ootb-k8s.rst
index 2d9650f25..d3fe55962 100644
--- a/gdi/opentelemetry/collector-kubernetes/metrics-ootb-k8s.rst
+++ b/gdi/opentelemetry/collector-kubernetes/metrics-ootb-k8s.rst
@@ -286,26 +286,6 @@ Pod level metrics and dimensions
      -
      - Yes
 
-   * - ``k8s.statefulset.desired_pods``
-     - Desired number of StatefulSets in the pod
-     -
-     - Yes
-
-   * - ``k8s.statefulset.current_pods``
-     - Current number of StatefulSets in the pod
-     -
-     - Yes
-
-   * - ``k8s.statefulset.ready_pods``
-     - Number of ready StatefulSets in the pod
-     -
-     - Yes
-
-   * - ``k8s.statefulset.updated_pods``
-     - Number of updated StatefulSets in the pod
-     -
-     - Yes
-
 Node level metrics and dimensions
 ============================================================================
@@ -653,6 +633,18 @@ Other available metrics include:
 
    * - ``k8s.daemonset.ready_nodes``
      - Yes
 
+   * - ``k8s.statefulset.desired_pods``
+     - Yes
+
+   * - ``k8s.statefulset.current_pods``
+     - Yes
+
+   * - ``k8s.statefulset.ready_pods``
+     - Yes
+
+   * - ``k8s.statefulset.updated_pods``
+     - Yes
+
    * - ``k8s.hpa.max_replicas``
      - Yes
diff --git a/gdi/opentelemetry/collector-linux/deployments-linux-ansible.rst b/gdi/opentelemetry/collector-linux/deployments-linux-ansible.rst
index 5468c9ff5..a4463213d 100644
--- a/gdi/opentelemetry/collector-linux/deployments-linux-ansible.rst
+++ b/gdi/opentelemetry/collector-linux/deployments-linux-ansible.rst
@@ -2,7 +2,7 @@
 .. _linux-ansible:
 
 ********************************************************
-Deploy the Collector with Ansible for Linux
+Deploy the Collector for Linux with Ansible
 ********************************************************
 
 .. meta::
@@ -16,7 +16,7 @@ The following Linux distributions and versions are supported:
 
 * Amazon Linux: 2, 2023. Log collection with Fluentd isn't supported for Amazon Linux 2023.
 * CentOS, Red Hat, or Oracle: 7, 8, 9
-* Debian: 9, 10, 11
+* Debian: 11, 12
 * SUSE: 12, 15 for Collector version 0.34.0 or higher. Log collection with Fluentd isn't supported.
 * Ubuntu: 16.04, 18.04, 20.04, and 22.04
diff --git a/gdi/opentelemetry/collector-linux/deployments-linux-chef.rst b/gdi/opentelemetry/collector-linux/deployments-linux-chef.rst
index a406dd2e1..0fa56b6cd 100644
--- a/gdi/opentelemetry/collector-linux/deployments-linux-chef.rst
+++ b/gdi/opentelemetry/collector-linux/deployments-linux-chef.rst
@@ -30,7 +30,7 @@ The following Linux distributions and versions:
 
 * Amazon Linux: 2
 * CentOS, Red Hat, Oracle: 7, 8, 9
-* Debian: 9, 10, 11
+* Debian: 11, 12
 * SUSE: 12, 15 (Note: Only for Collector versions 0.34.0 or higher. Log collection with Fluentd not currently supported.)
 * Ubuntu: 18.04, 20.04, 22.04
diff --git a/gdi/opentelemetry/collector-linux/deployments-linux-puppet.rst b/gdi/opentelemetry/collector-linux/deployments-linux-puppet.rst
index e48caa270..2fffb150e 100644
--- a/gdi/opentelemetry/collector-linux/deployments-linux-puppet.rst
+++ b/gdi/opentelemetry/collector-linux/deployments-linux-puppet.rst
@@ -2,7 +2,7 @@
 .. _linux-puppet:
 
 ********************************************************
-Deploy the Collector with Puppet for Linux
+Deploy the Collector for Linux with Puppet
 ********************************************************
 
 .. meta::
@@ -14,7 +14,7 @@ Currently, we support the following Linux distributions and versions:
 
 - Amazon Linux: 2, 2023. Log collection with Fluentd isn't supported for Amazon Linux 2023.
 - CentOS / Red Hat / Oracle: 7, 8, 9
-- Debian: 9, 10, 11
+- Debian: 11, 12
 - SUSE: 12, 15 (Note: Only applicable for Collector versions v0.34.0 or higher. Log collection with Fluentd not currently supported.)
 - Ubuntu: 16.04, 18.04, 20.04, 22.04
diff --git a/gdi/opentelemetry/collector-linux/deployments-linux-salt.rst b/gdi/opentelemetry/collector-linux/deployments-linux-salt.rst
index 9fd071bee..031ec76ef 100644
--- a/gdi/opentelemetry/collector-linux/deployments-linux-salt.rst
+++ b/gdi/opentelemetry/collector-linux/deployments-linux-salt.rst
@@ -30,7 +30,7 @@ The following Linux distributions and versions are supported:
 
 * Amazon Linux: 2, 2023. Log collection with Fluentd isn't supported for Amazon Linux 2023.
 * CentOS, Red Hat, Oracle: 7, 8, 9
-* Debian: 9, 10, 11
+* Debian: 11, 12
 * SUSE: 12, 15 (Note: Only for Collector versions 0.34.0 or higher. Log collection with Fluentd not currently supported.)
 * Ubuntu: 18.04, 20.04, 22.04
 
@@ -113,7 +113,7 @@ For Linux, the formula accepts the attributes described in the following table:
      - ``false``
    * - ``td_agent_version``
      - Version of the td-agent (Fluentd) package to install
-     - ``3.7.1-0`` for Debian 9 and ``4.3.0`` for other distros
+     - ``4.3.0``
    * - ``splunk_fluentd_config``
      - The path to the Fluentd configuration file on the remote host.
      - ``/etc/otel/collector/fluentd/fluent.conf``
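Several hunks above drop Debian 9 and 10 from the supported lists (Ansible, Chef, Puppet, and Salt), so deployment tooling that targets Debian hosts may want to guard against unsupported versions before installing. The following is a hypothetical sketch only — the ``supported_debian`` helper is not part of any Splunk tooling — of such a check against the ``VERSION_ID`` field that ``/etc/os-release`` provides on Debian:

```shell
# Hypothetical helper: report whether a Debian VERSION_ID is still
# supported by current Collector packages (Debian 11 and 12, per the
# version lists above). Not part of any official Splunk tooling.
supported_debian() {
  case "$1" in
    11|12) echo "supported" ;;
    *)     echo "unsupported" ;;
  esac
}

# On a real Debian host you would source /etc/os-release first, e.g.:
#   . /etc/os-release && supported_debian "$VERSION_ID"
supported_debian 12   # prints "supported"
supported_debian 9    # prints "unsupported"
```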