This repository was archived by the owner on Sep 2, 2025. It is now read-only.

Commit c236d68

Merge branch 'main' into gschatz-syn-kpis
2 parents 79b783e + 7a5818d commit c236d68

File tree: 19 files changed, +138 −978 lines changed

admin/authentication/authentication-tokens/org-tokens.rst

Lines changed: 2 additions & 1 deletion
@@ -237,7 +237,7 @@ To rotate an access token with the API, use the ``POST /token/{name}/rotate`` en

     .. code-block:: bash

-        curl -X POST "https://api.{realm}.signalfx.com/v2/token/{name}/rotate?graceful={gracePeriod}" \
+        curl -X POST "https://api.{realm}.signalfx.com/v2/token/{name}/rotate?graceful={gracePeriod}&secondsUntilExpiry={secondsUntilExpiry}" \
             -H "Content-type: application/json" \
             -H "X-SF-TOKEN: <your-user-session-api-token-value>"
@@ -247,6 +247,7 @@ Follow these steps:

 #. Enter your API session token in the ``your-user-session-api-token-value`` field. To find or create an API session token, see :ref:`admin-api-access-tokens`.
 #. Provide the name of the token you want to rotate in the ``name`` field.
 #. Optionally, provide a grace period, in seconds, in the ``gracePeriod`` field.
+#. Optionally, provide the number of seconds until your token expires in the ``secondsUntilExpiry`` field. This can be any value between 0 and 5,676,000,000 seconds (180 years), inclusive. If left unspecified, the token remains valid for 30 days.
 #. Call the API endpoint to rotate the token.

 For example, the following API call rotates ``myToken`` and sets a grace period of 604800 seconds (7 days) before the previous token secret expires.
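To make the completed request concrete, here is a hedged sketch of such a call. The realm ``us0`` and the numeric values are illustrative placeholders, not values from the changed file:

.. code-block:: bash

   # Illustrative values: realm us0, graceful=604800 (7 days),
   # secondsUntilExpiry=2592000 (30 days)
   curl -X POST "https://api.us0.signalfx.com/v2/token/myToken/rotate?graceful=604800&secondsUntilExpiry=2592000" \
       -H "Content-type: application/json" \
       -H "X-SF-TOKEN: <your-user-session-api-token-value>"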

apm/apm-spans-traces/span-formats.rst

Lines changed: 0 additions & 15 deletions
@@ -98,19 +98,4 @@ For more information on the ingest API endpoints, see :new-page:`Send APM traces

 .. note:: You can also send trace data in OTLP format directly to Splunk Observability Cloud using the gRPC endpoint, either directly or from an OpenTelemetry Collector. See :ref:`grpc-data-ingest`.

-.. _apm-formats-smart-agent:
-
-Span formats compatible with the Smart Agent (deprecated)
-============================================================
-
-The Smart Agent can receive the following span formats with the ``signalfx-forwarder`` monitor:
-
-- Jaeger: gRPC and Thrift
-- Zipkin v1, v2 JSON
-
-The Smart Agent can export the following span formats using the ``writer`` exporter:
-
-- Zipkin v1, v2 JSON
-- SAPM
-
-To configure the Smart Agent for Splunk APM, see :ref:`smart-agent`.

gdi/monitors-databases/postgresql.rst

Lines changed: 1 addition & 2 deletions
@@ -73,8 +73,7 @@ The following table shows the configuration options for the

      - no
      - ``list of strings``
      - List of databases to send database-specific metrics about. If
-       omitted, metrics about all databases will be sent. This is an
-       :ref:`overridable set <filtering-smart-agent>`
+       omitted, metrics about all databases will be sent.
        (**default:** ``[*]``)
    -
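For context, the ``databases`` option belongs in the monitor's YAML configuration. A minimal sketch, assuming the ``smartagent`` receiver form these monitor pages use; the connection details are illustrative:

.. code-block:: yaml

   receivers:
     smartagent/postgresql:
       type: postgresql
       host: localhost   # illustrative connection details
       port: 5432
       # If omitted, metrics are sent for all databases (default: ['*'])
       databases:
         - orders
         - inventory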

gdi/monitors-hosts/filesystems.rst

Lines changed: 2 additions & 4 deletions
@@ -105,8 +105,7 @@ The following table shows the configuration options for this monitor.

      - ``fsTypes``
      - no
      - ``list of strings``
-     - The filesystem types to include. This is an
-       :ref:`overridable set <filtering-smart-agent>` If this is
+     - The filesystem types to include. If this is
        not set, the default value is the set of all
        **non-logical/virtual filesystems** on the system. On Linux
        this list is determined by reading the ``/proc/filesystems``
@@ -117,8 +116,7 @@ The following table shows the configuration options for this monitor.
      - ``mountPoints``
      - no
      - ``list of strings``
-     - The mount paths to include/exclude. This is an
-       :ref:`overridable set <filtering-smart-agent>` **Note**:
+     - The mount paths to include/exclude. **Note**:
        If you are using the hostFSPath option, do not include the
        ``/hostfs/`` mount in the filter. If both this and ``fsTypes``
        are specified, the two filters combine in an AND relationship.
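To make the AND relationship concrete, a hedged sketch, again assuming the ``smartagent`` receiver form; the specific types and paths are illustrative. A filesystem is reported only if its type matches ``fsTypes`` and its mount path matches ``mountPoints``:

.. code-block:: yaml

   receivers:
     smartagent/filesystems:
       type: filesystems
       # Both filters apply (AND): report only ext4/xfs filesystems
       # that are mounted at /data or /var
       fsTypes:
         - ext4
         - xfs
       mountPoints:
         - /data
         - /var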

gdi/monitors-network/net-io.rst

Lines changed: 1 addition & 2 deletions
@@ -68,8 +68,7 @@ integration:

      - ``interfaces``
      - no
      - ``list of strings``
-     - The network interfaces to send metrics about. This is an
-       :ref:`overridable set <filtering-smart-agent>`
+     - The network interfaces to send metrics about.
        (**default:**
        ``[* !/^lo\d*$/ !/^docker.*/ !/^t(un|ap)\d*$/ !/^veth.*$/ !/^Loopback*/]``)
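For illustration, a hedged sketch of overriding that default filter, assuming the ``smartagent`` receiver form; this hypothetical block reports metrics for ``eth0`` only:

.. code-block:: yaml

   receivers:
     smartagent/net-io:
       type: net-io
       # Replaces the default interface filter shown above
       interfaces:
         - eth0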

gdi/opentelemetry/collector-addon/collector-addon-configure-instance.rst

Lines changed: 62 additions & 0 deletions
@@ -0,0 +1,62 @@
+.. _collector-addon-configure-instance:
+.. _collector-addon-mode:
+
+*********************************************************************************************
+Configure the deployment mode of your Splunk Add-on Collector instance
+*********************************************************************************************
+
+.. meta::
+   :description: Configure the deployment mode of the Technical Add-on OpenTelemetry Collector instance
+
+.. toctree::
+   :maxdepth: 5
+   :hidden:
+
+The OpenTelemetry Collector has different :ref:`deployment modes <otel-deployment-mode>`:
+
+* Host monitoring (agent): This is the default and simplest configuration. When configured as an agent, the Splunk Add-on for the OpenTelemetry Collector sends data to Splunk Observability Cloud.
+
+* Data forwarding (gateway): When configured as a gateway, your Splunk Add-on for the OpenTelemetry Collector collects data from one or more agents before forwarding it to Splunk Observability Cloud.
+
+* Agent that sends data to a gateway: To use a gateway instance, create one or more instances of the Splunk Add-on for the OpenTelemetry Collector as agents that send data to that gateway instance.
+
+.. _collector-addon-mode-agent:
+
+Deploy the Splunk Add-on for the OpenTelemetry Collector as an agent
+============================================================================================================================================
+
+As an agent, the OpenTelemetry Collector sends data directly to Splunk Observability Cloud. This is the default configuration. Learn more at :ref:`collector-agent-mode`.
+
+If your instance is not configured as an agent and you want to configure it as one, edit your inputs.conf file and update the variable ``Splunk_config`` to reflect your agent configuration file name. You can find this file in your directory at ``/otelcol/config/``. The default file name is ``ta-agent-config.yaml``. If you are using a custom configuration file, provide that file name.
+
+.. _collector-addon-mode-gateway:
+
+Deploy the Splunk Add-on for the OpenTelemetry Collector as a gateway
+============================================================================================================================================
+
+If deployed as a gateway, the Collector instance can collect data from one or more Collector instances deployed as agents. The gateway instance then sends that data to Splunk Observability Cloud. Learn more at :ref:`collector-gateway-mode`.
+
+To configure your Splunk Add-on for the OpenTelemetry Collector as a gateway:
+
+#. Edit your inputs.conf file to update the variable ``Splunk_config`` with your gateway configuration file name. You can find this file in your directory at ``/otelcol/config/``. The default file name for the gateway file is ``ta-gateway-config.yaml``. If you are using a custom configuration file, provide that file name.
+
+#. In ``local/inputs.conf``, set the ``splunk_listen_interface`` value to ``0.0.0.0`` or to the specific IP address that sends data to this gateway.
+
+.. caution:: You must also configure one or more Collector instances as agents that send data to your new gateway.
+
+.. _collector-addon-mode-send:
+
+Configure the Splunk Add-on for the OpenTelemetry Collector as an agent that sends data to a gateway
+============================================================================================================================================
+
+You can set up one or more Collector instances as agents that send data to another instance that is set up as a gateway. See more at :ref:`collector-agent-to-gateway`.
+
+To do this, configure an instance that works as a gateway, and then one or more instances that operate as agents:
+
+#. Create your gateway, if you have not already done so. See :ref:`collector-addon-mode-gateway` for more information.
+
+#. Edit your inputs.conf file to update the variable ``Splunk_config`` to reflect your agent-to-gateway configuration file name. You can find the default configuration file in your directory at ``/otelcol/config/``. The default file name for this configuration file is ``ta-agent-to-gateway-config.yaml``. If you are using a custom configuration file, provide that file name.
+
+#. In the README directory, open ``inputs.conf.spec`` and copy the attribute for ``splunk_gateway_url``.
+
+#. Paste this value into ``ta-agent-to-gateway-config.yaml`` and then update the value for this setting with the gateway IP address.
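Taken together, these steps boil down to a few lines in ``local/inputs.conf``. The following is a hedged sketch, not the add-on's shipped file: the stanza name is hypothetical, and only ``Splunk_config``, ``splunk_listen_interface``, and ``splunk_gateway_url`` come from the text above, so verify the exact key spellings against ``inputs.conf.spec`` in the README directory.

.. code-block:: text

   # local/inputs.conf (illustrative only; hypothetical stanza name)
   [Splunk_TA_otel://otel]
   # Agent-to-gateway mode: point the add-on at that configuration file
   Splunk_config = ta-agent-to-gateway-config.yaml
   # On the gateway instance: listen on all interfaces, or a specific IP
   splunk_listen_interface = 0.0.0.0
   # On agent instances: the gateway address, copied per inputs.conf.spec
   splunk_gateway_url = <gateway-IP-address>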

gdi/opentelemetry/collector-addon/collector-addon-install.rst

Lines changed: 1 addition & 54 deletions
@@ -11,7 +11,7 @@ Install the Technical Add-on for the Splunk OpenTelemetry Collector
    :maxdepth: 5
    :hidden:

-You can install the Splunk Add-on for the OpenTelemetry Collector to a :ref:`single <collector-addon-install-uf>` or to :ref:`multiple <collector-addon-install-server>` universal forwarder instances.
+You can download the Splunk Add-on for the OpenTelemetry Collector from :new-page:`Splunkbase <https://splunkbase.splunk.com/app/7125>` and install it to a :ref:`single <collector-addon-install-uf>` or to :ref:`multiple <collector-addon-install-server>` universal forwarder instances.

 The following applies:

@@ -94,56 +94,3 @@ Follow these steps to install the Splunk Add-on for the OpenTelemetry Collector

 #. In :guilabel:`Splunk Infrastructure Monitoring`, navigate to the host where you deployed the Splunk Add-on for the OpenTelemetry Collector and select it to explore its metrics and status. For more information, see :ref:`use-navigators-imm`.

-.. _collector-addon-mode:
-
-Configure the deployment mode of your Splunk Add-on Collector instance
-============================================================================================================================================
-
-The OpenTelemetry Collector has different :ref:`deployment modes <otel-deployment-mode>`:
-
-* Host monitoring (agent): This is the default value and the simplest configuration. The Splunk Add-on for the OpenTelemetry Collector, when configured as an agent, sends data to Splunk Observability Cloud.
-
-* Data forwarding (gateway): When configured as a gateway, your Splunk Add-on for the OpenTelemetry Collector collects data from one or more agents before forwarding it to Splunk Observability Cloud.
-
-* As an agent that sends data to a gateway: To use a gateway instance, you create one or more instances of Splunk add-on for the OpenTelemetry Collector as agents that send data to that gateway instance.
-
-.. _collector-addon-mode-agent:
-
-Deploy the Splunk Add-on for the OpenTelemetry Collector as an agent
-------------------------------------------------------------------------------------------------------------------------
-
-As an agent, the OpenTelemetry Collector sends data directly to Splunk Observability Cloud. This is the default configuration. Learn more at :ref:`collector-agent-mode`.
-
-If your instance is not configured as an agent and you want to configure it as an agent, edit your inputs.conf file and update the variable ``Splunk_config`` to reflect your agent configuration file name. You can find this file in your directory at ``/otelcol/config/``. The default file name is ``ta-agent-config.yaml``. If you are using a custom configuration file, provide that file name.
-
-.. _collector-addon-mode-gateway:
-
-Deploy the Splunk Add-on for the OpenTelemetry Collector as a gateway
-------------------------------------------------------------------------------------------------------------------------
-
-If deployed as a gateway, the Collector instance can collect data from one or more Collector instances deployed as agents. The gateway instance then sends that data to Splunk Observability Cloud. Learn more at :ref:`collector-gateway-mode`.
-
-To configure your Splunk Add-on for OpenTelemetry Collector as a gateway:
-
-#. Edit your inputs.conf file to update the variable ``Splunk_config`` with your gateway configuration file name. You can find this file in your directory at ``/otelcol/config/``. The default file name for the gateway file is ``ta-gateway-config.yaml``. If you are using a custom configuration file, provide that file name.
-
-#. Set the ``splunk_listen_interface`` value to ``0.0.0.0`` or to the specific IP address that sends data to this gateway in ``local/inputs.conf``.
-
-.. caution:: You must also configure one or more Collector instances as agents that send data to your new gateway.
-
-.. _collector-addon-mode-send:
-
-Configure Splunk Add-on for OpenTelemetry Collector as an agent that sends data to a gateway
-------------------------------------------------------------------------------------------------------------------------
-
-You can set up one or more Collector instances as agents that send data to another instance that is set up as a gateway. See more at :ref:`collector-agent-to-gateway`.
-
-To do this configure an instance that works as a gateway, and then one or more instances that operate as agents:
-
-#. Create your gateway, if you have not already done so. See :ref:`collector-addon-mode-gateway` for more information.
-
-#. Edit your inputs.conf file to update the variable ``Splunk_config`` to reflect your gateway configuration file name. You can find the default configuration file in your directory at ``/otelcol/config/``. The default file name for this configuration file is ``ta-agent-to-gateway-config.yaml``. If you are using a custom configuration file, provide that file name.
-
-#. In the README directory, open ``inputs.conf.spec`` and copy the attribute for the ``splunk_gateway_url``.
-
-#. Paste this value into ``ta-agent-to-gateway-config.yaml`` and then update the value for this setting with the gateway IP address.

gdi/opentelemetry/collector-addon/collector-addon-intro.rst

Lines changed: 3 additions & 1 deletion
@@ -12,10 +12,12 @@ Splunk Add-On for the OpenTelemetry Collector
    :hidden:

    Install the Technical Add-on <collector-addon-install.rst>
+   Deployment modes <collector-addon-configure-instance.rst>
    Configure the Technical Add-on <collector-addon-configure.rst>
+   Upgrade the Technical Add-on <collector-addon-upgrade.rst>
    Troubleshooting <collector-addon-troubleshooting.rst>

-Use the Splunk Add-on for the OpenTelemetry Collector to collect traces and metrics for Splunk Observability Cloud.
+Use the Splunk Add-on for the OpenTelemetry Collector to collect traces and metrics with Splunk Observability Cloud.

 You have two ways to install and configure the Splunk Add-on for the OpenTelemetry Collector:
gdi/opentelemetry/collector-addon/collector-addon-upgrade.rst

Lines changed: 21 additions & 0 deletions
@@ -0,0 +1,21 @@
+.. _collector-addon-upgrade:
+
+*********************************************************************************************
+Upgrade the Technical Add-on for the Splunk OpenTelemetry Collector
+*********************************************************************************************
+
+.. meta::
+   :description: Upgrade the Technical Add-on for the Splunk Distribution of the OpenTelemetry Collector.
+
+.. toctree::
+   :maxdepth: 5
+   :hidden:
+
+To upgrade the Technical Add-on for the Splunk Distribution of the OpenTelemetry Collector using a deployment server, follow these steps:
+
+#. Download the upgraded version of the Splunk Add-on for the OpenTelemetry Collector from Splunkbase. See :new-page:`Splunkbase's Splunk Add-on for the OpenTelemetry Collector <https://splunkbase.splunk.com/app/7125>`.
+#. Expand your downloaded file.
+#. Copy the expanded ``Splunk_TA_otel/`` folder to the :guilabel:`SPLUNK_HOME > etc > deployment apps` directory.
+#. Restart the deployment server.
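As a rough shell equivalent of steps 2 through 4, a sketch only: the archive name is hypothetical, and ``$SPLUNK_HOME`` must point at your deployment server's installation.

.. code-block:: bash

   # Hypothetical archive name; download it from Splunkbase first
   tar -xzf splunk-add-on-for-the-opentelemetry-collector.tgz
   # Copy the expanded folder into the deployment server's apps directory
   cp -r Splunk_TA_otel "$SPLUNK_HOME/etc/deployment-apps/"
   # Restart the deployment server so it redeploys the updated add-on
   "$SPLUNK_HOME/bin/splunk" restart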

gdi/opentelemetry/collector-kubernetes/kubernetes-config-advanced.rst

Lines changed: 29 additions & 0 deletions
@@ -503,3 +503,32 @@ Cluster Receiver support
 The Cluster receiver is a 1-replica deployment of the OpenTelemetry Collector. Because the Kubernetes control plane can select any available node to run the cluster receiver pod (unless ``clusterReceiver.nodeSelector`` is explicitly set to pin the pod to a specific node), ``hostPath`` or ``local`` volume mounts don't work for such environments.

 Data persistence is currently not applicable to the Kubernetes cluster metrics and Kubernetes events.
+
+Monitor OpenShift infrastructure nodes
+============================================
+
+By default, the Splunk Distribution of OpenTelemetry Collector for Kubernetes doesn't collect data from OpenShift infrastructure nodes.
+
+You can customize the Collector Helm Chart file to activate data collection from OpenShift infrastructure nodes. To do so, complete the following steps:
+
+#. Open your values.yaml file for the Helm Chart.
+#. Copy and paste the following YAML snippet into the values.yaml file:
+
+   .. code-block:: yaml
+
+      tolerations:
+        - key: node-role.kubernetes.io/master
+          effect: NoSchedule
+        - key: node-role.kubernetes.io/control-plane
+          effect: NoSchedule
+        - key: node-role.kubernetes.io/infra
+          effect: NoSchedule
+          operator: Exists
+
+#. Install the Collector using the Helm Chart:
+
+   .. code-block:: bash
+
+      helm install my-splunk-otel-collector --values values.yaml splunk-otel-collector-chart/splunk-otel-collector
+
+.. note:: Monitoring OpenShift infrastructure nodes might pose a security risk depending on which method you used to create the Kubernetes environment.
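After the chart installs, one way to confirm the tolerations took effect is to check where the Collector pods were scheduled. A hedged sketch; the namespace and label selector are assumptions, so adjust them to your deployment:

.. code-block:: bash

   # Hypothetical namespace and label; pods should now appear on
   # infra and control-plane nodes as well as worker nodes
   kubectl get pods -n default -l app=splunk-otel-collector -o wide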
