This repository was archived by the owner on Sep 2, 2025. It is now read-only.

Commit 0fc5a24

Updates, feedback
1 parent 2e54b26 commit 0fc5a24

4 files changed: +49 −71 lines

gdi/opentelemetry/collector-addon/collector-addon-install.rst

Lines changed: 2 additions & 2 deletions
@@ -45,7 +45,7 @@ Follow these steps to install the Splunk Add-on for OpenTelemetry Collector to a

 #. In Splunk_TA_otel/local, create or open the access_token file, and replace the existing contents with the token value you copied from Splunk Observability Cloud. Save the updated file.

-#. In :strong:`Splunk Observability Cloud`, select your name, then select the Organization tab to verify that the realm value in the realm and sapm-endpoint files in your local folder reflect the value shown in Splunk Observability Cloud. Save any changes you make in the local files.
+#. In :strong:`Splunk Observability Cloud`, select your name, then select the Organization tab to verify that the realm value in the realm and ingest endpoints files in your local folder reflects the value shown in Splunk Observability Cloud. Save any changes you make in the local files.

 #. Restart Splunkd. Your Add-on solution is now deployed.

@@ -75,7 +75,7 @@ Follow these steps to install the Splunk Add-on for the OpenTelemetry Collector

 #. In Splunk_TA_otel/local, create or open the access_token file, and replace the existing contents with the token value you copied from Splunk Observability Cloud. Save the updated file.

-#. In :strong:`Splunk Observability Cloud`, select your name, then select the Organization tab to verify that the realm value in the realm and sapm-endpoint files in your local folder match the value shown in Splunk Observability Cloud. Save any changes you make in the local files.
+#. In :strong:`Splunk Observability Cloud`, select your name, then select the Organization tab to verify that the realm value in the realm and ingest endpoints files in your local folder matches the value shown in Splunk Observability Cloud. Save any changes you make in the local files.

 #. In :strong:`Splunk Web`, select :guilabel:`Settings > Forwarder Management` to access your deployment server.

gdi/opentelemetry/collector-kubernetes/k8s-troubleshooting/troubleshoot-k8s-sizing.rst

Lines changed: 4 additions & 4 deletions
@@ -50,17 +50,17 @@ For example:

 .. code-block::

-   2021-11-12T00:22:32.172Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "sapm", "error": "server responded with 429", "interval": "4.4850027s"}
-   2021-11-12T00:22:38.087Z error exporterhelper/queued_retry.go:190 Dropping data because sending_queue is full. Try increasing queue_size. {"kind": "exporter", "name": "sapm", "dropped_items": 1348}
+   2021-11-12T00:22:32.172Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "otlphttp", "error": "server responded with 429", "interval": "4.4850027s"}
+   2021-11-12T00:22:38.087Z error exporterhelper/queued_retry.go:190 Dropping data because sending_queue is full. Try increasing queue_size. {"kind": "exporter", "name": "otlphttp", "dropped_items": 1348}

-If you can't fix throttling by bumping limits on the backend or reducing the amount of data sent through the Collector, you can avoid OOMs by reducing the sending queue of the failing exporter. For example, you can reduce ``sending_queue`` for the ``sapm`` exporter:
+If you can't fix throttling by bumping limits on the backend or reducing the amount of data sent through the Collector, you can avoid OOMs by reducing the sending queue of the failing exporter. For example, you can reduce ``sending_queue`` for the ``otlphttp`` exporter:

 .. code-block:: yaml

    agent:
      config:
        exporters:
-         sapm:
+         otlphttp:
            sending_queue:
              queue_size: 512
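For reference, outside the Helm chart's ``agent.config`` wrapper the same override is a plain Collector exporter setting. This is only a sketch: the ``num_consumers`` value shown is an illustrative assumption, not part of this commit.

```yaml
exporters:
  otlphttp:
    sending_queue:
      # Smaller queue: fewer batches held in memory while the
      # backend responds with 429s, at the cost of dropping sooner.
      queue_size: 512
      # Illustrative value: number of workers draining the queue.
      num_consumers: 10
```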

gdi/opentelemetry/components/otlphttp-exporter.rst

Lines changed: 43 additions & 59 deletions
@@ -7,24 +7,26 @@ OTLP/HTTP exporter

 .. meta::
    :description: The OTLP/HTTP exporter allows the OpenTelemetry Collector to send metrics, traces, and logs via HTTP using the OTLP format. Read on to learn how to configure the component.

-The OTLP/HTTP exporter sends metrics, traces, and logs through HTTP using the OTLP format. The supported pipeline types are ``traces``, ``metrics``, and ``logs``. See :ref:`otel-data-processing` for more information.
-
-You can also use the OTLP exporter for advanced options to send data using the OTLP format. See more at :ref:`otlp-exporter`.
+.. note:: Use the OTLP/HTTP exporter as the default method to send traces to Splunk Observability Cloud.

-If you need to bypass the Collector and send data in the OTLP format directly to Splunk Observability Cloud:
+The OTLP/HTTP exporter sends metrics, traces, and logs through HTTP using the OTLP format. The supported pipeline types are ``traces``, ``metrics``, and ``logs``. See :ref:`otel-data-processing` for more information.

-* To send metrics, use the otlp endpoint. Find out more in the dev portal at :new-page:`Sending data points <https://dev.splunk.com/observability/docs/datamodel/ingest>`. Note that this option only accepts protobuf payloads.
-
-* To send traces, use the gRPC endpoint. For more information, see :ref:`grpc-data-ingest`.
+You can also use the OTLP exporter for advanced options to send data using the gRPC protocol. See more at :ref:`otlp-exporter`.

 Read more about the OTLP format in the OTel repo: :new-page:`OpenTelemetry Protocol Specification <https://github.com/open-telemetry/opentelemetry-proto/blob/main/docs/specification.md>`.

 Get started
 ======================

+.. note::
+
+   This component is included in the default configuration of the Splunk Distribution of the OpenTelemetry Collector to send traces to Splunk Observability Cloud when deploying in host monitoring (agent) mode. See :ref:`otel-deployment-mode` for more information.
+
+   For details about the default configuration, see :ref:`otel-kubernetes-config`, :ref:`linux-config-ootb`, or :ref:`windows-config-ootb`. You can customize your configuration any time as explained in this document.
+
 Follow these steps to configure and activate the component:

-1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform:
+1. Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform:

    - :ref:`otel-install-linux`
    - :ref:`otel-install-windows`
@@ -33,63 +35,54 @@ Follow these steps to configure and activate the component:

 2. Configure the exporter as described in the next section.
 3. Restart the Collector.

-The OTLP/HTTP exporter is not included in the default configuration of the Splunk Distribution of the OpenTelemetry Collector. If you want to add it, the following settings are required:
+Configuration options
+--------------------------------

-* ``endpoint``. The target base URL to send data to, for example ``https://example.com:4318``. No default value.
+The following settings are required:

-  * Each type of signal is added to this base URL. For example, for traces, ``https://example.com:4318/v1/traces``.
+* ``traces_endpoint``. The target URL to send trace data to. For Splunk Observability Cloud, use ``https://ingest.<realm>.signalfx.com/v2/trace/otlp``.

-The following settings are optional:
+The following settings are optional and can be added to the configuration for more advanced use cases:

-* ``logs_endpoint``. The target URL to send log data to.
-
-  * For example, ``https://example.com:4318/v1/logs``.
-  * If this setting is present, the endpoint setting is ignored for logs.
+* ``logs_endpoint``. The target URL to send log data to. For example, ``https://example.com:4318/v1/logs``.

-* ``metrics_endpoint``. The target URL to send metric data to.
-
-  * For example, ``https://example.com:4318/v1/metrics``.
-  * If this setting is present, the endpoint setting is ignored for metrics.
+* ``metrics_endpoint``. The target URL to send metric data to. For example, use ``https://ingest.<realm>.signalfx.com/v2/datapoint/otlp`` to send metrics to Splunk Observability Cloud.

-* ``traces_endpoint``. The target URL to send trace data to.
-
-  * For example, ``https://example.com:4318/v1/traces``.
-  * If this setting is present, the endpoint setting is ignored for traces.
-
-* ``tls``. See :ref:`TLS Configuration Settings <otlphttp-exporter-settings>` in this document for the full set of available options.
+* ``tls``. See :ref:`TLS Configuration Settings <otlphttp-exporter-settings>` in this document for the full set of available options. Only applicable when sending data to a custom endpoint.

 * ``timeout``. ``30s`` by default. HTTP request time limit. For details see :new-page:`https://golang.org/pkg/net/http/#Client`.

 * ``read_buffer_size``. ``0`` by default. ReadBufferSize for the HTTP client.

 * ``write_buffer_size``. ``512 * 1024`` by default. WriteBufferSize for the HTTP client.

-Sample configurations
+Sample configuration
 --------------------------------

 To send traces and metrics to Splunk Observability Cloud using OTLP over HTTP, configure the ``metrics_endpoint`` and ``traces_endpoint`` settings to the REST API ingest endpoints. For example:

 .. code-block:: yaml

-   exporters:
-     otlphttp:
-       metrics_endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/datapoint/otlp"
-       traces_endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace/otlp"
-       compression: gzip
-       headers:
-         "X-SF-Token": "${SPLUNK_ACCESS_TOKEN}"
-
-To complete the configuration, include the receiver in the required pipeline of the ``service`` section of your
+   exporters:
+     otlphttp:
+       # The target URL to send trace data to. By default it's set to ``https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace/otlp``.
+       traces_endpoint: https://ingest.<realm>.signalfx.com/v2/trace/otlp
+       # Set of HTTP headers added to every request.
+       headers:
+         # X-SF-Token is the authentication token provided by Splunk Observability Cloud.
+         X-SF-Token: <access_token>
+
+To complete the configuration, include the exporter in the required pipeline of the ``service`` section of your
 configuration file. For example:

 .. code:: yaml

-   service:
-     pipelines:
-       metrics:
-         exporters: [otlphttp]
-       traces:
-         exporters: [otlphttp]
+   service:
+     pipelines:
+       metrics:
+         exporters: [otlphttp]
+       traces:
+         exporters: [otlphttp]
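For context, a minimal end-to-end configuration combining the two sample fragments might look like the following. The ``otlp`` receiver and the ``us0`` realm are illustrative assumptions, not part of this commit.

```yaml
receivers:
  otlp:
    protocols:
      http:

exporters:
  otlphttp:
    # Replace us0 with your organization's realm.
    traces_endpoint: https://ingest.us0.signalfx.com/v2/trace/otlp
    headers:
      X-SF-Token: <access_token>

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp]
```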
 Configuration examples
 --------------------------------
@@ -98,13 +91,11 @@ This is a detailed configuration example:

 .. code-block:: yaml

-
    endpoint: "https://1.2.3.4:1234"
-   tls:
-     ca_file: /var/lib/mycert.pem
-     cert_file: certfile
-     key_file: keyfile
-     insecure: true
+   traces_endpoint: https://ingest.us0.signalfx.com/v2/trace/otlp
+   metrics_endpoint: https://ingest.us0.signalfx.com/v2/datapoint/otlp
+   headers:
+     X-SF-Token: <access_token>
    timeout: 10s
    read_buffer_size: 123
    write_buffer_size: 345
@@ -119,20 +110,15 @@ This is a detailed configuration example:

    multiplier: 1.3
    max_interval: 60s
    max_elapsed_time: 10m
-   headers:
-     "can you have a . here?": "F0000000-0000-0000-0000-000000000000"
-     header1: 234
-     another: "somevalue"
    compression: gzip

 Configure gzip compression
 --------------------------------

-By default, gzip compression is turned on. To turn it off, use the following configuration:
+By default, gzip compression is turned on. To turn it off use the following configuration:

 .. code-block:: yaml

-
    exporters:
      otlphttp:
        ...
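The hunk above is truncated before the setting that actually disables compression. As a sketch, assuming the Collector's standard HTTP client options, gzip is disabled by setting ``compression`` to ``none`` (the endpoint shown is a placeholder):

```yaml
exporters:
  otlphttp:
    # Placeholder endpoint; use your realm's ingest URL.
    traces_endpoint: https://ingest.<realm>.signalfx.com/v2/trace/otlp
    # Setting compression to none turns gzip off.
    compression: none
```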
@@ -147,23 +133,21 @@ The following table shows the configuration options for the OTLP/HTTP exporter:

 .. raw:: html

-   <div class="metrics-standard" category="included" url="https://raw.githubusercontent.com/splunk/collector-config-tools/main/cfg-metadata/exporter/otlphttp.yaml"></div>
+   <div class="metrics-standard" category="included" url="https://raw.githubusercontent.com/splunk/collector-config-tools/main/cfg-metadata/exporter/otlphttp.yaml"></div>


 Troubleshooting
 ======================

-
-
 .. raw:: html

-   <div class="include-start" id="troubleshooting-components.rst"></div>
+   <div class="include-start" id="troubleshooting-components.rst"></div>

 .. include:: /_includes/troubleshooting-components.rst

 .. raw:: html

-   <div class="include-stop" id="troubleshooting-components.rst"></div>
+   <div class="include-stop" id="troubleshooting-components.rst"></div>

gdi/opentelemetry/metrics-internal-collector.rst

Lines changed: 0 additions & 6 deletions
@@ -191,12 +191,6 @@ These are the Collector's internal metrics.

    * - ``otelcol_receiver_refused_spans``
      - Number of spans that could not be pushed into the pipeline

-   * - ``otelcol_sapm_requests_failed``
-     - Number of failed HTTP requests
-
-   * - ``otelcol_sapm_spans_exported``
-     - Number of spans successfully exported
-
    * - ``otelcol_scraper_errored_metric_points``
      - Number of metric points that couldn't be scraped
