If traces from your instrumented application or service are not available in Splunk Observability Cloud, verify the OpenTelemetry Collector configuration:

* Make sure that the Splunk Distribution of OpenTelemetry Collector is running.
-* Make sure that a ``zipkin`` receiver and a ``sapm`` exporter are configured.
+* Make sure that a ``zipkin`` receiver and an ``otlp`` exporter are configured.
* Make sure that the ``access_token`` and ``endpoint`` fields are configured.
-* Check that the traces pipeline is configured to use the ``zipkin`` receiver and ``sapm`` exporter.
+* Check that the traces pipeline is configured to use the ``zipkin`` receiver and ``otlp`` exporter.
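
For reference, a Collector configuration that passes these checks might look like the following sketch. The endpoint and token values are placeholders rather than values taken from this change; substitute your own gateway or ingest endpoint and access token.

.. code-block:: yaml

   receivers:
     zipkin:
       endpoint: "0.0.0.0:9411"    # default Zipkin ingest port

   exporters:
     otlp:
       # Placeholder: point this at your Collector gateway or ingest endpoint.
       endpoint: "${SPLUNK_GATEWAY_URL}:4317"
       headers:
         # Placeholder: one common way to pass the access token.
         X-SF-Token: "${SPLUNK_ACCESS_TOKEN}"

   service:
     pipelines:
       traces:
         receivers: [zipkin]
         exporters: [otlp]
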
Metrics don't appear in Splunk Observability Cloud

File: gdi/opentelemetry/collector-addon/collector-addon-install.rst (2 additions, 2 deletions)

@@ -45,7 +45,7 @@ Follow these steps to install the Splunk Add-on for OpenTelemetry Collector to a
#. In Splunk_TA_otel/local, create or open the access_token file, and replace the existing contents with the token value you copied from Splunk Observability Cloud. Save the updated file.

-#. In :strong:`Splunk Observability Cloud`, select your name, then select the Organization tab to verify that the realm value in the realm and sapm-endpoint files in your local folder reflect the value shown in Splunk Observability Cloud. Save any changes you make in the local files.
+#. In :strong:`Splunk Observability Cloud`, select your name, then select the Organization tab to verify that the realm value in the realm and ingest endpoints files in your local folder reflect the value shown in Splunk Observability Cloud. Save any changes you make in the local files.

#. Restart Splunkd. Your Add-on solution is now deployed.
@@ -75,7 +75,7 @@ Follow these steps to install the Splunk Add-on for the OpenTelemetry Collector
#. In Splunk_TA_otel/local, create or open the access_token file, and replace the existing contents with the token value you copied from Splunk Observability Cloud. Save the updated file.

-#. In :strong:`Splunk Observability Cloud`, select your name, then select the Organization tab to verify that the realm value in the realm and sapm-endpoint files in your local folder match the value shown in Splunk Observability Cloud. Save any changes you make in the local files.
+#. In :strong:`Splunk Observability Cloud`, select your name, then select the Organization tab to verify that the realm value in the realm and ingest endpoints files in your local folder match the value shown in Splunk Observability Cloud. Save any changes you make in the local files.

#. In :strong:`Splunk Web`, select :guilabel:`Settings > Forwarder Management` to access your deployment server.

File: gdi/opentelemetry/collector-kubernetes/k8s-troubleshooting/troubleshoot-k8s-sizing.rst (4 additions, 4 deletions)

@@ -50,17 +50,17 @@ For example:
.. code-block::

-   2021-11-12T00:22:32.172Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "sapm", "error": "server responded with 429", "interval": "4.4850027s"}
-   2021-11-12T00:22:38.087Z error exporterhelper/queued_retry.go:190 Dropping data because sending_queue is full. Try increasing queue_size. {"kind": "exporter", "name": "sapm", "dropped_items": 1348}
+   2021-11-12T00:22:32.172Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "otlphttp", "error": "server responded with 429", "interval": "4.4850027s"}
+   2021-11-12T00:22:38.087Z error exporterhelper/queued_retry.go:190 Dropping data because sending_queue is full. Try increasing queue_size. {"kind": "exporter", "name": "otlphttp", "dropped_items": 1348}

-If you can't fix throttling by bumping limits on the backend or reducing amount of data sent through the Collector, you can avoid OOMs by reducing the sending queue of the failing exporter. For example, you can reduce ``sending_queue`` for the ``sapm`` exporter:
+If you can't fix throttling by bumping limits on the backend or reducing amount of data sent through the Collector, you can avoid OOMs by reducing the sending queue of the failing exporter. For example, you can reduce ``sending_queue`` for the ``otlphttp`` exporter:
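
As an illustration only, a reduced ``sending_queue`` for the ``otlphttp`` exporter could look like the following sketch; the endpoint and queue size shown are placeholder values.

.. code-block:: yaml

   exporters:
     otlphttp:
       # Placeholder ingest endpoint; substitute your own realm.
       endpoint: "https://ingest.<realm>.signalfx.com"
       sending_queue:
         # A smaller queue bounds memory use while the backend responds with 429s.
         queue_size: 512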

File: gdi/opentelemetry/components/groupbyattrs-processor.rst (1 addition, 1 deletion)

@@ -70,7 +70,7 @@ Use the processor to perform the following actions:
* :ref:`Compact multiple records <groupbyattrs-processor-compact>` that share the same ``resource`` and ``InstrumentationLibrary`` attributes but are under multiple ``ResourceSpans`` or ``ResourceMetrics`` or ``ResourceLogs`` into a single ``ResourceSpans`` or ``ResourceMetrics`` or ``ResourceLogs``, when an empty list of keys is provided.

  * This happens, for example, when you use the ``groupbytrace`` processor, or when data comes in multiple requests.
-  * If you compact data it takes less memory, it's more efficiently processed and serialized, and the number of export requests is reduced, for example if you use the ``sapm`` exporter. See more at :ref:`splunk-apm-exporter`.
+  * If you compact data it takes less memory, it's more efficiently processed and serialized, and the number of export requests is reduced.

.. tip:: Use the ``groupbyattrs`` processor together with ``batch`` processor, as a consecutive step. Grouping records together under matching resource and/or InstrumentationLibrary reduces the fragmentation of data.
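
As an illustrative sketch, compaction with an empty ``keys`` list followed by batching could be wired into a pipeline as shown below; the receiver and exporter names are placeholders.

.. code-block:: yaml

   processors:
     groupbyattrs:
       keys: []          # an empty list triggers compaction
     batch:

   service:
     pipelines:
       traces:
         receivers: [otlp]                   # placeholder receiver
         processors: [groupbyattrs, batch]   # groupbyattrs first, then batch
         exporters: [otlphttp]               # placeholder exporter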

File: gdi/opentelemetry/components/jaeger-receiver.rst (1 addition, 1 deletion)

@@ -94,7 +94,7 @@ The Jaeger receiver uses helper files for additional capabilities:
Remote sampling
-----------------------------------------------

-Since version 0.61.0, remote sampling is no longer supported. Instead, since version 0.59.0, use the ``jaegerremotesapmpling`` extension for remote sampling.
+Since version 0.61.0, remote sampling is no longer supported. Instead, since version 0.59.0, use the ``jaegerremotesampling`` extension for remote sampling.
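
For illustration, a minimal ``jaegerremotesampling`` setup might look like the following sketch; the strategies file path is a hypothetical example.

.. code-block:: yaml

   extensions:
     jaegerremotesampling:
       source:
         # Hypothetical path to a Jaeger sampling strategies JSON file.
         file: /etc/otelcol/sampling_strategies.json

   service:
     extensions: [jaegerremotesampling]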