diff --git a/_includes/collector-config-ootb.rst b/_includes/collector-config-ootb.rst
index 92afd51be..cb731318c 100644
--- a/_includes/collector-config-ootb.rst
+++ b/_includes/collector-config-ootb.rst
@@ -165,7 +165,7 @@ The following diagram shows the default traces pipeline:
       subgraph Exporters
         direction LR
-        traces/sapm:::exporter
+        traces/otlphttp:::exporter
         traces/signalfx/out:::exporter
       end

@@ -173,7 +173,7 @@ The following diagram shows the default traces pipeline:
    traces/jaeger --> traces/memory_limiter
    traces/otlp --> traces/memory_limiter
    traces/zipkin --> traces/memory_limiter
-   traces/resourcedetection --> traces/sapm
+   traces/resourcedetection --> traces/otlphttp
    traces/resourcedetection --> traces/signalfx/out

 Learn more about these receivers:
diff --git a/apm/apm-spans-traces/span-formats.rst b/apm/apm-spans-traces/span-formats.rst
index 67a02a06d..004266122 100644
--- a/apm/apm-spans-traces/span-formats.rst
+++ b/apm/apm-spans-traces/span-formats.rst
@@ -16,13 +16,14 @@ For more information on the ingest API endpoints, see :new-page:`Send APM traces
 Span formats compatible with the OpenTelemetry Collector
 ================================================================

-The Splunk Distribution of the OpenTelemetry Collector can collect spans in the following format:
+The Splunk Distribution of the OpenTelemetry Collector can collect spans in the following formats:

 - Jaeger: gRPC and Thrift
 - Zipkin v1, v2 JSON
-- Splunk APM Protocol (SAPM)
 - OpenTelemetry Protocol (OTLP)

+.. note:: Splunk APM Protocol (SAPM) components are deprecated. Use the OTLP format instead.
+
 The following examples show how to configure receivers in the collector configuration file. You can use multiple receivers according to your needs.

 .. tabs::
@@ -51,14 +52,6 @@ The following examples show how to configure receivers in the collector configur
         zipkin:
           endpoint: 0.0.0.0:9411

-   .. code-tab:: yaml SAPM
-
-      # To receive spans in SAPM format
-
-      receivers:
-        sapm:
-          endpoint: 0.0.0.0:7276
-
    .. code-tab:: yaml OTLP

      # To receive spans in OTLP format
@@ -85,14 +78,12 @@ The ingest endpoint for Splunk Observability Cloud at ``https://ingest..s

 * OTLP at ``/v2/trace/otlp`` with ``Content-Type:application/x-protobuf``
 * Jaeger Thrift with ``Content-Type:application/x-thrift``
 * Zipkin v1, v2 with ``Content-Type:application/json``
-* SAPM with ``Content-Type:application/x-protobuf``

 In addition, the following endpoints are available:

 * OTLP at ``/v2/trace/otlp`` with ``Content-Type:application/x-protobuf``
 * Jaeger Thrift at ``/v2/trace/jaegerthrift`` with ``Content-Type:application/x-thrift``
 * Zipkin v1, v2 at ``/v2/trace/signalfxv1`` with ``Content-Type:application/json``
-* SAPM at ``/v2/trace/sapm`` with ``Content-Type:application/x-protobuf``

 For more information on the ingest API endpoints, see :new-page:`Send APM traces `.
diff --git a/gdi/get-data-in/application/otel-dotnet/sfx/troubleshooting/common-dotnet-troubleshooting.rst b/gdi/get-data-in/application/otel-dotnet/sfx/troubleshooting/common-dotnet-troubleshooting.rst
index 60507909c..5f82b2bb6 100644
--- a/gdi/get-data-in/application/otel-dotnet/sfx/troubleshooting/common-dotnet-troubleshooting.rst
+++ b/gdi/get-data-in/application/otel-dotnet/sfx/troubleshooting/common-dotnet-troubleshooting.rst
@@ -69,9 +69,9 @@ Traces don't appear in Splunk Observability Cloud
 If traces from your instrumented application or service are not available in Splunk Observability Cloud, verify the OpenTelemetry Collector configuration:

 * Make sure that the Splunk Distribution of OpenTelemetry Collector is running.
-* Make sure that a ``zipkin`` receiver and a ``sapm`` exporter are configured.
+* Make sure that a ``zipkin`` receiver and an ``otlp`` exporter are configured.
 * Make sure that the ``access_token`` and ``endpoint`` fields are configured.
-* Check that the traces pipeline is configured to use the ``zipkin`` receiver and ``sapm`` exporter.
+* Check that the traces pipeline is configured to use the ``zipkin`` receiver and ``otlp`` exporter.

 Metrics don't appear in Splunk Observability Cloud
 ==================================================================
diff --git a/gdi/monitors-cloud/heroku.rst b/gdi/monitors-cloud/heroku.rst
index a4f04e718..99b7b4926 100644
--- a/gdi/monitors-cloud/heroku.rst
+++ b/gdi/monitors-cloud/heroku.rst
@@ -9,7 +9,7 @@ Heroku
 The Splunk OpenTelemetry Connector for Heroku is a buildpack for the Splunk Distribution of the OpenTelemetry Collector. The buildpack installs and runs the Splunk OpenTelemetry Connector on a Dyno to receive, process and export metric and trace data for Splunk Observability Cloud:

-- Splunk APM through the ``sapm`` exporter. The ``otlphttp`` exporter can be used with a custom configuration.
+- Splunk APM through the ``otlphttp`` exporter.
 - Splunk Infrastructure Monitoring through the ``signalfx`` exporter.

 See :ref:`otel-intro` to learn more.
diff --git a/gdi/opentelemetry/collector-addon/collector-addon-install.rst b/gdi/opentelemetry/collector-addon/collector-addon-install.rst
index 8a77db195..272758b11 100644
--- a/gdi/opentelemetry/collector-addon/collector-addon-install.rst
+++ b/gdi/opentelemetry/collector-addon/collector-addon-install.rst
@@ -45,7 +45,7 @@ Follow these steps to install the Splunk Add-on for OpenTelemetry Collector to a
 #. In Splunk_TA_otel/local, create or open the access_token file, and replace the existing contents with the token value you copied from Splunk Observability Cloud. Save the updated file.

-#. In :strong:`Splunk Observability Cloud`, select your name, then select the Organization tab to verify that the realm value in the realm and sapm-endpoint files in your local folder reflect the value shown in Splunk Observability Cloud. Save any changes you make in the local files.
+#. In :strong:`Splunk Observability Cloud`, select your name, then select the Organization tab to verify that the realm value in the realm and ingest endpoint files in your local folder reflects the value shown in Splunk Observability Cloud. Save any changes you make in the local files.

 #. Restart Splunkd.

 Your Add-on solution is now deployed.

@@ -75,7 +75,7 @@ Follow these steps to install the Splunk Add-on for the OpenTelemetry Collector
 #. In Splunk_TA_otel/local, create or open the access_token file, and replace the existing contents with the token value you copied from Splunk Observability Cloud. Save the updated file.

-#. In :strong:`Splunk Observability Cloud`, select your name, then select the Organization tab to verify that the realm value in the realm and sapm-endpoint files in your local folder match the value shown in Splunk Observability Cloud. Save any changes you make in the local files.
+#. In :strong:`Splunk Observability Cloud`, select your name, then select the Organization tab to verify that the realm value in the realm and ingest endpoint files in your local folder matches the value shown in Splunk Observability Cloud. Save any changes you make in the local files.

 #. In :strong:`Splunk Web`, select :guilabel:`Settings > Forwarder Management` to access your deployment server.
diff --git a/gdi/opentelemetry/collector-kubernetes/k8s-troubleshooting/troubleshoot-k8s-sizing.rst b/gdi/opentelemetry/collector-kubernetes/k8s-troubleshooting/troubleshoot-k8s-sizing.rst
index 76f95ee2e..b66e86162 100644
--- a/gdi/opentelemetry/collector-kubernetes/k8s-troubleshooting/troubleshoot-k8s-sizing.rst
+++ b/gdi/opentelemetry/collector-kubernetes/k8s-troubleshooting/troubleshoot-k8s-sizing.rst
@@ -50,17 +50,17 @@ For example:
 .. code-block::

-   2021-11-12T00:22:32.172Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "sapm", "error": "server responded with 429", "interval": "4.4850027s"}
-   2021-11-12T00:22:38.087Z error exporterhelper/queued_retry.go:190 Dropping data because sending_queue is full. Try increasing queue_size. {"kind": "exporter", "name": "sapm", "dropped_items": 1348}
+   2021-11-12T00:22:32.172Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "otlphttp", "error": "server responded with 429", "interval": "4.4850027s"}
+   2021-11-12T00:22:38.087Z error exporterhelper/queued_retry.go:190 Dropping data because sending_queue is full. Try increasing queue_size. {"kind": "exporter", "name": "otlphttp", "dropped_items": 1348}

-If you can't fix throttling by bumping limits on the backend or reducing amount of data sent through the Collector, you can avoid OOMs by reducing the sending queue of the failing exporter. For example, you can reduce ``sending_queue`` for the ``sapm`` exporter:
+If you can't fix throttling by bumping limits on the backend or reducing the amount of data sent through the Collector, you can avoid OOMs by reducing the sending queue of the failing exporter. For example, you can reduce ``sending_queue`` for the ``otlphttp`` exporter:

 .. code-block:: yaml

   agent:
     config:
       exporters:
-        sapm:
+        otlphttp:
           sending_queue:
             queue_size: 512

diff --git a/gdi/opentelemetry/components/attributes-processor.rst b/gdi/opentelemetry/components/attributes-processor.rst
index 883724e1b..6647c2250 100644
--- a/gdi/opentelemetry/components/attributes-processor.rst
+++ b/gdi/opentelemetry/components/attributes-processor.rst
@@ -83,7 +83,7 @@ You can then add the attributes processors to any compatible pipeline. For examp
         - memory_limiter
         - batch
         - resourcedetection
-        exporters: [sapm, signalfx]
+        exporters: [otlphttp, signalfx]
       metrics:
         receivers: [hostmetrics, otlp, signalfx]
         processors:
diff --git a/gdi/opentelemetry/components/filter-processor.rst b/gdi/opentelemetry/components/filter-processor.rst
index 4af342917..17e492786 100644
--- a/gdi/opentelemetry/components/filter-processor.rst
+++ b/gdi/opentelemetry/components/filter-processor.rst
@@ -86,7 +86,7 @@ You can then add the filter processors to any compatible pipeline. For example:
         - memory_limiter
         - batch
         - resourcedetection
-        exporters: [sapm, signalfx]
+        exporters: [otlphttp, signalfx]
       metrics:
         receivers: [hostmetrics, otlp, signalfx]
         processors:
diff --git a/gdi/opentelemetry/components/groupbyattrs-processor.rst b/gdi/opentelemetry/components/groupbyattrs-processor.rst
index 00edef02d..b74df01de 100644
--- a/gdi/opentelemetry/components/groupbyattrs-processor.rst
+++ b/gdi/opentelemetry/components/groupbyattrs-processor.rst
@@ -70,7 +70,7 @@ Use the processor to perform the following actions:
 * :ref:`Compact multiple records ` that share the same ``resource`` and ``InstrumentationLibrary`` attributes but are under multiple ``ResourceSpans`` or ``ResourceMetrics`` or ``ResourceLogs`` into a single ``ResourceSpans`` or ``ResourceMetrics`` or ``ResourceLogs``, when an empty list of keys is provided.

   * This happens, for example, when you use the ``groupbytrace`` processor, or when data comes in multiple requests.
-  * If you compact data it takes less memory, it's more efficiently processed and serialized, and the number of export requests is reduced, for example if you use the ``sapm`` exporter. See more at :ref:`splunk-apm-exporter`.
+  * If you compact data, it takes less memory, it's more efficiently processed and serialized, and the number of export requests is reduced.

 .. tip:: Use the ``groupbyattrs`` processor together with ``batch`` processor, as a consecutive step. Grouping records together under matching resource and/or InstrumentationLibrary reduces the fragmentation of data.
diff --git a/gdi/opentelemetry/components/jaeger-receiver.rst b/gdi/opentelemetry/components/jaeger-receiver.rst
index b494b5832..1eeb042f8 100644
--- a/gdi/opentelemetry/components/jaeger-receiver.rst
+++ b/gdi/opentelemetry/components/jaeger-receiver.rst
@@ -94,7 +94,7 @@ The Jaeger receiver uses helper files for additional capabilities:
 Remote sampling
 -----------------------------------------------

-Since version 0.61.0, remote sampling is no longer supported. Instead, since version 0.59.0, use the ``jaegerremotesapmpling`` extension for remote sampling.
+Since version 0.61.0, remote sampling is no longer supported. Instead, since version 0.59.0, use the ``jaegerremotesampling`` extension for remote sampling.

 .. _jaeger-receiver-settings:
diff --git a/gdi/opentelemetry/components/logging-exporter.rst b/gdi/opentelemetry/components/logging-exporter.rst
index e64b4c0a6..aea9daf37 100644
--- a/gdi/opentelemetry/components/logging-exporter.rst
+++ b/gdi/opentelemetry/components/logging-exporter.rst
@@ -45,7 +45,7 @@ To activate the logging exporter, add it to any pipeline you want to diagnose. F
         - memory_limiter
         - batch
         - resourcedetection
-        exporters: [sapm, signalfx, logging]
+        exporters: [otlphttp, signalfx, logging]
       metrics:
         receivers: [hostmetrics, otlp, signalfx]
         processors: [memory_limiter, batch, resourcedetection]
diff --git a/gdi/opentelemetry/components/otlphttp-exporter.rst b/gdi/opentelemetry/components/otlphttp-exporter.rst
index 5dfc3d127..00ec96d37 100644
--- a/gdi/opentelemetry/components/otlphttp-exporter.rst
+++ b/gdi/opentelemetry/components/otlphttp-exporter.rst
@@ -7,24 +7,26 @@ OTLP/HTTP exporter
 .. meta::
   :description: The OTLP/HTTP exporter allows the OpenTelemetry Collector to send metrics, traces, and logs via HTTP using the OTLP format. Read on to learn how to configure the component.

-The OTLP/HTTP exporter sends metrics, traces, and logs through HTTP using the OTLP format. The supported pipeline types are ``traces``, ``metrics``, and ``logs``. See :ref:`otel-data-processing` for more information.
-
-You can also use the OTLP exporter for advanced options to send data using the OTLP format. See more at :ref:`otlp-exporter`.
+.. note:: Use the OTLP/HTTP exporter as the default method to send traces to Splunk Observability Cloud.

-If you need to bypass the Collector and send data in the OTLP format directly to Splunk Observability Cloud:
+The OTLP/HTTP exporter sends metrics, traces, and logs through HTTP using the OTLP format. The supported pipeline types are ``traces``, ``metrics``, and ``logs``. See :ref:`otel-data-processing` for more information.

-* To send metrics, use the otlp endpoint. Find out more in the dev portal at :new-page:`Sending data points `. Note that this option only accepts protobuf payloads.
-
-* To send traces, use the gRPC endpoint. For more information, see :ref:`grpc-data-ingest`.
+You can also use the OTLP exporter for advanced options to send data using the gRPC protocol. See more at :ref:`otlp-exporter`.

 Read more about the OTLP format at the OTel repo :new-page:`OpenTelemetry Protocol Specification `.

 Get started
 ======================

+.. note::
+
+   This component is included in the default configuration of the Splunk Distribution of the OpenTelemetry Collector to send traces to Splunk Observability Cloud when deploying in host monitoring (agent) mode. See :ref:`otel-deployment-mode` for more information.
+
+   For details about the default configuration, see :ref:`otel-kubernetes-config`, :ref:`linux-config-ootb`, or :ref:`windows-config-ootb`. You can customize your configuration any time as explained in this document.
+
 Follow these steps to configure and activate the component:

-1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform:
+1. Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform:

 - :ref:`otel-install-linux`
 - :ref:`otel-install-windows`
@@ -33,30 +35,20 @@ Follow these steps to configure and activate the component:
 2. Configure the exporter as described in the next section.
 3. Restart the Collector.

-The OTLP/HTTP exporter is not included in the default configuration of the Splunk Distribution of the OpenTelemetry Collector. If you want to add it, the following settings are required:
+Configuration options
+--------------------------------

-* ``endpoint``. The target base URL to send data to, for example ``https://example.com:4318``. No default value.
+The following settings are required:

-  * Each type of signal is added to this base URL. For example, for traces, ``https://example.com:4318/v1/traces``.
+* ``traces_endpoint``. The target URL to send trace data to. Use ``https://ingest.<realm>.signalfx.com/v2/trace/otlp`` for Splunk Observability Cloud.

-The following settings are optional:
+The following settings are optional and can be added to the configuration for more advanced use cases:

-* ``logs_endpoint``. The target URL to send log data to.
-
-  * For example, ``https://example.com:4318/v1/logs``.
-  * If this setting is present, the endpoint setting is ignored for logs.
+* ``logs_endpoint``. The target URL to send log data to. For example, ``https://example.com:4318/v1/logs``.

-* ``metrics_endpoint``. The target URL to send metric data to.
-
-  * For example, ``https://example.com:4318/v1/metrics``.
-  * If this setting is present, the endpoint setting is ignored for metrics.
For example, ``"https://ingest.us0.signalfx.com/v2/trace/otlp"`` to send metrics to Splunk Observability Cloud. -* ``traces_endpoint``. The target URL to send trace data to. - - * For example, ``https://example.com:4318/v1/traces``. - * If this setting is present, the endpoint setting is ignored for traces. - -* ``tls``. See :ref:`TLS Configuration Settings ` in this document for the full set of available options. +* ``tls``. See :ref:`TLS Configuration Settings ` in this document for the full set of available options. Only applicable for sending data to a custom endpoint. * ``timeout``. ``30s`` by default. HTTP request time limit. For details see :new-page:`https://golang.org/pkg/net/http/#Client`. @@ -64,32 +56,33 @@ The following settings are optional: * ``write_buffer_size``. ``512 * 1024`` by default. WriteBufferSize for the HTTP client. -Sample configurations +Sample configuration -------------------------------- To send traces and metrics to Splunk Observability Cloud using OTLP over HTTP, configure the ``metrics_endpoint`` and ``traces_endpoint`` settings to the REST API ingest endpoints. For example: .. code-block:: yaml - exporters: - otlphttp: - metrics_endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/datapoint/otlp" - traces_endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace/otlp" - compression: gzip - headers: - "X-SF-Token": "${SPLUNK_ACCESS_TOKEN}" - -To complete the configuration, include the receiver in the required pipeline of the ``service`` section of your + exporters: + otlphttp: + # The target URL to send trace data to. By default it's set to ``https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace/otlp``. + traces_endpoint: https://ingest..signalfx.com/v2/trace/otlp + # Set of HTTP headers added to every request. + headers: + # X-SF-Token is the authentication token provided by Splunk Observability Cloud. + X-SF-Token: + +To complete the configuration, include the exporter in the required pipeline of the ``service`` section of your configuration file. For example: .. code:: yaml - service: - pipelines: - metrics: - exporters: [otlphttp] - traces: - exporters: [otlphttp] + service: + pipelines: + metrics: + exporters: [otlphttp] + traces: + exporters: [otlphttp] Configuration examples -------------------------------- @@ -98,13 +91,11 @@ This is a detailed configuration example: .. code-block:: yaml - endpoint: "https://1.2.3.4:1234" - tls: - ca_file: /var/lib/mycert.pem - cert_file: certfile - key_file: keyfile - insecure: true + traces_endpoint: https://ingest.us0.signalfx.com/v2/trace/otlp + metrics_endpoint: https://ingest.us0.signalfx.com/v2/datapoint/otlp + headers: + X-SF-Token: timeout: 10s read_buffer_size: 123 write_buffer_size: 345 @@ -119,20 +110,15 @@ This is a detailed configuration example: multiplier: 1.3 max_interval: 60s max_elapsed_time: 10m - headers: - "can you have a . here?": "F0000000-0000-0000-0000-000000000000" - header1: 234 - another: "somevalue" compression: gzip Configure gzip compression -------------------------------- -By default, gzip compression is turned on. To turn it off, use the following configuration: +By default, gzip compression is turned on. To turn it off use the following configuration: .. code-block:: yaml - exporters: otlphttp: ... @@ -147,23 +133,21 @@ The following table shows the configuration options for the OTLP/HTTP exporter: .. raw:: html -
+
Troubleshooting ====================== - - .. raw:: html -
+
.. include:: /_includes/troubleshooting-components.rst .. raw:: html -
+
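+
+For example, to receive traces over OTLP instead of SAPM, you can activate the OTLP receiver on its default gRPC and HTTP ports. The following snippet is a minimal sketch:
+
+.. code-block:: yaml
+
+   receivers:
+     otlp:
+       protocols:
+         grpc:
+           endpoint: 0.0.0.0:4317
+         http:
+           endpoint: 0.0.0.0:4318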
diff --git a/gdi/opentelemetry/components/splunk-apm-exporter.rst b/gdi/opentelemetry/components/splunk-apm-exporter.rst
index 2d842f85a..8d0072734 100644
--- a/gdi/opentelemetry/components/splunk-apm-exporter.rst
+++ b/gdi/opentelemetry/components/splunk-apm-exporter.rst
@@ -1,22 +1,19 @@
 .. _splunk-apm-exporter:

-Splunk APM exporter
-**************************
+****************************************************
+Splunk APM (SAPM) exporter (deprecated)
+****************************************************

 .. meta::
   :description: Use the Splunk APM (SAPM) exporter to send traces from multiple nodes or services in a single batch. Read on to learn how to configure the component.

+.. caution:: The SAPM exporter is deprecated and will be removed in April 2025. To send traces to Splunk Observability Cloud, use the :ref:`otlphttp-exporter` instead.
+
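+For example, you can replace a SAPM exporter configuration with an equivalent OTLP/HTTP exporter configuration. The realm and access token values in the following sketch are placeholders:
+
+.. code-block:: yaml
+
+   exporters:
+     otlphttp:
+       traces_endpoint: https://ingest.<realm>.signalfx.com/v2/trace/otlp
+       headers:
+         X-SF-Token: <access_token>
+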
 The Splunk APM (SAPM) exporter allows the OpenTelemetry Collector to send traces to Splunk Observability Cloud. The supported pipeline types are ``traces``. See :ref:`otel-data-processing` for more information.

 Get started
 ======================

-.. note::
-
-   This component is included in the default configuration of the Splunk Distribution of the OpenTelemetry Collector when deploying in host monitoring (agent) mode in the ``traces`` pipeline. See :ref:`otel-deployment-mode` for more information.
-
-   For details about the default configuration, see :ref:`otel-kubernetes-config`, :ref:`linux-config-ootb`, or :ref:`windows-config-ootb`. You can customize your configuration any time as explained in this document.
-
 Follow these steps to configure and activate the component:

 1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform:
diff --git a/gdi/opentelemetry/components/transform-processor.rst b/gdi/opentelemetry/components/transform-processor.rst
index ae4bb4b41..529204c26 100644
--- a/gdi/opentelemetry/components/transform-processor.rst
+++ b/gdi/opentelemetry/components/transform-processor.rst
@@ -77,7 +77,7 @@ You can then add the transform processor to any compatible pipeline. For example
         - memory_limiter
         - batch
         - resourcedetection
-        exporters: [sapm, signalfx]
+        exporters: [otlphttp, signalfx]
       metrics:
         receivers: [hostmetrics, otlp, signalfx]
         processors:
diff --git a/gdi/opentelemetry/deployment-modes.rst b/gdi/opentelemetry/deployment-modes.rst
index f42fdfd3c..a0d7868aa 100644
--- a/gdi/opentelemetry/deployment-modes.rst
+++ b/gdi/opentelemetry/deployment-modes.rst
@@ -263,9 +263,9 @@ To set the Collector in data forwarding (gateway) mode to receiving data from an
     exporters:
       # Traces (Agent)
-      sapm:
-        access_token: "${SPLUNK_ACCESS_TOKEN}"
-        endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace"
+      otlphttp:
+        traces_endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace/otlp"
+        headers:
+          X-SF-Token: "${SPLUNK_ACCESS_TOKEN}"
       # Metrics + Events (Agent)
       signalfx:
         access_token: "${SPLUNK_ACCESS_TOKEN}"
@@ -285,7 +285,7 @@ To set the Collector in data forwarding (gateway) mode to receiving data from an
       processors:
       - memory_limiter
       - batch
-      exporters: [sapm]
+      exporters: [otlphttp]
     metrics:
       receivers: [otlp]
       processors: [memory_limiter, batch]
@@ -317,9 +317,9 @@ If you want to use the :ref:`signalfx-exporter` for metrics on both agent and ga
     exporters:
       # Traces
-      sapm:
-        access_token: "${SPLUNK_ACCESS_TOKEN}"
-        endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace"
+      otlphttp:
+        traces_endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace/otlp"
+        headers:
+          X-SF-Token: "${SPLUNK_ACCESS_TOKEN}"
       # Metrics + Events (Agent)
       signalfx:
         access_token: "${SPLUNK_ACCESS_TOKEN}"
@@ -340,7 +340,7 @@ If you want to use the :ref:`signalfx-exporter` for metrics on both agent and ga
       processors:
       - memory_limiter
       - batch
-      exporters: [sapm]
+      exporters: [otlphttp]
     metrics:
       receivers: [signalfx]
       processors: [memory_limiter, batch]
diff --git a/gdi/opentelemetry/exposed-endpoints.rst b/gdi/opentelemetry/exposed-endpoints.rst
index 19cafc084..73608433e 100644
--- a/gdi/opentelemetry/exposed-endpoints.rst
+++ b/gdi/opentelemetry/exposed-endpoints.rst
@@ -32,8 +32,6 @@ See the table for a complete list of exposed ports and endpoints:
     - OTLP receiver using gRPC and http
   * - ``http(s)://0.0.0.0:6060``
     - HTTP forwarder used to receive Smart Agent ``apiUrl`` data
-  * - ``http(s)://0.0.0.0:7276``
-    - SAPM trace receiver
   * - ``http://localhost:8888/metrics``
     - :new-page:`Internal Prometheus metrics `
   * - ``http(s)://localhost:8006``
diff --git a/gdi/opentelemetry/metrics-internal-collector.rst b/gdi/opentelemetry/metrics-internal-collector.rst
index d657f69e7..1d463e5b6 100644
--- a/gdi/opentelemetry/metrics-internal-collector.rst
+++ b/gdi/opentelemetry/metrics-internal-collector.rst
@@ -191,12 +191,6 @@ These are the Collector's internal metrics.
   * - ``otelcol_receiver_refused_spans``
     - Number of spans that could not be pushed into the pipeline

-  * - ``otelcol_sapm_requests_failed``
-    - Number of failed HTTP requests
-
-  * - ``otelcol_sapm_spans_exported``
-    - Number of spans successfully exported
-
   * - ``otelcol_scraper_errored_metric_points``
     - Number of metric points that couldn't be scraped
diff --git a/gdi/opentelemetry/smart-agent/smart-agent-migration-monitors.rst b/gdi/opentelemetry/smart-agent/smart-agent-migration-monitors.rst
index 009cf495a..40204e742 100644
--- a/gdi/opentelemetry/smart-agent/smart-agent-migration-monitors.rst
+++ b/gdi/opentelemetry/smart-agent/smart-agent-migration-monitors.rst
@@ -36,7 +36,7 @@ For each Smart Agent monitor you want to add to the Collector, add a ``smartagen
 Instead of using ``discoveryRule``, use the Collector receiver creator and observer extensions. See :ref:`receiver-creator-receiver` for more information.

-If you're using a SignalFx Forwarder monitor (deprecated), add it to both a ``traces`` and a ``metrics`` pipeline, and use a SAPM exporter and a SignalFx exporter, as each pipeline's exporter, respectively. See more on :ref:`exporters `.
+If you're using a SignalFx Forwarder monitor (deprecated), add it to both a ``traces`` and a ``metrics`` pipeline, and use an OTLP/HTTP exporter and a SignalFx exporter as each pipeline's exporter, respectively. See more on :ref:`exporters `.

 Configure the Smart Agent receiver
 ------------------------------------------------------------
@@ -106,9 +106,9 @@ Configuration example
     signalfx:
       access_token: "${SIGNALFX_ACCESS_TOKEN}"
       realm: us1
-    sapm:
-      access_token: "${SIGNALFX_ACCESS_TOKEN}"
-      endpoint: https://ingest.us1.signalfx.com/v2/trace
+    otlphttp:
+      traces_endpoint: https://ingest.us1.signalfx.com/v2/trace/otlp
+      headers:
+        X-SF-Token: "${SIGNALFX_ACCESS_TOKEN}"

   service:
     pipelines:
@@ -134,6 +134,6 @@ Configuration example
       processors:
       - resourcedetection
       exporters:
-      - sapm
+      - otlphttp
diff --git a/metrics-and-metadata/relatedcontent-collector-apm.rst b/metrics-and-metadata/relatedcontent-collector-apm.rst
index d314679e3..9accc4da2 100644
--- a/metrics-and-metadata/relatedcontent-collector-apm.rst
+++ b/metrics-and-metadata/relatedcontent-collector-apm.rst
@@ -96,7 +96,7 @@ Here are the relevant config snippets from each section:
   exporters:
     # Traces
-    sapm:
-      access_token: "${SPLUNK_ACCESS_TOKEN}"
-      endpoint: "${SPLUNK_TRACE_URL}"
+    otlphttp:
+      traces_endpoint: "${SPLUNK_TRACE_URL}"
+      headers:
+        X-SF-Token: "${SPLUNK_ACCESS_TOKEN}"
     # Metrics + Events + APM correlation calls
@@ -113,7 +113,7 @@ Here are the relevant config snippets from each section:
     traces:
       receivers: [jaeger, zipkin]
       processors: [memory_limiter, batch, resourcedetection, resource/add_environment]
-      exporters: [sapm, signalfx]
+      exporters: [otlphttp, signalfx]
     metrics:
       receivers: [hostmetrics]
       processors: [memory_limiter, batch, resourcedetection]
@@ -257,9 +257,9 @@ Here are the relevant config snippets from each section:
   exporters:
     # Traces
-    sapm:
-      access_token: "${SPLUNK_ACCESS_TOKEN}"
-      endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace"
+    otlphttp:
+      traces_endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace/otlp"
+      headers:
+        X-SF-Token: "${SPLUNK_ACCESS_TOKEN}"
     # Metrics + Events
     signalfx:
       access_token: "${SPLUNK_ACCESS_TOKEN}"
@@ -273,7 +273,7 @@ Here are the relevant config snippets from each section:
       processors:
       - memory_limiter
       - batch
-      exporters: [sapm]
+      exporters: [otlphttp]
     metrics:
       receivers: [otlp]
       processors: [memory_limiter, batch]
@@ -293,9 +293,9 @@ Configure the agent in gateway mode as follows:
   exporters:
     # Traces
-    sapm:
-      access_token: "${SPLUNK_ACCESS_TOKEN}"
-      endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace"
+    otlphttp:
+      traces_endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace/otlp"
+      headers:
+        X-SF-Token: "${SPLUNK_ACCESS_TOKEN}"
     # Metrics + Events
     signalfx:
       access_token: "${SPLUNK_ACCESS_TOKEN}"
@@ -311,7 +311,7 @@ Configure the agent in gateway mode as follows:
       processors:
       - memory_limiter
      - batch
-      exporters: [sapm]
+      exporters: [otlphttp]
     metrics:
       receivers: [signalfx]
       processors: [memory_limiter, batch]