This repository was archived by the owner on Sep 2, 2025. It is now read-only.

Commit 1904fa0

Merge pull request #2257 from splunk/urbiz-OD6448-pipelines
[6448]: Pipeline updates
2 parents a3feba2 + 6550985 commit 1904fa0

File tree

3 files changed

+41
-40
lines changed


gdi/opentelemetry/data-processing.rst

Lines changed: 31 additions & 38 deletions
@@ -5,9 +5,9 @@ Process your data with pipelines
 *********************************************************************

 .. meta::
-   :description: Learn how to process data collected with the Splunk Distribution of OpenTelemetry Collector.
+   :description: Learn how to process data collected with the Splunk Distribution of the OpenTelemetry Collector.

-A pipeline defines the path the ingested data follows in the Collector, starting from reception, then further processing or modification, and finally when data exits the Collector through exporters.
+Use pipelines in your Collector's config file to define the path you want your ingested data to follow. Specify which components you want to use, starting from data reception using :ref:`receivers <otel-components-receivers>`, then data processing or modification with :ref:`processors <otel-components-processors>`, until data finally exits the Collector through :ref:`exporters <otel-components-exporters>`. For an overview of all available components and their behavior, refer to :ref:`otel-components`.

 Pipelines operate on three data types: logs, traces, and metrics. To learn more about data in Splunk Observability Cloud, see :ref:`data-model`.
@@ -16,13 +16,36 @@ Pipelines operate on three data types: logs, traces, and metrics. To learn more
 Define the pipeline
 =========================================

-The pipeline is constructed during Collector startup based on the pipeline definition. See :ref:`otel-components` to understand the behavior of each component.
+The pipeline is constructed during Collector startup based on your Collector's config file.

-To define the pipeline, first you need to specify a data type in your pipeline configuration. All the receivers, exporters, and processors you use in a pipeline must support the particular data type, otherwise you'll get the ``ErrDataTypeIsNotSupported`` error message when the configuration is loaded.
+See more at:

-A pipeline can contain one or more receivers. Data from all receivers is pushed to the first processor, which performs processing on it and then pushes it to the next processor and so on until the last processor in the pipeline pushes the data to the exporters. Each exporter gets a copy of each data element. The last processor uses a data fan-out connector to fan out (distribute) the data to multiple exporters.
+* :ref:`Collector for Kubernetes <collector-kubernetes-intro>`
+* :ref:`Collector for Linux <collector-linux-intro>`
+* :ref:`Collector for Windows <collector-windows-intro>`

-You can also use connectors to connect two pipelines: it consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. It may consume and emit data of the same data type, or of different data types. A connector may generate and emit data to summarize the consumed data, or it may simply replicate or route data. Learn more at:ref:`otel-components-connectors`.
+The following applies:
+
+* You need to specify a data type in your pipeline configuration. All the receivers, exporters, and processors you use in a pipeline must support that data type, otherwise you'll get the ``ErrDataTypeIsNotSupported`` error message when the configuration is loaded.
+
+* A pipeline can contain one or more receivers.
+
+* Data from all receivers is pushed to the first processor, which processes it and then pushes it to the next processor, and so on, until the last processor in the pipeline uses a data fan-out connector to fan out (distribute) the data to multiple exporters.
+
+* Note that some types of processor "mutate" (modify) data before they pass it on to the next processor.
+
+* If a pipeline uses more than one exporter, each exporter receives a copy of each data element from the last processor.
+
+* If an exporter fails, the remaining exporters continue to work independently.
+
+* You can configure exporters to "mutate" (modify) the data they receive. This option is not enabled in the Splunk Distribution of the OpenTelemetry Collector.
+
+Connect pipelines with connectors
+--------------------------------------------------------------------
+
+You can use connectors to connect two pipelines. A connector consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. It can consume and emit data of the same data type, or of different data types. Use connectors to generate and emit data that summarizes the data you've already consumed, or to simply replicate or route data.
+
+Learn more at :ref:`otel-components-connectors`.
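The connector wiring described above can be sketched as a minimal config fragment. Note this sketch is illustrative and not part of this commit: the ``count`` connector and the ``otlp`` and ``signalfx`` component names are assumed examples.

```yaml
# Sketch: a connector acts as an exporter in one pipeline
# and as a receiver in another (component names are illustrative).
connectors:
  count:

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [count]       # connector consumes traces at the end of this pipeline
    metrics/spancount:
      receivers: [count]       # connector emits metrics at the start of this pipeline
      exporters: [signalfx]
```

Here ``count`` summarizes the consumed spans as a count metric, which the ``metrics/spancount`` pipeline then exports.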

 Example of a pipeline configuration
 --------------------------------------------------------------------
@@ -40,7 +63,7 @@ A pipeline configuration typically looks like this:
       processors: [memory_limiter, batch]
       exporters: [otlp, splunk_hec, jaeger, zipkin]

-This example defines a pipeline for ``traces``, with three receivers, two processors, and four exporters. The following table describes the receivers, processors, and exporters used in this example. For more details, see :ref:`Collector components <otel-components>`.
+This example defines a pipeline for ``traces``, with three receivers, two processors, and four exporters. The following table describes the receivers, processors, and exporters used in this example.

 .. list-table::
    :widths: 25 50 25
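Only part of the example configuration is visible in this hunk. A complete ``service`` section matching the description (three receivers, two processors, four exporters) might look like the following sketch; the receiver names ``otlp``, ``jaeger``, and ``zipkin`` are assumptions for illustration, since the ``receivers`` line isn't shown in the hunk.

```yaml
service:
  pipelines:
    traces:
      receivers: [otlp, jaeger, zipkin]       # assumed names, not visible in the hunk
      processors: [memory_limiter, batch]
      exporters: [otlp, splunk_hec, jaeger, zipkin]
```

Data from all three receivers flows through ``memory_limiter`` and then ``batch``; the last processor fans the data out, so each of the four exporters receives a copy of each data element.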
@@ -80,37 +103,7 @@ This example defines a pipeline for ``traces``, with three receivers, two proces
 Metadata transformations
 ============================================

-Metadata refers to the name/value pair added to telemetry data. OpenTelemetry calls this ``Attributes`` on ``Spans``, ``Labels`` on ``Metrics``, and ``Fields`` on ``Logs``. See :new-page:`Resource SDK <https://github.com/open-telemetry/opentelemetry-specification/blob/49c2f56f3c0468ceb2b69518bcadadd96e0a5a8b/specification/resource/sdk.md>`, :new-page:`Metrics API <https://github.com/open-telemetry/opentelemetry-specification/blob/49c2f56f3c0468ceb2b69518bcadadd96e0a5a8b/specification/metrics/api.md>`, and :new-page:`Trace Semantic Conventions <https://github.com/open-telemetry/opentelemetry-specification/blob/52cc12879e8c2d372c5200c00d4574fa73996369/specification/trace/semantic_conventions/README.md>` in GitHub for additional details.
-
-Attributes
---------------------------
-
-Attributes are a list of zero or more key-value pairs. An attribute must have the following properties:
-
-* The attribute key, which must be a non-null and non-empty string.
-* The attribute value, which is one of these types:
-
-  * A primitive type: string, boolean, double precision floating point (IEEE 754-1985) or signed 64-bit integer.
-  * An array of primitive type values. The array must be homogeneous. That is, it must not contain values of different types. For protocols that do not natively support array values, represent those values as JSON strings.
-
-Attribute values expressing a numerical value of zero, an empty string, or an empty array are considered meaningful and must be stored and passed on to processors or exporters.
-
-Attribute values of ``null`` are not valid and attempting to set a ``null`` value is undefined behavior.
-
-``null`` values are not allowed in arrays. However, if it is impossible to make sure that no ``null`` values are accepted (for example, in languages that do not have appropriate compile-time type checking), ``null`` values within arrays MUST be preserved as-is (that is, passed on to processors or exporters as ``null``). If exporters do not support exporting ``null`` values, you can replace those values by 0, ``false``, or empty strings. Changing these values is required for map and dictionary structures represented as two arrays with indices that are kept in sync (for example, two attributes ``header_keys`` and ``header_values``, both containing an array of strings to represent a mapping ``header_keys[i] -> header_values[i]``).
-
-Labels
------------------------------------------
-
-Labels are name/value pairs added to metric data points. Labels are deprecated from the OpenTelemetry specification. Use attributes instead of labels.
-
-Fields
----------------------------------------
-
-Fields are name/value pairs added to log records. Each record contains two kinds of fields:
-
-* Named top-level fields of specific type and meaning.
-* Fields stored as ``map<string, any>``, which can contain arbitrary values of different types. The keys and values for well-known fields follow semantic conventions for key names and possible values that allow all parties that work with the field to have the same interpretation of the data.
+Metadata refers to the name/value pair added to telemetry data. Learn more at :ref:`otel-tags`.

 .. _pipelines-next:

gdi/opentelemetry/opentelemetry.rst

Lines changed: 1 addition & 1 deletion
@@ -238,7 +238,7 @@ Splunk Observability Cloud offers a guided setup to install the Collector:
       <h3>Advanced install<a name="collector-intro-install" class="headerlink" href="#collector-intro-install" title="Permalink to this headline">¶</a></h3>
    </embed>

-The Splunk distribution of the OpenTelemetry Collector is supported on and packaged for a variety of platforms, including:
+The Splunk Distribution of the OpenTelemetry Collector is supported on and packaged for a variety of platforms, including:

 * :ref:`Collector for Kubernetes <collector-kubernetes-intro>`
 * :ref:`Collector for Linux <collector-linux-intro>`

gdi/opentelemetry/tags.rst

Lines changed: 9 additions & 1 deletion
@@ -5,7 +5,7 @@ Use tags or attributes in OpenTelemetry
 *******************************************************

 .. meta::
-   :description: Add tags to your Splunk Distribution of OpenTelemetry Collector configuration. You can include span tags in settings for the batch processor in your configuration YAML file.
+   :description: Add tags to your Splunk Distribution of the OpenTelemetry Collector configuration. You can include span tags in settings for the batch processor in your configuration YAML file.

 Tags are key-value pairs of data associated with recorded measurements to provide contextual information, distinguish, and group metrics during analysis and inspection.
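As a hedged illustration of adding such a key-value pair, the following sketch uses the ``attributes`` processor to insert an attribute on spans. The processor alias ``attributes/add-env`` and the key-value pair are hypothetical examples, not part of this commit.

```yaml
# Sketch: insert a key-value pair on spans with the attributes processor
# (alias and key-value pair are hypothetical).
processors:
  attributes/add-env:
    actions:
      - key: deployment.environment
        value: production
        action: insert      # adds the attribute only when it doesn't already exist

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [attributes/add-env, batch]
      exporters: [splunk_hec]
```

The ``insert`` action leaves existing values untouched; use ``update`` or ``upsert`` to overwrite them.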

@@ -162,3 +162,11 @@ For example, suppose your application sends in data for a metric named ``custom.

 Splunk Observability Cloud provides a report that allows for management of metrics usage, and you can create rules to drop undesirable dimensions. See more at :ref:`subscription-overview`.

+Learn more
+------------------------------------------------------------
+
+For additional details, see the following resources in GitHub:
+
+* :new-page:`Resource SDK <https://github.com/open-telemetry/opentelemetry-specification/blob/49c2f56f3c0468ceb2b69518bcadadd96e0a5a8b/specification/resource/sdk.md>`
+* :new-page:`Metrics API <https://github.com/open-telemetry/opentelemetry-specification/blob/49c2f56f3c0468ceb2b69518bcadadd96e0a5a8b/specification/metrics/api.md>`
+* :new-page:`Trace Semantic Conventions <https://github.com/open-telemetry/opentelemetry-specification/blob/52cc12879e8c2d372c5200c00d4574fa73996369/specification/trace/semantic_conventions/README.md>`
