26 changes: 13 additions & 13 deletions administration/configuring-fluent-bit/yaml/pipeline-section.md
@@ -1,17 +1,17 @@
# Pipeline Section
# Pipeline section

The `pipeline` section defines the flow of how data is collected, processed, and sent to its final destination. It encompasses the following core concepts:

| Name | Description |
|---|---|
| ---- | ----------- |
| `inputs` | Specifies the name of the plugin responsible for collecting or receiving data. This component serves as the data source in the pipeline. Examples of input plugins include `tail`, `http`, and `random`. |
| `processors` | **Unique to YAML configuration**, processors are specialized plugins that handle data processing directly attached to input plugins. Unlike filters, processors are not dependent on tag or matching rules. Instead, they work closely with the input to modify or enrich the data before it reaches the filtering or output stages. Processors are defined within an input plugin section. |
| `processors` | **Unique to YAML configuration**, processors are specialized plugins that handle data processing directly attached to input plugins. Unlike filters, processors aren't dependent on tag or matching rules. Instead, they work closely with the input to modify or enrich the data before it reaches the filtering or output stages. Processors are defined within an input plugin section. |
| `filters` | Filters are used to transform, enrich, or discard events based on specific criteria. They allow matching tags using strings or regular expressions, providing a more flexible way to manipulate data. Filters run as part of the main event loop and can be applied across multiple inputs and filters. Examples of filters include `modify`, `grep`, and `nest`. |
| `outputs` | Defines the destination for processed data. Outputs specify where the data will be sent, such as to a remote server, a file, or another service. Each output plugin is configured with matching rules to determine which events are sent to that destination. Common output plugins include `stdout`, `elasticsearch`, and `kafka`. |

## Example Configuration
## Example configuration

Here's a simple example of a pipeline configuration:
Here's an example of a pipeline configuration:

```yaml
pipeline:
@@ -33,13 +33,13 @@ pipeline:
match: '*'
```
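
For reference, a complete, minimal pipeline section could look like the following sketch. The `dummy` input, `grep` filter, and `stdout` output are illustrative choices, not necessarily the plugins used in the example above:

```yaml
pipeline:
  inputs:
    - name: dummy                  # emits synthetic JSON records, useful for testing
      dummy: '{"message": "hello"}'
      tag: app.logs
  filters:
    - name: grep                   # keep only records whose "message" matches "hello"
      match: 'app.*'
      regex: message hello
  outputs:
    - name: stdout                 # print the surviving records to standard output
      match: '*'
```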

## Pipeline Processors
## Pipeline processors

Processors operate on specific signals such as logs, metrics, and traces. They are attached to an input plugin and must specify the signal type they will process.
Processors operate on specific signals such as logs, metrics, and traces. They're attached to an input plugin and must specify the signal type they will process.

### Example of a Processor

In the example below, the content_modifier processor inserts or updates (upserts) the key my_new_key with the value 123 for all log records generated by the tail plugin. This processor is only applied to log signals:
In the following example, the `content_modifier` processor inserts or updates (upserts) the key `my_new_key` with the value `123` for all log records generated by the tail plugin. This processor is only applied to log signals:

```yaml
parsers:
@@ -110,9 +110,9 @@ pipeline:
end
```
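
A condensed sketch of that configuration follows. The `tail` path is a placeholder; the important part is that the processor list is declared under the input and keyed by signal type (`logs`):

```yaml
pipeline:
  inputs:
    - name: tail
      path: /var/log/example.log   # placeholder path
      processors:
        logs:                      # this processor list applies to log signals only
          - name: content_modifier
            action: upsert         # insert the key, or update it if it already exists
            key: my_new_key
            value: "123"
  outputs:
    - name: stdout
      match: '*'
```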

You might noticed that processors not only can be attached to input, but also to an output.
Processors can be attached to inputs and outputs.
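
For instance, the same kind of `processors` block can sit under an output plugin; the `stdout` output and the inserted key and value below are illustrative:

```yaml
pipeline:
  inputs:
    - name: dummy
      dummy: '{"message": "hello"}'
  outputs:
    - name: stdout
      match: '*'
      processors:
        logs:
          - name: content_modifier
            action: insert         # add a marker key to every record before it is emitted
            key: processed_by
            value: output_processor
```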

### How Are Processors Different from Filters?
### How Processors are different from Filters
Contributor

Should Processors and Filters be capitalized here? I don't think they should, since they're not capitalized in the following line in this section.

Contributor Author

I decided figuring out if they were proper nouns was not what this cleanup was covering right now. I think that needs a bigger picture review.


While processors and filters are similar in that they can transform, enrich, or drop data from the pipeline, there is a significant difference in how they operate:

@@ -122,11 +122,11 @@

## Running Filters as Processors

You can configure existing [Filters](https://docs.fluentbit.io/manual/pipeline/filters) to run as processors. There are no specific changes needed; you simply use the filter name as if it were a native processor.
You can configure existing [Filters](https://docs.fluentbit.io/manual/pipeline/filters) to run as processors. There are no specific changes needed; you use the filter name as if it were a native processor.

### Example of a Filter Running as a Processor
### Example of a Filter running as a Processor

In the example below, the grep filter is used as a processor to filter log events based on a pattern:
In the following example, the `grep` filter is used as a processor to filter log events based on a pattern:

```yaml
parsers:
```
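
A minimal sketch of the idea, using the `grep` filter as a processor under the `logs` signal; the input path and the regex pattern are illustrative:

```yaml
pipeline:
  inputs:
    - name: tail
      path: /var/log/example.log   # placeholder path
      processors:
        logs:
          - name: grep             # the filter name is used directly as a processor
            regex: message error   # keep only records whose "message" matches "error"
  outputs:
    - name: stdout
      match: '*'
```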