diff --git a/SUMMARY.md b/SUMMARY.md
index 2d6f3b6fd..ba0fe421c 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -1,20 +1,20 @@
 # Table of contents

-* [Fluent Bit Documentation](README.md)
+* [Fluent Bit documentation](README.md)

 ## About

 * [What is Fluent Bit?](about/what-is-fluent-bit.md)
-* [A Brief History of Fluent Bit](about/history.md)
+* [A brief history of Fluent Bit](about/history.md)
 * [Fluentd and Fluent Bit](about/fluentd-and-fluent-bit.md)
 * [License](about/license.md)
-* [Sandbox and Lab Resources](about/sandbox-and-lab-resources.md)
+* [Sandbox and lab resources](about/sandbox-and-lab-resources.md)

 ## Concepts

-* [Key Concepts](concepts/key-concepts.md)
+* [Key concepts](concepts/key-concepts.md)
 * [Buffering](concepts/buffering.md)
-* [Data Pipeline](concepts/data-pipeline/README.md)
+* [Data pipeline](concepts/data-pipeline/README.md)
 * [Input](concepts/data-pipeline/input.md)
 * [Parser](concepts/data-pipeline/parser.md)
 * [Filter](concepts/data-pipeline/filter.md)
diff --git a/about/fluentd-and-fluent-bit.md b/about/fluentd-and-fluent-bit.md
index 9dd73b1ea..a12d10f49 100644
--- a/about/fluentd-and-fluent-bit.md
+++ b/about/fluentd-and-fluent-bit.md
@@ -4,42 +4,28 @@ description: The production grade telemetry ecosystem

 # Fluentd and Fluent Bit

-Telemetry data processing can be complex, especially at scale. That's why
-[Fluentd](https://www.fluentd.org) was created. Fluentd is more than a simple tool,
-it's grown into a fullscale ecosystem that contains SDKs for different languages
-and subprojects like [Fluent Bit](https://fluentbit.io).
+Telemetry data processing can be complex, especially at scale. That's why [Fluentd](https://www.fluentd.org) was created. Fluentd is more than a simple tool: it has grown into a full-scale ecosystem that contains SDKs for different languages and subprojects like [Fluent Bit](https://fluentbit.io).

-Here, we describe the relationship between the [Fluentd](http://fluentd.org)
-and [Fluent Bit](http://fluentbit.io) open source projects.
+This page describes the relationship between the [Fluentd](http://fluentd.org) and [Fluent Bit](http://fluentbit.io) open source projects.

 Both projects are:

-- Licensed under the terms of Apache License v2.0.
-- Graduated hosted projects by the [Cloud Native Computing Foundation (CNCF)](https://cncf.io).
-- Production grade solutions: Deployed millions of times every single day.
-- Vendor neutral and community driven.
-- Widely adopted by the industry: Trusted by major companies like AWS, Microsoft,
-  Google Cloud, and hundreds of others.
+- Licensed under the terms of Apache License v2.0.
+- Graduated projects hosted by the [Cloud Native Computing Foundation (CNCF)](https://cncf.io).
+- Production-grade solutions, deployed millions of times every single day.
+- Vendor neutral and community driven.
+- Widely adopted by the industry and trusted by major companies like AWS, Microsoft, Google Cloud, and hundreds of others.

-The projects have many similarities: [Fluent Bit](https://fluentbit.io) is
-designed and built on top of the best ideas of [Fluentd](https://www.fluentd.org)
-architecture and general design. Which one you choose depends on your end-users' needs.
+The projects have many similarities: [Fluent Bit](https://fluentbit.io) is designed and built on top of the best ideas of the [Fluentd](https://www.fluentd.org) architecture and general design. Which one you choose depends on your end users' needs.

 The following table describes a comparison of different areas of the projects:

 | Attribute | Fluentd | Fluent Bit |
 | ------------ | --------------------- | --------------------- |
 | Scope | Containers / Servers | Embedded Linux / Containers / Servers |
-| Language | C & Ruby | C |
+| Language | C and Ruby | C |
 | Memory | Greater than 60 MB | Approximately 1 MB |
 | Performance | Medium Performance | High Performance |
 | Dependencies | Built as a Ruby Gem, depends on other gems. | Zero dependencies, unless required by a plugin. |
 | Plugins | Over 1,000 external plugins available. | Over 100 built-in plugins available. |
 | License | [Apache License v2.0](http://www.apache.org/licenses/LICENSE-2.0) | [Apache License v2.0](http://www.apache.org/licenses/LICENSE-2.0) |

-Both [Fluentd](https://www.fluentd.org) and [Fluent Bit](https://fluentbit.io)
-can work as Aggregators or Forwarders, and can complement each other or be used
-as standalone solutions.
+Both [Fluentd](https://www.fluentd.org) and [Fluent Bit](https://fluentbit.io) can work as Aggregators or Forwarders, and can complement each other or be used as standalone solutions.

-In the recent years, cloud providers have switched from Fluentd to Fluent Bit for
-performance and compatibility. Fluent Bit is now considered the next-generation solution.
+In recent years, cloud providers have switched from Fluentd to Fluent Bit for performance and compatibility reasons. Fluent Bit is now considered the next-generation solution.
diff --git a/about/history.md b/about/history.md
index 0e71d209f..5b8cd6b99 100644
--- a/about/history.md
+++ b/about/history.md
@@ -5,16 +5,6 @@ description: Every project has a story

 # A brief history of Fluent Bit

-In 2014, the [Fluentd](https://www.fluentd.org/) team at
-[Treasure Data](https://www.treasuredata.com/) was forecasting the need for a
-lightweight log processor for constraint environments like embedded Linux and
-gateways. The project aimed to be part of the Fluentd ecosystem. At that moment,
-Eduardo Silva created [Fluent Bit](https://fluentbit.io/), a new open source solution,
-written from scratch and available under the terms of the
-[Apache License v2.0](http://www.apache.org/licenses/LICENSE-2.0).
+In 2014, the [Fluentd](https://www.fluentd.org/) team at [Treasure Data](https://www.treasuredata.com/) was forecasting the need for a lightweight log processor for constrained environments like embedded Linux and gateways. The project aimed to be part of the Fluentd ecosystem. To address this need, Eduardo Silva created [Fluent Bit](https://fluentbit.io/), a new open source solution, written from scratch and available under the terms of the [Apache License v2.0](http://www.apache.org/licenses/LICENSE-2.0).

-After the project matured, it gained traction for normal Linux systems. With the
-new containerized world, the Cloud Native community asked to extend the
-project scope to support more sources, filters, and destinations. Not long after,
-Fluent Bit became one of the preferred solutions to solve the logging challenges
-in Cloud environments.
+After the project matured, it gained traction for normal Linux systems. With the new containerized world, the Cloud Native community asked to extend the project scope to support more sources, filters, and destinations. Not long after, Fluent Bit became one of the preferred solutions to solve the logging challenges in Cloud environments.
diff --git a/about/license.md b/about/license.md
index 625714e45..3737a1e4a 100644
--- a/about/license.md
+++ b/about/license.md
@@ -5,9 +5,7 @@ description: Fluent Bit license description

 # License

-[Fluent Bit](http://fluentbit.io), including its core, plugins, and tools are
-distributed under the terms of the
-[Apache License v2.0](http://www.apache.org/licenses/LICENSE-2.0):
+[Fluent Bit](http://fluentbit.io), including its core, plugins, and tools, is distributed under the terms of the [Apache License v2.0](http://www.apache.org/licenses/LICENSE-2.0):

 ```text
 Apache License
diff --git a/about/sandbox-and-lab-resources.md b/about/sandbox-and-lab-resources.md
index fe863a9ad..f9571d2c8 100644
--- a/about/sandbox-and-lab-resources.md
+++ b/about/sandbox-and-lab-resources.md
@@ -4,7 +4,7 @@ description: >-
   Labs for learning how to best operate, use, and have success with Fluent Bit.
 ---

-# Sandbox and Lab Resources
+# Sandbox and lab resources

 ## Fluent Bit Sandbox - sign-up required
diff --git a/about/what-is-fluent-bit.md b/about/what-is-fluent-bit.md
index e0b4fd9d2..9db593a79 100644
--- a/about/what-is-fluent-bit.md
+++ b/about/what-is-fluent-bit.md
@@ -4,24 +4,11 @@ description: Fluent Bit is a CNCF sub-project under the umbrella of Fluentd

 # What is Fluent Bit?

-[Fluent Bit](https://fluentbit.io) is an open source telemetry agent specifically
-designed to efficiently handle the challenges of collecting and processing telemetry
-data across a wide range of environments, from constrained systems to complex cloud
-infrastructures. Managing telemetry data from various sources and formats can be a
-constant challenge, particularly when performance is a critical factor.
+[Fluent Bit](https://fluentbit.io) is an open source telemetry agent specifically designed to efficiently handle the challenges of collecting and processing telemetry data across a wide range of environments, from constrained systems to complex cloud infrastructures. Managing telemetry data from various sources and formats can be a constant challenge, particularly when performance is a critical factor.

-Rather than serving as a drop-in replacement, Fluent Bit enhances the observability
-strategy for your infrastructure by adapting and optimizing your existing logging
-layer, and adding metrics and traces processing. Fluent Bit supports a
-vendor-neutral approach, seamlessly integrating with other ecosystems such as
-Prometheus and OpenTelemetry. Trusted by major cloud providers, banks, and companies
-in need of a ready-to-use telemetry agent solution, Fluent Bit effectively manages
-diverse data sources and formats while maintaining optimal performance and keeping
-resource consumption low.
+Rather than serving as a drop-in replacement, Fluent Bit enhances the observability strategy for your infrastructure by adapting and optimizing your existing logging layer, and adding metrics and traces processing. Fluent Bit supports a vendor-neutral approach, seamlessly integrating with other ecosystems such as Prometheus and OpenTelemetry. Trusted by major cloud providers, banks, and companies in need of a ready-to-use telemetry agent solution, Fluent Bit effectively manages diverse data sources and formats while maintaining optimal performance and keeping resource consumption low.

-Fluent Bit can be deployed as an edge agent for localized telemetry data handling or
-utilized as a central aggregator/collector for managing telemetry data across
-multiple sources and environments.
+Fluent Bit can be deployed as an edge agent for localized telemetry data handling, or used as a central aggregator/collector for managing telemetry data across multiple sources and environments.

 {% embed url="https://www.youtube.com/watch?v=3ELc1helke4" %}
diff --git a/concepts/buffering.md b/concepts/buffering.md
index a45a4e963..8f0fc8234 100644
--- a/concepts/buffering.md
+++ b/concepts/buffering.md
@@ -4,29 +4,14 @@ description: Performance and data safety

 # Buffering

-When [Fluent Bit](https://fluentbit.io) processes data, it uses the system memory
-(heap) as a primary and temporary place to store the record logs before they get
-delivered. The records are processed in this private memory area.
+When [Fluent Bit](https://fluentbit.io) processes data, it uses the system memory (heap) as a primary and temporary place to store the record logs before they get delivered. The records are processed in this private memory area.

-Buffering is the ability to store the records, and continue storing incoming data
-while previous data is processed and delivered. Buffering in memory is the fastest
-mechanism, but there are scenarios requiring special strategies to deal with
-[backpressure](../administration/backpressure.md), data safety, or to reduce memory
-consumption by the service in constrained environments.
+Buffering is the ability to store the records, and continue storing incoming data while previous data is processed and delivered. Buffering in memory is the fastest mechanism, but there are scenarios requiring special strategies to deal with [backpressure](../administration/backpressure.md), data safety, or to reduce memory consumption by the service in constrained environments.

-Network failures or latency in third party service is common. When data can't be
-delivered fast enough and new data to process arrives, the system can face
-backpressure.
+Network failures or latency in third-party services are common. When data can't be delivered fast enough and new data to process arrives, the system can face backpressure.

-Fluent Bit buffering strategies are designed to solve problems associated with
-backpressure and general delivery failures. Fluent Bit offers a primary buffering
-mechanism in memory and an optional secondary one using the file system. With
-this hybrid solution you can accommodate any use case safely and keep a high
-performance while processing your data.
+Fluent Bit buffering strategies are designed to solve problems associated with backpressure and general delivery failures. Fluent Bit offers a primary buffering mechanism in memory and an optional secondary one using the file system. With this hybrid solution you can accommodate any use case safely and keep high performance while processing your data.

-These mechanisms aren't mutually exclusive. When data is ready to be processed or
-delivered it's always be in memory, while other data in the queue might be in
-the file system until is ready to be processed and moved up to memory.
+These mechanisms aren't mutually exclusive. When data is ready to be processed or delivered, it's always in memory, while other data in the queue might remain in the file system until it's ready to be processed and moved up to memory.
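+
+The following is a minimal sketch of how the two mechanisms can be combined in a configuration. The `tail` input, its path, and the `storage.path` location are assumptions made for this example, not defaults:
+
+```yaml
+service:
+  # Directory where chunks buffered in the file system are stored
+  storage.path: /var/lib/fluent-bit/storage
+
+pipeline:
+  inputs:
+    - name: tail
+      path: /var/log/app/*.log
+      # Buffer chunks from this input on disk as well as in memory
+      storage.type: filesystem
+
+  outputs:
+    - name: stdout
+      match: '*'
+```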

-To learn more about the buffering configuration in Fluent Bit, see
-[Buffering & Storage](../administration/buffering-and-storage.md).
+To learn more about the buffering configuration in Fluent Bit, see [Buffering & Storage](../administration/buffering-and-storage.md).
diff --git a/concepts/data-pipeline/README.md b/concepts/data-pipeline/README.md
index 7ae9cdc9a..7ea3ea315 100644
--- a/concepts/data-pipeline/README.md
+++ b/concepts/data-pipeline/README.md
@@ -1,2 +1 @@
-# Data Pipeline
-
+# Data pipeline
diff --git a/concepts/data-pipeline/buffer.md b/concepts/data-pipeline/buffer.md
index 9d0f02785..e347985b6 100644
--- a/concepts/data-pipeline/buffer.md
+++ b/concepts/data-pipeline/buffer.md
@@ -4,12 +4,9 @@ description: Data processing with reliability

 # Buffer

-The [`buffer`](../buffering.md) phase in the pipeline aims to provide a unified and
-persistent mechanism to store your data, using the primary in-memory model or the
-file system-based mode.
+The [`buffer`](../buffering.md) phase in the pipeline aims to provide a unified and persistent mechanism to store your data, using the primary in-memory model or the file system-based mode.

-The `buffer` phase contains the data in an immutable state, meaning that no other
-filter can be applied.
+The `buffer` phase contains the data in an immutable state, meaning that no other filter can be applied.

 ```mermaid
 graph LR
@@ -27,5 +24,4 @@ graph LR

 Buffered data uses the Fluent Bit internal binary representation, which isn't raw text.

-Fluent Bit offers a buffering mechanism in the file system that acts as a backup
-system to avoid data loss in case of system failures.
+Fluent Bit offers a buffering mechanism in the file system that acts as a backup system to avoid data loss in case of system failures.
diff --git a/concepts/data-pipeline/filter.md b/concepts/data-pipeline/filter.md
index c6c9eef81..b31d8e2a5 100644
--- a/concepts/data-pipeline/filter.md
+++ b/concepts/data-pipeline/filter.md
@@ -4,8 +4,7 @@ description: Modify, enrich or drop your records

 # Filter

-In production environments you need full control of the data you're collecting.
-Filtering lets you alter the collected data before delivering it to a destination.
+In production environments you need full control of the data you're collecting. Filtering lets you alter the collected data before delivering it to a destination.

 ```mermaid
 graph LR
@@ -21,14 +20,10 @@ graph LR
     style C stroke:darkred,stroke-width:2px;
 ```

-Filtering is implemented through plugins. Each available filter can be used to
-match, exclude, or enrich your logs with specific metadata.
+Filtering is implemented through plugins. Each available filter can be used to match, exclude, or enrich your logs with specific metadata.

-Fluent Bit support many filters. A common use case for filtering is Kubernetes
-deployments. Every pod log needs the proper metadata associated with it.
+Fluent Bit supports many filters. A common use case for filtering is Kubernetes deployments. Every pod log needs the proper metadata associated with it.

-Like input plugins, filters run in an instance context, which has its own independent
-configuration. Configuration keys are often called _properties_.
+Like input plugins, filters run in an instance context, which has its own independent configuration. Configuration keys are often called _properties_.
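+
+As an illustrative sketch, the following configuration uses the `grep` filter to keep only records whose `level` key matches `error`. The `tail` input, the tag, and the `level` key are assumptions made for this example:
+
+```yaml
+pipeline:
+  inputs:
+    - name: tail
+      path: /var/log/app/*.log
+      tag: app.log
+
+  filters:
+    - name: grep
+      match: app.log
+      # Keep only records where the 'level' key matches 'error'
+      regex: level error
+
+  outputs:
+    - name: stdout
+      match: app.log
+```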

-For more details about the Filters available and their usage, see
-[Filters](https://docs.fluentbit.io/manual/pipeline/filters).
+For more details about the Filters available and their usage, see [Filters](https://docs.fluentbit.io/manual/pipeline/filters).
diff --git a/concepts/data-pipeline/input.md b/concepts/data-pipeline/input.md
index 20f73ddcf..86edfaaad 100644
--- a/concepts/data-pipeline/input.md
+++ b/concepts/data-pipeline/input.md
@@ -4,10 +4,7 @@ description: The way to gather data from your sources

 # Input

-[Fluent Bit](http://fluentbit.io) provides input plugins to gather information from
-different sources. Some plugins collect data from log files, while others can
-gather metrics information from the operating system. There are many plugins to suit
-different needs.
+[Fluent Bit](http://fluentbit.io) provides input plugins to gather information from different sources. Some plugins collect data from log files, while others can gather metrics information from the operating system. There are many plugins to suit different needs.

 ```mermaid
 graph LR
@@ -23,10 +20,8 @@ graph LR
     style A stroke:darkred,stroke-width:2px;
 ```

-When an input plugin loads, an internal _instance_ is created. Each instance has its
-own independent configuration. Configuration keys are often called _properties_.
+When an input plugin loads, an internal _instance_ is created. Each instance has its own independent configuration. Configuration keys are often called _properties_.

-Every input plugin has its own documentation section that specifies how to use it
-and what properties are available.
+Every input plugin has its own documentation section that specifies how to use it and what properties are available.

 For more details, see [Input Plugins](https://docs.fluentbit.io/manual/pipeline/inputs).
diff --git a/concepts/data-pipeline/output.md b/concepts/data-pipeline/output.md
index d341a67f0..68a8092ba 100644
--- a/concepts/data-pipeline/output.md
+++ b/concepts/data-pipeline/output.md
@@ -4,9 +4,7 @@ description: Learn about destinations for your data, such as databases and cloud

 # Output

-The output interface lets you define destinations for your data. Common destinations
-are remote services, local file systems, or other standard interfaces. Outputs are
-implemented as plugins.
+The output interface lets you define destinations for your data. Common destinations are remote services, local file systems, or other standard interfaces. Outputs are implemented as plugins.

 ```mermaid
 graph LR
@@ -24,9 +22,7 @@ graph LR
     style H stroke:darkred,stroke-width:2px;
 ```

-When an output plugin is loaded, an internal _instance_ is created. Every instance
-has its own independent configuration. Configuration keys are often called
-_properties_.
+When an output plugin is loaded, an internal _instance_ is created. Every instance has its own independent configuration. Configuration keys are often called _properties_.

 Every output plugin has its own documentation section specifying how it can be used and what properties are available.
diff --git a/concepts/data-pipeline/parser.md b/concepts/data-pipeline/parser.md
index 54b973991..4e5cdbc03 100644
--- a/concepts/data-pipeline/parser.md
+++ b/concepts/data-pipeline/parser.md
@@ -4,9 +4,7 @@ description: Convert unstructured messages to structured messages

 # Parser

-Dealing with raw strings or unstructured messages is difficult. Having a structure
-makes data more usable. Set a structure to the incoming data by using input
-plugins as data is collected:
+Dealing with raw strings or unstructured messages is difficult. Having a structure makes data more usable. Set a structure to the incoming data by using input plugins as data is collected:

 ```mermaid
 graph LR
@@ -22,17 +20,13 @@ graph LR
     style B stroke:darkred,stroke-width:2px;
 ```

-The parser converts unstructured data to structured data. As an example, consider the
-following Apache (HTTP Server) log entry:
+The parser converts unstructured data to structured data. As an example, consider the following Apache (HTTP Server) log entry:

 ```text
 192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395
 ```

-This log line is a raw string without format. Structuring the log makes it easier
-to process the data later. If the
-[regular expression parser](pipeline/parsers/regular-expression) is used, the log
-entry could be converted to:
+This log line is a raw string without format. Structuring the log makes it easier to process the data later. If the [regular expression parser](pipeline/parsers/regular-expression) is used, the log entry could be converted to:

 ```javascript
 {
@@ -47,6 +41,4 @@ entry could be converted to:
 }
 ```
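+
+As a sketch of how a conversion like this can be configured, a simplified regex parser could be defined as follows. This is not the built-in `apache2` parser; the parser name, the regular expression, and the field names are assumptions made for this example:
+
+```yaml
+parsers:
+  - name: simple_apache
+    format: regex
+    # Named capture groups become keys in the structured record
+    regex: '^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+) (?<path>[^ ]*) (?<protocol>\S+)" (?<code>[^ ]*) (?<size>[^ ]*)$'
+    time_key: time
+    time_format: '%d/%b/%Y:%H:%M:%S %z'
+```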

-Parsers are fully configurable and are independently and optionally handled by each
-input plugin. For more details, see
-[Parsers](https://docs.fluentbit.io/manual/pipeline/parsers).
+Parsers are fully configurable and are independently and optionally handled by each input plugin. For more details, see [Parsers](https://docs.fluentbit.io/manual/pipeline/parsers).
diff --git a/concepts/data-pipeline/router.md b/concepts/data-pipeline/router.md
index ccc97e74f..3f3ec7719 100644
--- a/concepts/data-pipeline/router.md
+++ b/concepts/data-pipeline/router.md
@@ -4,9 +4,7 @@ description: Create flexible routing rules

 # Router

-Routing is a core feature that lets you route your data through filters and then to
-one or multiple destinations. The router relies on the concept of
-[Tags](../key-concepts.md) and [Matching](../key-concepts.md) rules.
+Routing is a core feature that lets you route your data through filters and then to one or multiple destinations. The router relies on the concept of [Tags](../key-concepts.md) and [Matching](../key-concepts.md) rules.

 ```mermaid
 graph LR
@@ -27,14 +25,11 @@ There are two important concepts in Routing:
 - Tag
 - Match

-When data is generated by an input plugin, it comes with a `Tag`. A Tag is a
-human-readable indicator that helps to identify the data source. Tags are usually
-configured manually.
+When data is generated by an input plugin, it comes with a `Tag`. A Tag is a human-readable indicator that helps to identify the data source. Tags are usually configured manually.

 To define where to route data, specify a `Match` rule in the output configuration.

-Consider the following configuration example that delivers `CPU` metrics to an
-Elasticsearch database and Memory (`mem`) metrics to the standard output interface:
+Consider the following configuration example that delivers `CPU` metrics to an Elasticsearch database and Memory (`mem`) metrics to the standard output interface:

 {% tabs %}
 {% tab title="fluent-bit.yaml" %}
@@ -44,14 +39,14 @@ pipeline:
   inputs:
     - name: cpu
       tag: my_cpu
-
+
     - name: mem
       tag: my_mem
-
+
   outputs:
     - name: es
       match: my_cpu
-
+
     - name: stdout
       match: my_mem
 ```
@@ -81,13 +76,11 @@ pipeline:
 {% endtab %}
 {% endtabs %}

-Routing reads the `Input` `Tag` and the `Output` `Match` rules. If data has a `Tag`
-that doesn't match at routing time, the data is deleted.
+Routing reads the `Input` `Tag` and the `Output` `Match` rules. If data has a `Tag` that doesn't match at routing time, the data is deleted.

 ## Routing with Wildcard

-Routing is flexible enough to support wildcards in the `Match` pattern. The following
-example defines a common destination for both sources of data:
+Routing is flexible enough to support wildcards in the `Match` pattern. The following example defines a common destination for both sources of data:

 {% tabs %}
 {% tab title="fluent-bit.yaml" %}
@@ -100,7 +93,7 @@ pipeline:
     - name: mem
       tag: my_mem
-
+
   outputs:
     - name: stdout
       match: 'my_*'
@@ -131,9 +124,7 @@ The match rule is set to `my_*`, which matches any Tag starting with `my_`.

 ## Routing with Regex

-Routing also provides support for regular expressions with the `Match_Regex` pattern,
-allowing for more complex and precise matching criteria. The following example
-demonstrates how to route data from sources based on a regular expression:
+Routing also provides support for regular expressions with the `Match_Regex` pattern, allowing for more complex and precise matching criteria. The following example demonstrates how to route data from sources based on a regular expression:

 {% tabs %}
 {% tab title="fluent-bit.yaml" %}
@@ -146,7 +137,7 @@ pipeline:
     - name: humidity_sensor
       tag: humid_sensor_B
-
+
   outputs:
     - name: stdout
       match: '.*_sensor_[AB]'
@@ -173,7 +164,4 @@ pipeline:
 {% endtab %}
 {% endtabs %}

-In this configuration, the `Match_regex` rule is set to `.*_sensor_[AB]`. This
-regular expression matches any `Tag` that ends with `_sensor_A` or `_sensor_B`,
-regardless of what precedes it. This approach provides a more flexible and powerful
-way to handle different source tags with a single routing rule.
\ No newline at end of file
+In this configuration, the `Match_regex` rule is set to `.*_sensor_[AB]`. This regular expression matches any `Tag` that ends with `_sensor_A` or `_sensor_B`, regardless of what precedes it. This approach provides a more flexible and powerful way to handle different source tags with a single routing rule.
diff --git a/concepts/key-concepts.md b/concepts/key-concepts.md
index fc9841f30..ea90c51de 100644
--- a/concepts/key-concepts.md
+++ b/concepts/key-concepts.md
@@ -4,11 +4,7 @@ description: Learn these key concepts to understand how Fluent Bit operates.

 # Key concepts

-Before diving into [Fluent Bit](https://fluentbit.io) you might want to get acquainted
-with some of the key concepts of the service. This document provides an
-introduction to those concepts and common [Fluent Bit](https://fluentbit.io)
-terminology. Reading this document will help you gain a more general understanding of the
-following topics:
+Before diving into [Fluent Bit](https://fluentbit.io), you might want to get acquainted with some of the key concepts of the service. This document provides an introduction to those concepts and common [Fluent Bit](https://fluentbit.io) terminology. Reading this document will help you gain a more general understanding of the following topics:

 - Event or Record
 - Filtering
@@ -19,8 +15,7 @@ following topics:

 ## Event or Record

-Every incoming piece of data that belongs to a log or a metric that's retrieved by
-Fluent Bit is considered an _Event_ or a _Record_.
+Every incoming piece of data that belongs to a log or a metric that's retrieved by Fluent Bit is considered an _Event_ or a _Record_.

 As an example, consider the following content of a Syslog file:

@@ -41,8 +36,7 @@ An Event is comprised of:

 ### Event format

-The Fluent Bit wire protocol represents an Event as a two-element array
-with a nested array as the first element:
+The Fluent Bit wire protocol represents an Event as a two-element array with a nested array as the first element:

 ```javascript copy
 [[TIMESTAMP, METADATA], MESSAGE]
 ```

 where

-- _`TIMESTAMP`_ is a timestamp in seconds as an integer or floating point value
-  (not a string).
+- _`TIMESTAMP`_ is a timestamp in seconds as an integer or floating point value (not a string).
 - _`METADATA`_ is an object containing event metadata, and might be empty.
 - _`MESSAGE`_ is an object containing the event body.

@@ -61,13 +54,11 @@ Fluent Bit versions prior to v2.1.0 used:
 [TIMESTAMP, MESSAGE]
 ```

-to represent events. This format is still supported for reading input event
-streams.
+to represent events. This format is still supported for reading input event streams.

 ## Filtering

-You might need to perform modifications on an Event's content. The process to alter,
-append to, or drop Events is called [_filtering_](data-pipeline/filter.md).
+You might need to perform modifications on an Event's content. The process to alter, append to, or drop Events is called [_filtering_](data-pipeline/filter.md).

 Use filtering to:

@@ -77,29 +68,19 @@ Use filtering to:

 ## Tag

-Every Event ingested by Fluent Bit is assigned a Tag. This tag is an internal string
-used in a later stage by the Router to decide which Filter or
-[Output](data-pipeline/output.md) phase it must go through.
+Every Event ingested by Fluent Bit is assigned a Tag. This tag is an internal string used in a later stage by the Router to decide which Filter or [Output](data-pipeline/output.md) phase it must go through.

-Most tags are assigned manually in the configuration. If a tag isn't specified,
-Fluent Bit assigns the name of the [Input](data-pipeline/input.md) plugin
-instance where that Event was generated from.
+Most tags are assigned manually in the configuration. If a tag isn't specified, Fluent Bit assigns the name of the [Input](data-pipeline/input.md) plugin instance where that Event was generated from.

 {% hint style="info" %}
-The [Forward](../pipeline/inputs/forward.md) input plugin doesn't assign tags. This
-plugin speaks the Fluentd wire protocol called Forward where every Event already
-comes with a Tag associated. Fluent Bit will always use the incoming Tag set by the
-client.
+The [Forward](../pipeline/inputs/forward.md) input plugin doesn't assign tags. This plugin speaks the Fluentd wire protocol called Forward, where every Event already comes with an associated Tag. Fluent Bit will always use the incoming Tag set by the client.
 {% endhint %}

-A tagged record must always have a Matching rule. To learn more about Tags and
-Matches, see [Routing](data-pipeline/router.md).
+A tagged record must always have a Matching rule. To learn more about Tags and Matches, see [Routing](data-pipeline/router.md).

 ## Timestamp

-The timestamp represents the time an Event was created. Every Event contains an
-associated timestamps. All events have timestamps, and they're set by the input plugin or
-discovered through a data parsing process.
+The timestamp represents the time an Event was created. Every Event contains an associated timestamp, which is set by the input plugin or discovered through a data parsing process.

 The timestamp is a numeric fractional integer in the format:

@@ -114,17 +95,13 @@ where:

 ## Match

-Fluent Bit lets you route your collected and processed Events to one or multiple
-destinations. A _Match_ represents a rule to select Events
-where a Tag matches a defined rule.
+Fluent Bit lets you route your collected and processed Events to one or multiple destinations. A _Match_ represents a rule to select Events where a Tag matches a defined rule.

 To learn more about Tags and Matches, see [Routing](data-pipeline/router.md).

 ## Structured messages

-Source events can have a structure. A structure defines a set of `keys` and `values`
-inside the Event message to implement faster operations on data modifications.
-Fluent Bit treats every Event message as a structured message.
+Source events can have a structure. A structure defines a set of `keys` and `values` inside the Event message to implement faster operations on data modifications. Fluent Bit treats every Event message as a structured message.

 Consider the following two messages:

@@ -140,5 +117,4 @@ Consider the following two messages:
 {"project": "Fluent Bit", "created": 1398289291}
 ```

-For performance reasons, Fluent Bit uses a binary serialization data format called
-[MessagePack](https://msgpack.org/).
+For performance reasons, Fluent Bit uses a binary serialization data format called [MessagePack](https://msgpack.org/).
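+
+As an illustration that ties this back to the event format described earlier, the structured message above could be represented internally as an Event similar to the following. The timestamp value and the empty metadata object are assumptions made for this example:
+
+```javascript
+[[1398289291.000000000, {}], {"project": "Fluent Bit", "created": 1398289291}]
+```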