diff --git a/SUMMARY.md b/SUMMARY.md
index 5867bbd39..7098f189f 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -172,7 +172,7 @@
   * [Sysinfo](pipeline/filters/sysinfo.md)
   * [Tensorflow](pipeline/filters/tensorflow.md)
   * [Throttle](pipeline/filters/throttle.md)
-  * [Type Converter](pipeline/filters/type-converter.md)
+  * [Type converter](pipeline/filters/type-converter.md)
   * [Wasm](pipeline/filters/wasm.md)
 * [Outputs](pipeline/outputs/README.md)
   * [Amazon CloudWatch](pipeline/outputs/cloudwatch.md)
diff --git a/pipeline/filters/ecs-metadata.md b/pipeline/filters/ecs-metadata.md
index bfafcb2dc..3d10e1903 100644
--- a/pipeline/filters/ecs-metadata.md
+++ b/pipeline/filters/ecs-metadata.md
@@ -10,7 +10,7 @@ The plugin supports the following configuration parameters:
 
 | Key | Description | Default |
 | :--- | :--- | :--- |
-| `Add` | Similar to the `ADD` option in the [modify filter](https://docs.fluentbit.io/manual/pipeline/filters/modify). You can specify it multiple times. It takes two arguments: a `KEY` name and `VALUE`. The value uses Fluent Bit [`record_accessor`](https://docs.fluentbit.io/manual/v/1.5/administration/configuring-fluent-bit/record-accessor) syntax to create a template that uses ECS Metadata values. See the list of supported metadata templating keys. This option allows you to control both the key names for metadata and the format for metadata values. | _none_ |
+| `Add` | Similar to the `ADD` option in the [modify filter](https://docs.fluentbit.io/manual/pipeline/filters/modify). You can specify it multiple times. It takes two arguments: a `KEY` name and `VALUE`. The value uses Fluent Bit [`record_accessor`](https://docs.fluentbit.io/manual/v/1.5/administration/configuring-fluent-bit/record-accessor) syntax to create a template that uses ECS Metadata values. See the list of supported metadata templating keys. This option lets you control both the key names for metadata and the format for metadata values. | _none_ |
 | `ECS_Tag_Prefix` | Similar to the `Kube_Tag_Prefix` option in the [Kubernetes filter](https://docs.fluentbit.io/manual/pipeline/filters/kubernetes) and performs the same function. The full log tag should be prefixed with this string and after the prefix the filter must find the next characters in the tag to be the Docker Container Short ID (the first 12 characters of the full container ID). The filter uses this to identify which container the log came from so it can find which task it's a part of. See the design section for more information. If not specified, it defaults to empty string, meaning that the tag must be prefixed with the 12 character container short ID. If you want to attach cluster metadata to system or OS logs from processes that don't run as part of containers or ECS Tasks, don't set this parameter and enable the `Cluster_Metadata_Only` option | empty string |
 | `Cluster_Metadata_Only` | When enabled, the plugin will only attempt to attach cluster metadata values. Use to attach cluster metadata to system or OS logs from processes that don't run as part of containers or ECS Tasks. | `Off` |
 | `ECS_Meta_Cache_TTL` | The filter builds a hash table in memory mapping each unique container short ID to its metadata. This option sets a max `TTL` for objects in the hash table. You should set this if you have frequent container or task restarts. For example, if your cluster runs short running batch jobs that complete in less than 10 minutes, there is no reason to keep any stored metadata longer than 10 minutes. You would therefore set this parameter to `10m`. | `1h` |
@@ -269,4 +269,4 @@ pipeline:
 ```
 
 {% endtab %}
-{% endtabs %}
\ No newline at end of file
+{% endtabs %}
diff --git a/pipeline/filters/grep.md b/pipeline/filters/grep.md
index 38a9820d3..a80937400 100644
--- a/pipeline/filters/grep.md
+++ b/pipeline/filters/grep.md
@@ -16,9 +16,9 @@ The plugin supports the following configuration parameters:
 | `Exclude` | `KEY REGEX` | Exclude records where the content of `KEY` matches the regular expression. |
 | `Logical_Op` | `Operation` | Specify a logical operator: `AND`, `OR` or `legacy` (default). In `legacy` mode the behaviour is either `AND` or `OR` depending on whether the `grep` is including (uses `AND`) or excluding (uses OR). Available from 2.1 or higher. |
 
-### Record Accessor enabled
+### Record accessor enabled
 
-Enable the [Record Accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md) feature to specify the `KEY`. Use the record accessor to match values against nested values.
+Enable the [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md) feature to specify the `KEY`. Use the record accessor to match values against nested values.
 
 ## Filter records
 
@@ -53,18 +53,18 @@ fluent-bit -i tail -p 'path=lines.txt' -F grep -p 'regex=log aa' -m '*' -o stdou
 ```yaml
 service:
   parsers_file: /path/to/parsers.conf
-  
+
 pipeline:
   inputs:
     - name: tail
       path: lines.txt
       parser: json
-  
+
   filters:
     - name: grep
      match: '*'
      regex: log aa
-  
+
   outputs:
     - name: stdout
      match: '*'
@@ -95,8 +95,7 @@ pipeline:
 
 {% endtab %}
 {% endtabs %}
-
-The filter allows to use multiple rules which are applied in order, you can have many `Regex` and `Exclude` entries as required ([more information](#multiple-conditions)).
+The filter lets you use multiple rules, which are applied in order. You can have as many `Regex` and `Exclude` entries as required ([more information](#multiple-conditions)).
 
 ### Nested fields example
 
@@ -127,8 +126,8 @@ For example, to exclude records that match the nested field `kubernetes.labels.a
 {% tab title="fluent-bit.yaml" %}
 
 ```yaml
-pipeline: 
-  
+pipeline:
+
   filters:
     - name: grep
       match: '*'
@@ -162,7 +161,7 @@ The following example checks for a specific valid value for the key:
 
 ```yaml
 pipeline:
-  
+
   filters:
     # Use Grep to verify the contents of the iot_timestamp value.
     # If the iot_timestamp key does not exist, this will fail
@@ -214,7 +213,7 @@ pipeline:
     - name: dummy
       dummy: '{"endpoint":"localhost", "value":"something"}'
      tag: dummy
-  
+
   filters:
     - name: grep
      match: '*'
@@ -257,4 +256,4 @@ The output looks similar to:
 ```text
 [0] dummy: [1674348410.558341857, {"endpoint"=>"localhost", "value"=>"something"}]
 [0] dummy: [1674348411.546425499, {"endpoint"=>"localhost", "value"=>"something"}]
-```
\ No newline at end of file
+```
diff --git a/pipeline/filters/kubernetes.md b/pipeline/filters/kubernetes.md
index 0e849cbec..9908dfca6 100644
--- a/pipeline/filters/kubernetes.md
+++ b/pipeline/filters/kubernetes.md
@@ -302,7 +302,7 @@ parsers:
     - name: custom-tag
      format: regex
      regex: '^(?[^_]+)\.(?[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)\.(?.+)\.(?[a-z0-9]{64})'
-  
+
 pipeline:
   inputs:
     - name: tail
@@ -560,7 +560,7 @@ Learn how to solve them to ensure that the Fluent Bit Kubernetes filter is opera
 
    If set roles are configured correctly, it should respond with `yes`.
 
-   For instance, using Azure AKS, running the previous command might respond with:
+   For instance, using Azure Kubernetes Service (AKS), running the previous command might respond with:
 
    ```text
    no - Azure does not have opinion for this user.
diff --git a/pipeline/filters/log_to_metrics.md b/pipeline/filters/log_to_metrics.md
index b0a1f0e96..9148f715e 100644
--- a/pipeline/filters/log_to_metrics.md
+++ b/pipeline/filters/log_to_metrics.md
@@ -4,7 +4,7 @@ description: Generate metrics from logs
 
 # Logs to metrics
 
-The _log to metrics_ filter lets you generate log-derived metrics. It supports modes to count records, provide a guage for field values, or create a histogram. You can also match or exclude specific records based on regular expression patterns for values or nested values.
+The _log to metrics_ filter lets you generate log-derived metrics. It supports modes to count records, provide a gauge for field values, or create a histogram. You can also match or exclude specific records based on regular expression patterns for values or nested values.
 
 This filter doesn't actually act as a record filter and therefore doesn't change or drop records. All records will pass through this filter untouched, and any generated metrics will be emitted into a separate metric pipeline.
 
@@ -53,13 +53,13 @@ The following example takes records from two `dummy` inputs and counts all messa
 service:
   flush: 1
   log_level: info
-  
+
 pipeline:
   inputs:
     - name: dummy
       dummy: '{"message":"dummy", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 20, "color": "red", "shape": "circle"}'
       tag: dummy.log
-  
+
     - name: dummy
       dummy: '{"message":"hello", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 60, "color": "blue", "shape": "square"}'
       tag: dummy.log2
@@ -154,13 +154,13 @@ The `gauge` mode needs a `value_field` to specify where to generate the metric v
 service:
   flush: 1
   log_level: info
-  
+
 pipeline:
   inputs:
     - name: dummy
       dummy: '{"message":"dummy", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 20, "color": "red", "shape": "circle"}'
       tag: dummy.log
-  
+
     - name: dummy
       dummy: '{"message":"hello", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 60, "color": "blue", "shape": "square"}'
       tag: dummy.log2
@@ -176,7 +176,7 @@ pipeline:
       kubernetes_mode: on
       regex: 'message .*el.*'
       add_label: app $kubernetes['labels']['app']
-      label_field: 
+      label_field:
        - color
        - shape
 
@@ -218,7 +218,7 @@ pipeline:
    add_label app $kubernetes['labels']['app']
    label_field color
    label_field shape
-    
+
 [OUTPUT]
    name prometheus_exporter
    match *
@@ -278,13 +278,13 @@ Similar to the `gauge` mode, the `histogram` mode needs a `value_field` to speci
 service:
   flush: 1
   log_level: info
-  
+
 pipeline:
   inputs:
     - name: dummy
       dummy: '{"message":"dummy", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 20, "color": "red", "shape": "circle"}'
       tag: dummy.log
-  
+
     - name: dummy
       dummy: '{"message":"hello", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 60, "color": "blue", "shape": "square"}'
       tag: dummy.log2
@@ -342,7 +342,7 @@ pipeline:
    add_label app $kubernetes['labels']['app']
    label_field color
    label_field shape
-    
+
 [OUTPUT]
    name prometheus_exporter
    match *
@@ -417,13 +417,13 @@ In the resulting output, there are several buckets by default: `0.005, 0.01, 0.0
 service:
   flush: 1
   log_level: info
-  
+
 pipeline:
   inputs:
     - name: dummy
       dummy: '{"message":"dummy", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 20, "color": "red", "shape": "circle"}'
       tag: dummy.log
-  
+
     - name: dummy
       dummy: '{"message":"hello", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 60, "color": "blue", "shape": "square"}'
       tag: dummy.log2
@@ -496,7 +496,7 @@ pipeline:
    regex message .*el.*
    label_field color
    label_field shape
-    
+
 [OUTPUT]
    name prometheus_exporter
    match *
diff --git a/pipeline/filters/lua.md b/pipeline/filters/lua.md
index a63f0bd4b..e16e460bc 100644
--- a/pipeline/filters/lua.md
+++ b/pipeline/filters/lua.md
@@ -18,7 +18,7 @@ The plugin supports the following configuration parameters:
 | `type_int_key` | If these keys are matched, the fields are converted to integers. If more than one key, delimit by space. |
 | `type_array_key` | If these keys are matched, the fields are handled as array. If more than one key, delimit by space. The array can be empty. |
 | `protected_mode` | If enabled, the Lua script will be executed in protected mode. It prevents Fluent Bit from crashing when an invalid Lua script is executed or the triggered Lua function throws exceptions. Default value: `true`. |
-| `time_as_table` | By default, when the Lua script is invoked, the record timestamp is passed as a floating number, which might lead to precision loss when it is converted back. If you need timestamp precision, enabling this option will pass the timestamp as a Lua table with keys `sec` for seconds since epoch and `nsec` for nanoseconds. |
+| `time_as_table` | By default, when the Lua script is invoked, the record timestamp is passed as a floating number, which might lead to precision loss when it's converted back. If you need timestamp precision, enabling this option will pass the timestamp as a Lua table with keys `sec` for seconds since epoch and `nsec` for nanoseconds. |
 | `code` | Inline Lua code instead of loading from a path defined in `script`. |
 | `enable_flb_null` | If enabled, `null` will be converted to `flb_null` in Lua. This helps prevent removing key/value since `nil` is a special value to remove key/value from map in Lua. Default value: `false`. |
 
@@ -140,7 +140,7 @@ end
 | ---- | ----------- |
 | `tag` | Name of the tag associated with the incoming record. |
 | `timestamp` | Unix timestamp with nanoseconds associated with the incoming record. |
-| `group` | A read-only table containing group-level metadata (for example, OpenTelemetry resource or scope info). This will be an empty table if the log is not part of a group. |
+| `group` | A read-only table containing group-level metadata (for example, OpenTelemetry resource or scope info). This will be an empty table if the log isn't part of a group. |
 | `metadata` | A table representing the record-specific metadata. You can modify this if needed. |
 | `record` | Lua table with the record content. |
 
@@ -192,7 +192,7 @@ The metadata and record arrays must have the same length.
 
 This example demonstrates processing OpenTelemetry logs with group metadata access:
 
-#### Configuration
+#### Configuration [#configuration-otel]
 
 ```yaml
 pipeline:
@@ -390,6 +390,7 @@ pipeline:
 {% endtabs %}
 
 filters.lua:
+
 ```lua
 -- Use a Lua function to create some additional entries based
 -- on substrings from the kubernetes properties.
@@ -421,7 +422,7 @@ The Lua callback function can return an array of tables (for example, an array o
 
 For example:
 
-#### Lua script
+#### Lua script [#lua-record-split]
 
 ```lua
 function cb_split(tag, timestamp, record)
@@ -433,7 +434,7 @@ function cb_split(tag, timestamp, record)
 end
 ```
 
-#### Configuration
+#### Configuration [#configuration-record-split]
 
 {% tabs %}
 {% tab title="fluent-bit.yaml" %}
 
 ```yaml
@@ -453,6 +454,7 @@ pipeline:
     - name: stdout
      match: '*'
 ```
+
 {% endtab %}
 
 {% tab title="fluent-bit.conf" %}
@@ -474,7 +476,7 @@ pipeline:
 {% endtab %}
 {% endtabs %}
 
-#### Input
+#### Input [#input-record-split]
 
 ```text
 {"x": [ {"a1":"aa", "z1":"zz"}, {"b1":"bb", "x1":"xx"}, {"c1":"cc"} ]}
@@ -482,7 +484,7 @@ pipeline:
 {"a3":"aa", "z3":"zz", "b3":"bb", "x3":"xx", "c3":"cc"}
 ```
 
-#### Output
+#### Output [#output-record-split]
 
 ```text
 [0] stdin.0: [1538435928.310583591, {"a1"=>"aa", "z1"=>"zz"}]
@@ -498,9 +500,9 @@ See also [Fluent Bit: PR 811](https://github.com/fluent/fluent-bit/pull/811).
 
 ### Response code filtering
 
-This example filters Istio logs to exclude lines with a response code between `1` and `399`. Istio is confiured to write logs in JSON format.
+This example filters Istio logs to exclude lines with a response code between `1` and `399`. Istio is configured to write logs in JSON format.
 
-#### Lua script
+#### Lua script [#lua-response-code]
 
 Script `response_code_filter.lua`
 
 ```lua
 function cb_response_code_filter(tag, timestamp, record)
@@ -517,7 +519,7 @@ function cb_response_code_filter(tag, timestamp, record)
 end
 ```
 
-#### Configuration
+#### Configuration [#configuration-response-code]
 
 Configuration to get Istio logs and apply response code filter to them.
 
@@ -571,7 +573,7 @@ pipeline:
 
 {% endtab %}
 {% endtabs %}
 
-#### Input
+#### Input [#input-response-code]
 
 ```json
 {
@@ -606,7 +608,7 @@ pipeline:
 }
 ```
 
-#### Output
+#### Output [#output-response-code]
 
 In the output, only the messages with response code `0` or greater than `399` are shown.
 
@@ -614,7 +616,7 @@ In the output, only the messages with response code `0` or greater than `399` ar
 
 The following example converts a field's specific type of `datetime` format to the UTC ISO 8601 format.
 
-#### Lua script
+#### Lua script [#lua-time-format]
 
 Script `custom_datetime_format.lua`:
 
@@ -640,7 +642,7 @@ function convert_to_utc(tag, timestamp, record)
 end
 ```
 
-#### Configuration
+#### Configuration [#configuration-time-format]
 
 Use this configuration to obtain a JSON key with `datetime`, and then convert it to another format.
 
@@ -686,6 +688,7 @@ pipeline:
     - name: stdout
      match: '*'
 ```
+
 {% endtab %}
 
 {% tab title="fluent-bit.conf" %}
@@ -714,19 +717,21 @@ pipeline:
 {% endtab %}
 {% endtabs %}
 
-#### Input
+#### Input [#input-time-format]
 
 ```json
 {"event": "Restock", "pub_date": "Tue, 30 Jul 2024 18:01:06 +0000"}
 ```
+
 and
 
 ```json
 {"event": "Soldout", "pub_date": "Mon, 29 Jul 2024 10:15:00 +0600"}
 ```
+
 Which are handled by dummy in this example.
 
-#### Output
+#### Output [#output-time-format]
 
 The output of this process shows the conversion of the `datetime` of two timezones to ISO 8601 format in UTC.
 
@@ -755,7 +760,7 @@ env:
 
 These variables can be accessed from the Lua code by referring to the `FLB_ENV` Lua table. Since this is a Lua table, you can access its sub-records through the same syntax (for example, `FLB_ENV['A']`).
 
-#### Configuration
+#### Configuration [#configuration-env-var]
 
 {% tabs %}
 {% tab title="fluent-bit.yaml" %}
@@ -799,4 +804,4 @@ pipeline:
 
 ```text
 test: [[1731990257.781970977, {}], {"my_env"=>{"A"=>"aaa", "C"=>"ccc", "HOSTNAME"=>"monox-2.lan", "B"=>"bbb"}, "rand_value"=>4805047635809401856}]
-```
\ No newline at end of file
+```
diff --git a/pipeline/filters/modify.md b/pipeline/filters/modify.md
index cc3b4a3af..1a9cc32c4 100644
--- a/pipeline/filters/modify.md
+++ b/pipeline/filters/modify.md
@@ -62,8 +62,8 @@ The plugin supports the following conditions:
 | :--- | :--- | :--- | :--- |
 | `Key_exists` | `STRING:KEY` | _none_ | Is `true` if `KEY` exists. |
 | `Key_does_not_exist` | `STRING:KEY` | _none_ | Is `true` if `KEY` doesn't exist. |
-| `A_key_matches` | `REGEXP:KEY` | _none_ | Is `true` if a key matches regex `KEY`. |
-| `No_key_matches` | `REGEXP:KEY` | _none_ | Is `true` if no key matches regex `KEY`. |
+| `A_key_matches` | `REGEXP:KEY` | _none_ | Is `true` if a key matches regular expression `KEY`. |
+| `No_key_matches` | `REGEXP:KEY` | _none_ | Is `true` if no key matches regular expression `KEY`. |
 | `Key_value_equals` | `STRING:KEY` | `STRING:VALUE` | Is `true` if `KEY` exists and its value is `VALUE`. |
 | `Key_value_does_not_equal` | `STRING:KEY` | `STRING:VALUE` | Is `true` if `KEY` exists and its value isn't `VALUE`. |
 | `Key_value_matches` | `STRING:KEY` | `REGEXP:VALUE` | Is `true` if key `KEY` exists and its value matches `VALUE`. |
@@ -267,7 +267,7 @@ pipeline:
 
 ## Example 3 - emoji
 
-### Emoji configuration File
+### Emoji configuration file
 
 {% tabs %}
 {% tab title="fluent-bit.yaml" %}
@@ -292,7 +292,7 @@ pipeline:
        - 💦 is_wet
      copy: 🔥 💦
      rename: 💦 ❄️
-  
+
   outputs:
     - name: stdout
      match: '*'
@@ -317,7 +317,7 @@ pipeline:
    Rename 💦 ❄️
    Set ❄️ is_cold
    Set 💦 is_wet
-    
+
 [OUTPUT]
    Name stdout
    Match *
@@ -335,4 +335,4 @@ pipeline:
 [2] mem.local: [1528926374.000181042, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
 [3] mem.local: [1528926375.000090841, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
 [0] mem.local: [1528926376.000610974, {"This_plugin_is_on"=>"🔥", "🔥"=>"is_hot", "❄️"=>"is_cold", "💦"=>"is_wet"}]
-```
\ No newline at end of file
+```
diff --git a/pipeline/filters/nest.md b/pipeline/filters/nest.md
index 2f680ddd6..b1dd407d5 100644
--- a/pipeline/filters/nest.md
+++ b/pipeline/filters/nest.md
@@ -63,12 +63,12 @@ The plugin supports the following configuration parameters:
 
 | Key | Value format | Operation | Description |
 | :--- | :--- | :--- | :--- |
-| `Operation` | ENUM [`nest` or `lift`] | | Select the operation `nest` or `lift` |
-| `Wildcard` | FIELD WILDCARD | `nest` | Nest records which field matches the wildcard |
-| `Nest_under` | FIELD STRING | `nest` | Nest records matching the `Wildcard` under this key |
-| `Nested_under` | FIELD STRING | `lift` | Lift records nested under the `Nested_under` key |
-| `Add_prefix` | FIELD STRING | Any | Prefix affected keys with this string |
-| `Remove_prefix` | FIELD STRING | Any | Remove prefix from affected keys if it matches this string |
+| `Operation` | Enum [`nest` or `lift`] | | Select the operation `nest` or `lift` |
+| `Wildcard` | Field wildcard | `nest` | Nest records which field matches the wildcard |
+| `Nest_under` | Field string | `nest` | Nest records matching the `Wildcard` under this key |
+| `Nested_under` | Field string | `lift` | Lift records nested under the `Nested_under` key |
+| `Add_prefix` | Field string | Any | Prefix affected keys with this string |
+| `Remove_prefix` | Field string | Any | Remove prefix from affected keys if it matches this string |
 
 ## Get started
 
@@ -125,11 +125,11 @@ pipeline:
 [FILTER]
    Name nest
    Match *
-    Operation nest 
+    Operation nest
    Wildcard Mem.*
    Nest_under Memstats
    Remove_prefix Mem.
-    
+
 [OUTPUT]
    Name stdout
    Match *
@@ -206,10 +206,10 @@ pipeline:
    Operation lift
    Nested_under Stats
    Remove_prefix NESTED
-    
+
 [OUTPUT]
    Name stdout
-    Match * 
+    Match *
 ```
 
 {% endtab %}
{% endtabs %}
@@ -298,7 +298,7 @@ pipeline:
 {% endtab %}
 {% endtabs %}
 
-### Deep `nest` Result
+### Deep `nest` result
 
 ```text
 [0] mem.local: [1524795923.009867831, {"Swap.total"=>1046524, "Swap.used"=>0, "Swap.free"=>1046524, "LAYER3"=>{"LAYER2"=>{"LAYER1"=>{"Mem.total"=>4050908, "Mem.used"=>1112036, "Mem.free"=>2938872}}}}]
diff --git a/pipeline/filters/rewrite-tag.md b/pipeline/filters/rewrite-tag.md
index 4d4aa2290..54f844155 100644
--- a/pipeline/filters/rewrite-tag.md
+++ b/pipeline/filters/rewrite-tag.md
@@ -55,10 +55,10 @@ The key represents the name of the _record key_ that holds the `value` to use to
 
 To match against the value of the key `name`, you must use `$name`. The key selector is flexible enough to allow to match nested levels of sub-maps from the structure. To capture the value of the nested key `s2`, specify `$ss['s1']['s2']`, for short:
 
--`$name` = "abc-123"
--`$ss['s1']['s2']` = "flb"
+-`$name` = `abc-123`
+-`$ss['s1']['s2']` = `flb`
 
-A key must point to a value that contains a string. It's not valid for numbers, Booleans, maps, or arrays.
+A key must point to a value that contains a string. It's not valid for numbers, Boolean values, maps, or arrays.
 
 ### Regular expressions
 
@@ -72,11 +72,11 @@ To match any record that it `$name` contains a value of the format `string-numbe
 
 This example uses parentheses to specify groups of data. If the pattern matches the value a placeholder will be created that can be consumed by the `NEW_TAG` section.
 
-If `$name` equals `abc-123` , then the following placeholders will be created:
+If `$name` equals `abc-123`, then the following placeholders will be created:
 
--`$0` = "abc-123"
--`$1` = "abc"
--`$2` = "123"
+-`$0` = `abc-123`
+-`$1` = `abc`
+-`$2` = `123`
 
 If the regular expression doesn't match an incoming record, the rule will be skipped and the next rule (if present) will be processed.
 
@@ -228,7 +228,7 @@ The _dummy_ input generated two records, while the filter dropped two from the c
 
 The records generated are handled by the internal emitter, so the new records are summarized in the Emitter metrics. Take a look at the entry called `emitter_for_rewrite_tag.0`.
 
-### The Emitter
+### Emitter
 
 The _Emitter_ is an internal Fluent Bit plugin that allows other components of the pipeline to emit custom records. On this case `rewrite_tag` creates an emitter instance to use it exclusively to emit records, allowing for granular control of who is emitting what.
diff --git a/pipeline/filters/tensorflow.md b/pipeline/filters/tensorflow.md
index b59f16995..d0217257d 100644
--- a/pipeline/filters/tensorflow.md
+++ b/pipeline/filters/tensorflow.md
@@ -121,4 +121,4 @@ pipeline:
 ```
 
 {% endtab %}
-{% endtabs %}
\ No newline at end of file
+{% endtabs %}
diff --git a/pipeline/filters/throttle.md b/pipeline/filters/throttle.md
index 2af148301..c45e726d5 100644
--- a/pipeline/filters/throttle.md
+++ b/pipeline/filters/throttle.md
@@ -65,7 +65,7 @@ will become:
 
 The last pane of the window was overwritten and 1 message was dropped.
 
-### Interval versus Window size
+### `Interval` versus `Window` size
 
 You might notice it's possible to configure the `Interval` of the `Window` shift. It's counterintuitive, but there is a difference between the two previous examples:
 
@@ -115,7 +115,7 @@ The following command will load the Tail plugin and read the content of the `lin
 fluent-bit -i tail -p 'path=lines.txt' -F throttle -p 'rate=1' -m '*' -o stdout
 ```
 
-### Configuration File
+### Configuration file
 
 {% tabs %}
 {% tab title="fluent-bit.yaml" %}
@@ -161,4 +161,4 @@ pipeline:
 {% endtab %}
 {% endtabs %}
 
-This example will pass 1000 messages per second in average over 300 seconds.
\ No newline at end of file
+This example will pass 1000 messages per second on average over 300 seconds.
diff --git a/pipeline/filters/type-converter.md b/pipeline/filters/type-converter.md
index 93c2e7c65..0c7a47e8e 100644
--- a/pipeline/filters/type-converter.md
+++ b/pipeline/filters/type-converter.md
@@ -1,6 +1,6 @@
-# Type Converter
+# Type converter
 
-The _Type Converter_ filter plugin converts data types and appends new key-value pairs.
+The _Type converter_ filter plugin converts data types and appends new key-value pairs.
 
 You can use this filter in combination with plugins which expect incoming string value. For example, [Grep](grep.md) and [Modify](modify.md).
 
@@ -88,4 +88,4 @@ The output will be
 
 ```text
 [0] mem.0: [1639915154.160159749, {"Mem.total"=>8146052, "Mem.used"=>4513564, "Mem.free"=>3632488, "Swap.total"=>1918356, "Swap.used"=>0, "Swap.free"=>1918356, "Mem.total_str"=>"8146052", "Mem.used_str"=>"4513564", "Mem.free_str"=>"3632488"}]
-```
\ No newline at end of file
+```
diff --git a/vale-styles/FluentBit/Headings.yml b/vale-styles/FluentBit/Headings.yml
index 63dda8156..1d8c0c99c 100644
--- a/vale-styles/FluentBit/Headings.yml
+++ b/vale-styles/FluentBit/Headings.yml
@@ -120,6 +120,7 @@ exceptions:
   - Tanzu
   - TCP
   - Telemetry Pipeline
+  - Tensorflow Lite
   - Terraform
   - TLS
   - Transport Layer Security