diff --git a/pipeline/filters/aws-metadata.md b/pipeline/filters/aws-metadata.md index b478a7481..e6dc6c6ef 100644 --- a/pipeline/filters/aws-metadata.md +++ b/pipeline/filters/aws-metadata.md @@ -29,12 +29,12 @@ If you run Fluent Bit in a container, you might need to use instance metadata v1 Run Fluent Bit from the command line: ```shell -$ ./fluent-bit -c /PATH_TO_CONF_FILE/fluent-bit.conf +bin/fluent-bit -c /PATH_TO_CONF_FILE/fluent-bit.conf ``` You should see results like this: -```text +```shell [2020/01/17 07:57:17] [ info] [engine] started (pid=32744) [0] dummy: [1579247838.000171227, {"message"=>"dummy", "az"=>"us-west-2c", "ec2_instance_id"=>"i-0c862eca9038f5aae", "ec2_instance_type"=>"t2.medium", "private_ip"=>"172.31.6.59", "vpc_id"=>"vpc-7ea11c06", "ami_id"=>"ami-0841edc20334f9287", "account_id"=>"YOUR_ACCOUNT_ID", "hostname"=>"ip-172-31-6-59.us-west-2.compute.internal"}] [0] dummy: [1601274509.970235760, {"message"=>"dummy", "az"=>"us-west-2c", "ec2_instance_id"=>"i-0c862eca9038f5aae", "ec2_instance_type"=>"t2.medium", "private_ip"=>"172.31.6.59", "vpc_id"=>"vpc-7ea11c06", "ami_id"=>"ami-0841edc20334f9287", "account_id"=>"YOUR_ACCOUNT_ID", "hostname"=>"ip-172-31-6-59.us-west-2.compute.internal"}] @@ -44,38 +44,7 @@ You should see results like this: The following is an example of a configuration file: -{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -pipeline: - inputs: - - name: dummy - tag: dummy - - filters: - - name: aws - match: '*' - imds_version: v1 - az: true - ec2_instance_id: true - ec2_instance_type: true - private_ip: true - ami_id: true - account_id: true - hostname: true - vpc_id: true - tags_enabled: true - - outputs: - - name: stdout - match: '*' -``` - -{% endtab %} -{% tab title="fluent-bit.conf" %} - -```text +```python [INPUT] Name dummy Tag dummy @@ -99,9 +68,6 @@ pipeline: Match * ``` -{% endtab %} -{% endtabs %} - ## EC2 tags EC2 Tags let you label and organize your EC2 instances by creating custom-defined key-value 
pairs. These tags are commonly used for resource management, cost allocation, and automation. Including them in the Fluent Bit-generated logs is almost essential. @@ -118,23 +84,7 @@ To use the `tags_enabled true` feature in Fluent Bit, the [instance-metadata-tag Assume the EC2 instance has many tags, some of which have lengthy values that are irrelevant to the logs you want to collect. Only two tags, `department` and `project`, are valuable for your purpose. The following configuration reflects this requirement: -{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -pipeline: - - filters: - - name: aws - match: '*' - tags_enabled: true - tags_include: department,project -``` - -{% endtab %} -{% tab title="fluent-bit.conf" %} - -```text +```python [FILTER] Name aws Match * @@ -142,9 +92,6 @@ pipeline: tags_include department,project ``` -{% endtab %} -{% endtabs %} - If you run Fluent Bit logs might look like the following: ```text @@ -157,23 +104,7 @@ Suppose the EC2 instance has three tags: `Name:fluent-bit-docs-example`, `projec Here is an example configuration that achieves this: -{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -pipeline: - - filters: - - name: aws - match: '*' - tags_enabled: true - tags_exclude: department -``` - -{% endtab %} -{% tab title="fluent-bit.conf" %} - -```text +```python [FILTER] Name aws Match * @@ -181,11 +112,8 @@ pipeline: tags_exclude department ``` -{% endtab %} -{% endtabs %} - The resulting logs might look like this: -```text +```shell {"log"=>"aws is awesome", "az"=>"us-east-1a", "ec2_instance_id"=>"i-0e66fc7f9809d7168", "Name"=>"fluent-bit-docs-example", "project"=>"fluentbit"} -``` \ No newline at end of file +``` diff --git a/pipeline/filters/ecs-metadata.md b/pipeline/filters/ecs-metadata.md index 3e8ecc42f..faebb73d9 100644 --- a/pipeline/filters/ecs-metadata.md +++ b/pipeline/filters/ecs-metadata.md @@ -35,50 +35,9 @@ The following template variables can be used for values with the `ADD` option. 
S ### Configuration file -Below configurations assume a properly configured parsers file and 'storage.path' variable defined in the services -section of the fluent bit configuration (not shown below). - #### Example 1: Attach Task ID and cluster name to container logs -{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -pipeline: - inputs: - - name: tail - tag: ecs.* - path: /var/lib/docker/containers/*/*.log - docker_mode: on - docker_mode_flush: 5 - docker_mode_parser: container_firstline - parser: docker - db: /var/fluent-bit/state/flb_container.db - mem_buf_limit: 50MB - skip_long_lines: on - refresh_interval: 10 - rotate_wait: 30 - storage.type: filesystem - read_from_head: off - - filters: - - name: ecs - match: '*' - ecs_tag_prefix: ecs.var.lib.docker.containers. - add: - - ecs_task_id $TaskID - - cluster $ClusterName - - outputs: - - name: stdout - match: '*' - format: json_lines -``` - -{% endtab %} -{% tab title="fluent-bit.conf" %} - -```text +```python [INPUT] Name tail Tag ecs.* @@ -108,9 +67,6 @@ pipeline: Format json_lines ``` -{% endtab %} -{% endtabs %} - The output log should be similar to: ```text @@ -124,42 +80,6 @@ The output log should be similar to: #### Example 2: Attach customized resource name to container logs -{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -pipeline: - inputs: - - name: tail - tag: ecs.* - path: /var/lib/docker/containers/*/*.log - docker_mode: on - docker_mode_flush: 5 - docker_mode_parser: container_firstline - parser: docker - db: /var/fluent-bit/state/flb_container.db - mem_buf_limit: 50MB - skip_long_lines: on - refresh_interval: 10 - rotate_wait: 30 - storage.type: filesystem - read_from_head: off - - filters: - - name: ecs - match: '*' - ecs_tag_prefix: ecs.var.lib.docker.containers. 
- add: resource $ClusterName.$TaskDefinitionFamily.$TaskID.$ECSContainerName - - outputs: - - name: stdout - match: '*' - format: json_lines -``` - -{% endtab %} -{% tab title="fluent-bit.conf" %} - ```text [INPUT] Name tail @@ -189,9 +109,6 @@ pipeline: Format json_lines ``` -{% endtab %} -{% endtabs %} - The output log would be similar to: ```text @@ -207,42 +124,9 @@ The template variables in the value for the `resource` key are separated by dot #### Example 3: Attach cluster metadata to non-container logs -This examples shows a use case for the `Cluster_Metadata_Only` option attaching cluster metadata to ECS Agent logs. - -{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -pipeline: - inputs: - - name: tail - tag: ecsagent.* - path: /var/log/ecs/* - db: /var/fluent-bit/state/flb_ecs.db - mem_buf_limit: 50MB - skip_long_lines: on - refresh_interval: 10 - rotate_wait: 30 - storage.type: filesystem - # Collect all logs on instance - read_from_head: on +This example shows a use case for the `Cluster_Metadata_Only` option: attaching cluster metadata to ECS Agent logs. - filters: - - name: ecs - match: '*' - cluster_metadata_only: on - add: cluster $ClusterName - - outputs: - - name: stdout - match: '*' - format: json_lines -``` - -{% endtab %} -{% tab title="fluent-bit.conf" %} - -```text +```python [INPUT] Name tail Tag ecsagent.* @@ -267,6 +151,3 @@ pipeline: Match * Format json_lines ``` - -{% endtab %} -{% endtabs %} \ No newline at end of file diff --git a/pipeline/filters/geoip2-filter.md b/pipeline/filters/geoip2-filter.md index c7d8c0270..c5b2b5ad2 100644 --- a/pipeline/filters/geoip2-filter.md +++ b/pipeline/filters/geoip2-filter.md @@ -22,33 +22,7 @@ This plugin supports the following configuration parameters: The following configuration processes the incoming `remote_addr` and appends country information retrieved from the GeoLite2 database.
-{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -pipeline: - inputs: - - name: dummy - dummy: {"remote_addr": "8.8.8.8"} - - filters: - - name: gioip2 - match: '*' - database: GioLite2-City.mmdb - lookup_key: remote_addr - record: - - country remote_addr %{country.names.en} - - isocode remote_addr %{country.iso_code} - - outputs: - - name: stdout - match: '*' -``` - -{% endtab %} -{% tab title="fluent-bit.conf" %} - -```text +```python [INPUT] Name dummy Dummy {"remote_addr": "8.8.8.8"} @@ -66,9 +40,6 @@ pipeline: Match * ``` -{% endtab %} -{% endtabs %} - Each `Record` parameter specifies the following triplet: - `country`: The field name to be added to records. @@ -77,6 +48,6 @@ Each `Record` parameter specifies the following triplet: By running Fluent Bit with this configuration, you will see the following output: -```text +```javascript {"remote_addr": "8.8.8.8", "country": "United States", "isocode": "US"} -``` \ No newline at end of file +``` diff --git a/pipeline/filters/kubernetes.md b/pipeline/filters/kubernetes.md index 845b0a994..8a33a9c0a 100644 --- a/pipeline/filters/kubernetes.md +++ b/pipeline/filters/kubernetes.md @@ -65,24 +65,9 @@ The plugin supports the following configuration parameters: ## Processing the `log` value -Kubernetes filter provides several ways to process the data contained in the `log` key. The following explanation of the workflow assumes that your original Docker parser defined in a `parsers` file is as follows: +Kubernetes filter provides several ways to process the data contained in the `log` key. 
The following explanation of the workflow assumes that your original Docker parser defined in `parsers.conf` is as follows: -{% tabs %} -{% tab title="parsers.yaml" %} - -```yaml -parsers: - - name: docker - format: json - time_key: time - time_format: '%Y-%m-%dT%H:%M:%S.%L' - time_keep: on -``` - -{% endtab %} -{% tab title="parsers.conf" %} - -```text +```python [PARSER] Name docker Format json @@ -91,9 +76,6 @@ parsers: Time_Keep On ``` -{% endtab %} -{% endtabs %} - To avoid data-type conflicts in Fluent Bit v1.2 or greater, don't use decoders (`Decode_Field_As`) if you're using Elasticsearch database in the output. To perform processing of the `log` key, you must enable the `Merge_Log` configuration property in this filter, then the following processing order will be done: @@ -191,32 +173,7 @@ For example: Kubernetes Filter depends on either [Tail](../inputs/tail.md) or [Systemd](../inputs/systemd.md) input plugins to process and enrich records with Kubernetes metadata. Consider the following configuration example: -{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -pipeline: - inputs: - - name: tail - tag: kube.* - path: /var/log/containers/*.log - multiline.parser: docker,cri - - filters: - - name: kubernetes - match: 'kube.*' - kube_url: https://kubernetes.default.svc:443 - kube_ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - kube_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kube_tag_prefix: kube.var.log.containers. - merge_log: on - merge_log_key: log_processed -``` - -{% endtab %} -{% tab title="fluent-bit.conf" %} - -```text +```python [INPUT] Name tail Tag kube.* @@ -234,9 +191,6 @@ pipeline: Merge_Log_Key log_processed ``` -{% endtab %} -{% endtabs %} - In the input section, the [Tail](../inputs/tail.md) plugin monitors all files ending in `.log` in the path `/var/log/containers/`. For every file it will read every line and apply the Docker parser. 
The records are emitted to the next step with an expanded tag. Tail supports tags expansion. If a tag has a star character (`*`), it will replace the value with the absolute path of the monitored file, so if your filename and path is: @@ -295,36 +249,7 @@ Under some uncommon conditions, a user might want to alter that hard-coded regul One such use case involves splitting logs by namespace, pods, containers or container ID. The tag is restructured within the tail input using match groups. Restructuring can simplify the filtering by those match groups later in the pipeline. Since the tag no longer follows the original filename, a custom `Regex_Parser` that matches the new tag structure is required: - -{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -parsers: - - name: custom-tag - format: regex - regex: '^(?[^_]+)\.(?[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)\.(?.+)\.(?[a-z0-9]{64})' - -pipeline: - inputs: - - name: tail - tag: kube.... - path: /var/log/containers/*.log - tag_regex: '(?[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?[^_]+)_(?.+)-(?[a-z0-9]{64})\.log$' - parser: cri - - filters: - - name: kubernetes - match: 'kube.*' - kube_tag_prefix: kube. - regex_parser: custom-tag - merge_log: on -``` - -{% endtab %} -{% tab title="fluent-bit.conf" %} - -```text +```python [PARSER] Name custom-tag Format regex @@ -345,9 +270,6 @@ pipeline: Merge_Log On ``` -{% endtab %} -{% endtabs %} - The filter can now gather the values of `pod_name` and `namespace`. With that information, it will check in the local cache (internal hash table) if some metadata for that key pair exists. If it exists, it will enrich the record with the metadata value. Otherwise, it connects to the Kubernetes Master/API Server and retrieves that information. 
## Using Kubelet to get metadata @@ -404,37 +326,6 @@ For Fluent Bit configuration, you must set the `Use_Kubelet` to `true` to enable Fluent Bit configuration example: -{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -pipeline: - inputs: - - name: tail - tag: kube.* - path: /var/log/containers/*.log - db: /var/log/flb_kube.db - parser: docker - docker_mode: on - mem_buf_limit: 50MB - skip_login_lines: on - refresh_interval: 10 - - filters: - - name: kubernetes - match: 'kube.*' - kube_url: https://kubernetes.default.svc.cluster.local:443 - kube_ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - kube_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - merge_log: on - buffer_size: 0 - use_kubelet: ture - kubelet_port: 10250 -``` - -{% endtab %} -{% tab title="fluent-bit.conf" %} - ```yaml [INPUT] Name tail @@ -459,10 +350,6 @@ pipeline: Kubelet_Port 10250 ``` -{% endtab %} -{% endtabs %} - - DaemonSet configuration example: ```yaml @@ -590,4 +477,4 @@ Learn how to solve them to ensure that the Fluent Bit Kubernetes filter is opera ## Credit -The Kubernetes Filter plugin is fully inspired by the [Fluentd Kubernetes Metadata Filter](https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter) written by [Jimmi Dyson](https://github.com/jimmidyson). \ No newline at end of file +The Kubernetes Filter plugin is fully inspired by the [Fluentd Kubernetes Metadata Filter](https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter) written by [Jimmi Dyson](https://github.com/jimmidyson). diff --git a/pipeline/filters/lua.md b/pipeline/filters/lua.md index 520935ef8..780c58bed 100644 --- a/pipeline/filters/lua.md +++ b/pipeline/filters/lua.md @@ -2,6 +2,8 @@ The _Lua_ filter lets you modify incoming records (or split one record into multiple records) using custom [Lua](https://www.lua.org/) scripts. + + A Lua-based filter requires two steps: 1. Configure the filter in the main configuration. 
@@ -30,38 +32,17 @@ To test the Lua filter, you can run the plugin from the command line or through From the command line you can use the following options: -```shell -$ ./fluent-bit -i dummy -F lua -p script=test.lua -p call=cb_print -m '*' -o null +```bash +fluent-bit -i dummy -F lua -p script=test.lua -p call=cb_print -m '*' -o null ``` ### Configuration file In your main configuration file, append the following `Input`, `Filter`, and `Output` sections: - -{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -pipeline: - inputs: - - name: dummy - - filters: - - name: lua - match: '*' - script: test.lua - call: cb_print - - outputs: - - name: null - match: '*' -``` - {% tabs %} {% tab title="fluent-bit.conf" %} - -```text +```python [INPUT] Name dummy @@ -75,10 +56,26 @@ pipeline: Name null Match * ``` +{% endtab %} +{% tab title="fluent-bit.yaml" %} +```yaml +pipeline: + inputs: + - name: dummy + filters: + - name: lua + match: '*' + script: test.lua + call: cb_print + outputs: + - name: null + match: '*' +``` {% endtab %} {% endtabs %} + ## Lua script filter API The life cycle of a filter has the following steps: @@ -124,39 +121,7 @@ Each callback must return three values: The [Fluent Bit smoke tests](https://github.com/fluent/fluent-bit/tree/master/packaging/testing/smoke/container) include examples to verify during CI. 
{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -service: - flush: 1 - daemon: off - log_level: info - -pipeline: - inputs: - - name: random - tag: test - samples: 10 - - filters: - - name: lua - match: '*' - call: append_tag - code: | - function append_tag(tag, timestamp, record) - new_record = record - new_record["tag"] = tag - return 1, timestamp, new_record - end - - outputs: - - name: stdout - match: '*' -``` - -{% endtab %} {% tab title="fluent-bit.conf" %} - ``` [SERVICE] flush 1 @@ -178,7 +143,36 @@ pipeline: Name stdout Match * ``` +{% endtab %} + +{% tab title="fluent-bit.yaml" %} +```yaml +service: + flush: 1 + daemon: off + log_level: info + +pipeline: + inputs: + - name: random + tag: test + samples: 10 + filters: + - name: lua + match: "*" + call: append_tag + code: | + function append_tag(tag, timestamp, record) + new_record = record + new_record["tag"] = tag + return 1, timestamp, new_record + end + + outputs: + - name: stdout + match: "*" +``` {% endtab %} {% endtabs %} @@ -207,24 +201,8 @@ The environment variable is set as `KUBERNETES_SERVICE_HOST: api.sandboxbsh-a.pr The goal of this example is to extract the `sandboxbsh` name and add it to the record as a special key. 
{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -pipeline: - - - filters: - - name: lua - alias: filter-iots-lua - match: iots_thread.* - script: filters.lua - call: set_landscape_deployment -``` - -{% endtab %} {% tab title="fluent-bit.conf" %} - -```text +``` [FILTER] Name lua Alias filter-iots-lua @@ -232,10 +210,21 @@ Match iots_thread.* Script filters.lua Call set_landscape_deployment ``` +{% endtab %} +{% tab title="fluent-bit.yaml" %} +```yaml + filters: + - name: lua + alias: filter-iots-lua + match: iots_thread.* + script: filters.lua + call: set_landscape_deployment +``` {% endtab %} {% endtabs %} + filters.lua: ```lua -- Use a Lua function to create some additional entries based @@ -283,27 +272,8 @@ end #### Configuration {% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -pipeline: - inputs: - - name: stdin - - filters: - - name: lua - match: '*' - script: test.lua - call: cb_split - - outputs: - - name: stdout - match: '*' -``` -{% endtab %} {% tab title="fluent-bit.conf" %} - -```text +```python [Input] Name stdin @@ -317,13 +287,28 @@ pipeline: Name stdout Match * ``` +{% endtab %} +{% tab title="fluent-bit.yaml" %} +```yaml +pipeline: + inputs: + - name: stdin + filters: + - name: lua + match: '*' + script: test.lua + call: cb_split + outputs: + - name: stdout + match: '*' +``` {% endtab %} {% endtabs %} #### Input -```text +``` {"x": [ {"a1":"aa", "z1":"zz"}, {"b1":"bb", "x1":"xx"}, {"c1":"cc"} ]} {"x": [ {"a2":"aa", "z2":"zz"}, {"b2":"bb", "x2":"xx"}, {"c2":"cc"} ]} {"a3":"aa", "z3":"zz", "b3":"bb", "x3":"xx", "c3":"cc"} @@ -331,7 +316,7 @@ pipeline: #### Output -```text +``` [0] stdin.0: [1538435928.310583591, {"a1"=>"aa", "z1"=>"zz"}] [1] stdin.0: [1538435928.310583591, {"x1"=>"xx", "b1"=>"bb"}] [2] stdin.0: [1538435928.310583591, {"c1"=>"cc"}] @@ -369,33 +354,8 @@ end Configuration to get Istio logs and apply response code filter to them. 
{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -pipeline: - inputs: - - name: tail - path: /var/log/containers/*_istio-proxy-*.log - multiline.parser: 'docker, cri' - tag: istio.* - mem_buf_limit: 64MB - skip_long_lines: off - - filters: - - name: lua - match: istio.* - script: response_code_filter.lua - call: cb_response_code_filter - - outputs: - - name: stdout - match: '*' -``` - -{% endtab %} {% tab title="fluent-bit.conf" %} - -```text +```ini [INPUT] Name tail Path /var/log/containers/*_istio-proxy-*.log @@ -414,7 +374,27 @@ pipeline: Name stdout Match * ``` +{% endtab %} +{% tab title="fluent-bit.yaml" %} +```yaml +pipeline: + inputs: + - name: tail + path: /var/log/containers/*_istio-proxy-*.log + multiline.parser: 'docker, cri' + tag: istio.* + mem_buf_limit: 64MB + skip_long_lines: off + filters: + - name: lua + match: istio.* + script: response_code_filter.lua + call: cb_response_code_filter + outputs: + - name: stdout + match: '*' +``` {% endtab %} {% endtabs %} @@ -492,51 +472,8 @@ end Use this configuration to obtain a JSON key with `datetime`, and then convert it to another format. 
{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -pipeline: - inputs: - - name: dummy - dummy: '{"event": "Restock", "pub_date": "Tue, 30 Jul 2024 18:01:06 +0000"}' - tag: event_category_a - - - name: dummy - dummy: '{"event": "Soldout", "pub_date": "Mon, 29 Jul 2024 10:15:00 +0600"}' - tag: event_category_b - - filters: - - name: lua - match: '*' - code: | - function convert_to_utc(tag, timestamp, record) - local date_time = record["pub_date"] - local new_record = record - if date_time then - if string.find(date_time, ",") then - local pattern = "(%a+, %d+ %a+ %d+ %d+:%d+:%d+) ([+-]%d%d%d%d)" - local date_part, zone_part = date_time:match(pattern) - if date_part and zone_part then - local command = string.format("date -u -d '%s %s' +%%Y-%%m-%%dT%%H:%%M:%%SZ", date_part, zone_part) - local handle = io.popen(command) - local result = handle:read("*a") - handle:close() - new_record["pub_date"] = result:match("%S+") - end - end - end - return 1, timestamp, new_record - end - call: convert_to_utc - - outputs: - - name: stdout - match: '*' -``` -{% endtab %} {% tab title="fluent-bit.conf" %} - -```text +```ini [INPUT] Name dummy Dummy {"event": "Restock", "pub_date": "Tue, 30 Jul 2024 18:01:06 +0000"} @@ -547,6 +484,7 @@ pipeline: Dummy {"event": "Soldout", "pub_date": "Mon, 29 Jul 2024 10:15:00 +0600"} Tag event_category_b + [FILTER] Name lua Match * @@ -557,7 +495,48 @@ pipeline: Name stdout Match * ``` +{% endtab %} +{% tab title="fluent-bit.yaml" %} +```yaml +pipeline: + inputs: + - name: dummy + dummy: '{"event": "Restock", "pub_date": "Tue, 30 Jul 2024 18:01:06 +0000"}' + tag: event_category_a + + - name: dummy + dummy: '{"event": "Soldout", "pub_date": "Mon, 29 Jul 2024 10:15:00 +0600"}' + tag: event_category_b + + filters: + - name: lua + match: '*' + code: | + function convert_to_utc(tag, timestamp, record) + local date_time = record["pub_date"] + local new_record = record + if date_time then + if string.find(date_time, ",") then + local pattern = 
"(%a+, %d+ %a+ %d+ %d+:%d+:%d+) ([+-]%d%d%d%d)" + local date_part, zone_part = date_time:match(pattern) + if date_part and zone_part then + local command = string.format("date -u -d '%s %s' +%%Y-%%m-%%dT%%H:%%M:%%SZ", date_part, zone_part) + local handle = io.popen(command) + local result = handle:read("*a") + handle:close() + new_record["pub_date"] = result:match("%S+") + end + end + end + return 1, timestamp, new_record + end + call: convert_to_utc + + outputs: + - name: stdout + match: '*' +``` {% endtab %} {% endtabs %} @@ -577,7 +556,7 @@ Which are handled by dummy in this example. The output of this process shows the conversion of the `datetime` of two timezones to ISO 8601 format in UTC. -```text +```ini ... [2024/08/01 00:56:25] [ info] [output:stdout:stdout.0] worker #0 started [0] event_category_a: [[1722452186.727104902, {}], {"event"=>"Restock", "pub_date"=>"2024-07-30T18:01:06Z"}] @@ -589,24 +568,15 @@ The output of this process shows the conversion of the `datetime` of two timezon Fluent Bit supports definition of configuration variables, which can be done in the following way: -{% tabs %} -{% tab title="fluent-bit.yaml" %} - ```yaml env: myvar1: myvalue1 ``` -{% endtab %} -{% endtabs %} - -These variables can be accessed from the Lua code by referring to the `FLB_ENV` Lua table. Since this is a Lua table, you can access its sub-records through the same syntax (for example, `FLB_ENV['A']`). +These variables can be accessed from the Lua code by referring to the `FLB_ENV` Lua table. Since this is a Lua table, you can access its subrecords through the same syntax (for example, `FLB_ENV['A']`). 
#### Configuration -{% tabs %} -{% tab title="fluent-bit.yaml" %} - ```yaml env: A: aaa @@ -614,19 +584,19 @@ env: C: ccc service: - flush: 1 - log_level: info + flush: 1 + log_level: info pipeline: inputs: - - name: random - tag: test + - name: random + tag: test samples: 10 filters: - - name: lua - match: '*' - call: append_tag + - name: lua + match: "*" + call: append_tag code: | function append_tag(tag, timestamp, record) new_record = record @@ -635,15 +605,12 @@ pipeline: end outputs: - - name: stdout - match: '*' + - name: stdout + match: "*" ``` -{% endtab %} -{% endtabs %} - #### Output -```text +```shell test: [[1731990257.781970977, {}], {"my_env"=>{"A"=>"aaa", "C"=>"ccc", "HOSTNAME"=>"monox-2.lan", "B"=>"bbb"}, "rand_value"=>4805047635809401856}] -``` \ No newline at end of file +``` diff --git a/pipeline/filters/multiline-stacktrace.md b/pipeline/filters/multiline-stacktrace.md index b0629e3b3..b31a59784 100644 --- a/pipeline/filters/multiline-stacktrace.md +++ b/pipeline/filters/multiline-stacktrace.md @@ -20,7 +20,6 @@ When using this filter: - To concatenate messages read from a log file, it's highly recommended to use the multiline support in the [Tail plugin](https://docs.fluentbit.io/manual/pipeline/inputs/tail#multiline-support) itself. This is because performing concatenation while reading the log file is more performant. Concatenating messages originally split by Docker or CRI container engines, is supported in the [Tail plugin](https://docs.fluentbit.io/manual/pipeline/inputs/tail#multiline-support). {% hint style="warning" %} - This filter only performs buffering that persists across different Chunks when `Buffer` is enabled. Otherwise, the filter processes one chunk at a time and isn't suitable for most inputs which might send multiline messages in separate chunks. When buffering is enabled, the filter doesn't immediately emit messages it receives. 
It uses the `in_emitter` plugin, similar to the [Rewrite Tag filter](pipeline/filters/rewrite-tag.md), and emits messages once they're fully concatenated, or a timeout is reached. @@ -59,40 +58,10 @@ The following example files can be located [in the Fluent Bit repository](https: Example files content: {% tabs %} -{% tab title="fluent-bit.yaml" %} - -This is the primary Fluent Bit YAML configuration file. It includes the `parsers_multiline.yaml` and tails the file `test.log` by applying the multiline parsers `multiline-regex-test` and `go`. Then it sends the processing to the standard output. - - -```yaml -service: - flush: 1 - log_level: info - parsers_file: parsers_multiline.yaml - -pipeline: - inputs: - - name: tail - path: test.log - read_from_head: true - - filters: - - name: multiline - match: '*' - multiline.key_content: log - multiline.parser: go,multiline-regex-test - - outputs: - - name: stdout - match: '*' -``` - -{% endtab %} {% tab title="fluent-bit.conf" %} +This is the primary Fluent Bit configuration file. It includes the `parsers_multiline.conf` and tails the file `test.log` by applying the multiline parsers `multiline-regex-test` and `go`. Then it sends the processing to the standard output. -This is the primary Fluent Bit classic configuration file. It includes the `parsers_multiline.conf` and tails the file `test.log` by applying the multiline parsers `multiline-regex-test` and `go`. Then it sends the processing to the standard output. - -```text +```python [SERVICE] flush 1 log_level info @@ -116,41 +85,11 @@ This is the primary Fluent Bit classic configuration file. It includes the `pars ``` {% endtab %} -{% tab title="parsers_multiline.yaml" %} - -This file defines a multiline parser for the example. A second multiline parser called `go` is used in `fluent-bit.yaml`, but this one is a built-in parser. 
- -```yaml -multiline_parsers: - - name: multiline-regex-test - type: regex - flush_timeout: 1000 - # - # Regex rules for multiline parsing - # --------------------------------- - # - # configuration hints: - # - # - first state always has the name: start_state - # - every field in the rule must be inside double quotes - # - # rules | state name | regex pattern | next state - # ------|---------------|-------------------------------------------- - rules: - - state: start_state - regex: '/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/' - next_state: cont - - state: cont - regex: '/^\s+at.*/' - next_state: cont -``` -{% endtab %} {% tab title="parsers_multiline.conf" %} +This second file defines a multiline parser for the example. A second multiline parser called `go` is used in `fluent-bit.conf`, but this one is a built-in parser. -This file defines a multiline parser for the example. A second multiline parser called `go` is used in `fluent-bit.conf`, but this one is a built-in parser. - -```text +```python [MULTILINE_PARSER] name multiline-regex-test type regex @@ -172,8 +111,8 @@ This file defines a multiline parser for the example. A second multiline parser ``` {% endtab %} -{% tab title="test.log" %} +{% tab title="test.log" %} An example file with multiline and multi-format content: ```text @@ -246,7 +185,7 @@ one more line, no multiline Running Fluent Bit with the given configuration file: ```shell -$ ./fluent-bit -c fluent-bit.conf +fluent-bit -c fluent-bit.conf ``` Should return something like the following: @@ -327,23 +266,7 @@ When Fluent Bit is consuming logs from a container runtime, such as Docker, thes Fluent Bit can re-combine these logs that were split by the runtime and remove the partial message fields. The following filter example is for this use case. 
-{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -pipeline: - - filters: - - name: multiline - match: '*' - multiline.key_content: log - mode: partial_message -``` - -{% endtab %} -{% tab title="fluent-bit.conf" %} - -```text +```python [FILTER] name multiline match * @@ -351,7 +274,4 @@ pipeline: mode partial_message ``` -{% endtab %} -{% endtabs %} - -The two options for `mode` are mutually exclusive in the filter. If you set the `mode` to `partial_message` then the `multiline.parser` option isn't allowed. \ No newline at end of file +The two options for `mode` are mutually exclusive in the filter. If you set the `mode` to `partial_message` then the `multiline.parser` option isn't allowed. diff --git a/pipeline/filters/nightfall.md b/pipeline/filters/nightfall.md index c1d975b90..b7dea1152 100644 --- a/pipeline/filters/nightfall.md +++ b/pipeline/filters/nightfall.md @@ -27,32 +27,6 @@ The plugin supports the following configuration parameters: The following is an example of a configuration file for the Nightfall filter: -{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -pipeline: - inputs: - - name: http - host: 0.0.0.0 - port: 8000 - - filters: - - name: nightfall - match: '*' - nightfall_api_key: - policy_id: 5991946b-1cc8-4c38-9240-72677029a3f7 - sampling_rate: 1 - tls.ca_path: /etc/ssl/certs - - outputs: - - name: stdout - match: '*' -``` - -{% endtab %} -{% tab title="fluent-bit.conf" %} - ```text [INPUT] name http @@ -69,22 +43,15 @@ pipeline: [OUTPUT] Name stdout - Match * ``` -{% endtab %} -{% endtabs %} - ### Command line -After you configure the filter, you can use it from the command line by running a +command like: ```shell -# For YAML configuration. -$ ./fluent-bit -c /PATH_TO_CONF_FILE/fluent-bit.yaml - -# For classic configuration.
-$ ./fluent-bit -c /PATH_TO_CONF_FILE/fluent-bit.conf +bin/fluent-bit -c /PATH_TO_CONF_FILE/fluent-bit.conf ``` Replace _`PATH_TO_CONF_FILE`_ with the path for where your filter configuration file @@ -104,4 +71,4 @@ Which results in output like: [0] app.log: [1644464790.280412000, {"A"=>"there is nothing sensitive here", "B"=>[{"A"=>"my credit card number is *******************"}, {"A"=>"*********** is my social security."}], "C"=>false, "D"=>"key ********************"}] [2022/02/09 19:47:25] [ info] [filter:nightfall:nightfall.0] Nightfall request http_do=0, HTTP Status: 200 [0] app.log: [1644464845.675431000, {"A"=>"a very safe string"}] -``` \ No newline at end of file +``` diff --git a/pipeline/filters/record-modifier.md b/pipeline/filters/record-modifier.md index 08473bc6d..ea4cfc1b9 100644 --- a/pipeline/filters/record-modifier.md +++ b/pipeline/filters/record-modifier.md @@ -31,30 +31,67 @@ The following configuration file appends a product name and hostname to a record using an environment variable: {% tabs %} +{% tab title="fluent-bit.conf" %} + +```python copy +[INPUT] + Name mem + Tag mem.local + +[OUTPUT] + Name stdout + Match * + +[FILTER] + Name record_modifier + Match * + Record hostname ${HOSTNAME} + Record product Awesome_Tool +``` + +{% endtab %} + {% tab title="fluent-bit.yaml" %} -```yaml +```yaml copy pipeline: inputs: - name: mem tag: mem.local - filters: - name: record_modifier match: '*' record: - hostname ${HOSTNAME} - product Awesome_Tool - outputs: - name: stdout match: '*' ``` {% endtab %} +{% endtabs %} + +You can run the filter from the command line: + +```shell copy +fluent-bit -i mem -o stdout -F record_modifier -p 'Record=hostname ${HOSTNAME}' -p 'Record=product Awesome_Tool' -m '*' +``` + +The output looks something like: + +```python copy +[0] mem.local: [1492436882.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>299352, "Swap.total"=>2064380, "Swap.used"=>32656, "Swap.free"=>2031724,
"hostname"=>"localhost.localdomain", "product"=>"Awesome_Tool"}] +``` + +### Remove fields with `Remove_key` + +The following configuration file removes `Swap.*` fields: + +{% tabs %} {% tab title="fluent-bit.conf" %} -```text +```python copy [INPUT] Name mem Tag mem.local @@ -66,38 +103,20 @@ pipeline: [FILTER] Name record_modifier Match * - Record hostname ${HOSTNAME} - Record product Awesome_Tool + Remove_key Swap.total + Remove_key Swap.used + Remove_key Swap.free ``` {% endtab %} -{% endtabs %} - -You can run the filter from command line: -```shell -$ ./fluent-bit -i mem -o stdout -F record_modifier -p 'Record=hostname ${HOSTNAME}' -p 'Record=product Awesome_Tool' -m '*' -``` - -The output looks something like: - -```text -[0] mem.local: [1492436882.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>299352, "Swap.total"=>2064380, "Swap.used"=>32656, "Swap.free"=>2031724, "hostname"=>"localhost.localdomain", "product"=>"Awesome_Tool"}] -``` - -### Remove fields with `Remove_key` - -The following configuration file removes `Swap.*` fields: - -{% tabs %} {% tab title="fluent-bit.yaml" %} -```yaml +```yaml copy pipeline: inputs: - name: mem tag: mem.local - filters: - name: record_modifier match: '*' @@ -105,16 +124,34 @@ pipeline: - Swap.total - Swap.used - Swap.free - outputs: - name: stdout match: '*' ``` {% endtab %} +{% endtabs %} + +You can also run the filter from command line. + +```shell copy +fluent-bit -i mem -o stdout -F record_modifier -p 'Remove_key=Swap.total' -p 'Remove_key=Swap.free' -p 'Remove_key=Swap.used' -m '*' +``` + +The output looks something like: + +```python +[0] mem.local: [1492436998.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>295332}] +``` + +### Retain fields with `Allowlist_key` + +The following configuration file retains `Mem.*` fields. 
+ +{% tabs %} {% tab title="fluent-bit.conf" %} -```text +```python copy [INPUT] Name mem Tag mem.local @@ -126,39 +163,20 @@ pipeline: [FILTER] Name record_modifier Match * - Remove_key Swap.total - Remove_key Swap.used - Remove_key Swap.free + Allowlist_key Mem.total + Allowlist_key Mem.used + Allowlist_key Mem.free ``` {% endtab %} -{% endtabs %} - -You can also run the filter from command line. - -```shell -$ ./fluent-bit -i mem -o stdout -F record_modifier -p 'Remove_key=Swap.total' -p 'Remove_key=Swap.free' -p 'Remove_key=Swap.used' -m '*' -``` - -The output looks something like: -```text -[0] mem.local: [1492436998.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>295332}] -``` - -### Retain fields with `Allowlist_key` - -The following configuration file retains `Mem.*` fields. - -{% tabs %} {% tab title="fluent-bit.yaml" %} -```yaml +```yaml copy pipeline: inputs: - name: mem tag: mem.local - filters: - name: record_modifier match: '*' @@ -166,43 +184,22 @@ pipeline: - Mem.total - Mem.used - Mem.free - outputs: - name: stdout match: '*' ``` -{% endtab %} -{% tab title="fluent-bit.conf" %} - -```text -[INPUT] - Name mem - Tag mem.local - -[FILTER] - Name record_modifier - Match * - Allowlist_key Mem.total - Allowlist_key Mem.used - Allowlist_key Mem.free - - [OUTPUT] - Name stdout - Match * -``` - {% endtab %} {% endtabs %} You can also run the filter from command line: -```shell -$ ./fluent-bit -i mem -o stdout -F record_modifier -p 'Allowlist_key=Mem.total' -p 'Allowlist_key=Mem.free' -p 'Allowlist_key=Mem.used' -m '*' +```shell copy +fluent-bit -i mem -o stdout -F record_modifier -p 'Allowlist_key=Mem.total' -p 'Allowlist_key=Mem.free' -p 'Allowlist_key=Mem.used' -m '*' ``` The output looks something like: -```text +```python [0] mem.local: [1492436998.000000000, {"Mem.total"=>1016024, "Mem.used"=>716672, "Mem.free"=>295332}] -``` \ No newline at end of file +``` diff --git a/pipeline/filters/standard-output.md 
b/pipeline/filters/standard-output.md index a7df5cc9f..65a94991f 100644 --- a/pipeline/filters/standard-output.md +++ b/pipeline/filters/standard-output.md @@ -9,34 +9,23 @@ The plugin has no configuration parameters. Use the following command from the command line: ```shell -$ ./fluent-bit -i cpu -F stdout -m '*' -o null +fluent-bit -i cpu -F stdout -m '*' -o null ``` Fluent Bit specifies gathering [CPU](../inputs/cpu-metrics.md) usage metrics and prints them out in a human-readable way when they flow through the stdout plugin. ```text -Fluent Bit v4.0.3 -* Copyright (C) 2015-2025 The Fluent Bit Authors +Fluent Bit v1.x.x +* Copyright (C) 2019-2021 The Fluent Bit Authors +* Copyright (C) 2015-2018 Treasure Data * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd * https://fluentbit.io -______ _ _ ______ _ _ ___ _____ -| ___| | | | | ___ (_) | / || _ | -| |_ | |_ _ ___ _ __ | |_ | |_/ /_| |_ __ __/ /| || |/' | -| _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| || /| | -| | | | |_| | __/ | | | |_ | |_/ / | |_ \ V /\___ |\ |_/ / -\_| |_|\__,_|\___|_| |_|\__| \____/|_|\__| \_/ |_(_)___/ - - -[2025/07/03 16:15:34] [ info] [fluent bit] version=4.0.3, commit=3a91b155d6, pid=23196 -[2025/07/03 16:15:34] [ info] [storage] ver=1.5.3, type=memory, sync=normal, checksum=off, max_chunks_up=128 -[2025/07/03 16:15:34] [ info] [simd ] disabled -[2025/07/03 16:15:34] [ info] [cmetrics] version=1.0.3 -[2025/07/03 16:15:34] [ info] [ctraces ] version=0.6.6 -[2025/07/03 16:15:34] [ info] [input:dummy:dummy.0] initializing -[2025/07/03 16:15:34] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only) -[2025/07/03 16:15:34] [ info] [output:stdout:stdout.0] worker #0 started -[2025/07/03 16:15:34] [ info] [sp] stream processor started +[2021/06/04 14:53:59] [ info] [engine] started (pid=3236719) +[2021/06/04 14:53:59] [ info] [storage] version=1.1.1, initializing... 
+[2021/06/04 14:53:59] [ info] [storage] in-memory +[2021/06/04 14:53:59] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128 +[2021/06/04 14:53:59] [ info] [sp] stream processor started [0] cpu.0: [1622789640.379532062, {"cpu_p"=>9.000000, "user_p"=>6.500000, "system_p"=>2.500000, "cpu0.p_cpu"=>8.000000, "cpu0.p_user"=>6.000000, "cpu0.p_system"=>2.000000, "cpu1.p_cpu"=>9.000000, "cpu1.p_user"=>6.000000, "cpu1.p_system"=>3.000000}] [0] cpu.0: [1622789641.379529426, {"cpu_p"=>22.500000, "user_p"=>18.000000, "system_p"=>4.500000, "cpu0.p_cpu"=>34.000000, "cpu0.p_user"=>30.000000, "cpu0.p_system"=>4.000000, "cpu1.p_cpu"=>11.000000, "cpu1.p_user"=>6.000000, "cpu1.p_system"=>5.000000}] [0] cpu.0: [1622789642.379544020, {"cpu_p"=>26.500000, "user_p"=>16.000000, "system_p"=>10.500000, "cpu0.p_cpu"=>30.000000, "cpu0.p_user"=>24.000000, "cpu0.p_system"=>6.000000, "cpu1.p_cpu"=>22.000000, "cpu1.p_user"=>8.000000, "cpu1.p_system"=>14.000000}] @@ -45,4 +34,4 @@ ______ _ _ ______ _ _ ___ _____ [2021/06/04 14:54:04] [ info] [input] pausing cpu.0 [2021/06/04 14:54:04] [ warn] [engine] service will stop in 5 seconds [2021/06/04 14:54:08] [ info] [engine] service stopped -``` \ No newline at end of file +``` diff --git a/pipeline/filters/tensorflow.md b/pipeline/filters/tensorflow.md index 385de7cff..2ea7c9f92 100644 --- a/pipeline/filters/tensorflow.md +++ b/pipeline/filters/tensorflow.md @@ -25,23 +25,23 @@ The plugin supports the following configuration parameters: To create a Tensorflow Lite shared library: 1. Clone the [Tensorflow repository](https://github.com/tensorflow/tensorflow). -2. Install the [Bazel](https://bazel.build/) package manager. -3. Run the following command to create the shared library: +1. Install the [Bazel](https://bazel.build/) package manager. +1. 
Run the following command to create the shared library: - ```shell - $ ./bazel build -c opt //tensorflow/lite/c:tensorflowlite_c # see https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/c + ```bash + bazel build -c opt //tensorflow/lite/c:tensorflowlite_c # see https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/c ``` The script creates the shared library `bazel-bin/tensorflow/lite/c/libtensorflowlite_c.so`. -4. Copy the library to a location such as `/usr/lib` that can be used by Fluent Bit. +1. Copy the library to a location such as `/usr/lib` that can be used by Fluent Bit. ## Building Fluent Bit with Tensorflow filter plugin The Tensorflow filter plugin is disabled by default. You must build Fluent Bit with the Tensorflow plugin enabled. In addition, it requires access to Tensorflow Lite header files to compile. Therefore, you must pass the address of the Tensorflow source code on your machine to the [build script](https://github.com/fluent/fluent-bit#build-from-scratch): -```shell -$ ./cmake -DFLB_FILTER_TENSORFLOW=On -DTensorflow_DIR= ... +```bash +cmake -DFLB_FILTER_TENSORFLOW=On -DTensorflow_DIR= ... 
``` ### Command line @@ -50,8 +50,8 @@ If Tensorflow plugin initializes correctly, it reports successful creation of th The command: -```shell -$ ./fluent-bit -i mqtt -p 'tag=mqtt.data' -F tensorflow -m '*' -p 'input_field=image' -p 'model_file=/home/user/model.tflite' -p +```bash +bin/fluent-bit -i mqtt -p 'tag=mqtt.data' -F tensorflow -m '*' -p 'input_field=image' -p 'model_file=/home/user/model.tflite' -p ``` produces an output like: @@ -67,37 +67,7 @@ produces an output like: ### Configuration file -{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -service: - flush: 1 - daemon: off - log_level: info - -pipeline: - inputs: - - name: mqtt - tag: mqtt.data - - filters: - - name: tensorflow - match: mqtt.data - input_field: image - model_file: /home/m/model.tflite - include_input_fields: false - normalization_value: 255 - - outputs: - - name: stdout - match: '*' -``` - -{% endtab %} -{% tab title="fluent-bit.conf" %} - -```text +```python [SERVICE] Flush 1 Daemon Off @@ -119,6 +89,3 @@ pipeline: Name stdout Match * ``` - -{% endtab %} -{% endtabs %} \ No newline at end of file diff --git a/pipeline/filters/throttle.md b/pipeline/filters/throttle.md index 14ed83700..d426e8292 100644 --- a/pipeline/filters/throttle.md +++ b/pipeline/filters/throttle.md @@ -67,7 +67,7 @@ The last pane of the window was overwritten and 1 message was dropped. ### Interval versus Window size -You might notice it's possible to configure the `Interval` of the `Window` shift. It's counterintuitive, but there is a difference between the two previous examples: +You might notice it's possible to configure the `Interval` of the `Window` shift. 
It's counterintuitive, but there is a difference between the two previous examples: ```text Rate 60 @@ -117,31 +117,7 @@ bin/fluent-bit -i tail -p 'path=lines.txt' -F throttle -p 'rate=1' -m '*' -o std ### Configuration File -{% tabs %} -{% tab title="fluent-bit.yaml" %} - -```yaml -pipeline: - inputs: - - name: tail - path: lines.txt - - filters: - - name: throttle - match: '*' - rate: 1000 - window: 300 - interval: 1s - - outputs: - - name: stdout - match: '*' -``` - -{% endtab %} -{% tab title="fluent-bit.conf" %} - -```text +```python [INPUT] Name tail Path lines.txt @@ -158,7 +134,4 @@ pipeline: Match * ``` -{% endtab %} -{% endtabs %} - -This example will pass 1000 messages per second in average over 300 seconds. \ No newline at end of file +This example will pass 1000 messages per second on average over 300 seconds.