diff --git a/administration/configuring-fluent-bit/unit-sizes.md b/administration/configuring-fluent-bit/unit-sizes.md
index 8f168cf37..8808b589e 100644
--- a/administration/configuring-fluent-bit/unit-sizes.md
+++ b/administration/configuring-fluent-bit/unit-sizes.md
@@ -1,6 +1,6 @@
 # Unit sizes
 
-Some configuration directives in [Fluent Bit](http://fluentbit.io) refer to unit sizes such as when defining the size of a buffer or specific limits. Plugins like [Tail Input](../../pipeline/inputs/tail), [Forward Input](../../pipeline/inputs/forward.md) or generic properties like [`Mem_Buf_Limit`](../backpressure) use unit sizes.
+Some configuration directives in [Fluent Bit](http://fluentbit.io) refer to unit sizes such as when defining the size of a buffer or specific limits. Plugins like [Tail Input](../../pipeline/inputs/tail.md), [Forward Input](../../pipeline/inputs/forward.md), or generic properties like [`Mem_Buf_Limit`](../backpressure.md) use unit sizes.
 
 Fluent Bit v0.11.10 standardized unit sizes across the core and plugins. The following table describes the options that can be used and what they mean:
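As a companion to the unit-sizes hunk above, a minimal YAML sketch of a unit size in practice. The `tail` input, the `/var/log/app.log` path, and the `5MB` value are assumptions for illustration, not part of the page being changed:

```yaml
# Hypothetical pipeline: a unit size such as 5MB can be passed wherever a
# buffer size or limit is expected, for example mem_buf_limit on an input.
pipeline:
  inputs:
    - name: tail
      path: /var/log/app.log   # assumed log file path
      mem_buf_limit: 5MB       # unit size: five megabytes
  outputs:
    - name: stdout
      match: '*'
```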
diff --git a/administration/configuring-fluent-bit/yaml/configuration-file.md b/administration/configuring-fluent-bit/yaml/configuration-file.md
index e314aa7e1..23be9488f 100644
--- a/administration/configuring-fluent-bit/yaml/configuration-file.md
+++ b/administration/configuring-fluent-bit/yaml/configuration-file.md
@@ -4,7 +4,8 @@ description: Learn about the YAML configuration file used by Fluent Bit
 # YAML configuration file
 
-
+
 One of the ways to configure Fluent Bit is using a YAML configuration file that works at a global scope.
diff --git a/concepts/data-pipeline/parser.md b/concepts/data-pipeline/parser.md
index 6465c848f..5792b6e9b 100644
--- a/concepts/data-pipeline/parser.md
+++ b/concepts/data-pipeline/parser.md
@@ -26,7 +26,7 @@ The parser converts unstructured data to structured data. As an example, conside
 192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395
 ```
 
-This log line is a raw string without format. Structuring the log makes it easier to process the data later. If the [regular expression parser](../../pipeline/parsers/regular-expression) is used, the log entry could be converted to:
+This log line is a raw string without format. Structuring the log makes it easier to process the data later. If the [regular expression parser](../../pipeline/parsers/regular-expression.md) is used, the log entry could be converted to:
 
 ```javascript
 {
diff --git a/local-testing/validating-your-data-and-structure.md b/local-testing/validating-your-data-and-structure.md
index f129cf477..0b9b64046 100644
--- a/local-testing/validating-your-data-and-structure.md
+++ b/local-testing/validating-your-data-and-structure.md
@@ -4,7 +4,7 @@ Fluent Bit supports multiple sources and formats. In addition, it provides filte
 Fluent Bit users are encouraged to integrate data validation in their continuous integration (CI) systems.
 
-In a normal production environment, inputs, filters, and outputs are defined in configuration files. Fluent Bit provides the [Expect](../pipeline/filters/expect) filter, which you can use to validate keys and values from your records and take action when an exception is found.
+In a normal production environment, inputs, filters, and outputs are defined in configuration files. Fluent Bit provides the [Expect](../pipeline/filters/expect.md) filter, which you can use to validate keys and values from your records and take action when an exception is found.
 
 A simplified view of the data processing pipeline is as follows:
 
@@ -20,8 +20,8 @@ IS --> Fil --> OD
 ```
 
 Consider the following pipeline, which uses a JSON file as its data source and has two filters:
 
-- [Grep](../pipeline/filters/grep) to exclude certain records.
-- [Record Modifier](../pipeline/filters/record-modifier) to alter records' content by adding and removing specific keys.
+- [Grep](../pipeline/filters/grep.md) to exclude certain records.
+- [Record Modifier](../pipeline/filters/record-modifier.md) to alter records' content by adding and removing specific keys.
 
 ```mermaid
 flowchart LR
@@ -37,7 +37,7 @@ record --> stdout
 
 Add data validation between each step to ensure your data structure is correct.
 
-This example uses the [Expect](../pipeline/filters/expect) filter.
+This example uses the [Expect](../pipeline/filters/expect.md) filter.
 
 ```mermaid
 flowchart LR
@@ -164,7 +164,7 @@ The following is the Fluent Bit classic parsers file:
 {% endtab %}
 {% endtabs %}
 
-If the JSON parser fails or is missing in the [Tail](../pipeline/inputs/tail) input (`parser json`), the Expect filter triggers the `exit` action.
+If the JSON parser fails or is missing in the [Tail](../pipeline/inputs/tail.md) input (`parser json`), the Expect filter triggers the `exit` action.
 
 To extend the pipeline, add a Grep filter to match records that map `label` containing a key called `name` with value the `abc`, and add an Expect filter to re-validate that condition:
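To make the Expect hunks above concrete, a minimal sketch of a validation step. The `dummy` input payload is an assumption for the example; `key_exists` and `action` are documented Expect filter properties:

```yaml
# Hypothetical validation pipeline: abort Fluent Bit when a record lacks
# the expected key, so a CI run fails fast on malformed data.
pipeline:
  inputs:
    - name: dummy
      dummy: '{"label": {"name": "abc"}}'   # assumed test record
  filters:
    - name: expect
      match: '*'
      key_exists: label   # validation rule: the key must be present
      action: exit        # stop the pipeline when validation fails
  outputs:
    - name: stdout
      match: '*'
```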
diff --git a/pipeline/filters/checklist.md b/pipeline/filters/checklist.md
index 128142e6c..13af8471f 100644
--- a/pipeline/filters/checklist.md
+++ b/pipeline/filters/checklist.md
@@ -9,7 +9,7 @@ The plugin supports the following configuration parameters
 | Key | Description | Default |
 | :-- | :---------- | :------ |
 | `file` | The single value file that Fluent Bit will use as a lookup table to determine if the specified `lookup_key` exists. | _none_ |
-| `lookup_key` | The specific key to look up and determine if it exists. Supports [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor). | _none_ |
+| `lookup_key` | The specific key to look up and determine if it exists. Supports [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md). | _none_ |
 | `record` | The record to add if the `lookup_key` is found in the specified `file`. You can add multiple record parameters. | _none_ |
 | `mode` | Set the check mode. `exact` and `partial` are supported. | `exact`|
 | `print_query_time` | Print to stdout the elapsed query time for every matched record. | `false` |
diff --git a/pipeline/filters/grep.md b/pipeline/filters/grep.md
index a80937400..a8d48fb83 100644
--- a/pipeline/filters/grep.md
+++ b/pipeline/filters/grep.md
@@ -39,7 +39,7 @@ To start filtering records, run the filter from the command line or through the
 When using the command line, pay close attention to quote the regular expressions. Using a configuration file might be easier.
 
-The following command loads the [tail](../../pipeline/inputs/tail) plugin and reads the content of `lines.txt`. Then the `grep` filter applies a regular expression rule over the `log` field created by the `tail` plugin and only passes records with a field value starting with `aa`:
+The following command loads the [tail](../../pipeline/inputs/tail.md) plugin and reads the content of `lines.txt`. Then the `grep` filter applies a regular expression rule over the `log` field created by the `tail` plugin and only passes records with a field value starting with `aa`:
 
 ```shell
 fluent-bit -i tail -p 'path=lines.txt' -F grep -p 'regex=log aa' -m '*' -o stdout
 ```
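For reference, a YAML sketch equivalent to the command line shown in the grep hunk above, using the same `lines.txt` path and `log aa` rule:

```yaml
# Equivalent of:
# fluent-bit -i tail -p 'path=lines.txt' -F grep -p 'regex=log aa' -m '*' -o stdout
pipeline:
  inputs:
    - name: tail
      path: lines.txt
  filters:
    - name: grep
      match: '*'        # mirrors the -m '*' flag
      regex: log aa     # keep only records whose log field matches aa
  outputs:
    - name: stdout
      match: '*'
```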
diff --git a/pipeline/filters/log_to_metrics.md b/pipeline/filters/log_to_metrics.md
index 9148f715e..f5e104605 100644
--- a/pipeline/filters/log_to_metrics.md
+++ b/pipeline/filters/log_to_metrics.md
@@ -26,9 +26,9 @@ The plugin supports the following configuration parameters:
 | `metric_description` | Required. Sets a description for the metric. | |
 | `bucket` | Required for mode `histogram`. Defines a bucket for histograms. | For example, `0.75` |
 | `add_label` | Adds a custom label `NAME` and set the value to the value of `KEY`. | |
-| `label_field` | Includes a record field as label dimension in the metric. | Name of record key. Supports [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor) notation for nested fields. |
-| `value_field` | Required for modes `gauge` and `histogram`. Specifies the record field that holds a numerical value. | Name of record key. Supports [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor) notation for nested fields. |
-| `kubernetes_mode` | If enabled, adds `pod_id`, `pod_name`, `namespace_name`, `docker_id` and `container_name` to the metric as labels. This option is intended to be used in combination with the [Kubernetes](./kubernetes) filter plugin, which fills those fields. | |
+| `label_field` | Includes a record field as label dimension in the metric. | Name of record key. Supports [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md) notation for nested fields. |
+| `value_field` | Required for modes `gauge` and `histogram`. Specifies the record field that holds a numerical value. | Name of record key. Supports [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md) notation for nested fields. |
+| `kubernetes_mode` | If enabled, adds `pod_id`, `pod_name`, `namespace_name`, `docker_id` and `container_name` to the metric as labels. This option is intended to be used in combination with the [Kubernetes](./kubernetes.md) filter plugin, which fills those fields. | |
 | `Regex` | Includes records in which the content of `KEY` matches the regular expression. | `KEY REGEX` |
 | `Exclude` | Excludes records in which the content of `KEY` matches the regular expression. | `KEY REGEX` |
 | `Flush_Interval_Sec` | The interval for metrics emission, in seconds. If `Flush_Interval_Sec` and `Flush_Interval_Nsec` are either both unset or both set to `0`, the filter emits metrics immediately after each filter match. Otherwise, if either parameter is set to a non-zero value, the filter emits metrics at the specified interval. Longer intervals help lower resource consumption in high-load situations. Default value: `0`. | |
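A minimal sketch tying the parameters in the table above together. The `dummy` payload, the `level` field, and the metric names are assumptions for illustration:

```yaml
# Hypothetical counter: increment error_count for every record whose
# assumed "level" field matches "error". Metrics are emitted under the
# tag given by the required "tag" property.
pipeline:
  inputs:
    - name: dummy
      dummy: '{"level": "error", "message": "boom"}'   # assumed test record
  filters:
    - name: log_to_metrics
      match: '*'
      tag: log_metric
      metric_mode: counter
      metric_name: error_count
      metric_description: Number of records with an error level
      regex: level error    # assumed field name and pattern
  outputs:
    - name: stdout
      match: log_metric     # print only the emitted metrics
```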
diff --git a/pipeline/inputs/process.md b/pipeline/inputs/process.md
index f8d685987..a9bb99841 100644
--- a/pipeline/inputs/process.md
+++ b/pipeline/inputs/process.md
@@ -2,7 +2,7 @@ The _Process metrics_ input plugin lets you check how healthy a process is.
 It does so by performing service checks at specified intervals.
 
-This plugin creates metrics that are log-based, such as JSON payloads. For Prometheus-based metrics, see the [Node exporter metrics](./node-exporter-metrics) input plugin.
+This plugin creates metrics that are log-based, such as JSON payloads. For Prometheus-based metrics, see the [Node exporter metrics](./node-exporter-metrics.md) input plugin.
 
 ## Configuration parameters
 
@@ -16,7 +16,7 @@ The plugin supports the following configuration parameters:
 | `Alert` | If enabled, the plugin will only generate messages if the target process is down. | `false` |
 | `Fd` | If enabled, a number of `fd` is appended to each record. | `true` |
 | `Mem` | If enabled, memory usage of the process is appended to each record. | `true` |
-| `Threaded` | Specifies whether to run this input in its own [thread](../../administration/multithreading#inputs). | `false` |
+| `Threaded` | Specifies whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
 
 ## Get started
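A minimal YAML sketch of this input, assuming `crond` as the process to monitor; the interval and alert settings are illustrative:

```yaml
# Hypothetical health check: probe the crond process every 5 seconds.
# Fd and Mem default to true, so fd count and memory usage are appended.
pipeline:
  inputs:
    - name: proc
      proc_name: crond    # assumed target process
      interval_sec: 5
      alert: false        # emit records even when the process is up
  outputs:
    - name: stdout
      match: '*'
```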
diff --git a/pipeline/outputs/azure_kusto.md b/pipeline/outputs/azure_kusto.md
index deae7de6a..fae3b7fd4 100644
--- a/pipeline/outputs/azure_kusto.md
+++ b/pipeline/outputs/azure_kusto.md
@@ -26,7 +26,7 @@ Fluent Bit uses the application's credentials to ingest data into your cluster.
 - [Register an application](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#register-an-application)
 - [Add a client secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#add-a-client-secret)
-- [Authorize the app in your database](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/access-control/principals-and-identity-providers#azure-ad-tenants)
+- [Authorize the app in your database](https://learn.microsoft.com/en-us/azure/app-service/overview-authentication-authorization)
 
 ## Create a table
 
@@ -70,7 +70,7 @@ By default, Kusto will insert incoming ingestion data into a table by inferring
 | `buffering_enabled` | Optional. Enable buffering into disk before ingesting into Azure Kusto. | `Off` |
 | `buffer_dir` | Optional. When buffering is `On`, specifies the location of directory where the buffered data will be stored. | `/tmp/fluent-bit/azure-kusto/` |
 | `upload_timeout` | Optional. When buffering is `On`, specifies a timeout for uploads. Fluent Bit will start ingesting buffer files which have been created more than x minutes and haven't reached `upload_file_size` limit. | `30m` |
-| `upload_file_size`| Optional. When buffering is `On`, specifies the size of files to be uploaded in MBs. | `200MB` |
+| `upload_file_size`| Optional. When buffering is `On`, specifies the size of files to be uploaded in megabytes. | `200MB` |
 | `azure_kusto_buffer_key` | Optional. When buffering is `On`, set the Azure Kusto buffer key which must be specified when using multiple instances of Azure Kusto output plugin and buffering is enabled. | `key` |
 | `store_dir_limit_size` | Optional. When buffering is `On`, set the max size of the buffer directory. | `8GB` |
 | `buffer_file_delete_early` | Optional. When buffering is `On`, whether to delete the buffered file early after successful blob creation. | `Off` |
diff --git a/pipeline/outputs/kinesis.md b/pipeline/outputs/kinesis.md
index af9db2495..047ba89f6 100644
--- a/pipeline/outputs/kinesis.md
+++ b/pipeline/outputs/kinesis.md
@@ -10,7 +10,7 @@ This is the documentation for the core Fluent Bit Kinesis plugin written in C. I
 This `kinesis_streams` plugin always uses a random partition key when uploading records to Kinesis through the [PutRecords API](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecords.html).
 
-For information about how AWS credentials are fetched, see [AWS credentials](../../administration/aws-credentials).
+For information about how AWS credentials are fetched, see [AWS credentials](../../administration/aws-credentials.md).
 
 ## Configuration parameters
diff --git a/pipeline/outputs/new-relic.md b/pipeline/outputs/new-relic.md
index cba4c88ac..7c5224a17 100644
--- a/pipeline/outputs/new-relic.md
+++ b/pipeline/outputs/new-relic.md
@@ -10,7 +10,7 @@ The _New Relic_ output plugin lets you send logs to New Relic.
 | `api_key` | Your [New Relic API key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/). Either an `api_key` or `license_key` is required.| _none_ |
 | `license_key` | Your [New Relic license key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/). Either an `api_key` or `license_key` is required. | _none_ |
 | `compress` | Sets the compression mechanism for the payload. Possible values: `gzip` or `false`. | `gzip` |
-| `workers` | Sets the number of [workers](../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
+| `workers` | Sets the number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
 
 ## Example configuration
diff --git a/pipeline/outputs/observe.md b/pipeline/outputs/observe.md
index 25f16b804..36bbd95d2 100644
--- a/pipeline/outputs/observe.md
+++ b/pipeline/outputs/observe.md
@@ -1,6 +1,6 @@
 # Observe
 
-Use the [HTTP output plugin](./http) to flush your records [into Observe](https://docs.observeinc.com/en/latest/content/data-ingestion/forwarders/fluentbit.html). It issues a POST request with the data records in [MessagePack](http://msgpack.org) (or JSON) format.
+Use the [HTTP output plugin](./http.md) to flush your records [into Observe](https://docs.observeinc.com/en/latest/content/data-ingestion/forwarders/fluentbit.html). It issues a POST request with the data records in [MessagePack](http://msgpack.org) (or JSON) format.
 
 ## Configuration parameters
 
@@ -8,7 +8,7 @@ The following HTTP configuration parameters are relevant to Observe:
 | Key | Description | Default |
 | --- | ----------- | ------- |
-| `host` | IP address or hostname of the Observe data collection endpoint. Replace `$(OBSERVE_CUSTOMER)` with your [Customer ID](https://docs.observeinc.com/en/latest/content/common-topics/HelpfulHints.html?highlight=customer%20id#customer-id). | `OBSERVE_CUSTOMER.collect.observeinc.com` |
+| `host` | IP address or hostname of the Observe data collection endpoint. Replace `$(OBSERVE_CUSTOMER)` with your [Customer ID](https://docs.observeinc.com/en/latest/content/hints/CustomerId.html). | `OBSERVE_CUSTOMER.collect.observeinc.com` |
 | `port` | TCP port to use when sending data to Observe. | `443` |
 | `tls` | Specifies whether to use TLS. | `on` |
 | `uri` | Specifies the HTTP URI for Observe. | `/v1/http/fluentbit` |
diff --git a/pipeline/outputs/oci-logging-analytics.md b/pipeline/outputs/oci-logging-analytics.md
index b1bc9a1a8..f57423c5a 100644
--- a/pipeline/outputs/oci-logging-analytics.md
+++ b/pipeline/outputs/oci-logging-analytics.md
@@ -16,7 +16,7 @@ This plugin uses the following configuration parameters:
 | `profile_name` | The OCI configuration profile name to be used from the configuration file. | `DEFAULT` |
 | `namespace` | The OCI tenancy namespace to upload log data to. | _none_ |
 | `proxy` | The proxy name, in `http://host:port` format. Only supports HTTP protocol. | _none_ |
-| `workers` | The number of [workers](../administration/multithreading.md#outputs) to perform flush operations for this output. | `1` |
+| `workers` | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `1` |
 | `oci_config_in_record` | If set to `true`, the following `oci_la_*` will be read from the record itself instead of the output plugin configuration. | `false` |
 | `oci_la_log_group_id` | Required. The Oracle Cloud Identifier (OCID) of the Logging Analytics where you want to store logs. | _none_ |
 | `oci_la_log_source_name` | Required. The Logging Analytics Source to use for processing log records. | _none_ |
diff --git a/pipeline/outputs/standard-output.md b/pipeline/outputs/standard-output.md
index 4535d1714..4b773ae2c 100644
--- a/pipeline/outputs/standard-output.md
+++ b/pipeline/outputs/standard-output.md
@@ -17,7 +17,7 @@ The _standard output_ plugin prints ingested data to standard output.
 fluent-bit -i cpu -o stdout -v
 ```
 
-This example directs the plugin to gather [CPU](../inputs/cpu-metrics) usage metrics and print them out to the standard output in a human-readable way:
+This example directs the plugin to gather [CPU](../inputs/cpu-metrics.md) usage metrics and print them out to the standard output in a human-readable way:
 
 ```shell
 fluent-bit -i cpu -o stdout -p format=msgpack -v
 ```
diff --git a/pipeline/processors/README.md b/pipeline/processors/README.md
index 94b5e546a..a231a0b86 100644
--- a/pipeline/processors/README.md
+++ b/pipeline/processors/README.md
@@ -6,7 +6,7 @@ Additionally, filters can be implemented in a way that mimics the behavior of pr
 
 {% hint style="info" %}
 
-Only [YAML configuration files](../../administration/configuring-fluent-bit/yaml/configuration-file) support processors.
+Only [YAML configuration files](../../administration/configuring-fluent-bit/yaml/configuration-file.md) support processors.
 
 {% endhint %}
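Because only YAML configurations support processors, a minimal sketch of the shape the README hunk refers to may help. The `content_modifier` processor, the `dummy` input, and the inserted key are assumptions for illustration:

```yaml
# Hypothetical processor attached directly to an input: insert a static
# key into every log record before it reaches the rest of the pipeline.
pipeline:
  inputs:
    - name: dummy
      dummy: '{"message": "hello"}'   # assumed test record
      processors:
        logs:
          - name: content_modifier
            action: insert
            key: source               # assumed key name
            value: dummy-input
  outputs:
    - name: stdout
      match: '*'
```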