administration/configuring-fluent-bit/unit-sizes.md (1 addition, 1 deletion)

@@ -1,6 +1,6 @@
 # Unit sizes

-Some configuration directives in [Fluent Bit](http://fluentbit.io) refer to unit sizes such as when defining the size of a buffer or specific limits. Plugins like [Tail Input](../../pipeline/inputs/tail.md), [Forward Input](../../pipeline/inputs/forward.md) or generic properties like [`Mem_Buf_Limit`](../backpressure.md) use unit sizes.
+Some configuration directives in [Fluent Bit](http://fluentbit.io) refer to unit sizes such as when defining the size of a buffer or specific limits. Plugins like [Tail Input](../../pipeline/inputs/tail.md), [Forward Input](../../pipeline/inputs/forward.md), or generic properties like [`Mem_Buf_Limit`](../backpressure.md) use unit sizes.

 Fluent Bit v0.11.10 standardized unit sizes across the core and plugins. The following table describes the options that can be used and what they mean:

-This log line is a raw string without format. Structuring the log makes it easier to process the data later. If the [regular expression parser](../../pipeline/parsers/regular-expression) is used, the log entry could be converted to:
+This log line is a raw string without format. Structuring the log makes it easier to process the data later. If the [regular expression parser](../../pipeline/parsers/regular-expression.md) is used, the log entry could be converted to:
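As a sketch of how the unit sizes discussed above appear in practice, a `tail` input might cap its memory buffer with `Mem_Buf_Limit` (the path and the `5MB` value below are illustrative, not taken from this page):

```text
[INPUT]
    Name          tail
    Path          /var/log/app.log
    Mem_Buf_Limit 5MB
```

Suffixes such as `KB`, `MB`, and `GB` follow the unit-size table on this page; check that table for the exact multipliers.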
local-testing/validating-your-data-and-structure.md (5 additions, 5 deletions)

@@ -4,7 +4,7 @@ Fluent Bit supports multiple sources and formats. In addition, it provides filte

 Fluent Bit users are encouraged to integrate data validation in their continuous integration (CI) systems.

-In a normal production environment, inputs, filters, and outputs are defined in configuration files. Fluent Bit provides the [Expect](../pipeline/filters/expect) filter, which you can use to validate keys and values from your records and take action when an exception is found.
+In a normal production environment, inputs, filters, and outputs are defined in configuration files. Fluent Bit provides the [Expect](../pipeline/filters/expect.md) filter, which you can use to validate keys and values from your records and take action when an exception is found.

 A simplified view of the data processing pipeline is as follows:

@@ -20,8 +20,8 @@ IS --> Fil --> OD

 Consider the following pipeline, which uses a JSON file as its data source and has two filters:

-- [Grep](../pipeline/filters/grep) to exclude certain records.
-- [Record Modifier](../pipeline/filters/record-modifier) to alter records' content by adding and removing specific keys.
+- [Grep](../pipeline/filters/grep.md) to exclude certain records.
+- [Record Modifier](../pipeline/filters/record-modifier.md) to alter records' content by adding and removing specific keys.

 ```mermaid
 flowchart LR
@@ -37,7 +37,7 @@ record --> stdout

 Add data validation between each step to ensure your data structure is correct.

-This example uses the [Expect](../pipeline/filters/expect) filter.
+This example uses the [Expect](../pipeline/filters/expect.md) filter.

 ```mermaid
 flowchart LR
@@ -164,7 +164,7 @@ The following is the Fluent Bit classic parsers file:

 {% endtab %}
 {% endtabs %}

-If the JSON parser fails or is missing in the [Tail](../pipeline/inputs/tail) input (`parser json`), the Expect filter triggers the `exit` action.
+If the JSON parser fails or is missing in the [Tail](../pipeline/inputs/tail.md) input (`parser json`), the Expect filter triggers the `exit` action.

 To extend the pipeline, add a Grep filter to match records whose `label` map contains a key called `name` with the value `abc`, and add an Expect filter to re-validate that condition:
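A minimal classic-mode sketch of that extended step might look like the following. The key `name`, the value `abc`, and the `exit` action come from the description above; the record-accessor syntax (`$label['name']`) is how nested fields are usually addressed in these filters, but verify it against the Grep and Expect filter pages:

```text
[FILTER]
    Name        grep
    Match       *
    Regex       $label['name'] abc

[FILTER]
    Name        expect
    Match       *
    key_val_eq  $label['name'] abc
    action      exit
```

With `action exit`, Fluent Bit aborts when a record reaches the Expect filter without satisfying the condition, which is what makes this useful in CI.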
pipeline/filters/grep.md (1 addition, 1 deletion)

@@ -39,7 +39,7 @@ To start filtering records, run the filter from the command line or through the

 When using the command line, pay close attention to quoting the regular expressions. Using a configuration file might be easier.

-The following command loads the [tail](../../pipeline/inputs/tail) plugin and reads the content of `lines.txt`. Then the `grep` filter applies a regular expression rule over the `log` field created by the `tail` plugin and only passes records with a field value starting with `aa`:
+The following command loads the [tail](../../pipeline/inputs/tail.md) plugin and reads the content of `lines.txt`. Then the `grep` filter applies a regular expression rule over the `log` field created by the `tail` plugin and only passes records with a field value starting with `aa`:
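As the text notes, a configuration file avoids shell-quoting issues. A hedged sketch of the equivalent classic configuration (file name and the "starts with `aa`" rule taken from the description above; output plugin added only to make the pipeline complete):

```text
[INPUT]
    Name   tail
    Path   lines.txt

[FILTER]
    Name   grep
    Match  *
    Regex  log ^aa

[OUTPUT]
    Name   stdout
    Match  *
```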
pipeline/inputs/process.md (2 additions, 2 deletions)

@@ -2,7 +2,7 @@

 The _Process metrics_ input plugin lets you check how healthy a process is. It does so by performing service checks at specified intervals.

-This plugin creates metrics that are log-based, such as JSON payloads. For Prometheus-based metrics, see the [Node exporter metrics](./node-exporter-metrics) input plugin.
+This plugin creates metrics that are log-based, such as JSON payloads. For Prometheus-based metrics, see the [Node exporter metrics](./node-exporter-metrics.md) input plugin.

 ## Configuration parameters

@@ -16,7 +16,7 @@ The plugin supports the following configuration parameters:

 |`Alert`| If enabled, the plugin only generates messages if the target process is down. |`false`|
 |`Fd`| If enabled, the number of open file descriptors (`fd`) is appended to each record. |`true`|
 |`Mem`| If enabled, memory usage of the process is appended to each record. |`true`|
-|`Threaded`| Specifies whether to run this input in its own [thread](../../administration/multithreading#inputs). |`false`|
+|`Threaded`| Specifies whether to run this input in its own [thread](../../administration/multithreading.md#inputs). |`false`|
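A sketch of how these parameters combine in a classic configuration. The plugin name `proc` and the `Proc_Name`/`Interval_Sec` keys are assumptions based on this plugin's usual setup, and the monitored process name is illustrative:

```text
[INPUT]
    Name          proc
    Proc_Name     fluent-bit
    Interval_Sec  1
    Alert         false
    Fd            true
    Mem           true
    Threaded      false
```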
pipeline/outputs/azure_kusto.md (2 additions, 2 deletions)

@@ -26,7 +26,7 @@ Fluent Bit uses the application's credentials to ingest data into your cluster.

 - [Register an application](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#register-an-application)
 - [Add a client secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#add-a-client-secret)
-- [Authorize the app in your database](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/access-control/principals-and-identity-providers#azure-ad-tenants)
+- [Authorize the app in your database](https://learn.microsoft.com/en-us/azure/data-explorer/kusto/access-control/principals-and-identity-providers)

 ## Create a table

@@ -70,7 +70,7 @@ By default, Kusto will insert incoming ingestion data into a table by inferring

 |`buffering_enabled`| Optional. Enable buffering to disk before ingesting into Azure Kusto. |`Off`|
 |`buffer_dir`| Optional. When buffering is `On`, specifies the directory where buffered data is stored. |`/tmp/fluent-bit/azure-kusto/`|
 |`upload_timeout`| Optional. When buffering is `On`, specifies a timeout for uploads. Fluent Bit starts ingesting buffer files that are older than this timeout and haven't reached the `upload_file_size` limit. |`30m`|
-|`upload_file_size`| Optional. When buffering is `On`, specifies the size of files to be uploaded in MBs. |`200MB`|
+|`upload_file_size`| Optional. When buffering is `On`, specifies the size of files to be uploaded in megabytes. |`200MB`|
 |`azure_kusto_buffer_key`| Optional. When buffering is `On`, sets the Azure Kusto buffer key, which must be specified when using multiple instances of the Azure Kusto output plugin with buffering enabled. |`key`|
 |`store_dir_limit_size`| Optional. When buffering is `On`, sets the maximum size of the buffer directory. |`8GB`|
 |`buffer_file_delete_early`| Optional. When buffering is `On`, whether to delete the buffered file early after successful blob creation. |`Off`|
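A hedged sketch of an output section with buffering enabled, using the defaults from the table above. The credential and endpoint keys (`tenant_id`, `client_id`, `client_secret`, `ingestion_endpoint`, `database_name`, `table_name`) are assumptions about this plugin's required connection parameters, and all placeholder values are illustrative:

```text
[OUTPUT]
    Name                azure_kusto
    Match               *
    tenant_id           <app_tenant_id>
    client_id           <app_client_id>
    client_secret       <app_client_secret>
    ingestion_endpoint  https://ingest-<cluster>.kusto.windows.net
    database_name       <database>
    table_name          <table>
    buffering_enabled   on
    buffer_dir          /tmp/fluent-bit/azure-kusto/
    upload_timeout      30m
    upload_file_size    200MB
```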
pipeline/outputs/logdna.md (0 additions, 1 deletion)

@@ -18,7 +18,6 @@ This plugin uses the following configuration parameters:

 |`tags`| A list of comma-separated strings to group records in LogDNA and simplify the query with filters. |_none_|
 |`file`| Optional name of a file being monitored. This value is only set if the record doesn't contain a reference to it. |_none_|
 |`app`| Name of the application. This value is automatically discovered on each record. If no value is found, the default value is used. |`Fluent Bit`|
-|`workers`| The number of [workers](../../administration/multithreading#outputs) to perform flush operations for this output. |`0`|
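A minimal sketch of an output section using the parameters in the table above. The `api_key` parameter is an assumption (the ingestion-key parameter commonly used by this plugin), and the tag and app values are illustrative:

```text
[OUTPUT]
    Name     logdna
    Match    *
    api_key  <your_ingestion_key>
    tags     prod,web
    app      my-service
```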
pipeline/outputs/new-relic.md (1 addition, 1 deletion)

@@ -10,7 +10,7 @@ The _New Relic_ output plugin lets you send logs to New Relic.

 |`api_key`| Your [New Relic API key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/). Either an `api_key` or `license_key` is required. |_none_|
 |`license_key`| Your [New Relic license key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/). Either an `api_key` or `license_key` is required. |_none_|
 |`compress`| Sets the compression mechanism for the payload. Possible values: `gzip` or `false`. |`gzip`|
-|`workers`| Sets the number of [workers](../administration/multithreading.md#outputs) to perform flush operations for this output. |`0`|
+|`workers`| Sets the number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. |`0`|
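A sketch of an output section using these parameters. The plugin name `nrlogs` is an assumption (the registered name of the New Relic output plugin), and the key value is a placeholder:

```text
[OUTPUT]
    Name         nrlogs
    Match        *
    license_key  <your_license_key>
    compress     gzip
    workers      1
```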
pipeline/outputs/observe.md (1 addition, 1 deletion)

@@ -8,7 +8,7 @@ The following HTTP configuration parameters are relevant to Observe:

 | Key | Description | Default |
 | --- | ----------- | ------- |
-|`host`| IP address or hostname of the Observe data collection endpoint. Replace `$(OBSERVE_CUSTOMER)` with your [Customer ID](https://docs.observeinc.com/en/latest/content/common-topics/HelpfulHints.html?highlight=customer%20id#customer-id). |`OBSERVE_CUSTOMER.collect.observeinc.com`|
+|`host`| IP address or hostname of the Observe data collection endpoint. Replace `$(OBSERVE_CUSTOMER)` with your [Customer ID](https://docs.observeinc.com/en/latest/content/hints/CustomerId.html). |`OBSERVE_CUSTOMER.collect.observeinc.com`|
 |`port`| TCP port to use when sending data to Observe. |`443`|
 |`tls`| Specifies whether to use TLS. |`on`|
 |`uri`| Specifies the HTTP URI for Observe. |`/v1/http/fluentbit`|
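A sketch of a generic `http` output section populated with the Observe values from the table above. The `Authorization` header and token placeholder are assumptions about how Observe authenticates ingestion, and the customer ID is illustrative:

```text
[OUTPUT]
    Name    http
    Match   *
    Host    <OBSERVE_CUSTOMER>.collect.observeinc.com
    Port    443
    tls     on
    URI     /v1/http/fluentbit
    Header  Authorization Bearer <OBSERVE_TOKEN>
```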