`pipeline/inputs/tail.md` (11 additions, 11 deletions)
@@ -14,7 +14,7 @@ The plugin supports the following configuration parameters:
|`Buffer_Max_Size`| Set the limit of the buffer size per monitored file. When a buffer needs to be increased, this value restricts how much the memory buffer can grow. If reading a file exceeds this limit, the file is removed from the monitored file list. The value must conform to the [Unit Size](../../administration/configuring-fluent-bit/unit-sizes.md) specification. |`32k`|
|`Path`| Pattern specifying a specific log file or multiple files through the use of common wildcards. Multiple patterns separated by commas are allowed. |_none_|
|`Path_Key`| If enabled, it appends the name of the monitored file as part of the record. The value assigned becomes the key in the map. |_none_|
- |`Exclude_Path`| Set one or multiple shell patterns separated by commas to exclude files matching certain criteria, For example, `Exclude_Path *.gz,*.zip`|_none_|
+ |`Exclude_Path`| Set one or multiple shell patterns separated by commas to exclude files matching certain criteria. For example, `Exclude_Path *.gz,*.zip`. |_none_|
|`Offset_Key`| If enabled, Fluent Bit appends the offset of the current monitored file as part of the record. The value assigned becomes the key in the map. |_none_|
|`Read_from_Head`| For newly discovered files at start (without a database offset/position), read the content from the head of the file instead of the tail. |`False`|
|`Refresh_Interval`| The interval of refreshing the list of watched files in seconds. |`60`|
@@ -23,7 +23,7 @@ The plugin supports the following configuration parameters:
|`Skip_Long_Lines`| When a monitored file reaches its buffer capacity due to a very long line (`Buffer_Max_Size`), the default behavior is to stop monitoring that file. `Skip_Long_Lines` alters that behavior and instructs Fluent Bit to skip long lines and continue processing other lines that fit into the buffer size. |`Off`|
|`Skip_Empty_Lines`| Skips empty lines in the log file from any further processing or output. |`Off`|
|`DB`| Specify the database file to keep track of monitored files and offsets. |_none_|
- |`DB.sync`| Set a default synchronization (I/O) method. This flag affects how the internal SQLite engine do synchronization to disk, for more details about each option see [the SQLite documentation](https://www.sqlite.org/pragma.html#pragma_synchronous). Most scenarios will be fine with `normal` mode. If you need full synchronization after every write operation set `full` mode. `full` has a high I/O performance cost. Values: `Extra`, `Full`, `Normal`, `Off`. |`normal`|
+ |`DB.sync`| Set a default synchronization (I/O) method. This flag affects how the internal SQLite engine synchronizes to disk; for details about each option, see [the SQLite documentation](https://www.sqlite.org/pragma.html#pragma_synchronous). Most scenarios will be fine with `normal` mode. If you need full synchronization after every write operation, set `full` mode, which has a high I/O performance cost. Values: `Extra`, `Full`, `Normal`, `Off`. |`normal`|
|`DB.locking`| Specify that the database will be accessed only by Fluent Bit. Enabling this feature helps increase performance when accessing the database but restricts external tools from querying the content. |`false`|
|`DB.journal_mode`| Sets the journal mode for databases (`WAL`). Enabling `WAL` provides higher performance. `WAL` isn't compatible with shared network file systems. |`WAL`|
|`DB.compare_filename`| This option determines whether to review both `inode` and `filename` when retrieving stored file information from the database. `true` verifies both `inode` and `filename`, while `false` checks only the `inode`. To review the `inode` and `filename` in the database, see [`keep_state`](#tailing-files-keeping-state). |`false`|
@@ -33,7 +33,7 @@ The plugin supports the following configuration parameters:
|`Key`| When a message is unstructured (no parser applied), it's appended as a string under the key name `log`. This option lets you define an alternative name for that key. |`log`|
|`Inotify_Watcher`| Set to `false` to use file stat watcher instead of `inotify`. |`true`|
|`Tag`| Set a tag (with regex-extract fields) that will be placed on lines read. For example, `kube.<namespace_name>.<pod_name>.<container_name>.<container_id>`. Tag expansion is supported: if the tag includes an asterisk (`*`), that asterisk will be replaced with the absolute path of the monitored file, with slashes replaced by dots. See [Workflow of Tail + Kubernetes Filter](../filters/kubernetes.md#workflow-of-tail--kubernetes-filter). |_none_|
- |`Tag_Regex`| Set a regular expression to extract fields from the filename. For example: `(?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-(?<container_id>[a-z0-9]{64})\.log$`|_none_|
+ |`Tag_Regex`| Set a regular expression to extract fields from the filename. For example: `(?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-(?<container_id>[a-z0-9]{64})\.log$`. |_none_|
|`Static_Batch_Size`| Set the maximum number of bytes to process per iteration for the monitored static files (files that already exist upon Fluent Bit start). |`50M`|
|`File_Cache_Advise`| Set `posix_fadvise` to `POSIX_FADV_DONTNEED` mode. This reduces the usage of the kernel file cache. This option is ignored if not running on Linux. |`On`|
|`Threaded`| Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). |`false`|
@@ -44,7 +44,7 @@ If the database parameter `DB` isn't specified, by default the plugin reads each
## Monitor a large number of files
- To monitor a large number of files, you can increase the inotify settings in your Linux environment by modifying the following `sysctl` parameters:
+ To monitor a large number of files, you can increase the `inotify` settings in your Linux environment by modifying the following `sysctl` parameters:
```text
sysctl fs.inotify.max_user_watches=LIMIT1
@@ -157,8 +157,8 @@ For the old multiline configuration, the following options exist to configure th
| Key | Description | Default |
| :--- | :--- | :--- |
|`Multiline`| If enabled, the plugin will try to discover multiline messages and use the proper parsers to compose the outgoing messages. When this option is enabled, the `Parser` option isn't used. |`Off`|
- |`Multiline_Flush`| Wait period time in seconds to process queued multiline messages |`4`|
- |`Parser_Firstline`| Name of the parser that matches the beginning of a multiline message. The regular expression defined in the parser must include a group name (named `capture`), and the value of the last match group must be a string |_none_|
+ |`Multiline_Flush`| Wait time, in seconds, to process queued multiline messages. |`4`|
+ |`Parser_Firstline`| Name of the parser that matches the beginning of a multiline message. The regular expression defined in the parser must include a group name (named `capture`), and the value of the last match group must be a string. |_none_|
|`Parser_N`| Optional. Extra parser to interpret and structure multiline entries. This option can be used to define multiple parsers. For example, `Parser_1 ab1`, `Parser_2 ab2`, `Parser_N abN`. |_none_|
### Old Docker mode configuration parameters
@@ -173,7 +173,7 @@ Docker mode exists to recombine JSON log lines split by the Docker daemon due to
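Docker mode is switched on directly on the tail input. The following is a minimal sketch, assuming the classic `Docker_Mode` and `Docker_Mode_Flush` options keep the same key names in the YAML configuration format; the container log path is also an assumption:

```yaml
pipeline:
  inputs:
    - name: tail
      # Assumed location of Docker container logs; adjust for your host.
      path: /var/lib/docker/containers/*/*.log
      # Recombine JSON log lines split by the Docker daemon.
      docker_mode: on
      # Seconds to wait for the remaining split lines before flushing (assumed value).
      docker_mode_flush: 4
```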
## Get started
- To tail text or log files, you can run the plugin from the command line or through the configuration file:
+ To tail text or log files, you can run the plugin from the command line or through the configuration file.
### Command line
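As a minimal sketch, assuming `/var/log/syslog` as the file to monitor and `stdout` as the output, the tail input can be started directly from the command line:

```shell
# Tail a single file and print the resulting records to standard output.
fluent-bit -i tail -p path=/var/log/syslog -o stdout
```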
@@ -221,7 +221,7 @@ pipeline:
When using multiline configuration, you must first specify `Multiline On` in the configuration, then use the `Parser_Firstline` parameter and additional `Parser_N` parsers if needed.
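For instance, a sketch of a tail input with the old multiline mode enabled; the path and the `multiline_java` parser name are assumptions, and the parser itself must be defined in a parsers file:

```yaml
pipeline:
  inputs:
    - name: tail
      # Assumed application log path.
      path: /var/log/myapp/app.log
      multiline: on
      # Parser that matches the first line of each multiline event.
      parser_firstline: multiline_java
```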
- For example, you might be trying to read the following Java stack trace as a single event
+ For example, you might be trying to read the following Java stack trace as a single event:
```text
Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
@@ -232,7 +232,7 @@ Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException: Something
at com.myproject.module.MyProject.main(MyProject.java:6)
```
- Specify a `Parser_Firstline` parameter that matches the first line of a multiline event. Once a match is made Fluent Bit reads all future lines until another match with `Parser_Firstline` is made.
+ Specify a `Parser_Firstline` parameter that matches the first line of a multiline event. When a match is made, Fluent Bit reads all future lines until another match with `Parser_Firstline` is made.
In this case, you can use the following parser, which extracts the time as `time` and the remaining portion of the multiline as `log`.
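A sketch of such a parser in the classic parsers-file syntax; the `multiline_java` name and the exact regular expression are assumptions, chosen only to match the first line of the stack trace above while capturing the timestamp as `time` and the rest as `log`:

```text
[PARSER]
    Name        multiline_java
    Format      regex
    # Matches lines such as "Dec 14 06:41:08 Exception in thread ..."
    Regex       /(?<time>[A-Za-z]+ \d+ \d+:\d+:\d+)(?<log>.*)/
```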
- When Fluent Bit runs, the database file `_/path/to/logs.db` will be created. This database is backed by SQLite3. f you are interested in exploring the content, you can open it with the SQLite client tool:
+ When Fluent Bit runs, the database file `/path/to/logs.db` will be created. This database is backed by SQLite3. If you are interested in exploring the content, you can open it with the SQLite client tool:
```shell
sqlite3 tail.db
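# Once inside the SQLite shell, the monitored files and their stored offsets
# can be inspected. The table name below is the one used by current Fluent Bit
# releases; verify it against your version:
#
#   sqlite> SELECT * FROM in_tail_files;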
@@ -418,7 +418,7 @@ While file rotation is handled, there are risks of potential log loss when using
- Race conditions: logs can be lost in the brief window between copying and truncating the file.
- Backpressure: if Fluent Bit is under backpressure, logs might be dropped if `copytruncate` occurs before they can be processed and sent.
- - See `logroate man page`: There is a very small time slice between copying the file and truncating it, so some logging data might be lost.
+ - See the `logrotate` man page: there is a very small time slice between copying the file and truncating it, so some logging data might be lost.
- Final note: the `Path` pattern must not match the rotated files; otherwise, the rotated file would be read again and lead to duplicate records (see the sketch after this list).
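A minimal sketch of a `Path` and `Exclude_Path` pair that avoids re-reading rotated copies; the log directory and rotation suffixes are assumptions:

```yaml
pipeline:
  inputs:
    - name: tail
      # Match only the live log file, not its rotated copies.
      path: /var/log/myapp/app.log
      # Defensive exclusion in case a broader pattern is used later.
      exclude_path: '*.1,*.gz'
```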