`pipeline/inputs/http.md` (2 additions, 2 deletions)
```diff
@@ -12,7 +12,7 @@ description: The HTTP input plugin allows you to send custom records to an HTTP
 | port | The port for Fluent Bit to listen on | 9880 |
 | tag_key | Specify the key name to overwrite a tag. If set, the tag will be overwritten by a value of the key. ||
 | buffer_max_size | Specify the maximum buffer size in KB to receive a JSON message. | 4M |
-| buffer_chunk_size | This sets the chunk size for incoming incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |
+| buffer_chunk_size | This sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |
 | successful_response_code | It allows to set successful response code. `200`, `201` and `204` are supported. | 201 |
 | success_header | Add an HTTP header key/value pair on success. Multiple headers can be set. Example: `X-Custom custom-answer`||
 | threaded | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). |`false`|
```
```diff
@@ -34,7 +34,7 @@ The http input plugin allows Fluent Bit to open up an HTTP port that you can the
 #### How to set tag

 The tag for the HTTP input plugin is set by adding the tag to the end of the request URL. This tag is then used to route the event through the system.
-For example, in the following curl message below the tag set is `app.log**. **` because the end end path is `/app_log`:
+For example, in the following curl message the tag set is `app.log` because the end path is `/app_log`:
```
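The curl command that the sentence above refers to is not included in this excerpt. As a hedged sketch (assuming the plugin's default port 9880 and an illustrative `/app.log` path), such a request might look like:

```shell
# Hypothetical request: the final path segment ("app.log") becomes the tag
# used to route this record through the pipeline.
curl -d '{"key1":"value1","key2":"value2"}' \
  -H "Content-Type: application/json" \
  http://localhost:9880/app.log
```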
`pipeline/inputs/prometheus-remote-write.md` (1 addition, 1 deletion)
```diff
@@ -13,7 +13,7 @@ This input plugin allows you to ingest a payload in the Prometheus remote-write
 | listen | The address to listen on | 0.0.0.0 |
 | port | The port for Fluent Bit to listen on | 8080 |
 | buffer\_max\_size | Specify the maximum buffer size in KB to receive a JSON message. | 4M |
-| buffer\_chunk\_size | This sets the chunk size for incoming incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |
+| buffer\_chunk\_size | This sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |
 | successful\_response\_code | It allows to set successful response code. `200`, `201` and `204` are supported.| 201 |
 | tag\_from\_uri | If true, tag will be created from uri, e.g. api\_prom\_push from /api/prom/push, and any tag specified in the config will be ignored. If false then a tag must be provided in the config for this input. | true |
 | uri | Specify an optional HTTP URI for the target web server listening for prometheus remote write payloads, e.g: /api/prom/push ||
```
```diff
 | port | The port for Fluent Bit to listen on | 9880 |
 | tag_key | Specify the key name to overwrite a tag. If set, the tag will be overwritten by a value of the key. ||
 | buffer_max_size | Specify the maximum buffer size in KB to receive a JSON message. | 4M |
-| buffer_chunk_size | This sets the chunk size for incoming incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |
+| buffer_chunk_size | This sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |
 | successful_response_code | It allows to set successful response code. `200`, `201` and `204` are supported. | 201 |
 | splunk\_token | Specify a Splunk token for HTTP HEC authentication. If multiple tokens are specified (with commas and no spaces), usage will be divided across each of the tokens. ||
 | store\_token\_in\_metadata | Store Splunk HEC tokens in the Fluent Bit metadata. If set false, they will be stored as normal key-value pairs in the record data. | true |
```
`pipeline/inputs/standard-input.md` (1 addition, 1 deletion)
```diff
@@ -17,7 +17,7 @@ If no parser is configured for the stdin plugin, it expects *valid JSON* input d
 1. A JSON object with one or more key-value pairs: `{ "key": "value", "key2": "value2" }`
 3. A 2-element JSON array in [Fluent Bit Event](../../concepts/key-concepts.md#event-or-record) format, which may be:
    * `[TIMESTAMP, { "key": "value" }]` where TIMESTAMP is a floating point value representing a timestamp in seconds; or
-   * from Fluent Bit v2.1.0, `[[TIMESTAMP, METADATA], { "key": "value" }]` where TIMESTAMP has the same meaning as above and and METADATA is a JSON object.
+   * from Fluent Bit v2.1.0, `[[TIMESTAMP, METADATA], { "key": "value" }]` where TIMESTAMP has the same meaning as above and METADATA is a JSON object.
```
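The accepted input shapes described above can each be piped straight into the stdin plugin. A minimal sketch (the timestamp and metadata values here are made up for illustration):

```shell
# Plain JSON object (one or more key-value pairs)
echo '{ "key": "value", "key2": "value2" }' | fluent-bit -i stdin -o stdout

# 2-element event array: [TIMESTAMP, RECORD]
echo '[1686451234.5, { "key": "value" }]' | fluent-bit -i stdin -o stdout

# From Fluent Bit v2.1.0: [[TIMESTAMP, METADATA], RECORD]
echo '[[1686451234.5, {"source": "demo"}], { "key": "value" }]' | fluent-bit -i stdin -o stdout
```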
`pipeline/outputs/postgresql.md` (1 addition, 1 deletion)
```diff
@@ -12,7 +12,7 @@ According to the parameters you have set in the configuration file, the plugin w

 > **NOTE:** If you are not familiar with how PostgreSQL's users and grants system works, you might find useful reading the recommended links in the "References" section at the bottom.

-A typical installation normally consists of a self-contained database for Fluent Bit in which you can store the output of one or more pipelines. Ultimately, it is your choice to to store them in the same table, or in separate tables, or even in separate databases based on several factors, including workload, scalability, data protection and security.
+A typical installation normally consists of a self-contained database for Fluent Bit in which you can store the output of one or more pipelines. Ultimately, it is your choice to store them in the same table, or in separate tables, or even in separate databases based on several factors, including workload, scalability, data protection and security.

 In this example, for the sake of simplicity, we use a single table called `fluentbit` in a database called `fluentbit` that is owned by the user `fluentbit`. Feel free to use different names. Preferably, for security reasons, do not use the `postgres` user \(which has `SUPERUSER` privileges\).
```
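One way the example role and database above could be created, sketched with PostgreSQL's standard client tools (the names come from the paragraph above; adjust to your own environment):

```shell
# Create a dedicated, non-superuser role; -P prompts for a password.
createuser -P fluentbit

# Create the "fluentbit" database, owned by the "fluentbit" role.
createdb -O fluentbit fluentbit
```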
```diff
-The native processor plugin supports the following configuration parameters:
+The Metrics Selector processor supports the following configuration parameters:

 | Key | Description | Default |
 | :---------- | :--- | :--- |
-|Metric\_Name|Keep metrics in which the metric of name matches with the actual name or the regular expression. ||
-|Context|Specify matching context. Currently, metric\_name and delete\_label\_value are only supported. |`Metrics_Name`|
-|Action|Specify the action for specified metrics. INCLUDE and EXCLUDE are allowed. ||
-|Operation\_Type|Specify the operation type of action for metrics payloads. PREFIX and SUBSTRING are allowed. ||
-|Label|Specify a label key and value pair. ||
+|`metric_name`|The string that determines which metrics are affected by this processor, depending on the active [matching operation](#matching-operations). ||
+|`context`|Specifies matching context. Possible values: `metric_name` or `delete_label`. |`metrics_name`|
+|`action`|Specifies whether to include or exclude matching metrics. Possible values: `INCLUDE` or `EXCLUDE`. ||
+|`operation_type`|Specifies the [matching operation](#matching-operations) to apply to the value of `metric_name`. Possible values: `PREFIX` or `SUBSTRING`. ||
```
```diff
+The Metrics Selector processor has two matching operations: prefix matching and substring matching.
+
+### Prefix matching
+
+Prefix matching compares the value of `metric_name` to the beginning of each incoming metric name. For example, `metric_name: fluentbit_input` results in a match for metrics named `fluentbit_input_records`, but not for metrics named `total_fluentbit_input`.
+
+If no `operation_type` value is specified and the value of `metric_name` is a standard string, the Metrics Selector processor defaults to prefix matching.
+
+### Substring matching
+
+Substring matching treats the value of `metric_name` as a regex pattern, and compares this pattern against each incoming metric name accordingly. This pattern can appear anywhere within the name of the incoming metric. For example, `metric_name: bytes` results in a match for metrics named `bytes_total` and metrics named `input_bytes_count`.
+
+If the value of `metric_name` is a string wrapped in forward slashes (for example, `metric_name: /storage..*/`), the Metrics Selector processor defaults to substring matching, regardless of whether an `operation_type` value is specified. This means that a `metric_name` value wrapped in forward slashes will always use substring matching, even if `operation_type` is set to `PREFIX`.
+
+However, if `operation_type` is explicitly set to `SUBSTRING`, you don't need to wrap the value of `metric_name` in forward slashes.
+
+## Configuration examples
+
+The following examples show possible configurations of the Metrics Selector processor.
```
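The examples themselves are not part of this excerpt. A hedged sketch of one possible YAML configuration, built only from the parameters documented above (the input plugin name and pipeline layout are illustrative assumptions):

```yaml
# Illustrative sketch: keep only metrics whose names contain "storage".
# The forward slashes around the metric_name value trigger substring matching.
pipeline:
  inputs:
    - name: fluentbit_metrics
      processors:
        metrics:
          - name: metrics_selector
            metric_name: /storage/
            action: INCLUDE
```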
`pipeline/processors/sampling.md` (4 additions, 4 deletions)
```diff
@@ -1,6 +1,6 @@
-# Traces
+# Sampling

-The _Traces_ sampling processor is designed with a pluggable architecture, allowing easy extension to support multiple sampling strategies and backends. It provides you with the ability to apply head or tail sampling to incoming trace telemetry data.
+The _Sampling_ processor is designed with a pluggable architecture, allowing easy extension to support multiple trace sampling strategies and backends. It provides you with the ability to apply head or tail sampling to incoming trace telemetry data.

 Available samplers:

@@ -21,9 +21,9 @@ Conditions:

 The processor does not provide any extra configuration parameter, it can be used directly in your _processors_ Yaml directive.

-## Traces types
+## Sampling types

-Traces have both a name and a type with the following possible settings:
+Sampling has both a name and a type with the following possible settings:
```