diff --git a/pipeline/processors/conditional-processing.md b/pipeline/processors/conditional-processing.md index cf4459357..0b77c7188 100644 --- a/pipeline/processors/conditional-processing.md +++ b/pipeline/processors/conditional-processing.md @@ -9,8 +9,8 @@ Conditional processing is available in Fluent Bit version 4.0 and greater. You can turn a standard processor into a conditional processor by adding a `condition` block to the processor's YAML configuration settings. {% hint style="info" %} -- Only [YAML configuration files](../administration/configuring-fluent-bit/yaml/README.md) support processors, including conditional processors. -- Conditional processing isn't supported if you're using a [filter as a processor](../pipeline/processors/filters). +- Only [YAML configuration files](../../administration/configuring-fluent-bit/yaml/README.md) support processors, including conditional processors. +- Conditional processing isn't supported if you're using a [filter as a processor](filters.md). {% endhint %} These `condition` blocks use the following syntax: @@ -20,22 +20,23 @@ These `condition` blocks use the following syntax: ```yaml pipeline: - inputs: + inputs: <...> - processors: - logs: - - name: processor_name - <...> - condition: - op: {and|or} - rules: - - field: {field_name1} - op: {comparison_operator} - value: {comparison_value1} - - field: {field_name2} - op: {comparison_operator} - value: {comparison_value2} - <...> + + processors: + logs: + - name: processor_name + <...> + condition: + op: {and|or} + rules: + - field: {field_name1} + op: {comparison_operator} + value: {comparison_value1} + - field: {field_name2} + op: {comparison_operator} + value: {comparison_value2} + <...> ``` {% endtab %} @@ -62,10 +63,11 @@ Each item in the `condition.rules` array must include values for the following p Rules are evaluated against each log that passes through your data pipeline. For example, given a rule with these parameters: -``` -- field: "$status" - op: eq - value: 200 +```yaml +rules: + - field: "$status" + op: eq + value: 200 ``` This rule evaluates to `true` for a log that contains the string `'status':200`, but evaluates to `false` for a log that contains the string `'status':403`.
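For an end-to-end sketch of this evaluation (the `dummy` record and the `matched` key here are illustrative, not taken from the examples below), the processor runs only when the rule passes:

```yaml
pipeline:
  inputs:
    # Emits records like {"status": 200}, which satisfy the rule above.
    - name: dummy
      dummy: '{"status": 200}'

  processors:
    logs:
      - name: content_modifier
        action: insert
        key: matched
        value: true
        condition:
          op: and
          rules:
            - field: "$status"
              op: eq
              value: 200

  outputs:
    - name: stdout
      match: '*'
      format: json_lines
```

Changing the dummy record to `'{"status": 403}'` makes the rule evaluate to `false`, so the processor leaves the record unmodified.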
@@ -102,23 +104,23 @@ This example applies a condition that only processes logs that contain the strin ```yaml pipeline: - inputs: - - name: dummy - dummy: '{"request": {"method": "GET", "path": "/api/v1/resource"}}' - tag: request.log - - processors: - logs: - - name: content_modifier - action: insert - key: modified_if_post - value: true - condition: - op: and - rules: - - field: "$request['method']" - op: eq - value: "POST" + inputs: + - name: dummy + dummy: '{"request": {"method": "GET", "path": "/api/v1/resource"}}' + tag: request.log + + processors: + logs: + - name: content_modifier + action: insert + key: modified_if_post + value: true + condition: + op: and + rules: + - field: "$request['method']" + op: eq + value: "POST" ``` {% endtab %} @@ -133,26 +135,26 @@ This example applies a condition that only processes logs when all the specified ```yaml pipeline: - inputs: - - name: dummy - dummy: '{"request": {"method": "POST", "path": "/api/v1/sensitive-data"}}' - tag: request.log - - processors: - logs: - - name: content_modifier - action: insert - key: requires_audit - value: true - condition: - op: and - rules: - - field: "$request['method']" - op: eq - value: "POST" - - field: "$request['path']" - op: regex - value: "\/sensitive-.*" + inputs: + - name: dummy + dummy: '{"request": {"method": "POST", "path": "/api/v1/sensitive-data"}}' + tag: request.log + + processors: + logs: + - name: content_modifier + action: insert + key: requires_audit + value: true + condition: + op: and + rules: + - field: "$request['method']" + op: eq + value: "POST" + - field: "$request['path']" + op: regex + value: "\/sensitive-.*" ``` {% endtab %} @@ -167,26 +169,26 @@ This example applies a condition that only processes logs when one or more of th ```yaml pipeline: - inputs: - - name: dummy - dummy: '{"request": {"method": "GET", "path": "/api/v1/resource", "status_code": 200, "response_time": 150}}' - tag: request.log - - processors: - logs: - - name: content_modifier - action: insert - key: requires_performance_check - value: true - condition: - op: or - rules: - - field: "$request['response_time']" - op: gt - value: 100 - - field: "$request['status_code']" - op: gte - value: 400 + inputs: + - name: dummy + dummy: '{"request": {"method": "GET", "path": "/api/v1/resource", "status_code": 200, "response_time": 150}}' + tag: request.log + + processors: + logs: + - name: content_modifier + action: insert + key: requires_performance_check + value: true + condition: + op: or + rules: + - field: "$request['response_time']" + op: gt + value: 100 + - field: "$request['status_code']" + op: gte + value: 400 ``` {% endtab %} @@ -201,23 +203,23 @@ This example uses an array for the value of `condition.rules.value`: ```yaml pipeline: - inputs: - - name: dummy - dummy: '{"request": {"method": "GET", "path": "/api/v1/resource"}}' - tag: request.log - - processors: - logs: - - name: content_modifier - action: insert - key: high_priority_method - value: true - condition: - op: and - rules: - - field: "$request['method']" - op: in - value: ["POST", "PUT", "DELETE"] + inputs: + - name: dummy + dummy: '{"request": {"method": "GET", "path": "/api/v1/resource"}}' + tag: request.log + + processors: + logs: + - name: content_modifier + action: insert + key: high_priority_method + value: true + condition: + op: and + rules: + - field: "$request['method']" + op: in + value: ["POST", "PUT", "DELETE"] ``` {% endtab %} @@ -232,43 +234,43 @@ This example uses multiple processors with conditional processing enabled for ea ```yaml 
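# The two conditional processors below run in sequence. Each one applies only
# when its own condition matches: the first tags errors from critical
# services, and the second flags log messages that match paging patterns.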
pipeline: - inputs: - - name: dummy - dummy: '{"log": "Error: Connection refused", "level": "error", "service": "api-gateway"}' - tag: app.log - - processors: - logs: - - name: content_modifier - action: insert - key: alert - value: true - condition: - op: and - rules: - - field: "$level" - op: eq - value: "error" - - field: "$service" - op: in - value: ["api-gateway", "authentication", "database"] - - - name: content_modifier - action: insert - key: paging_required - value: true - condition: - op: and - rules: - - field: "$log" - op: regex - value: "(?i)(connection refused|timeout|crash)" - - field: "$level" - op: in - value: ["error", "fatal"] + inputs: + - name: dummy + dummy: '{"log": "Error: Connection refused", "level": "error", "service": "api-gateway"}' + tag: app.log + + processors: + logs: + - name: content_modifier + action: insert + key: alert + value: true + condition: + op: and + rules: + - field: "$level" + op: eq + value: "error" + - field: "$service" + op: in + value: ["api-gateway", "authentication", "database"] + + - name: content_modifier + action: insert + key: paging_required + value: true + condition: + op: and + rules: + - field: "$log" + op: regex + value: "(?i)(connection refused|timeout|crash)" + - field: "$level" + op: in + value: ["error", "fatal"] ``` {% endtab %} {% endtabs %} -This configuration adds an `alert` field to error logs from critical services, and adds a `paging_required` field to errors that contain specific critical patterns. +This configuration adds an `alert` field to error logs from critical services, and adds a `paging_required` field to errors that contain specific critical patterns. \ No newline at end of file diff --git a/pipeline/processors/content-modifier.md b/pipeline/processors/content-modifier.md index d09047be3..0d6abd8a5 100644 --- a/pipeline/processors/content-modifier.md +++ b/pipeline/processors/content-modifier.md @@ -6,7 +6,7 @@ Similar to how filters work, this processor uses a unified mechanism to perform {% hint style="info" %} -Only [YAML configuration files](../administration/configuring-fluent-bit/yaml/README.md) support processors. +Only [YAML configuration files](../../administration/configuring-fluent-bit/yaml/README.md) support processors. {% endhint %} @@ -42,7 +42,7 @@ The following contexts are available: {% hint style="info" %} -If your data doesn't follow the OpenTelemetry log schema, but your log destination expects to be in that format, you can use the [OpenTelemetry envelope](../pipeline/processors/opentelemetry-envelope) processor to transform your data. You can then pass that transformed data through the content modifier filter and use OpenTelemetry contexts accordingly. +If your data doesn't follow the OpenTelemetry log schema, but your log destination expects data in that format, you can use the [OpenTelemetry envelope](opentelemetry-envelope.md) processor to transform your data. You can then pass that transformed data through the content modifier filter and use OpenTelemetry contexts accordingly.
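A minimal sketch of that chaining (mirroring the example in the [OpenTelemetry envelope](opentelemetry-envelope.md) documentation; the `service.name` value is a placeholder):

```yaml
processors:
  logs:
    # First, wrap plain log records in the OpenTelemetry log schema.
    - name: opentelemetry_envelope

    # Then operate on the OpenTelemetry resource attributes directly.
    - name: content_modifier
      context: otel_resource_attributes
      action: upsert
      key: service.name
      value: my-service
```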
{% endhint %} @@ -81,21 +81,21 @@ The following example appends the key `color` with the value `blue` to the log s ```yaml pipeline: - inputs: - - name: dummy - dummy: '{"key1": "123.4"}' - - processors: - logs: - - name: content_modifier - action: insert - key: "color" - value: "blue" - - outputs: - - name : stdout - match: '*' - format: json_lines + inputs: + - name: dummy + dummy: '{"key1": "123.4"}' + + processors: + logs: + - name: content_modifier + action: insert + key: "color" + value: "blue" + + outputs: + - name : stdout + match: '*' + format: json_lines ``` {% endtab %} @@ -110,26 +110,26 @@ Update the value of `key1` and insert `key2`: ```yaml pipeline: - inputs: - - name: dummy - dummy: '{"key1": "123.4"}' - - processors: - logs: - - name: content_modifier - action: upsert - key: "key1" - value: "5678" - - - name: content_modifier - action: upsert - key: "key2" - value: "example" - - outputs: - - name : stdout - match: '*' - format: json_lines + inputs: + - name: dummy + dummy: '{"key1": "123.4"}' + + processors: + logs: + - name: content_modifier + action: upsert + key: "key1" + value: "5678" + + - name: content_modifier + action: upsert + key: "key2" + value: "example" + + outputs: + - name : stdout + match: '*' + format: json_lines ``` {% endtab %} @@ -144,20 +144,20 @@ Delete `key2` from the stream: ```yaml pipeline: - inputs: - - name: dummy - dummy: '{"key1": "123.4", "key2": "example"}' - - processors: - logs: - - name: content_modifier - action: delete - key: "key2" - - outputs: - - name : stdout - match: '*' - format: json_lines + inputs: + - name: dummy + dummy: '{"key1": "123.4", "key2": "example"}' + + processors: + logs: + - name: content_modifier + action: delete + key: "key2" + + outputs: + - name : stdout + match: '*' + format: json_lines ``` {% endtab %} @@ -172,21 +172,21 @@ Change the name of `key2` to `test`: ```yaml pipeline: - inputs: - - name: dummy - dummy: '{"key1": "123.4", "key2": "example"}' - - processors: - logs: - - name: content_modifier - action: rename - key: "key2" - value: "test" - - outputs: - - name : stdout - match: '*' - format: json_lines + inputs: + - name: dummy + dummy: '{"key1": "123.4", "key2": "example"}' + + processors: + logs: + - name: content_modifier + action: rename + key: "key2" + value: "test" + + outputs: + - name : stdout + match: '*' + format: json_lines ``` {% endtab %} @@ -201,20 +201,20 @@ Apply the SHA-256 algorithm for the value of the key `password`: ```yaml pipeline: - inputs: - - name: dummy - dummy: '{"username": "bob", "password": "12345"}' - - processors: - logs: - - name: content_modifier - action: hash - key: "password" - - outputs: - - name : stdout - match: '*' - format: json_lines + inputs: + - name: dummy + dummy: '{"username": "bob", "password": "12345"}' + + processors: + logs: + - name: content_modifier + action: hash + key: "password" + + outputs: + - name : stdout + match: '*' + format: json_lines ``` {% endtab %} @@ -222,28 +222,28 @@ pipeline: #### Extract example -By using a domain address, perform a extraction of the components of it as a list of key value pairs: +Using a domain address, extract its components as a list of key-value pairs: {% tabs %} {% tab title="fluent-bit.yaml" %} ```yaml pipeline: - inputs: - - name: dummy - dummy: '{"http.url": "https://fluentbit.io/docs?q=example"}' - - processors: - logs: - - name: content_modifier - action: extract - key: "http.url" - pattern: ^(?<http_protocol>https?):\/\/(?<http_domain>[^\/\?]+)(?<http_path>\/[^?]*)?(?:\?(?<http_query_params>.*))?
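        # Each named capture group above (http_protocol, http_domain,
        # http_path, and http_query_params) becomes a new key in the record.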
- - outputs: - - name : stdout - match: '*' - format: json_lines + inputs: + - name: dummy + dummy: '{"http.url": "https://fluentbit.io/docs?q=example"}' + + processors: + logs: + - name: content_modifier + action: extract + key: "http.url" + pattern: ^(?<http_protocol>https?):\/\/(?<http_domain>[^\/\?]+)(?<http_path>\/[^?]*)?(?:\?(?<http_query_params>.*))? + + outputs: + - name : stdout + match: '*' + format: json_lines ``` {% endtab %} @@ -258,27 +258,27 @@ Both keys in the example are strings. Convert the `key1` to a double/float type ```yaml pipeline: - inputs: - - name: dummy - dummy: '{"key1": "123.4", "key2": "true"}' - - processors: - logs: - - name: content_modifier - action: convert - key: key1 - converted_type: int - - - name: content_modifier - action: convert - key: key2 - converted_type: boolean - - outputs: - - name : stdout - match: '*' - format: json_lines + inputs: + - name: dummy + dummy: '{"key1": "123.4", "key2": "true"}' + + processors: + logs: + - name: content_modifier + action: convert + key: key1 + converted_type: int + + - name: content_modifier + action: convert + key: key2 + converted_type: boolean + + outputs: + - name : stdout + match: '*' + format: json_lines ``` {% endtab %} -{% endtabs %} +{% endtabs %} \ No newline at end of file diff --git a/pipeline/processors/filters.md b/pipeline/processors/filters.md index ddc65961a..9879f2b94 100644 --- a/pipeline/processors/filters.md +++ b/pipeline/processors/filters.md @@ -4,7 +4,7 @@ You can use any [filter](../filters/README.md) as a processor in Fluent Bit. {% hint style="info" %} -Only [YAML configuration files](../administration/configuring-fluent-bit/yaml/README.md) support processors. +Only [YAML configuration files](../../administration/configuring-fluent-bit/yaml/README.md) support processors. {% endhint %} @@ -17,19 +17,20 @@ In this example, the [Grep](../filters/grep.md) filter is an output processor th ```yaml pipeline: - inputs: - - name: tail - path: lines.txt - parser: json - outputs: - - name: stdout - match: '*' - - processors: - logs: - - name: grep - regex: log aa + inputs: + - name: tail + path: lines.txt + parser: json + + outputs: + - name: stdout + match: '*' + + processors: + logs: + - name: grep + regex: log aa ``` {% endtab %} -{% endtabs %} +{% endtabs %} \ No newline at end of file diff --git a/pipeline/processors/labels.md b/pipeline/processors/labels.md index d114cbc55..c6a902b06 100644 --- a/pipeline/processors/labels.md +++ b/pipeline/processors/labels.md @@ -2,11 +2,11 @@ The _labels_ processor lets you manipulate the labels of metrics. -Similar to filters, this processor presents a enriching/modifying mechanism to perform operations for labels manipulation. The most significant difference is that processors perform better than filters, and when chaining them there are no encoding or decoding performance penalties. +Similar to filters, this processor provides a mechanism for enriching and modifying labels. The most significant difference is that processors perform better than filters, and chaining them incurs no encoding or decoding performance penalties. {% hint style="info" %} -Only [YAML configuration files](../administration/configuring-fluent-bit/yaml/README.md) support processors. +Only [YAML configuration files](../../administration/configuring-fluent-bit/yaml/README.md) support processors.
{% endhint %} @@ -29,16 +29,17 @@ The following example changes the value of the `name` key to `fluentbit`: ```yaml pipeline: - inputs: - - name: fluentbit_metrics - - processors: - metrics: - - name: labels - update: name fluentbit - outputs: - - name : stdout - match: '*' + inputs: + - name: fluentbit_metrics + + processors: + metrics: + - name: labels + update: name fluentbit + + outputs: + - name : stdout + match: '*' ``` {% endtab %} @@ -53,17 +54,17 @@ The following example appends the key `agent` with the value `fluentbit` as the ```yaml pipeline: - inputs: - - name: fluentbit_metrics + inputs: + - name: fluentbit_metrics - processors: - metrics: - - name: labels - insert: agent fluentbit + processors: + metrics: + - name: labels + insert: agent fluentbit - outputs: - - name : stdout - match: '*' + outputs: + - name : stdout + match: '*' ``` {% endtab %} @@ -78,16 +79,17 @@ The following example upserts the value of `name` and inserts `fluentbit`: ```yaml pipeline: - inputs: - - name: fluentbit_metrics - - processors: - metrics: - - name: labels - upsert: name fluentbit - outputs: - - name : stdout - match: '*' + inputs: + - name: fluentbit_metrics + + processors: + metrics: + - name: labels + upsert: name fluentbit + + outputs: + - name : stdout + match: '*' ``` {% endtab %} @@ -102,16 +104,17 @@ The following example deletes the `name` key from metrics: ```yaml pipeline: - inputs: - - name: fluentbit_metrics - - processors: - metrics: - - name: labels - delete: name - outputs: - - name : stdout - match: '*' + inputs: + - name: fluentbit_metrics + + processors: + metrics: + - name: labels + delete: name + + outputs: + - name : stdout + match: '*' ``` {% endtab %} @@ -126,17 +129,18 @@ The following example applies the SHA-1 algorithm for the value of the key `host ```yaml pipeline: - inputs: - - name: fluentbit_metrics - - processors: - metrics: - - name: labels - hash: hostname - outputs: - - name : stdout - match: '*' + inputs: + - name: fluentbit_metrics + + processors: + metrics: + - name: labels + hash: hostname + + outputs: + - name : stdout + match: '*' ``` {% endtab %} -{% endtabs %} +{% endtabs %} \ No newline at end of file diff --git a/pipeline/processors/metrics-selector.md b/pipeline/processors/metrics-selector.md index ae4845e94..e42384f06 100644 --- a/pipeline/processors/metrics-selector.md +++ b/pipeline/processors/metrics-selector.md @@ -1,8 +1,6 @@ # Metrics selector -The _metrics selector_ processor lets you choose which metrics to include or exclude, similar to the [grep](../pipeline/filters/grep) filter for logs. - - +The _metrics selector_ processor lets you choose which metrics to include or exclude, similar to the [grep](../filters/grep.md) filter for logs. 
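Before the full parameter reference, here is a minimal sketch of the idea (it reuses the `/storage/` pattern from the examples below): the processor keeps only the metrics whose name matches the pattern.

```yaml
pipeline:
  inputs:
    - name: fluentbit_metrics

  processors:
    metrics:
      # Keep only metrics whose name matches /storage/.
      - name: metrics_selector
        metric_name: /storage/
        action: include

  outputs:
    - name: stdout
      match: '*'
```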
## Configuration parameters @@ -57,6 +55,7 @@ pipeline: - name: metrics_selector metric_name: /storage/ action: include + - name: metrics_selector metric_name: /fs/ action: exclude @@ -64,7 +63,6 @@ pipeline: - name: labels delete: name - outputs: - name: stdout match: '*' @@ -96,4 +94,4 @@ pipeline: outputs: - name: stdout match: '*' -``` +``` \ No newline at end of file diff --git a/pipeline/processors/opentelemetry-envelope.md b/pipeline/processors/opentelemetry-envelope.md index b8c5c8cab..d22662adb 100644 --- a/pipeline/processors/opentelemetry-envelope.md +++ b/pipeline/processors/opentelemetry-envelope.md @@ -6,7 +6,7 @@ The _OpenTelemetry envelope_ processor transforms your data to be compatible wit {% hint style="info" %} -Only [YAML configuration files](../administration/configuring-fluent-bit/yaml/README.md) support processors. +Only [YAML configuration files](../../administration/configuring-fluent-bit/yaml/README.md) support processors. {% endhint %} @@ -23,26 +23,26 @@ The following example uses the `dummy` input plugin to generate one sample messa ```yaml service: - flush: 1 - log_level: info + flush: 1 + log_level: info pipeline: - inputs: - - name: dummy - dummy: '{"message": "Hello World"}' - - processors: - logs: - - name: opentelemetry_envelope - - outputs: - - name : stdout - match: '*' - - - name: opentelemetry - match: '*' - host: 127.0.0.1 - port: 4318 + inputs: + - name: dummy + dummy: '{"message": "Hello World"}' + + processors: + logs: + - name: opentelemetry_envelope + + outputs: + - name : stdout + match: '*' + + - name: opentelemetry + match: '*' + host: 127.0.0.1 + port: 4318 ``` {% endtab %} @@ -50,25 +50,25 @@ pipeline: ```yaml receivers: - otlp: - protocols: - http: - endpoint: 127.0.0.1:4318 + otlp: + protocols: + http: + endpoint: 127.0.0.1:4318 exporters: - file: - path: out.json - logging: - loglevel: info + file: + path: out.json + logging: + loglevel: info service: - telemetry: - logs: - level: debug - pipelines: - logs: - receivers: [otlp] - exporters: [file, logging] + telemetry: + logs: + level: debug + pipelines: + logs: + receivers: [otlp] + exporters: [file, logging] ``` {% endtab %} @@ -111,32 +111,32 @@ If you're interested in additional transformations, you can also use the [conten ```yaml service: - flush: 1 - log_level: info + flush: 1 + log_level: info pipeline: - inputs: - - name: dummy - dummy: '{"message": "Hello World"}' - - processors: - logs: - - name: opentelemetry_envelope - - - name: content_modifier - context: otel_resource_attributes - action: upsert - key: service.name - value: my-service - - outputs: - - name : stdout - match: '*' - - - name: opentelemetry - match: '*' - host: 127.0.0.1 - port: 4318 + inputs: + - name: dummy + dummy: '{"message": "Hello World"}' + + processors: + logs: + - name: opentelemetry_envelope + + - name: content_modifier + context: otel_resource_attributes + action: upsert + key: service.name + value: my-service + + outputs: + - name : stdout + match: '*' + + - name: opentelemetry + match: '*' + host: 127.0.0.1 + port: 4318 ``` {% endtab %} @@ -178,4 +178,4 @@ The collector JSON output will resemble the following: } ``` -For more details about further processing, read the [content modifier](../processors/content-modifier.md) processor documentation. +For more details about further processing, read the [content modifier](../processors/content-modifier.md) processor documentation. 
\ No newline at end of file diff --git a/pipeline/processors/sampling.md b/pipeline/processors/sampling.md index 698e3d377..5b37635b3 100644 --- a/pipeline/processors/sampling.md +++ b/pipeline/processors/sampling.md @@ -4,7 +4,7 @@ The _sampling_ processor uses an extendable architecture that supports multiple {% hint style="info" %} -Only [YAML configuration files](../administration/configuring-fluent-bit/yaml/README.md) support processors. +Only [YAML configuration files](../../administration/configuring-fluent-bit/yaml/README.md) support processors. {% endhint %} @@ -20,7 +20,7 @@ This processor uses the following configuration parameters: ## Head sampling -Head sampling makes the decision whether or not to keep a trace at the very beginning of its ingestion. This is when a root span is created but before the request is actually fulfilled. +Head sampling makes the decision whether to keep a trace at the very beginning of its ingestion. This is when a root span is created but before the request is actually fulfilled. ![Head sampling diagram](../../.gitbook/assets/traces_head_sampling.png) @@ -37,26 +37,26 @@ This example uses head sampling to process a smaller percentage of the overall i ```yaml service: - flush: 1 - log_level: info - hot_reload: on + flush: 1 + log_level: info + hot_reload: on pipeline: - inputs: - - name: opentelemetry - port: 4318 - - processors: - traces: - # Head sampling of traces (percentage) - - name: sampling - type: probabilistic - sampling_settings: - sampling_percentage: 40 - - outputs: - - name: stdout - match: "*" + inputs: + - name: opentelemetry + port: 4318 + + processors: + traces: + # Head sampling of traces (percentage) + - name: sampling + type: probabilistic + sampling_settings: + sampling_percentage: 40 + + outputs: + - name: stdout + match: "*" ``` {% endtab %} @@ -110,30 +110,30 @@ The following example waits five seconds before making a decision. It then sampl ```yaml service: - flush: 1 - log_level: info - hot_reload: on + flush: 1 + log_level: info + hot_reload: on pipeline: - inputs: - - name: opentelemetry - port: 4318 - - processors: - traces: - # Tail sampling of traces (latency) - - name: sampling - type: tail - sampling_settings: - decision_wait: 5s - conditions: - - type: latency - threshold_ms_low: 200 - threshold_ms_high: 3000 - - outputs: - - name: stdout - match: "*" + inputs: + - name: opentelemetry + port: 4318 + + processors: + traces: + # Tail sampling of traces (latency) + - name: sampling + type: tail + sampling_settings: + decision_wait: 5s + conditions: + - type: latency + threshold_ms_low: 200 + threshold_ms_high: 3000 + + outputs: + - name: stdout + match: "*" ``` {% endtab %} @@ -158,30 +158,30 @@ The following example configuration waits five seconds before making a decision. 
```yaml service: - flush: 1 - log_level: info - hot_reload: on + flush: 1 + log_level: info + hot_reload: on pipeline: - inputs: - - name: opentelemetry - port: 4318 - - processors: - traces: - # Tail sampling of traces (span_count) - - name: sampling - type: tail - sampling_settings: - decision_wait: 5s - conditions: - - type: span_count - min_spans: 3 - max_spans: 5 - - outputs: - - name: stdout - match: "*" + inputs: + - name: opentelemetry + port: 4318 + + processors: + traces: + # Tail sampling of traces (span_count) + - name: sampling + type: tail + sampling_settings: + decision_wait: 5s + conditions: + - type: span_count + min_spans: 3 + max_spans: 5 + + outputs: + - name: stdout + match: "*" ``` {% endtab %} @@ -205,29 +205,29 @@ The following example configuration samples only spans with the `ERROR` status c ```yaml service: - flush: 1 - log_level: info - hot_reload: on + flush: 1 + log_level: info + hot_reload: on pipeline: - inputs: - - name: opentelemetry - port: 4318 - - processors: - traces: - # Tail sampling of traces (status_code) - - name: sampling - type: tail - sampling_settings: - decision_wait: 5s - conditions: - - type: status_code - status_codes: [ERROR] - - outputs: - - name: stdout - match: "*" + inputs: + - name: opentelemetry + port: 4318 + + processors: + traces: + # Tail sampling of traces (status_code) + - name: sampling + type: tail + sampling_settings: + decision_wait: 5s + conditions: + - type: status_code + status_codes: [ERROR] + + outputs: + - name: stdout + match: "*" ``` {% endtab %} @@ -258,49 +258,49 @@ The following example configuration waits two seconds before making a decision. ```yaml service: - flush: 1 - log_level: info - hot_reload: on + flush: 1 + log_level: info + hot_reload: on pipeline: - inputs: - - name: opentelemetry - port: 4318 - - processors: - traces: - # Tail sampling of traces (string_attribute) - - name: sampling - type: tail - sampling_settings: - decision_wait: 2s - conditions: - # Exact matching - - type: string_attribute - match_type: strict - key: "http.method" - values: ["GET"] - - # Check if attribute exists - - type: string_attribute - match_type: exists - key: "service.name" - - # Regex pattern matching - - type: string_attribute - match_type: regex - key: "http.url" - values: ["^https://api\\..*", ".*\\/health$"] - - # Multiple regex patterns for error conditions - - type: string_attribute - match_type: regex - key: "error.message" - values: ["timeout.*", "connection.*failed", ".*rate.?limit.*"] - - outputs: - - name: stdout - match: "*" + inputs: + - name: opentelemetry + port: 4318 + + processors: + traces: + # Tail sampling of traces (string_attribute) + - name: sampling + type: tail + sampling_settings: + decision_wait: 2s + conditions: + # Exact matching + - type: string_attribute + match_type: strict + key: "http.method" + values: ["GET"] + + # Check if attribute exists + - type: string_attribute + match_type: exists + key: "service.name" + + # Regex pattern matching + - type: string_attribute + match_type: regex + key: "http.url" + values: ["^https://api\\..*", ".*\\/health$"] + + # Multiple regex patterns for error conditions + - type: string_attribute + match_type: regex + key: "error.message" + values: ["timeout.*", "connection.*failed", ".*rate.?limit.*"] + + outputs: + - name: stdout + match: "*" ``` {% endtab %} @@ -327,31 +327,31 @@ The following example configuration samples only spans with the key `http.status ```yaml service: - flush: 1 - log_level: info - hot_reload: on + flush: 1 + log_level: info + 
hot_reload: on pipeline: - inputs: - - name: opentelemetry - port: 4318 - - processors: - traces: - # Tail sampling of traces (status_code) - - name: sampling - type: tail - sampling_settings: - decision_wait: 5s - conditions: - - type: numeric_attribute - key: "http.status_code" - min_value: 400 - max_value: 504 - - outputs: - - name: stdout - match: "*" + inputs: + - name: opentelemetry + port: 4318 + + processors: + traces: + # Tail sampling of traces (numeric_attribute) + - name: sampling + type: tail + sampling_settings: + decision_wait: 5s + conditions: + - type: numeric_attribute + key: "http.status_code" + min_value: 400 + max_value: 504 + + outputs: + - name: stdout + match: "*" ``` {% endtab %} @@ -376,30 +376,30 @@ The following example configuration waits two seconds before making a decision. ```yaml service: - flush: 1 - log_level: info - hot_reload: on + flush: 1 + log_level: info + hot_reload: on pipeline: - inputs: - - name: opentelemetry - port: 4318 - - processors: - traces: - # Tail sampling of traces (boolean_attribute) - - name: sampling - type: tail - sampling_settings: - decision_wait: 2s - conditions: - - type: boolean_attribute - key: "user.logged" - value: false - - outputs: - - name: stdout - match: "*" + inputs: + - name: opentelemetry + port: 4318 + + processors: + traces: + # Tail sampling of traces (boolean_attribute) + - name: sampling + type: tail + sampling_settings: + decision_wait: 2s + conditions: + - type: boolean_attribute + key: "user.logged" + value: false + + outputs: + - name: stdout + match: "*" ``` {% endtab %} @@ -423,32 +423,32 @@ Example configuration: ```yaml service: - flush: 1 - log_level: info - hot_reload: on + flush: 1 + log_level: info + hot_reload: on pipeline: - inputs: - - name: opentelemetry - port: 4318 - - processors: - traces: - # Tail sampling of traces (trace_state) - - name: sampling - type: tail - sampling_settings: - decision_wait: 2s - conditions: - - type: trace_state - values: [debug=false, priority=high] - - outputs: - - name: stdout - match: "*" + inputs: + - name: opentelemetry + port: 4318 + + processors: + traces: + # Tail sampling of traces (trace_state) + - name: sampling + type: tail + sampling_settings: + decision_wait: 2s + conditions: + - type: trace_state + values: [debug=false, priority=high] + + outputs: + - name: stdout + match: "*" ``` {% endtab %} {% endtabs %} -For more details about further processing, read the [content modifier](../processors/content-modifier.md) processor documentation. +For more details about further processing, read the [content modifier](../processors/content-modifier.md) processor documentation. \ No newline at end of file diff --git a/pipeline/processors/sql.md b/pipeline/processors/sql.md index 4e433a44e..4ade4b640 100644 --- a/pipeline/processors/sql.md +++ b/pipeline/processors/sql.md @@ -6,7 +6,7 @@ This processor differs from the stream processor interface that runs after filte {% hint style="info" %} -Only [YAML configuration files](../administration/configuring-fluent-bit/yaml/README.md) support processors. +Only [YAML configuration files](../../administration/configuring-fluent-bit/yaml/README.md) support processors.
{% endhint %} @@ -25,19 +25,19 @@ The following example generates a sample message with the keys `key` and `http.u ```yaml pipeline: - inputs: - - name: dummy - dummy: '{"key1": "123.4", "http.url": "https://fluentbit.io/search?q=docs"}' - - processors: - logs: - - name: sql - query: "SELECT http.url FROM STREAM;" - - outputs: - - name : stdout - match: '*' - format: json_lines + inputs: + - name: dummy + dummy: '{"key1": "123.4", "http.url": "https://fluentbit.io/search?q=docs"}' + + processors: + logs: + - name: sql + query: "SELECT http.url FROM STREAM;" + + outputs: + - name : stdout + match: '*' + format: json_lines ``` {% endtab %} @@ -52,24 +52,24 @@ The following example is similar to the previous example, but additionally extra ```yaml pipeline: - inputs: - - name: dummy - dummy: '{"key1": "123.4", "http.url": "https://fluentbit.io/search?q=docs"}' - - processors: - logs: - - name: content_modifier - action: extract - key: "http.url" - pattern: ^(?<http_protocol>https?):\/\/(?<http_domain>[^\/\?]+)(?<http_path>\/[^?]*)?(?:\?(?<http_query_params>.*))? - - - name: sql - query: "SELECT http_domain FROM STREAM;" - - outputs: - - name : stdout - match: '*' - format: json_lines + inputs: + - name: dummy + dummy: '{"key1": "123.4", "http.url": "https://fluentbit.io/search?q=docs"}' + + processors: + logs: + - name: content_modifier + action: extract + key: "http.url" + pattern: ^(?<http_protocol>https?):\/\/(?<http_domain>[^\/\?]+)(?<http_path>\/[^?]*)?(?:\?(?<http_query_params>.*))? + + - name: sql + query: "SELECT http_domain FROM STREAM;" + + outputs: + - name : stdout + match: '*' + format: json_lines ``` {% endtab %} @@ -82,4 +82,4 @@ The resulting output resembles the following: "date": 1711059261.630668, "http_domain": "fluentbit.io" } -``` +``` \ No newline at end of file
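As a small extension of the examples above, the following sketch assumes the processor's `SELECT` statement accepts a comma-separated key list (this page only shows single-key queries), keeping both the original `key1` and the extracted `http_domain`:

```yaml
processors:
  logs:
    - name: content_modifier
      action: extract
      key: "http.url"
      pattern: ^(?<http_protocol>https?):\/\/(?<http_domain>[^\/\?]+)(?<http_path>\/[^?]*)?(?:\?(?<http_query_params>.*))?

    - name: sql
      query: "SELECT key1, http_domain FROM STREAM;"
```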