
Commit 071eb08

Filters: fix remaining vale/markdownlint errors
Signed-off-by: Alexa Kreizinger <[email protected]>
1 parent a056b9e commit 071eb08

13 files changed (+85 −80 lines)

SUMMARY.md

Lines changed: 1 addition & 1 deletion
@@ -172,7 +172,7 @@
 * [Sysinfo](pipeline/filters/sysinfo.md)
 * [Tensorflow](pipeline/filters/tensorflow.md)
 * [Throttle](pipeline/filters/throttle.md)
-* [Type Converter](pipeline/filters/type-converter.md)
+* [Type converter](pipeline/filters/type-converter.md)
 * [Wasm](pipeline/filters/wasm.md)
 * [Outputs](pipeline/outputs/README.md)
 * [Amazon CloudWatch](pipeline/outputs/cloudwatch.md)

pipeline/filters/ecs-metadata.md

Lines changed: 2 additions & 2 deletions
@@ -10,7 +10,7 @@ The plugin supports the following configuration parameters:
 
 | Key | Description | Default |
 | :--- | :--- | :--- |
-| `Add` | Similar to the `ADD` option in the [modify filter](https://docs.fluentbit.io/manual/pipeline/filters/modify). You can specify it multiple times. It takes two arguments: a `KEY` name and `VALUE`. The value uses Fluent Bit [`record_accessor`](https://docs.fluentbit.io/manual/v/1.5/administration/configuring-fluent-bit/record-accessor) syntax to create a template that uses ECS Metadata values. See the list of supported metadata templating keys. This option allows you to control both the key names for metadata and the format for metadata values. | _none_ |
+| `Add` | Similar to the `ADD` option in the [modify filter](https://docs.fluentbit.io/manual/pipeline/filters/modify). You can specify it multiple times. It takes two arguments: a `KEY` name and `VALUE`. The value uses Fluent Bit [`record_accessor`](https://docs.fluentbit.io/manual/v/1.5/administration/configuring-fluent-bit/record-accessor) syntax to create a template that uses ECS Metadata values. See the list of supported metadata templating keys. This option lets you control both the key names for metadata and the format for metadata values. | _none_ |
 | `ECS_Tag_Prefix` | Similar to the `Kube_Tag_Prefix` option in the [Kubernetes filter](https://docs.fluentbit.io/manual/pipeline/filters/kubernetes) and performs the same function. The full log tag should be prefixed with this string and after the prefix the filter must find the next characters in the tag to be the Docker Container Short ID (the first 12 characters of the full container ID). The filter uses this to identify which container the log came from so it can find which task it's a part of. See the design section for more information. If not specified, it defaults to empty string, meaning that the tag must be prefixed with the 12 character container short ID. If you want to attach cluster metadata to system or OS logs from processes that don't run as part of containers or ECS Tasks, don't set this parameter and enable the `Cluster_Metadata_Only` option | empty string |
 | `Cluster_Metadata_Only` | When enabled, the plugin will only attempt to attach cluster metadata values. Use to attach cluster metadata to system or OS logs from processes that don't run as part of containers or ECS Tasks. | `Off` |
 | `ECS_Meta_Cache_TTL` | The filter builds a hash table in memory mapping each unique container short ID to its metadata. This option sets a max `TTL` for objects in the hash table. You should set this if you have frequent container or task restarts. For example, if your cluster runs short running batch jobs that complete in less than 10 minutes, there is no reason to keep any stored metadata longer than 10 minutes. You would therefore set this parameter to `10m`. | `1h` |
@@ -269,4 +269,4 @@ pipeline:
 ```
 
 {% endtab %}
-{% endtabs %}
+{% endtabs %}
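
For reference, the `Add` and `ECS_Meta_Cache_TTL` options described in the first hunk might be combined as in the following minimal sketch. The `$ClusterName` template key and the `forward` input are illustrative assumptions, not content from this commit:

```yaml
# Hypothetical sketch, not taken from this commit. With no ecs_tag_prefix set,
# the incoming tag is expected to start with the 12-character container short
# ID (see the ECS_Tag_Prefix description above). The $ClusterName template key
# is an assumption; check the plugin's list of supported templating keys.
pipeline:
  inputs:
    - name: forward
      port: 24224

  filters:
    - name: ecs
      match: '*'
      add: cluster $ClusterName
      ecs_meta_cache_ttl: 10m

  outputs:
    - name: stdout
      match: '*'
```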

pipeline/filters/grep.md

Lines changed: 11 additions & 12 deletions
@@ -16,9 +16,9 @@ The plugin supports the following configuration parameters:
 | `Exclude` | `KEY REGEX` | Exclude records where the content of `KEY` matches the regular expression. |
 | `Logical_Op` | `Operation` | Specify a logical operator: `AND`, `OR` or `legacy` (default). In `legacy` mode the behaviour is either `AND` or `OR` depending on whether the `grep` is including (uses `AND`) or excluding (uses OR). Available from 2.1 or higher. |
 
-### Record Accessor enabled
+### Record accessor enabled
 
-Enable the [Record Accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md) feature to specify the `KEY`. Use the record accessor to match values against nested values.
+Enable the [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md) feature to specify the `KEY`. Use the record accessor to match values against nested values.
 
 ## Filter records
 
@@ -53,18 +53,18 @@ fluent-bit -i tail -p 'path=lines.txt' -F grep -p 'regex=log aa' -m '*' -o stdou
 ```yaml
 service:
   parsers_file: /path/to/parsers.conf
-
+
 pipeline:
   inputs:
     - name: tail
       path: lines.txt
       parser: json
-
+
   filters:
     - name: grep
       match: '*'
       regex: log aa
-
+
   outputs:
     - name: stdout
       match: '*'
@@ -95,8 +95,7 @@ pipeline:
 {% endtab %}
 {% endtabs %}
 
-
-The filter allows to use multiple rules which are applied in order, you can have many `Regex` and `Exclude` entries as required ([more information](#multiple-conditions)).
+The filter lets you use multiple rules which are applied in order, you can have many `Regex` and `Exclude` entries as required ([more information](#multiple-conditions)).
 
 ### Nested fields example
 
@@ -127,8 +126,8 @@ For example, to exclude records that match the nested field `kubernetes.labels.a
 {% tab title="fluent-bit.yaml" %}
 
 ```yaml
-pipeline:
-
+pipeline:
+
   filters:
     - name: grep
       match: '*'
@@ -162,7 +161,7 @@ The following example checks for a specific valid value for the key:
 
 ```yaml
 pipeline:
-
+
   filters:
     # Use Grep to verify the contents of the iot_timestamp value.
     # If the iot_timestamp key does not exist, this will fail
@@ -214,7 +213,7 @@ pipeline:
     - name: dummy
      dummy: '{"endpoint":"localhost", "value":"something"}'
      tag: dummy
-
+
   filters:
     - name: grep
       match: '*'
@@ -257,4 +256,4 @@ The output looks similar to:
 ```text
 [0] dummy: [1674348410.558341857, {"endpoint"=>"localhost", "value"=>"something"}]
 [0] dummy: [1674348411.546425499, {"endpoint"=>"localhost", "value"=>"something"}]
-```
+```
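
For reference, the nested-field exclusion these hunks document might look like the following minimal sketch. The tail path and the `myapp` label value are illustrative assumptions, not content from this commit:

```yaml
# Hypothetical sketch, not taken from this commit: drop any record whose nested
# kubernetes.labels.app value matches "myapp", using record accessor syntax in
# the Exclude rule.
pipeline:
  inputs:
    - name: tail
      path: /var/log/containers/*.log
      parser: json

  filters:
    - name: grep
      match: '*'
      exclude: $kubernetes['labels']['app'] myapp

  outputs:
    - name: stdout
      match: '*'
```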

pipeline/filters/kubernetes.md

Lines changed: 2 additions & 2 deletions
@@ -302,7 +302,7 @@ parsers:
   - name: custom-tag
     format: regex
     regex: '^(?<namespace_name>[^_]+)\.(?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)\.(?<container_name>.+)\.(?<container_id>[a-z0-9]{64})'
-
+
 pipeline:
   inputs:
     - name: tail
@@ -560,7 +560,7 @@ Learn how to solve them to ensure that the Fluent Bit Kubernetes filter is opera
 
 If set roles are configured correctly, it should respond with `yes`.
 
-For instance, using Azure AKS, running the previous command might respond with:
+For instance, using Azure Kubernetes Service (AKS), running the previous command might respond with:
 
 ```text
 no - Azure does not have opinion for this user.
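
For reference, a parser like `custom-tag` above is typically wired into the filter through its `Regex_Parser` option. A minimal sketch, assuming an upstream forwarder already tags records in the format that regex expects (not part of this commit):

```yaml
# Hypothetical sketch, not taken from this commit. Assumes records arrive
# tagged as <namespace>.<pod_name>.<container_name>.<container_id>, which the
# custom-tag regex above can decompose; kube_tag_prefix may also need
# adjusting to match your tag layout.
pipeline:
  inputs:
    - name: forward
      port: 24224

  filters:
    - name: kubernetes
      match: '*'
      regex_parser: custom-tag

  outputs:
    - name: stdout
      match: '*'
```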

pipeline/filters/log_to_metrics.md

Lines changed: 13 additions & 13 deletions
@@ -4,7 +4,7 @@ description: Generate metrics from logs
 
 # Logs to metrics
 
-The _log to metrics_ filter lets you generate log-derived metrics. It supports modes to count records, provide a guage for field values, or create a histogram. You can also match or exclude specific records based on regular expression patterns for values or nested values.
+The _log to metrics_ filter lets you generate log-derived metrics. It supports modes to count records, provide a gauge for field values, or create a histogram. You can also match or exclude specific records based on regular expression patterns for values or nested values.
 
 This filter doesn't actually act as a record filter and therefore doesn't change or drop records. All records will pass through this filter untouched, and any generated metrics will be emitted into a separate metric pipeline.
 
@@ -53,13 +53,13 @@ The following example takes records from two `dummy` inputs and counts all messa
 service:
   flush: 1
   log_level: info
-
+
 pipeline:
   inputs:
     - name: dummy
       dummy: '{"message":"dummy", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 20, "color": "red", "shape": "circle"}'
       tag: dummy.log
-
+
     - name: dummy
       dummy: '{"message":"hello", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 60, "color": "blue", "shape": "square"}'
       tag: dummy.log2
@@ -154,13 +154,13 @@ The `gauge` mode needs a `value_field` to specify where to generate the metric v
 service:
   flush: 1
   log_level: info
-
+
 pipeline:
   inputs:
     - name: dummy
       dummy: '{"message":"dummy", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 20, "color": "red", "shape": "circle"}'
       tag: dummy.log
-
+
     - name: dummy
       dummy: '{"message":"hello", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 60, "color": "blue", "shape": "square"}'
       tag: dummy.log2
@@ -176,7 +176,7 @@ pipeline:
       kubernetes_mode: on
       regex: 'message .*el.*'
       add_label: app $kubernetes['labels']['app']
-      label_field:
+      label_field:
         - color
         - shape
 
@@ -218,7 +218,7 @@ pipeline:
     add_label app $kubernetes['labels']['app']
     label_field color
     label_field shape
-
+
 [OUTPUT]
     name prometheus_exporter
     match *
@@ -278,13 +278,13 @@ Similar to the `gauge` mode, the `histogram` mode needs a `value_field` to speci
 service:
   flush: 1
   log_level: info
-
+
 pipeline:
   inputs:
     - name: dummy
       dummy: '{"message":"dummy", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 20, "color": "red", "shape": "circle"}'
       tag: dummy.log
-
+
     - name: dummy
       dummy: '{"message":"hello", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 60, "color": "blue", "shape": "square"}'
       tag: dummy.log2
@@ -342,7 +342,7 @@ pipeline:
     add_label app $kubernetes['labels']['app']
     label_field color
    label_field shape
-
+
 [OUTPUT]
     name prometheus_exporter
     match *
@@ -417,13 +417,13 @@ In the resulting output, there are several buckets by default: `0.005, 0.01, 0.0
 service:
   flush: 1
   log_level: info
-
+
 pipeline:
   inputs:
     - name: dummy
       dummy: '{"message":"dummy", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 20, "color": "red", "shape": "circle"}'
       tag: dummy.log
-
+
     - name: dummy
       dummy: '{"message":"hello", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 60, "color": "blue", "shape": "square"}'
       tag: dummy.log2
@@ -496,7 +496,7 @@ pipeline:
     regex message .*el.*
     label_field color
     label_field shape
-
+
 [OUTPUT]
     name prometheus_exporter
     match *
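
For reference, the counter mode mentioned in the first hunk might be configured as in the following minimal sketch. The metric name, dummy input, and exporter port are illustrative assumptions, not content from this commit:

```yaml
# Hypothetical sketch, not taken from this commit: count every record from a
# dummy input and expose the result through prometheus_exporter.
pipeline:
  inputs:
    - name: dummy
      dummy: '{"message":"dummy", "color": "red"}'
      tag: dummy.log

  filters:
    - name: log_to_metrics
      match: 'dummy.log*'
      tag: test_metric
      metric_mode: counter
      metric_name: count_all_dummy_messages
      metric_description: 'Counts all records from the dummy input'

  outputs:
    - name: prometheus_exporter
      match: test_metric
      host: 0.0.0.0
      port: 2021
```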
