**`pipeline/outputs/azure_blob.md`** (3 additions, 3 deletions)
```diff
@@ -10,7 +10,7 @@ The Fluent Bit plugin works with the official Azure Service and can be configure
 ## Azure Storage account

-Ensure you have an Azure Storage account. [Azure Blob Storage Tutorial \(Video\)](https://www.youtube.com/watch?v=-sCKnOm8G_g) explains how to set up your account.
+Ensure you have an Azure Storage account. [Azure Blob Storage Tutorial (video)](https://www.youtube.com/watch?v=-sCKnOm8G_g) explains how to set up your account.

 ## Configuration parameters
```
```diff
@@ -49,7 +49,7 @@ Fluent Bit exposes the following configuration properties.
 Fluent Bit can deliver records to the official service or an emulator.

-### Configuration for Azure Storage Service
+### Configuration for Azure Storage

 The following configuration example generates a random message with a custom tag:

@@ -212,4 +212,4 @@ Azurite Queue service is successfully listening at http://127.0.0.1:10001
```
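The `azure_blob.md` page referenced above describes a configuration example that isn't visible in this diff. As a hedged sketch of what such a pipeline might look like, assuming the plugin's `account_name`, `shared_key`, and `container_name` parameters (all values below are placeholders, not part of this change):

```yaml
# Hypothetical sketch: a dummy input feeding the azure_blob output.
# account_name, shared_key, and container_name values are placeholders;
# replace them with your own Azure Storage account details.
pipeline:
  inputs:
    - name: dummy
      tag: my.custom.tag
      dummy: '{"message": "hello from fluent-bit"}'

  outputs:
    - name: azure_blob
      match: '*'
      account_name: YOUR_ACCOUNT_NAME
      shared_key: YOUR_SHARED_KEY
      container_name: logs
```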
**`pipeline/outputs/azure_kusto.md`** (1 addition, 1 deletion)
```diff
@@ -70,7 +70,7 @@ By default, Kusto will insert incoming ingestion data into a table by inferring
 |`buffering_enabled`| Optional. Enable buffering into disk before ingesting into Azure Kusto. |`Off`|
 |`buffer_dir`| Optional. When buffering is `On`, specifies the location of directory where the buffered data will be stored. |`/tmp/fluent-bit/azure-kusto/`|
 |`upload_timeout`| Optional. When buffering is `On`, specifies a timeout for uploads. Fluent Bit will start ingesting buffer files which have been created more than x minutes and haven't reached `upload_file_size` limit. |`30m`|
-|`upload_file_size`| Optional. When buffering is `On`, specifies the size of files to be uploaded in MBs. |`200MB`|
+|`upload_file_size`| Optional. When buffering is `On`, specifies the size of files to be uploaded in megabytes. |`200MB`|
 |`azure_kusto_buffer_key`| Optional. When buffering is `On`, set the Azure Kusto buffer key which must be specified when using multiple instances of Azure Kusto output plugin and buffering is enabled. |`key`|
 |`store_dir_limit_size`| Optional. When buffering is `On`, set the max size of the buffer directory. |`8GB`|
 |`buffer_file_delete_early`| Optional. When buffering is `On`, whether to delete the buffered file early after successful blob creation. |`Off`|
```
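Taken together, the Azure Kusto buffering parameters in the table above might be combined in a sketch like the following. The parameter names and defaults come from the table; the buffer key value and the idea of running it alongside other settings are illustrative, not part of this change:

```yaml
# Sketch of azure_kusto disk buffering, using the parameters documented above.
# The values shown mirror the documented defaults; kusto1 is a made-up key.
pipeline:
  outputs:
    - name: azure_kusto
      match: '*'
      buffering_enabled: on
      buffer_dir: /tmp/fluent-bit/azure-kusto/
      upload_timeout: 30m
      upload_file_size: 200MB
      azure_kusto_buffer_key: kusto1
      store_dir_limit_size: 8GB
      buffer_file_delete_early: off
```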
The _Amazon CloudWatch_ output plugin lets you ingest your records into the [CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) service. Support for CloudWatch Metrics is also provided using [Embedded Metric Format (EMF)](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html).
**`pipeline/outputs/gelf.md`** (2 additions, 2 deletions)
```diff
@@ -1,4 +1,4 @@
-# Graylog Extended Log Format (GELF
+# Graylog Extended Log Format (GELF)

 The _[Graylog](https://www.graylog.org) Extended Log Format (GELF)_ output plugin lets you send logs in GELF format directly to a Graylog input using TLS, TCP, or UDP protocols.

@@ -26,7 +26,7 @@ According to the [GELF Payload Specification](https://go2docs.graylog.org/5-0/ge
 ### TLS / SSL

-The GELF output plugin supports TLS/SSL. For iformation about the properties available and general configuration, see [TLS/SSL](../../administration/transport-security.md).
+The GELF output plugin supports TLS/SSL. For information about the properties available and general configuration, see [TLS/SSL](../../administration/transport-security.md).
```
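As a hedged sketch of the TLS usage the corrected sentence describes, assuming the plugin's `mode` parameter and the generic `tls` transport properties (the host and port below are placeholders):

```yaml
# Hypothetical sketch: GELF output over TLS.
# graylog.example.com and port 12201 are placeholder values.
pipeline:
  outputs:
    - name: gelf
      match: '*'
      host: graylog.example.com
      port: 12201
      mode: tls
      tls: on
      tls.verify: on
```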
**`pipeline/outputs/logdna.md`** (1 addition, 1 deletion)
```diff
@@ -28,7 +28,7 @@ When the login processes a record or log, it searches for specific key names tha
 | Key | Description |
 | :--- | :--- |
-|`level`| If the record contains a key called `level` or `severity`, it will populate the context `level` key with that value. If not found, the context key is not set. |
+|`level`| If the record contains a key called `level` or `severity`, Fluent Bit will populate the context `level` key with that value. If not found, Fluent Bit won't set the context key. |
 |`file`| If the record contains a key called `file`, it will populate the context `file` with the value found. Otherwise, if the plugin configuration provided a `file` property, that value will be used instead. |
 |`app`| If the record contains a key called `app`, it will populate the context `app` with the value found, otherwise it will use the value set for `app` in the configuration property. |
 |`meta`| If the record contains a key called `meta`, it will populate the context `meta` with the value found. |
```
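As an illustration of the key lookup the table describes, a record like the following would populate the context `level`, `file`, `app`, and `meta` values. The record contents are invented for the example:

```json
{
  "level": "warn",
  "file": "/var/log/app.log",
  "app": "checkout-service",
  "meta": {"region": "us-east-1"},
  "message": "payment retry scheduled"
}
```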
**`pipeline/outputs/loki.md`** (7 additions, 7 deletions)
```diff
@@ -22,8 +22,8 @@ There is a separate Golang output plugin provided by [Grafana](https://grafana.c
 |`labels`| Stream labels for API request. It can be multiple comma separated of strings specifying `key=value` pairs. Allows fixed parameters, or adding custom record keys (similar to the `label_keys` property). See the Labels section. |`job=fluent-bit`|
 |`label_keys`| (Optional.) List of record keys that will be placed as stream labels. This configuration property is for records key only. See the Labels section. |_none_|
 |`label_map_path`| Specify the label map path. The file defines how to extract labels from each record. See the Labels section. |_none_|
-|`structured_metadata`| (Optional.) Comma-separated list of `key=value` strings specifying structured metadata for the log line. Like the `labels` parameter, values can reference record keys using record accessors. See [Structured metadata](#structured_metadata). |_none_|
-|`structured_metadata_map_keys`| (Optional.) Comma-separated list of record key strings specifying record values of type `map`, used to dynamically populate structured metadata for the log line. Values can only reference record keys using record accessors, which should reference map values. Each entry from the referenced map will be used to add an entry to the structured metadata. See [Structured metadata](#structured_metadata). |_none_|
+|`structured_metadata`| (Optional.) Comma-separated list of `key=value` strings specifying structured metadata for the log line. Like the `labels` parameter, values can reference record keys using record accessors. See [Use `structured_metadata`](#use-structured_metadata). |_none_|
+|`structured_metadata_map_keys`| (Optional.) Comma-separated list of record key strings specifying record values of type `map`, used to dynamically populate structured metadata for the log line. Values can only reference record keys using record accessors, which should reference map values. Each entry from the referenced map will be used to add an entry to the structured metadata. See [Use `structured_metadata`](#use-structured_metadata). |_none_|
 |`remove_keys`| (Optional.) List of keys to remove. |_none_|
 |`drop_single_key`| When set to `true` and after extracting labels only a single key remains, the log line sent to Loki will be the value of that key in `line_format`. If set to `raw` and the log line is a string, the log line will be sent unquoted. |`off`|
 |`line_format`| Format to use when flattening the record to a log line. Valid values are `json` or `key_value`. If set to `json`, the log line sent to Loki will be the Fluent Bit record dumped as JSON. If set to `key_value`, the log line will be each item in the record concatenated together (separated by a single space) in the format. |`json`|
```
```diff
-If you're running in a Kubernetes environment, consider enabling the `auto_kubernetes_labels` option, which autopopulates the streams with the Pod labels for you. Consider the following configuration:
+If you're running in a Kubernetes environment, consider enabling the `auto_kubernetes_labels` option, which populates the streams with the Pod labels for you. Consider the following configuration:

 {% tabs %}
 {% tab title="fluent-bit.yaml" %}
```
```diff
@@ -479,7 +479,7 @@ pipeline:
 {% endtab %}
 {% endtabs %}

-## Networking and TLS Configuration
+## Networking and TLS configuration

 This plugin inherits core Fluent Bit features to customize the network behavior and optionally enable TLS in the communication channel. For more details about the specific options available, refer to the following articles:
```
```diff
@@ -492,7 +492,7 @@ All options mentioned in these articles must be enabled in the plugin configurat
 Fluent Bit supports sending logs and metrics to [Grafana Cloud](https://grafana.com/products/cloud/) by providing the appropriate URL and ensuring TLS is enabled.

-Below is an example configuration, be sure to set the credentials (shown here with XXX) and ensure the host URL matches the correct one for your deployment:
+The following samples show example configurations. Be sure to set the credentials (shown here with `XXX`) and ensure the host URL matches the correct one for your deployment:

 {% tabs %}
 {% tab title="fluent-bit.yaml" %}
```
```diff
@@ -529,9 +529,9 @@ pipeline:
 {% endtab %}
 {% endtabs %}

-## Get Started
+## Get started

-The following configuration example emits a dummy example record and ingests it on Loki. Copy and paste the corresponding content below into a file `out_loki.yaml` or `out_loki.conf`:
+The following configuration example emits a dummy example record and ingests it on Loki. Copy and paste the following content into a file `out_loki.yaml` or `out_loki.conf`:
```
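Combining the label-related Loki parameters from the table above, a hedged sketch might look like this. The host, the label values, and the record accessor paths (`$kubernetes['namespace_name']`, `$kubernetes['pod_name']`) are invented for illustration:

```yaml
# Sketch combining labels, label_keys, and structured_metadata as described
# in the parameter table. All hosts and accessor paths are placeholders.
pipeline:
  outputs:
    - name: loki
      match: '*'
      host: loki.example.com
      port: 3100
      labels: job=fluent-bit, env=prod
      label_keys: $kubernetes['namespace_name']
      structured_metadata: pod=$kubernetes['pod_name']
      line_format: json
```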
**`pipeline/outputs/stackdriver_special_fields.md`** (6 additions, 6 deletions)
````diff
@@ -62,11 +62,11 @@ For the special fields that map to `LogEntry` prototypes, add them as objects wi
 }
 ```

-Adding special fields to logs is best done through the [`modify` filter](https://docs.fluentbit.io/manual/pipeline/filters/modify) for simple fields, or [a Lua script using the `lua` filter](https://docs.fluentbit.io/manual/pipeline/filters/lua) for more complex fields.
+Adding special fields to logs is best done through the [`modify` filter](https://docs.fluentbit.io/manual/pipeline/filters/modify) for basic fields, or [a Lua script using the `lua` filter](https://docs.fluentbit.io/manual/pipeline/filters/lua) for more complex fields.

-## Simple type special fields
+## Basic type special fields

-Special fields with simple types (except for the [`logging.googleapis.com/insertId` field](#insert-id)) will follow this pattern (demonstrated with the `logging.googleapis.com/logName` field):
+Special fields with basic types (except for the [`logging.googleapis.com/insertId` field](#insert-id)) will follow this pattern (demonstrated with the `logging.googleapis.com/logName` field):

 1. If the special field matches the type, it will be moved to the corresponding LogEntry field. For example:
````
````diff
@@ -111,7 +111,7 @@ Special fields with simple types (except for the [`logging.googleapis.com/insert
 }
 ```

-### Exceptions
+### Exceptions[#exceptions-basic]

 #### Insert ID
````
````diff
@@ -244,9 +244,9 @@ the `logEntry will be:
 }
 ```

-### Exceptions
+### Exceptions[#exceptions-proto]

-#### Monitored Resource ID
+#### `MonitoredResource` ID

 The `logging.googleapis.com/monitored_resource` field is parsed in a special way, meaning it has some important exceptions:
````
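The change above recommends the `modify` filter for adding basic-type special fields. As a hedged sketch of that approach (the field value `my-custom-log` and the match pattern are invented; the `add` property is assumed to follow the `modify` filter's documented key/value form):

```yaml
# Hypothetical sketch: adding the basic-type special field
# logging.googleapis.com/logName to each record via the modify filter.
pipeline:
  filters:
    - name: modify
      match: '*'
      add: logging.googleapis.com/logName my-custom-log
```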