diff --git a/pipeline/outputs/azure_blob.md b/pipeline/outputs/azure_blob.md
index f5881070a..36796e7e2 100644
--- a/pipeline/outputs/azure_blob.md
+++ b/pipeline/outputs/azure_blob.md
@@ -10,7 +10,7 @@ The Fluent Bit plugin works with the official Azure Service and can be configure
 
 ## Azure Storage account
 
-Ensure you have an Azure Storage account. [Azure Blob Storage Tutorial \(Video\)](https://www.youtube.com/watch?v=-sCKnOm8G_g) explains how to set up your account.
+Ensure you have an Azure Storage account. [Azure Blob Storage Tutorial (video)](https://www.youtube.com/watch?v=-sCKnOm8G_g) explains how to set up your account.
 
 ## Configuration parameters
 
@@ -49,7 +49,7 @@ Fluent Bit exposes the following configuration properties.
 
 Fluent Bit can deliver records to the official service or an emulator.
 
-### Configuration for Azure Storage Service
+### Configuration for Azure Storage
 
 The following configuration example generates a random message with a custom tag:
 
@@ -212,4 +212,4 @@ Azurite Queue service is successfully listening at http://127.0.0.1:10001
 127.0.0.1 - - [03/Sep/2020:17:40:03 +0000] "PUT /devstoreaccount1/logs/kubernetes/var.log.containers.app-default-96cbdef2340.log?comp=appendblock HTTP/1.1" 404 -
 127.0.0.1 - - [03/Sep/2020:17:40:03 +0000] "PUT /devstoreaccount1/logs/kubernetes/var.log.containers.app-default-96cbdef2340.log HTTP/1.1" 201 -
 127.0.0.1 - - [03/Sep/2020:17:40:04 +0000] "PUT /devstoreaccount1/logs/kubernetes/var.log.containers.app-default-96cbdef2340.log?comp=appendblock HTTP/1.1" 201 -
-```
\ No newline at end of file
+```
diff --git a/pipeline/outputs/azure_kusto.md b/pipeline/outputs/azure_kusto.md
index deae7de6a..a841d242b 100644
--- a/pipeline/outputs/azure_kusto.md
+++ b/pipeline/outputs/azure_kusto.md
@@ -70,7 +70,7 @@ By default, Kusto will insert incoming ingestion data into a table by inferring
 | `buffering_enabled` | Optional. Enable buffering into disk before ingesting into Azure Kusto. | `Off` |
 | `buffer_dir` | Optional. When buffering is `On`, specifies the location of directory where the buffered data will be stored. | `/tmp/fluent-bit/azure-kusto/` |
 | `upload_timeout` | Optional. When buffering is `On`, specifies a timeout for uploads. Fluent Bit will start ingesting buffer files which have been created more than x minutes and haven't reached `upload_file_size` limit. | `30m` |
-| `upload_file_size`| Optional. When buffering is `On`, specifies the size of files to be uploaded in MBs. | `200MB` |
+| `upload_file_size`| Optional. When buffering is `On`, specifies the size of files to be uploaded in megabytes. | `200MB` |
 | `azure_kusto_buffer_key` | Optional. When buffering is `On`, set the Azure Kusto buffer key which must be specified when using multiple instances of Azure Kusto output plugin and buffering is enabled. | `key` |
 | `store_dir_limit_size` | Optional. When buffering is `On`, set the max size of the buffer directory. | `8GB` |
 | `buffer_file_delete_early` | Optional. When buffering is `On`, whether to delete the buffered file early after successful blob creation. | `Off` |
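+
+As a rough sketch of how these buffering options fit together, the following configuration enables disk buffering alongside the plugin's connection settings. The authentication and ingestion values are placeholders, and the connection parameters are assumptions for illustration; only the buffering options come from the table above.
+
+```yaml
+pipeline:
+  outputs:
+    - name: azure_kusto
+      match: '*'
+      # Assumed connection parameters; replace with your own values.
+      tenant_id: <tenant-id>
+      client_id: <client-id>
+      client_secret: <client-secret>
+      ingestion_endpoint: https://ingest-<cluster>.<region>.kusto.windows.net
+      database_name: logs
+      table_name: fluentbit
+      # Buffering options documented in the table above.
+      buffering_enabled: on
+      buffer_dir: /tmp/fluent-bit/azure-kusto/
+      upload_timeout: 30m
+      upload_file_size: 200MB
+```
+
+With these settings, records accumulate on disk and a buffer file is ingested once it reaches 200 megabytes or has existed for 30 minutes, whichever comes first.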
diff --git a/pipeline/outputs/cloudwatch.md b/pipeline/outputs/cloudwatch.md
index 3075479d5..b9007ab18 100644
--- a/pipeline/outputs/cloudwatch.md
+++ b/pipeline/outputs/cloudwatch.md
@@ -4,7 +4,7 @@ description: Send logs and metrics to Amazon CloudWatch
 
 # Amazon CloudWatch
 
-![](<../../.gitbook/assets/image (3) (2) (2) (4) (4) (3) (1).png>)
+![Amazon CloudWatch](<../../.gitbook/assets/image (3) (2) (2) (4) (4) (3) (1).png>)
 
 The _Amazon CloudWatch_ output plugin lets you ingest your records into the [CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) service. Support for CloudWatch Metrics is also provided using [Embedded Metric Format (EMF)](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html).
 
diff --git a/pipeline/outputs/dynatrace.md b/pipeline/outputs/dynatrace.md
index b2be7517c..3912b05cb 100644
--- a/pipeline/outputs/dynatrace.md
+++ b/pipeline/outputs/dynatrace.md
@@ -78,8 +78,12 @@ To get started with sending logs to Dynatrace:
 
 ## References
 
+<!-- vale off -->
+
 - [Dynatrace Fluent Bit documentation](https://docs.dynatrace.com/docs/shortlink/lma-stream-logs-with-fluent-bit)
 - [Fluent Bit integration in Dynatrace Hub](https://www.dynatrace.com/hub/detail/fluent-bit/?filter=log-management-and-analytics)
 - [Video: Stream a Log File to Dynatrace using Fluent Bit](https://www.youtube.com/watch?v=JJJNxhtJ6R0)
 - [Blog: Easily stream logs from Fluent Bit to Dynatrace](https://www.dynatrace.com/news/blog/easily-stream-logs-with-fluent-bit-to-dynatrace/)
+
+<!-- vale on -->
diff --git a/pipeline/outputs/gelf.md b/pipeline/outputs/gelf.md
index 8737cb3b5..7bef1bb1c 100644
--- a/pipeline/outputs/gelf.md
+++ b/pipeline/outputs/gelf.md
@@ -1,4 +1,4 @@
-# Graylog Extended Log Format (GELF
+# Graylog Extended Log Format (GELF)
 
 The _[Graylog](https://www.graylog.org) Extended Log Format (GELF)_ output plugin lets you send logs in GELF format directly to a Graylog input using TLS, TCP, or UDP protocols.
 
@@ -26,7 +26,7 @@ According to the [GELF Payload Specification](https://go2docs.graylog.org/5-0/ge
 
 ### TLS / SSL
 
-The GELF output plugin supports TLS/SSL. For iformation about the properties available and general configuration, see [TLS/SSL](../../administration/transport-security.md).
+The GELF output plugin supports TLS/SSL. For information about the properties available and general configuration, see [TLS/SSL](../../administration/transport-security.md).
 
 ## Notes
 
diff --git a/pipeline/outputs/logdna.md b/pipeline/outputs/logdna.md
index 6d18fb061..af77a76f7 100644
--- a/pipeline/outputs/logdna.md
+++ b/pipeline/outputs/logdna.md
@@ -28,7 +28,7 @@ When the login processes a record or log, it searches for specific key names tha
 
 | Key | Description |
 | :--- | :--- |
-| `level` | If the record contains a key called `level` or `severity`, it will populate the context `level` key with that value. If not found, the context key is not set. |
+| `level` | If the record contains a key called `level` or `severity`, Fluent Bit will populate the context `level` key with that value. If not found, Fluent Bit won't set the context key. |
 | `file` | If the record contains a key called `file`, it will populate the context `file` with the value found. Otherwise, if the plugin configuration provided a `file` property, that value will be used instead. |
 | `app` | If the record contains a key called `app`, it will populate the context `app` with the value found, otherwise it will use the value set for `app` in the configuration property. |
 | `meta` | If the record contains a key called `meta`, it will populate the context `meta` with the value found. |
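+
+As a brief, hypothetical sketch of how these fallbacks interact with the configuration, the following output section sets `file` and `app` values that apply only when the record itself doesn't carry those keys. The `api_key` value is a placeholder and the other values are assumptions for illustration.
+
+```yaml
+pipeline:
+  outputs:
+    - name: logdna
+      match: '*'
+      api_key: <your-logdna-api-key>  # placeholder
+      file: app.log     # used only if the record has no `file` key
+      app: my-service   # used only if the record has no `app` key
+```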
diff --git a/pipeline/outputs/loki.md b/pipeline/outputs/loki.md
index 7b34030c9..0343dcbd9 100644
--- a/pipeline/outputs/loki.md
+++ b/pipeline/outputs/loki.md
@@ -22,8 +22,8 @@ There is a separate Golang output plugin provided by [Grafana](https://grafana.c
 | `labels` | Stream labels for API request. It can be multiple comma separated of strings specifying `key=value` pairs. Allows fixed parameters, or adding custom record keys (similar to the `label_keys` property). See the Labels section. | `job=fluent-bit` |
 | `label_keys` | (Optional.) List of record keys that will be placed as stream labels. This configuration property is for records key only. See the Labels section. | _none_ |
 | `label_map_path` | Specify the label map path. The file defines how to extract labels from each record. See the Labels section. | _none_ |
-| `structured_metadata` | (Optional.) Comma-separated list of `key=value` strings specifying structured metadata for the log line. Like the `labels` parameter, values can reference record keys using record accessors. See [Structured metadata](#structured_metadata). | _none_ |
-| `structured_metadata_map_keys` | (Optional.) Comma-separated list of record key strings specifying record values of type `map`, used to dynamically populate structured metadata for the log line. Values can only reference record keys using record accessors, which should reference map values. Each entry from the referenced map will be used to add an entry to the structured metadata. See [Structured metadata](#structured_metadata). | _none_ |
+| `structured_metadata` | (Optional.) Comma-separated list of `key=value` strings specifying structured metadata for the log line. Like the `labels` parameter, values can reference record keys using record accessors. See [Use `structured_metadata`](#use-structured_metadata). | _none_ |
+| `structured_metadata_map_keys` | (Optional.) Comma-separated list of record key strings specifying record values of type `map`, used to dynamically populate structured metadata for the log line. Values can only reference record keys using record accessors, which should reference map values. Each entry from the referenced map will be used to add an entry to the structured metadata. See [Use `structured_metadata`](#use-structured_metadata). | _none_ |
 | `remove_keys` | (Optional.) List of keys to remove. | _none_ |
 | `drop_single_key` | When set to `true` and after extracting labels only a single key remains, the log line sent to Loki will be the value of that key in `line_format`. If set to `raw` and the log line is a string, the log line will be sent unquoted. | `off` |
 | `line_format` | Format to use when flattening the record to a log line. Valid values are `json` or `key_value`. If set to `json`, the log line sent to Loki will be the Fluent Bit record dumped as JSON. If set to `key_value`, the log line will be each item in the record concatenated together (separated by a single space) in the format. | `json` |
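+
+As a minimal sketch of how `labels` and `structured_metadata` combine, consider the following output section. The host value and the `trace_id` record key are assumptions for illustration.
+
+```yaml
+pipeline:
+  outputs:
+    - name: loki
+      match: '*'
+      host: loki.example.com   # assumed endpoint
+      labels: job=fluent-bit
+      # Attach a trace ID from the record as structured metadata
+      # instead of a high-cardinality label.
+      structured_metadata: trace_id=$trace_id
+```
+
+Keeping high-cardinality values such as trace IDs in structured metadata rather than in `labels` avoids creating a new Loki stream for every unique value.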
@@ -272,7 +272,7 @@ job="fluentbit", stream="stdout"
 
 #### Kubernetes and labels
 
-If you're running in a Kubernetes environment, consider enabling the `auto_kubernetes_labels` option, which autopopulates the streams with the Pod labels for you. Consider the following configuration:
+If you're running in a Kubernetes environment, consider enabling the `auto_kubernetes_labels` option, which populates the streams with the Pod labels for you. Consider the following configuration:
 
 {% tabs %}
 {% tab title="fluent-bit.yaml" %}
@@ -479,7 +479,7 @@ pipeline:
 {% endtab %}
 {% endtabs %}
 
-## Networking and TLS Configuration
+## Networking and TLS configuration
 
 This plugin inherits core Fluent Bit features to customize the network behavior and optionally enable TLS in the communication channel. For more details about the specific options available, refer to the following articles:
 
@@ -492,7 +492,7 @@ All options mentioned in these articles must be enabled in the plugin configurat
 
 Fluent Bit supports sending logs and metrics to [Grafana Cloud](https://grafana.com/products/cloud/) by providing the appropriate URL and ensuring TLS is enabled.
 
-Below is an example configuration, be sure to set the credentials (shown here with XXX) and ensure the host URL matches the correct one for your deployment:
+The following samples show example configurations. Be sure to set the credentials (shown here with `XXX`) and ensure the host URL matches the correct one for your deployment:
 
 {% tabs %}
 {% tab title="fluent-bit.yaml" %}
@@ -529,9 +529,9 @@ pipeline:
 {% endtab %}
 {% endtabs %}
 
-## Get Started
+## Get started
 
-The following configuration example emits a dummy example record and ingests it on Loki. Copy and paste the corresponding content below into a file `out_loki.yaml` or `out_loki.conf`:
+The following configuration example emits a dummy example record and ingests it on Loki. Copy and paste the following content into a file `out_loki.yaml` or `out_loki.conf`:
 
 {% tabs %}
 {% tab title="out-loki.yaml" %}
diff --git a/pipeline/outputs/opensearch.md b/pipeline/outputs/opensearch.md
index 33ea76e20..35c7d481b 100644
--- a/pipeline/outputs/opensearch.md
+++ b/pipeline/outputs/opensearch.md
@@ -150,7 +150,7 @@ becomes
 ```
 {"cpu0_p_cpu"=>17.000000}
 ```
 
-## FAQ
+## Frequently asked questions
 
 ### `Logstash_Prefix_Key`
diff --git a/pipeline/outputs/stackdriver_special_fields.md b/pipeline/outputs/stackdriver_special_fields.md
index dc4ab844e..8d2fdcb3d 100644
--- a/pipeline/outputs/stackdriver_special_fields.md
+++ b/pipeline/outputs/stackdriver_special_fields.md
@@ -62,11 +62,11 @@ For the special fields that map to `LogEntry` prototypes, add them as objects wi
 }
 ```
 
-Adding special fields to logs is best done through the [`modify` filter](https://docs.fluentbit.io/manual/pipeline/filters/modify) for simple fields, or [a Lua script using the `lua` filter](https://docs.fluentbit.io/manual/pipeline/filters/lua) for more complex fields.
+Adding special fields to logs is best done through the [`modify` filter](https://docs.fluentbit.io/manual/pipeline/filters/modify) for basic fields, or [a Lua script using the `lua` filter](https://docs.fluentbit.io/manual/pipeline/filters/lua) for more complex fields.
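+
+For instance, a minimal `modify` filter sketch that adds the `logging.googleapis.com/logName` special field to every record (the log name value is a placeholder):
+
+```yaml
+pipeline:
+  filters:
+    - name: modify
+      match: '*'
+      # Adds the special field as a plain string key on each record.
+      add: logging.googleapis.com/logName my-custom-log
+```
+
+The same approach works for other string-typed special fields; fields with nested structure are easier to assemble in a Lua script.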
 
-## Simple type special fields
+## Basic type special fields
 
-Special fields with simple types (except for the [`logging.googleapis.com/insertId` field](#insert-id)) will follow this pattern (demonstrated with the `logging.googleapis.com/logName` field):
+Special fields with basic types (except for the [`logging.googleapis.com/insertId` field](#insert-id)) will follow this pattern (demonstrated with the `logging.googleapis.com/logName` field):
 
 1. If the special field matches the type, it will be moved to the corresponding LogEntry field. For example:
 
@@ -111,7 +111,7 @@ Special fields with simple types (except for the [`logging.googleapis.com/insert
 }
 ```
 
-### Exceptions
+### Exceptions [#exceptions-basic]
 
 #### Insert ID
 
@@ -244,9 +244,9 @@ the `logEntry will be:
 }
 ```
 
-### Exceptions
+### Exceptions [#exceptions-proto]
 
-#### Monitored Resource ID
+#### `MonitoredResource` ID
 
 The `logging.googleapis.com/monitored_resource` field is parsed in a special way, meaning it has some important exceptions:
 
diff --git a/vale-styles/FluentBit/FluentBitCasing.yml b/vale-styles/FluentBit/FluentBitCasing.yml
index b1282cfc4..1e1931f1c 100644
--- a/vale-styles/FluentBit/FluentBitCasing.yml
+++ b/vale-styles/FluentBit/FluentBitCasing.yml
@@ -5,12 +5,12 @@ ignorecase: false
 scope: text
 tokens:
   # Matches incorrect variants in plain prose but excludes filenames, links, quotes, and code
- '(?