6 changes: 3 additions & 3 deletions pipeline/outputs/azure_blob.md
@@ -10,7 +10,7 @@ The Fluent Bit plugin works with the official Azure Service and can be configure

## Azure Storage account

- Ensure you have an Azure Storage account. [Azure Blob Storage Tutorial \(Video\)](https://www.youtube.com/watch?v=-sCKnOm8G_g) explains how to set up your account.
+ Ensure you have an Azure Storage account. [Azure Blob Storage Tutorial (video)](https://www.youtube.com/watch?v=-sCKnOm8G_g) explains how to set up your account.

## Configuration parameters

@@ -49,7 +49,7 @@ Fluent Bit exposes the following configuration properties.

Fluent Bit can deliver records to the official service or an emulator.

- ### Configuration for Azure Storage Service
+ ### Configuration for Azure Storage

The following configuration example generates a random message with a custom tag:

@@ -212,4 +212,4 @@ Azurite Queue service is successfully listening at http://127.0.0.1:10001
127.0.0.1 - - [03/Sep/2020:17:40:03 +0000] "PUT /devstoreaccount1/logs/kubernetes/var.log.containers.app-default-96cbdef2340.log?comp=appendblock HTTP/1.1" 404 -
127.0.0.1 - - [03/Sep/2020:17:40:03 +0000] "PUT /devstoreaccount1/logs/kubernetes/var.log.containers.app-default-96cbdef2340.log HTTP/1.1" 201 -
127.0.0.1 - - [03/Sep/2020:17:40:04 +0000] "PUT /devstoreaccount1/logs/kubernetes/var.log.containers.app-default-96cbdef2340.log?comp=appendblock HTTP/1.1" 201 -
```
```
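For context, the Azurite request log above comes from pointing the `azure_blob` output at a local emulator. A minimal sketch of such a configuration (values are illustrative, not part of this diff; `devstoreaccount1` and its shared key are the well-known Azurite development defaults):

```yaml
pipeline:
  inputs:
    # Hypothetical input; any input producing records tagged like the
    # blob paths in the log above would behave similarly.
    - name: dummy
      tag: var.log.containers.app-default

  outputs:
    - name: azure_blob
      match: '*'
      account_name: devstoreaccount1
      shared_key: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
      container_name: logs
      path: kubernetes
      auto_create_container: on
      # Point the plugin at Azurite instead of the official service.
      emulator_mode: on
      endpoint: http://127.0.0.1:10000
```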
2 changes: 1 addition & 1 deletion pipeline/outputs/azure_kusto.md
@@ -70,7 +70,7 @@ By default, Kusto will insert incoming ingestion data into a table by inferring
| `buffering_enabled` | Optional. Enable buffering into disk before ingesting into Azure Kusto. | `Off` |
| `buffer_dir` | Optional. When buffering is `On`, specifies the location of directory where the buffered data will be stored. | `/tmp/fluent-bit/azure-kusto/` |
| `upload_timeout` | Optional. When buffering is `On`, specifies a timeout for uploads. Fluent Bit will start ingesting buffer files which have been created more than x minutes and haven't reached `upload_file_size` limit. | `30m` |
- | `upload_file_size`| Optional. When buffering is `On`, specifies the size of files to be uploaded in MBs. | `200MB` |
+ | `upload_file_size`| Optional. When buffering is `On`, specifies the size of files to be uploaded in megabytes. | `200MB` |
| `azure_kusto_buffer_key` | Optional. When buffering is `On`, set the Azure Kusto buffer key which must be specified when using multiple instances of Azure Kusto output plugin and buffering is enabled. | `key` |
| `store_dir_limit_size` | Optional. When buffering is `On`, set the max size of the buffer directory. | `8GB` |
| `buffer_file_delete_early` | Optional. When buffering is `On`, whether to delete the buffered file early after successful blob creation. | `Off` |
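As a sketch of how the buffering properties in this table combine (endpoint and credential values are placeholders, not taken from the diff):

```yaml
pipeline:
  outputs:
    - name: azure_kusto
      match: '*'
      tenant_id: <tenant-id>
      client_id: <client-id>
      client_secret: <client-secret>
      ingestion_endpoint: https://ingest-<cluster>.<region>.kusto.windows.net
      database_name: logs
      table_name: fluentbit
      # Buffer to disk before ingesting, using the defaults listed above.
      buffering_enabled: on
      buffer_dir: /tmp/fluent-bit/azure-kusto/
      upload_timeout: 30m
      upload_file_size: 200MB
      store_dir_limit_size: 8GB
      buffer_file_delete_early: off
```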
2 changes: 1 addition & 1 deletion pipeline/outputs/cloudwatch.md
@@ -4,7 +4,7 @@ description: Send logs and metrics to Amazon CloudWatch

# Amazon CloudWatch

- ![](<../../.gitbook/assets/image (3) (2) (2) (4) (4) (3) (1).png>)
+ ![Amazon CloudWatch](<../../.gitbook/assets/image (3) (2) (2) (4) (4) (3) (1).png>)

The _Amazon CloudWatch_ output plugin lets you ingest your records into the [CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) service. Support for CloudWatch Metrics is also provided using [Embedded Metric Format (EMF)](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html).

4 changes: 4 additions & 0 deletions pipeline/outputs/dynatrace.md
@@ -78,8 +78,12 @@ To get started with sending logs to Dynatrace:

## References

+ <!-- vale FluentBit.Simplicity = NO -->

- [Dynatrace Fluent Bit documentation](https://docs.dynatrace.com/docs/shortlink/lma-stream-logs-with-fluent-bit)
- [Fluent Bit integration in Dynatrace Hub](https://www.dynatrace.com/hub/detail/fluent-bit/?filter=log-management-and-analytics)
- [Video: Stream a Log File to Dynatrace using Fluent Bit](https://www.youtube.com/watch?v=JJJNxhtJ6R0)
- [Blog: Easily stream logs from Fluent Bit to
Dynatrace](https://www.dynatrace.com/news/blog/easily-stream-logs-with-fluent-bit-to-dynatrace/)

+ <!-- vale FluentBit.Simplicity = YES -->
4 changes: 2 additions & 2 deletions pipeline/outputs/gelf.md
@@ -1,4 +1,4 @@
- # Graylog Extended Log Format (GELF
+ # Graylog Extended Log Format (GELF)

The _[Graylog](https://www.graylog.org) Extended Log Format (GELF)_ output plugin lets you send logs in GELF format directly to a Graylog input using TLS, TCP, or UDP protocols.

@@ -26,7 +26,7 @@ According to the [GELF Payload Specification](https://go2docs.graylog.org/5-0/ge

### TLS / SSL

- The GELF output plugin supports TLS/SSL. For iformation about the properties available and general configuration, see [TLS/SSL](../../administration/transport-security.md).
+ The GELF output plugin supports TLS/SSL. For information about the properties available and general configuration, see [TLS/SSL](../../administration/transport-security.md).

## Notes

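A minimal `gelf` output over TLS, as a sketch of the TLS/SSL support mentioned above (host and port values are placeholders):

```yaml
pipeline:
  outputs:
    - name: gelf
      match: '*'
      host: graylog.example.com
      port: 12201
      # mode can be tcp, tls, or udp; tls enables the transport-security options.
      mode: tls
      tls: on
      tls.verify: on
      gelf_short_message_key: log
```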
2 changes: 1 addition & 1 deletion pipeline/outputs/logdna.md
@@ -28,7 +28,7 @@ When the login processes a record or log, it searches for specific key names tha

| Key | Description |
| :--- | :--- |
- | `level` | If the record contains a key called `level` or `severity`, it will populate the context `level` key with that value. If not found, the context key is not set. |
+ | `level` | If the record contains a key called `level` or `severity`, Fluent Bit will populate the context `level` key with that value. If not found, Fluent Bit won't set the context key. |
| `file` | If the record contains a key called `file`, it will populate the context `file` with the value found. Otherwise, if the plugin configuration provided a `file` property, that value will be used instead. |
| `app` | If the record contains a key called `app`, it will populate the context `app` with the value found, otherwise it will use the value set for `app` in the configuration property. |
| `meta` | If the record contains a key called `meta`, it will populate the context `meta` with the value found. |
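To illustrate the key mapping this table describes, a record such as the following (hypothetical values) would populate the context `level`, `file`, `app`, and `meta` fields directly from the record rather than from the plugin configuration:

```json
{
  "level": "error",
  "file": "app.log",
  "app": "payments",
  "meta": { "region": "us-east-1" },
  "message": "connection refused to upstream"
}
```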
14 changes: 7 additions & 7 deletions pipeline/outputs/loki.md
@@ -22,8 +22,8 @@ There is a separate Golang output plugin provided by [Grafana](https://grafana.c
| `labels` | Stream labels for API request. It can be multiple comma separated of strings specifying `key=value` pairs. Allows fixed parameters, or adding custom record keys (similar to the `label_keys` property). See the Labels section. | `job=fluent-bit` |
| `label_keys` | (Optional.) List of record keys that will be placed as stream labels. This configuration property is for records key only. See the Labels section. | _none_ |
| `label_map_path` | Specify the label map path. The file defines how to extract labels from each record. See the Labels section. | _none_ |
- | `structured_metadata` | (Optional.) Comma-separated list of `key=value` strings specifying structured metadata for the log line. Like the `labels` parameter, values can reference record keys using record accessors. See [Structured metadata](#structured_metadata). | _none_ |
- | `structured_metadata_map_keys` | (Optional.) Comma-separated list of record key strings specifying record values of type `map`, used to dynamically populate structured metadata for the log line. Values can only reference record keys using record accessors, which should reference map values. Each entry from the referenced map will be used to add an entry to the structured metadata. See [Structured metadata](#structured_metadata). | _none_ |
+ | `structured_metadata` | (Optional.) Comma-separated list of `key=value` strings specifying structured metadata for the log line. Like the `labels` parameter, values can reference record keys using record accessors. See [Use `structured_metadata`.](#use-structured_metadata). | _none_ |
+ | `structured_metadata_map_keys` | (Optional.) Comma-separated list of record key strings specifying record values of type `map`, used to dynamically populate structured metadata for the log line. Values can only reference record keys using record accessors, which should reference map values. Each entry from the referenced map will be used to add an entry to the structured metadata. See [Use `structured_metadata`.](#use-structured_metadata). | _none_ |
| `remove_keys` | (Optional.) List of keys to remove. | _none_ |
| `drop_single_key` | When set to `true` and after extracting labels only a single key remains, the log line sent to Loki will be the value of that key in `line_format`. If set to `raw` and the log line is a string, the log line will be sent unquoted. | `off` |
| `line_format` | Format to use when flattening the record to a log line. Valid values are `json` or `key_value`. If set to `json`, the log line sent to Loki will be the Fluent Bit record dumped as JSON. If set to `key_value`, the log line will be each item in the record concatenated together (separated by a single space) in the format. | `json` |
@@ -272,7 +272,7 @@ job="fluentbit", stream="stdout"

#### Kubernetes and labels

- If you're running in a Kubernetes environment, consider enabling the `auto_kubernetes_labels` option, which autopopulates the streams with the Pod labels for you. Consider the following configuration:
+ If you're running in a Kubernetes environment, consider enabling the `auto_kubernetes_labels` option, which populates the streams with the Pod labels for you. Consider the following configuration:

{% tabs %}
{% tab title="fluent-bit.yaml" %}
@@ -479,7 +479,7 @@ pipeline:
{% endtab %}
{% endtabs %}

- ## Networking and TLS Configuration
+ ## Networking and TLS configuration

This plugin inherits core Fluent Bit features to customize the network behavior and optionally enable TLS in the communication channel. For more details about the specific options available, refer to the following articles:

@@ -492,7 +492,7 @@ All options mentioned in these articles must be enabled in the plugin configurat

Fluent Bit supports sending logs and metrics to [Grafana Cloud](https://grafana.com/products/cloud/) by providing the appropriate URL and ensuring TLS is enabled.

- Below is an example configuration, be sure to set the credentials (shown here with XXX) and ensure the host URL matches the correct one for your deployment:
+ The following samples show example configurations. Be sure to set the credentials (shown here with `XXX`) and ensure the host URL matches the correct one for your deployment:

{% tabs %}
{% tab title="fluent-bit.yaml" %}
@@ -529,9 +529,9 @@ pipeline:
{% endtab %}
{% endtabs %}

- ## Get Started
+ ## Get started

- The following configuration example emits a dummy example record and ingests it on Loki. Copy and paste the corresponding content below into a file `out_loki.yaml` or `out_loki.conf`:
+ The following configuration example emits a dummy example record and ingests it on Loki. Copy and paste the following content into a file `out_loki.yaml` or `out_loki.conf`:

{% tabs %}
{% tab title="out-loki.yaml" %}
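As a sketch of the `structured_metadata` usage the renamed anchor points at (label names and record accessors are illustrative, not from the diff):

```yaml
pipeline:
  outputs:
    - name: loki
      match: '*'
      host: 127.0.0.1
      port: 3100
      labels: job=fluent-bit
      # Values may be fixed strings or record accessors into each record.
      structured_metadata: pod=$kubernetes['pod_name'],trace_id=$trace_id
```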
2 changes: 1 addition & 1 deletion pipeline/outputs/opensearch.md
@@ -150,7 +150,7 @@ becomes
{"cpu0_p_cpu"=>17.000000}
```

- ## FAQ
+ ## Frequently asked questions

### `Logstash_Prefix_Key`

12 changes: 6 additions & 6 deletions pipeline/outputs/stackdriver_special_fields.md
@@ -62,11 +62,11 @@ For the special fields that map to `LogEntry` prototypes, add them as objects wi
}
```

- Adding special fields to logs is best done through the [`modify` filter](https://docs.fluentbit.io/manual/pipeline/filters/modify) for simple fields, or [a Lua script using the `lua` filter](https://docs.fluentbit.io/manual/pipeline/filters/lua) for more complex fields.
+ Adding special fields to logs is best done through the [`modify` filter](https://docs.fluentbit.io/manual/pipeline/filters/modify) for basic fields, or [a Lua script using the `lua` filter](https://docs.fluentbit.io/manual/pipeline/filters/lua) for more complex fields.

- ## Simple type special fields
+ ## Basic type special fields

- Special fields with simple types (except for the [`logging.googleapis.com/insertId` field](#insert-id)) will follow this pattern (demonstrated with the `logging.googleapis.com/logName` field):
+ Special fields with basic types (except for the [`logging.googleapis.com/insertId` field](#insert-id)) will follow this pattern (demonstrated with the `logging.googleapis.com/logName` field):

1. If the special field matches the type, it will be moved to the corresponding LogEntry field. For example:

@@ -111,7 +111,7 @@ Special fields with simple types (except for the [`logging.googleapis.com/insert
}
```

- ### Exceptions
+ ### Exceptions [#exceptions-basic]

#### Insert ID

@@ -244,9 +244,9 @@ the `logEntry will be:
}
```

- ### Exceptions
+ ### Exceptions [#exceptions-proto]

- #### Monitored Resource ID
+ #### `MonitoredResource` ID

The `logging.googleapis.com/monitored_resource` field is parsed in a special way, meaning it has some important exceptions:

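As a sketch of adding a basic-type special field through the `modify` filter, per the recommendation above (the key is from the document; the value and match pattern are illustrative):

```yaml
pipeline:
  filters:
    - name: modify
      match: '*'
      # Move-to-LogEntry behavior is handled by the stackdriver output;
      # here we only attach the special field to each record.
      add: logging.googleapis.com/logName my-app-log
```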
18 changes: 9 additions & 9 deletions vale-styles/FluentBit/FluentBitCasing.yml
@@ -5,12 +5,12 @@ ignorecase: false
scope: text
tokens:
# Matches incorrect variants in plain prose but excludes filenames, links, quotes, and code
- - '(?<![".\w:/\]])\bfluent bit\b(?![".\w:/\]])'
- - '(?<![".\w:/\]])\bfluentbit\b(?![".\w:/\]])'
- - '(?<![".\w:/\]])\bFluent bit\b(?![".\w:/\]])'
- - '(?<![".\w:/\]])\bfluent Bit\b(?![".\w:/\]])'
- - '(?<![".\w:/\]])\bfluent-bit\b(?![".\w:/\]])'
- - '(?<![".\w:/\]])\bFluent-Bit\b(?![".\w:/\]])'
- - '(?<![".\w:/\]])\bfluent\-Bit\b(?![".\w:/\]])'
- - '(?<![".\w:/\]])\bFLUENT BIT\b(?![".\w:/\]])'
- - '(?<![".\w:/\]])\bFLUENT-BIT\b(?![".\w:/\]])'
+ - '(?<![-[".\w:/\]])\bfluent bit\b(?![-\]".\w:/\]])'
+ - '(?<![-[".\w:/\]])\bfluentbit\b(?![-\]".\w:/\]])'
+ - '(?<![-[".\w:/\]])\bFluent bit\b(?![-\]".\w:/\]])'
+ - '(?<![-[".\w:/\]])\bfluent Bit\b(?![-\]".\w:/\]])'
+ - '(?<![-[".\w:/\]])\bfluent-bit\b(?![-\]".\w:/\]])'
+ - '(?<![-[".\w:/\]])\bFluent-Bit\b(?![-\]".\w:/\]])'
+ - '(?<![-[".\w:/\]])\bfluent\-Bit\b(?![-\]".\w:/\]])'
+ - '(?<![-[".\w:/\]])\bFLUENT BIT\b(?![-\]".\w:/\]])'
+ - '(?<![-[".\w:/\]])\bFLUENT-BIT\b(?![-\]".\w:/\]])'
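As a quick sanity check of what the widened lookarounds in these tokens match (Vale evaluates the rule with its own regex engine; Python's `re` accepts the same lookaround syntax and is used here only for illustration):

```python
import re

# Pattern copied from the first updated token above. The lookarounds
# exclude occurrences adjacent to quotes, dots, word characters, colons,
# slashes, hyphens, and square brackets (filenames, links, code, labels).
pattern = re.compile(r'(?<![-[".\w:/\]])\bfluent bit\b(?![-\]".\w:/\]])')

print(bool(pattern.search("Install fluent bit on the host")))    # plain prose: flagged
print(bool(pattern.search('run "fluent bit" in quotes')))        # quoted: excluded
print(bool(pattern.search("[fluent bit](https://example.com)"))) # link label: excluded
```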
5 changes: 5 additions & 0 deletions vale-styles/FluentBit/Headings.yml
@@ -26,9 +26,12 @@ exceptions:
- AWS MSK IAM
- AWS IAM
- Azure
+ - Azure Blob
- Azure Data Explorer
- Azure Logs Ingestion API
- Azure Log Analytics
+ - Azure Storage
+ - Azurite
- BuildKite
- CentOS
- CheckList
@@ -65,13 +68,15 @@ exceptions:
- GitHub
- Go
- Google
+ - Google Chronicle
- Google Cloud
- Google Cloud BigQuery
- Google Cloud Platform
- GPG
- Grafana
- Grafana Cloud
- Graylog Extended Log Format
+ - GELF
- gRPC
- gzip
- Helm
1 change: 0 additions & 1 deletion vale-styles/FluentBit/WordList.yml
@@ -39,7 +39,6 @@ swap:
cellular network: mobile network
chapter: documents|pages|sections
Container Engine: Kubernetes Engine
- content type: media type
curated roles: predefined roles
data are: data is
Developers Console: Google API Console|API Console