diff --git a/pipeline/outputs/new-relic.md b/pipeline/outputs/new-relic.md index f384dc717..d98e977c4 100644 --- a/pipeline/outputs/new-relic.md +++ b/pipeline/outputs/new-relic.md @@ -10,111 +10,74 @@ Before to get started with the plugin configuration, make sure to obtain the pro ## Configuration Parameters -| Key | Description | Default | -| :--- | :--- | :--- | - - - - - - - - - - - -
base_uri -

Full address of New Relic API end-point. By default the value points to - the US end-point. -
-

-

If you want to use the EU end-point you can set this key to the following - value: -
-
https://log-api.eu.newrelic.com/log/v1 -

-
https://log-api.newrelic.com/log/v1 -
- - - - - - - - - - -
api_key -

Your key for data ingestion. The API key is also called the ingestion - key, you can get more details on how to generated in the official documentation - here.

-

From a configuration perspective either an api_key or an license_key is - required. New Relic suggest to use primary the api_key.

-
- - - - - - - - - - -
license_key -

Optional authentication parameter for data ingestion. -
-

-

Note that New Relic suggest to use the api_key instead. You - can read more about the License Key here.

-
- -| compress | Set the compression mechanism for the payload. This option allows two values: `gzip` \(enabled by default\) or `false` to disable compression. | gzip | -| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` | -| :--- | :--- | :--- | - - -The following configuration example, will emit a dummy example record and ingest it on New Relic. Copy and paste the following content in a file called `newrelic.conf`: - -```python +| Key | Description | Default | +|:------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| +| base_uri | Full address of New Relic API end-point. By default the value points to the US end-point. If you want to use the EU end-point you can set this key to the following value:
- https://log-api.eu.newrelic.com/log/v1 | https://log-api.newrelic.com/log/v1 | +| api_key | Your key for data ingestion. The API key is also called the ingestion key. You can get more details on how to generate one in the official documentation here. From a configuration perspective either an `api_key` or a `license_key` is required. New Relic suggests primarily using the `api_key`. | | +| license_key | Optional authentication parameter for data ingestion. Note that New Relic suggests using the `api_key` instead. You can read more about the License Key here. | | +| compress | Set the compression mechanism for the payload. This option allows two values: `gzip` \(enabled by default\) or `false` to disable compression. | gzip | +| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | 0 | + +The following configuration example will emit a dummy example record and ingest it into New Relic. In your main configuration file append the following: + + +{% tabs %} +{% tab title="fluent-bit.yaml" %} + +```yaml +service: + flush: 1 + log_level: info + +pipeline: + inputs: + - name: dummy + dummy: '{"message":"a simple message", "temp": "0.74", "extra": "false"}' + samples: 1 + + outputs: + - name: nrlogs + match: '*' + api_key: YOUR_API_KEY_HERE +``` + +{% endtab %} +{% tab title="fluent-bit.conf" %} + +```text [SERVICE] - flush 1 - log_level info + flush 1 + log_level info [INPUT] - name dummy - dummy {"message":"a simple message", "temp": "0.74", "extra": "false"} - samples 1 + name dummy + dummy {"message":"a simple message", "temp": "0.74", "extra": "false"} + samples 1 [OUTPUT] - name nrlogs - match * - api_key YOUR_API_KEY_HERE + name nrlogs + match * + api_key YOUR_API_KEY_HERE ``` -run Fluent Bit with the new configuration file: +{% endtab %} +{% endtabs %} -```text -fluent-bit -c newrelic.conf +Run Fluent Bit with the new configuration file: + +```shell +# For YAML configuration. +fluent-bit --config fluent-bit.yaml + +# For classic configuration. +fluent-bit --config fluent-bit.conf ``` Fluent Bit output: ```text -Fluent Bit v1.5.0 -* Copyright (C) 2019-2020 The Fluent Bit Authors -* Copyright (C) 2015-2018 Treasure Data -* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd -* https://fluentbit.io - -[2020/04/10 10:58:32] [ info] [storage] version=1.0.3, initializing... -[2020/04/10 10:58:32] [ info] [storage] in-memory -[2020/04/10 10:58:32] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128 -[2020/04/10 10:58:32] [ info] [engine] started (pid=2772591) -[2020/04/10 10:58:32] [ info] [output:newrelic:newrelic.0] configured, hostname=log-api.newrelic.com:443 -[2020/04/10 10:58:32] [ info] [sp] stream processor started +... [2020/04/10 10:58:35] [ info] [output:nrlogs:nrlogs.0] log-api.newrelic.com:443, HTTP status=202 {"requestId":"feb312fe-004e-b000-0000-0171650764ac"} -``` +...
+``` \ No newline at end of file diff --git a/pipeline/outputs/null.md b/pipeline/outputs/null.md index 1b47b816a..c3ef929d5 100644 --- a/pipeline/outputs/null.md +++ b/pipeline/outputs/null.md @@ -14,21 +14,39 @@ You can run the plugin from the command line or through the configuration file: From the command line you can let Fluent Bit throws away events with the following options: -```bash +```shell fluent-bit -i cpu -o null ``` ### Configuration File -In your main configuration file append the following Input & Output sections: +In your main configuration file append the following: -```python +{% tabs %} +{% tab title="fluent-bit.yaml" %} + +```yaml +pipeline: + inputs: + - name: cpu + tag: cpu + + outputs: + - name: null + match: '*' +``` +{% endtab %} +{% tab title="fluent-bit.conf" %} + +```text [INPUT] - Name cpu - Tag cpu + Name cpu + Tag cpu [OUTPUT] - Name null - Match * + Name null + Match * ``` +{% endtab %} +{% endtabs %} \ No newline at end of file diff --git a/pipeline/outputs/observe.md b/pipeline/outputs/observe.md index 47be2503f..fc5d41204 100644 --- a/pipeline/outputs/observe.md +++ b/pipeline/outputs/observe.md @@ -2,28 +2,53 @@ Observe employs the **http** output plugin, allowing you to flush your records [into Observe](https://docs.observeinc.com/en/latest/content/data-ingestion/forwarders/fluentbit.html). -For now the functionality is pretty basic and it issues a POST request with the data records in [MessagePack](http://msgpack.org) (or JSON) format. +For now the functionality is pretty basic, and it issues a POST request with the data records in [MessagePack](http://msgpack.org) (or JSON) format. The following are the specific HTTP parameters to employ: ## Configuration Parameters -| Key | Description | default | -| -------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------- | -| host | IP address or hostname of Observe's data collection endpoint. $(OBSERVE_CUSTOMER) is your [Customer ID](https://docs.observeinc.com/en/latest/content/common-topics/HelpfulHints.html?highlight=customer%20id#customer-id) | OBSERVE_CUSTOMER.collect.observeinc.com | -| port | TCP port of to employ when sending to Observe | 443 | -| tls | Specify to use tls | on | -| uri | Specify the HTTP URI for the Observe's data ingest | /v1/http/fluentbit | -| format | The data format to be used in the HTTP request body | msgpack | -| header | The specific header that provides the Observe token needed to authorize sending data [into a datastream](https://docs.observeinc.com/en/latest/content/data-ingestion/datastreams.html?highlight=ingest%20token#create-a-datastream). | Authorization Bearer ${OBSERVE_TOKEN} | -| header | The specific header to instructs Observe how to decode incoming payloads | X-Observe-Decoder fluent | -| compress | Set payload compression mechanism. Option available is 'gzip' | gzip | -| tls.ca_file | **For use with Windows**: provide path to root cert | | -| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. 
| `0` | +| Key | Description | default | +|-------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------| +| host | IP address or hostname of Observe's data collection endpoint. $(OBSERVE_CUSTOMER) is your [Customer ID](https://docs.observeinc.com/en/latest/content/common-topics/HelpfulHints.html?highlight=customer%20id#customer-id) | OBSERVE_CUSTOMER.collect.observeinc.com | +| port | TCP port of to employ when sending to Observe | 443 | +| tls | Specify to use tls | on | +| uri | Specify the HTTP URI for the Observe's data ingest | /v1/http/fluentbit | +| format | The data format to be used in the HTTP request body | msgpack | +| header | The specific header that provides the Observe token needed to authorize sending data [into a data stream](https://docs.observeinc.com/en/latest/content/data-ingestion/datastreams.html?highlight=ingest%20token#create-a-datastream). | Authorization Bearer ${OBSERVE_TOKEN} | +| header | The specific header to instructs Observe how to decode incoming payloads | X-Observe-Decoder fluent | +| compress | Set payload compression mechanism. Option available is 'gzip' | gzip | +| tls.ca_file | **For use with Windows**: provide path to root cert | | +| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | 0 | ### Configuration File -In your main configuration file, append the following _Input_ & _Output_ sections: +In your main configuration file append the following: + + +{% tabs %} +{% tab title="fluent-bit.yaml" %} + +```yaml +pipeline: + + outputs: + - name: http + match: '*' + host: my-observe-customer-id.collect.observeinc.com + port: 443 + tls: on + uri: /v1/http/fluentbit + format: msgpack + header: + - 'Authorization Bearer ${OBSERVE_TOKEN}' + - 'X-Observe-Decoder fluent' + compress: gzip + # For Windows: provide path to root cert + #tls.ca_file C:\fluent-bit\isrgrootx1.pem +``` +{% endtab %} +{% tab title="fluent-bit.conf" %} ```text [OUTPUT] @@ -42,5 +67,7 @@ In your main configuration file, append the following _Input_ & _Output_ section # For Windows: provide path to root cert #tls.ca_file C:\fluent-bit\isrgrootx1.pem - ``` + +{% endtab %} +{% endtabs %} \ No newline at end of file diff --git a/pipeline/outputs/oci-logging-analytics.md b/pipeline/outputs/oci-logging-analytics.md index 6f22c9746..8f4c9114a 100644 --- a/pipeline/outputs/oci-logging-analytics.md +++ b/pipeline/outputs/oci-logging-analytics.md @@ -4,11 +4,11 @@ description: Send logs to Oracle Cloud Infrastructure Logging Analytics Service # Oracle Cloud Infrastructure Logging Analytics -Oracle Cloud Infrastructure Logging Analytics output plugin allows you to ingest your log records into [OCI Logging Analytics](https://www.oracle.com/manageability/logging-analytics) service. +Oracle Cloud Infrastructure Logging Analytics output plugin allows you to ingest your log records into [OCI Logging Analytics](https://www.oracle.com/manageability/logging-analytics/) service. Oracle Cloud Infrastructure Logging Analytics is a machine learning-based cloud service that monitors, aggregates, indexes, and analyzes all log data from on-premises and multicloud environments. 
It enables users to search, explore, and correlate this data to troubleshoot and resolve problems faster and derive insights to make better operational decisions. -For details about OCI Logging Analytics refer to https://docs.oracle.com/en-us/iaas/logging-analytics/index.html +For details about OCI Logging Analytics, refer to [the documentation](https://docs.oracle.com/en-us/iaas/logging-analytics/index.html). ## Configuration Parameters @@ -20,7 +20,7 @@ Following are the top level configuration properties of the plugin: | profile_name | OCI Config Profile Name to be used from the configuration file | DEFAULT | | namespace | OCI Tenancy Namespace in which the collected log data is to be uploaded | | | proxy | define proxy if required, in http://host:port format, supports only http protocol | | -| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `1` | +| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `1` | The following parameters are to set the Logging Analytics resources that must be used to process your logs by OCI Logging Analytics. @@ -44,7 +44,7 @@ For more details about the properties available and general configuration, see [ ### Prerequisites -- OCI Logging Analytics service must be onboarded with the minumum required policies, in the OCI region where you want to monitor. Refer [Logging Analytics Quick Start](https://docs.oracle.com/en-us/iaas/logging-analytics/doc/quick-start.html) for details. +- OCI Logging Analytics service must be onboarded with the minimum required policies in the OCI region where you want to monitor. Refer to [Logging Analytics Quick Start](https://docs.oracle.com/en-us/iaas/logging-analytics/doc/quick-start.html) for details. - Create OCI Logging Analytics LogGroup(s) if not done already. Refer to [Create Log Group](https://docs.oracle.com/en-us/iaas/logging-analytics/doc/create-logging-analytics-resources.html#GUID-D1758CFB-861F-420D-B12F-34D1CC5E3E0E) for details.
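+
+The `config_file_location` and `profile_name` options reference an OCI configuration file that holds the API key authentication details used by the plugin. As a minimal sketch, assuming the standard OCI SDK/CLI file format and using placeholder values only, such a file looks like the following:
+
+```text
+# Placeholder values only; replace them with your own tenancy details.
+[DEFAULT]
+user=ocid1.user.oc1..<unique_id>
+fingerprint=<api_key_fingerprint>
+key_file=/path/to/oci_api_key.pem
+tenancy=ocid1.tenancy.oc1..<unique_id>
+region=us-ashburn-1
+```
+
+Additional profiles (for example `[ADMIN]`) can be added to the same file and selected with the `profile_name` option.
+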
### Running the output plugin @@ -55,81 +55,174 @@ In order to insert records into the OCI Logging Analytics service, you can run t The OCI Logging Analytics plugin can read the parameters from the command line in two ways, through the -p argument (property), e.g: -```text +```shell fluent-bit -i dummy -t dummy -o oci_logan -p config_file_location= -p namespace= \ -p oci_la_log_group_id= -p oci_la_log_source_name= -p tls=on -p tls.verify=off -m '*' ``` #### Configuration file -In your main configuration file append the following Input & Output sections: +In your main configuration file append the following: + +{% tabs %} +{% tab title="fluent-bit.yaml" %} + +```yaml +pipeline: + inputs: + - name: dummy + tag: dummy + + outputs: + - name: oracle_log_analytics + match: '*' + namespace: + config_file_location: + profile_name: ADMIN + oci_la_log_source_name: + oci_la_log_group_id: + tls: on + tls.verify: off +``` + +{% endtab %} +{% tab title="fluent-bit.conf" %} ```text [INPUT] - Name dummy - Tag dummy + Name dummy + Tag dummy + [Output] - Name oracle_log_analytics - Match * - Namespace - config_file_location - profile_name ADMIN - oci_la_log_source_name - oci_la_log_group_id - tls On - tls.verify Off + Name oracle_log_analytics + Match * + Namespace + config_file_location + profile_name ADMIN + oci_la_log_source_name + oci_la_log_group_id + tls On + tls.verify Off ``` +{% endtab %} +{% endtabs %} + ### Insert oci_la configs in the record -In case of multiple inputs, where oci_la_* properties can differ, you can add the properties in the record itself and instruct the plugin to read these properties from the record. The option oci_config_in_record, when set to true in the output config, will make the plugin read the mandatory and optional oci_la properties from the incoming record. The user must ensure that the necessary configs have been inserted using relevant filters, otherwise the respective chunk will be dropped. Below is an example to insert oci_la_log_source_name and oci_la_log_group_id in the record: +In case of multiple inputs, where `oci_la_*` properties can differ, you can add the properties in the record itself and instruct the plugin to read these properties from the record. The option `oci_config_in_record`, when set to true in the output config, will make the plugin read the mandatory and optional `oci_la` properties from the incoming record. The user must ensure that the necessary configs have been inserted using relevant filters, otherwise the respective chunk will be dropped. 
Below is an example to insert `oci_la_log_source_name` and `oci_la_log_group_id` in the record: + +{% tabs %} +{% tab title="fluent-bit.yaml" %} + +```yaml +pipeline: + inputs: + - name: dummy + tag: dummy + + filters: + - name: modify + match: '*' + add: + - oci_la_log_source_name + - oci_la_log_group_id + + outputs: + - name: oracle_log_analytics + match: '*' + config_file_location: + profile_name: ADMIN + oci_config_in_record: true + tls: on + tls.verify: off +``` + +{% endtab %} +{% tab title="fluent-bit.conf" %} ```text [INPUT] - Name dummy - Tag dummy + Name dummy + Tag dummy [Filter] - Name modify - Match * - Add oci_la_log_source_name - Add oci_la_log_group_id + Name modify + Match * + Add oci_la_log_source_name + Add oci_la_log_group_id [Output] - Name oracle_log_analytics - Match * - config_file_location - profile_name ADMIN - oci_config_in_record true - tls On - tls.verify Off + Name oracle_log_analytics + Match * + config_file_location + profile_name ADMIN + oci_config_in_record true + tls On + tls.verify Off ``` +{% endtab %} +{% endtabs %} + ### Add optional metadata You can attach certain metadata to the log events collected from various inputs. +{% tabs %} +{% tab title="fluent-bit.yaml" %} + +```yaml +pipeline: + inputs: + - name: dummy + tag: dummy + + outputs: + - name: oracle_log_analytics + match: '*' + namespace: example_namespace + config_file_location: /Users/example_file_location + profile_name: ADMIN + oci_la_log_source_name: example_log_source + oci_la_log_group_id: ocid.xxxxxx + oci_la_global_metadata: + - glob_key1 value1 + - glob_key2 value2 + oci_la_metadata: + - key1 value1 + - key2 value2 + tls: on + tls.verify: off +``` + +{% endtab %} +{% tab title="fluent-bit.conf" %} + ```text [INPUT] - Name dummy - Tag dummy - + Name dummy + Tag dummy + [Output] - Name oracle_log_analytics - Match * - Namespace example_namespace - config_file_location /Users/example_file_location - profile_name ADMIN - oci_la_log_source_name example_log_source - oci_la_log_group_id ocid.xxxxxx - oci_la_global_metadata glob_key1 value1 - oci_la_global_metadata glob_key2 value2 - oci_la_metadata key1 value1 - oci_la_metadata key2 value2 - tls On - tls.verify Off + Name oracle_log_analytics + Match * + Namespace example_namespace + config_file_location /Users/example_file_location + profile_name ADMIN + oci_la_log_source_name example_log_source + oci_la_log_group_id ocid.xxxxxx + oci_la_global_metadata glob_key1 value1 + oci_la_global_metadata glob_key2 value2 + oci_la_metadata key1 value1 + oci_la_metadata key2 value2 + tls On + tls.verify Off ``` +{% endtab %} +{% endtabs %} + The above configuration will generate a payload that looks like this ```json @@ -153,44 +246,90 @@ The above configuration will generate a payload that looks like this } ``` -The multiple oci_la_global_metadata and oci_la_metadata options are turned into a JSON object of key value pairs, nested under the key metadata. +The multiple `oci_la_global_metadata` and `oci_la_metadata` options are turned into a JSON object of key value pairs, nested under the key metadata. + +With `oci_config_in_record` option set to true, the metadata key-value pairs will need to be injected in the record as an object of key value pair nested under the respective metadata field. 
Below is an example of one such configuration + +{% tabs %} +{% tab title="fluent-bit.yaml" %} + +```yaml +pipeline: + inputs: + - name: dummy + tag: dummy + + filters: + - name: modify + match: '*' + add: + - olgm.key1 val1 + - olgm.key2 val2 + + - name: nest + match: '*' + operation: olgm.* + wildcard: olgm.* + nest_under: oci_la_global_metadata + remove_prefix: olgm. + + - name: modify + match: '*' + add: + - oci_la_log_source_name + - oci_la_log_group_id + + outputs: + - name: oracle_log_analytics + match: '*' + config_file_location: + namespace: + profile_name: ADMIN + oci_config_in_record: true + tls: on + tls.verify: off +``` -With oci_config_in_record option set to true, the metadata key-value pairs will need to be injected in the record as an object of key value pair nested under the respective metadata field. Below is an example of one such configuration +{% endtab %} +{% tab title="fluent-bit.conf" %} ```text [INPUT] - Name dummy - Tag dummy + Name dummy + Tag dummy [FILTER] - Name Modify - Match * - Add olgm.key1 val1 - Add olgm.key2 val2 + Name Modify + Match * + Add olgm.key1 val1 + Add olgm.key2 val2 [FILTER] - Name nest - Match * - Operation nest - Wildcard olgm.* - Nest_under oci_la_global_metadata - Remove_prefix olgm. + Name nest + Match * + Operation nest + Wildcard olgm.* + Nest_under oci_la_global_metadata + Remove_prefix olgm. [Filter] - Name modify - Match * - Add oci_la_log_source_name - Add oci_la_log_group_id + Name modify + Match * + Add oci_la_log_source_name + Add oci_la_log_group_id [Output] - Name oracle_log_analytics - Match * - config_file_location - namespace - profile_name ADMIN - oci_config_in_record true - tls On - tls.verify Off + Name oracle_log_analytics + Match * + config_file_location + namespace + profile_name ADMIN + oci_config_in_record true + tls On + tls.verify Off ``` -The above configuration first injects the necessary metadata keys and values in the record directly, with a prefix olgm. attached to the keys in order to segregate the metadata keys from rest of the record keys. Then, using a nest filter only the metadata keys are selected by the filter and nested under oci_la_global_metadata key in the record, and the prefix olgm. is removed from the metadata keys. +{% endtab %} +{% endtabs %} + +The above configuration first injects the necessary metadata keys and values in the record directly, with a prefix olgm. attached to the keys in order to segregate the metadata keys from rest of the record keys. Then, using a nest filter only the metadata keys are selected by the filter and nested under `oci_la_global_metadata` key in the record, and the prefix `olgm`. is removed from the metadata keys. \ No newline at end of file diff --git a/pipeline/outputs/openobserve.md b/pipeline/outputs/openobserve.md index 264bad156..9314fb90f 100644 --- a/pipeline/outputs/openobserve.md +++ b/pipeline/outputs/openobserve.md @@ -1,2 +1,76 @@ +--- +title: OpenObserve +description: Send logs to OpenObserve using Fluent Bit +--- + # OpenObserve +Use the OpenObserve output plugin to ingest logs into [OpenObserve](https://openobserve.ai/). + +Before you begin, you need an [OpenObserve account](https://cloud.openobserve.ai/), an +`HTTP_User`, and an `HTTP_Passwd`. You can find these fields under **Ingestion** in +OpenObserve Cloud. 
Alternatively, you can achieve this with various installation +types as mentioned in the +[OpenObserve documentation](https://openobserve.ai/docs/quickstart/) + +## Configuration Parameters + +| Key | Description | Default | +|------------------|---------------------------------------------------------------------------------------------------------------------------------------------|------------------------------| +| Host | Required. The OpenObserve server where you are sending logs. | `localhost` | +| TLS | Required: Enable end-to-end security using TLS. Set to `on` to enable TLS communication with OpenObserve. | `on` | +| compress | Recommended: Compresses the payload in GZIP format. OpenObserve supports and recommends setting this to `gzip` for optimized log ingestion. | _none_ | +| HTTP_User | Required: Username for HTTP authentication. | _none_ | +| HTTP_Passwd | Required: Password for HTTP authentication. | _none_ | +| URI | Required: The API path used to send logs. | `/api/default/default/_json` | +| Format | Required: The format of the log payload. OpenObserve expects JSON. | `json` | +| json_date_key | Optional: The JSON key used for timestamps in the logs. | `timestamp` | +| json_date_format | Optional: The format of the date in logs. OpenObserve supports ISO 8601. | `iso8601` | +| include_tag_key | If `true`, a tag is appended to the output. The key name is used in the `tag_key` property. | `false` | + +### Configuration File + +Use this configuration file to get started: + +{% tabs %} +{% tab title="fluent-bit.yaml" %} + +```yaml +pipeline: + + outputs: + - name: http + match: '*' + uri: /api/default/default/_json + host: localhost + port: 5080 + tls: on + format: json + json_date_key: timestamp + json_date_format: iso8601 + http_user: + http_passwd: + compress: gzip + +``` +{% endtab %} +{% tab title="fluent-bit.conf" %} + +```text +[OUTPUT] + Name http + Match * + URI /api/default/default/_json + Host localhost + Port 5080 + tls on + Format json + Json_date_key timestamp + Json_date_format iso8601 + HTTP_User + HTTP_Passwd + compress gzip +``` + +{% endtab %} +{% endtabs %} \ No newline at end of file diff --git a/pipeline/outputs/opensearch.md b/pipeline/outputs/opensearch.md index 09a30ad3d..6c4c8efb5 100644 --- a/pipeline/outputs/opensearch.md +++ b/pipeline/outputs/opensearch.md @@ -4,49 +4,49 @@ description: Send logs to Amazon OpenSearch Service # OpenSearch -The **opensearch** output plugin, allows to ingest your records into an [OpenSearch](https://opensearch.org/) database. +The **OpenSearch** output plugin, allows to ingest your records into an [OpenSearch](https://opensearch.org/) database. The following instructions assumes that you have a fully operational OpenSearch service running in your environment. ## Configuration Parameters -| Key | Description | default | -| :--- |:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| :--- | -| Host | IP address or hostname of the target OpenSearch instance | 127.0.0.1 | -| Port | TCP port of the target OpenSearch instance | 9200 | -| Path | OpenSearch accepts new data on HTTP query path "/\_bulk". 
But it is also possible to serve OpenSearch behind a reverse proxy on a subpath. This option defines such path on the fluent-bit side. It simply adds a path prefix in the indexing HTTP POST URI. | Empty string | -| Buffer\_Size | Specify the buffer size used to read the response from the OpenSearch HTTP service. This option is useful for debugging purposes where is required to read full responses, note that response size grows depending of the number of records inserted. To set an _unlimited_ amount of memory set this value to **False**, otherwise the value must be according to the [Unit Size](../../administration/configuring-fluent-bit/unit-sizes.md) specification. | 4KB | -| Pipeline | OpenSearch allows to setup filters called pipelines. This option allows to define which pipeline the database should use. For performance reasons is strongly suggested to do parsing and filtering on Fluent Bit side, avoid pipelines. | | -| AWS\_Auth | Enable AWS Sigv4 Authentication for Amazon OpenSearch Service | Off | -| AWS\_Region | Specify the AWS region for Amazon OpenSearch Service | | -| AWS\_STS\_Endpoint | Specify the custom sts endpoint to be used with STS API for Amazon OpenSearch Service | | -| AWS\_Role\_ARN | AWS IAM Role to assume to put records to your Amazon cluster | | -| AWS\_External\_ID | External ID for the AWS IAM Role specified with `aws_role_arn` | | -| AWS\_Service\_Name | Service name to be used in AWS Sigv4 signature. For integration with Amazon OpenSearch Serverless, set to `aoss`. See the [FAQ](opensearch.md#faq) section on Amazon OpenSearch Serverless for more information. | es | -| AWS\_Profile | AWS profile name | default | -| HTTP\_User | Optional username credential for access | | -| HTTP\_Passwd | Password for user defined in HTTP\_User | | -| Index | Index name, supports [Record Accessor syntax](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md) from 2.0.5 onwards. | fluent-bit | -| Type | Type name. This option is ignored if `Suppress_Type_Name` is enabled. | \_doc | -| Logstash\_Format | Enable Logstash format compatibility. This option takes a boolean value: True/False, On/Off | Off | -| Logstash\_Prefix | When Logstash\_Format is enabled, the Index name is composed using a prefix and the date, e.g: If Logstash\_Prefix is equals to 'mydata' your index will become 'mydata-YYYY.MM.DD'. The last string appended belongs to the date when the data is being generated. | logstash | -| Logstash\_Prefix\_Key | When included: the value of the key in the record will be evaluated as key reference and overrides Logstash\_Prefix for index generation. If the key/value is not found in the record then the Logstash\_Prefix option will act as a fallback. The parameter is expected to be a [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md). | | -| Logstash\_Prefix\_Separator | Set a separator between logstash_prefix and date. | - | -| Logstash\_DateFormat | Time format \(based on [strftime](http://man7.org/linux/man-pages/man3/strftime.3.html)\) to generate the second part of the Index name. | %Y.%m.%d | -| Time\_Key | When Logstash\_Format is enabled, each record will get a new timestamp field. The Time\_Key property defines the name of that field. | @timestamp | -| Time\_Key\_Format | When Logstash\_Format is enabled, this property defines the format of the timestamp. | %Y-%m-%dT%H:%M:%S | -| Time\_Key\_Nanos | When Logstash\_Format is enabled, enabling this property sends nanosecond precision timestamps. 
| Off | -| Include\_Tag\_Key | When enabled, it append the Tag name to the record. | Off | -| Tag\_Key | When Include\_Tag\_Key is enabled, this property defines the key name for the tag. | \_flb-key | -| Generate\_ID | When enabled, generate `_id` for outgoing records. This prevents duplicate records when retrying. | Off | -| Id\_Key | If set, `_id` will be the value of the key from incoming record and `Generate_ID` option is ignored. | | -| Write\_Operation | Operation to use to write in bulk requests. | create | -| Replace\_Dots | When enabled, replace field name dots with underscore. | Off | -| Trace\_Output | When enabled print the OpenSearch API calls to stdout \(for diag only\) | Off | -| Trace\_Error | When enabled print the OpenSearch API calls to stdout when OpenSearch returns an error \(for diag only\) | Off | -| Current\_Time\_Index | Use current time for index generation instead of message record | Off | -| Suppress\_Type\_Name | When enabled, mapping types is removed and `Type` option is ignored. | Off | -| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` | -| Compress | Set payload compression mechanism. The only available option is `gzip`. Default = "", which means no compression. | | +| Key | Description | default | +|:----------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------| +| Host | IP address or hostname of the target OpenSearch instance | 127.0.0.1 | +| Port | TCP port of the target OpenSearch instance | 9200 | +| Path | OpenSearch accepts new data on HTTP query path "/\_bulk". But it is also possible to serve OpenSearch behind a reverse proxy on a sub-path. This option defines such path on the fluent-bit side. It simply adds a path prefix in the indexing HTTP POST URI. | Empty string | +| Buffer\_Size | Specify the buffer size used to read the response from the OpenSearch HTTP service. This option is useful for debugging purposes where is required to read full responses, note that response size grows depending of the number of records inserted. To set an _unlimited_ amount of memory set this value to **False**, otherwise the value must be according to the [Unit Size](../../administration/configuring-fluent-bit/unit-sizes.md) specification. | 4KB | +| Pipeline | OpenSearch allows to setup filters called pipelines. This option allows to define which pipeline the database should use. For performance reasons is strongly suggested to do parsing and filtering on Fluent Bit side, avoid pipelines. | | +| AWS\_Auth | Enable AWS Sigv4 Authentication for Amazon OpenSearch Service | Off | +| AWS\_Region | Specify the AWS region for Amazon OpenSearch Service | | +| AWS\_STS\_Endpoint | Specify the custom sts endpoint to be used with STS API for Amazon OpenSearch Service | | +| AWS\_Role\_ARN | AWS IAM Role to assume to put records to your Amazon cluster | | +| AWS\_External\_ID | External ID for the AWS IAM Role specified with `aws_role_arn` | | +| AWS\_Service\_Name | Service name to be used in AWS Sigv4 signature. 
For integration with Amazon OpenSearch Serverless, set to `aoss`. See the [FAQ](opensearch.md#faq) section on Amazon OpenSearch Serverless for more information. | es | +| AWS\_Profile | AWS profile name | default | +| HTTP\_User | Optional username credential for access | | +| HTTP\_Passwd | Password for user defined in HTTP\_User | | +| Index | Index name, supports [Record Accessor syntax](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md) from 2.0.5 onwards. | fluent-bit | +| Type | Type name. This option is ignored if `Suppress_Type_Name` is enabled. | \_doc | +| Logstash\_Format | Enable Logstash format compatibility. This option takes a boolean value: True/False, On/Off | Off | +| Logstash\_Prefix | When Logstash\_Format is enabled, the Index name is composed using a prefix and the date, e.g: If Logstash\_Prefix is equals to `mydata` your index will become `mydata-YYYY.MM.DD`. The last string appended belongs to the date when the data is being generated. | logstash | +| Logstash\_Prefix\_Key | When included: the value of the key in the record will be evaluated as key reference and overrides Logstash\_Prefix for index generation. If the key/value is not found in the record then the Logstash\_Prefix option will act as a fallback. The parameter is expected to be a [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md). | | +| Logstash\_Prefix\_Separator | Set a separator between logstash_prefix and date. | - | +| Logstash\_DateFormat | Time format \(based on [strftime](http://man7.org/linux/man-pages/man3/strftime.3.html)\) to generate the second part of the Index name. | %Y.%m.%d | +| Time\_Key | When Logstash\_Format is enabled, each record will get a new timestamp field. The Time\_Key property defines the name of that field. | @timestamp | +| Time\_Key\_Format | When Logstash\_Format is enabled, this property defines the format of the timestamp. | %Y-%m-%dT%H:%M:%S | +| Time\_Key\_Nanos | When Logstash\_Format is enabled, enabling this property sends nanosecond precision timestamps. | Off | +| Include\_Tag\_Key | When enabled, it append the Tag name to the record. | Off | +| Tag\_Key | When Include\_Tag\_Key is enabled, this property defines the key name for the tag. | \_flb-key | +| Generate\_ID | When enabled, generate `_id` for outgoing records. This prevents duplicate records when retrying. | Off | +| Id\_Key | If set, `_id` will be the value of the key from incoming record and `Generate_ID` option is ignored. | | +| Write\_Operation | Operation to use to write in bulk requests. | create | +| Replace\_Dots | When enabled, replace field name dots with underscore. | Off | +| Trace\_Output | When enabled print the OpenSearch API calls to stdout \(for diag only\) | Off | +| Trace\_Error | When enabled print the OpenSearch API calls to stdout when OpenSearch returns an error \(for diag only\) | Off | +| Current\_Time\_Index | Use current time for index generation instead of message record | Off | +| Suppress\_Type\_Name | When enabled, mapping types is removed and `Type` option is ignored. | Off | +| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` | +| Compress | Set payload compression mechanism. The only available option is `gzip`. Default = "", which means no compression. 
| | > The parameters _index_ and _type_ can be confusing if you are new to OpenSearch, if you have used a common relational database before, they can be compared to the _database_ and _table_ concepts. Also see [the FAQ below](opensearch.md#faq) @@ -59,12 +59,12 @@ For more details about the properties available and general configuration, see [ The write\_operation can be any of: -| Operation | Description | -| ------------- | ----------- | -| create (default) | adds new data - if the data already exists (based on its id), the op is skipped.| -| index | new data is added while existing data (based on its id) is replaced (reindexed).| -| update | updates existing data (based on its id). If no data is found, the op is skipped.| -| upsert | known as merge or insert if the data does not exist, updates if the data exists (based on its id).| +| Operation | Description | +|------------------|----------------------------------------------------------------------------------------------------| +| create (default) | adds new data - if the data already exists (based on its id), the op is skipped. | +| index | new data is added while existing data (based on its id) is replaced (reindexed). | +| update | updates existing data (based on its id). If no data is found, the op is skipped. | +| upsert | known as merge or insert if the data does not exist, updates if the data exists (based on its id). | **Please note, `Id_Key` or `Generate_ID` is required in update, and upsert scenario.** @@ -74,7 +74,7 @@ In order to insert records into an OpenSearch service, you can run the plugin fr ### Command Line -The **opensearch** plugin, can read the parameters from the command line in two ways, through the **-p** argument \(property\) or setting them directly through the service URI. The URI format is the following: +The **OpenSearch** plugin, can read the parameters from the command line in two ways, through the **-p** argument \(property\) or setting them directly through the service URI. The URI format is the following: ```text es://host:port/index/type @@ -82,41 +82,62 @@ es://host:port/index/type Using the format specified, you could start Fluent Bit through: -```text +```shell fluent-bit -i cpu -t cpu -o es://192.168.2.3:9200/my_index/my_type \ - -o stdout -m '*' + -o stdout -m '*' ``` which is similar to do: -```text +```shell fluent-bit -i cpu -t cpu -o opensearch -p Host=192.168.2.3 -p Port=9200 \ - -p Index=my_index -p Type=my_type -o stdout -m '*' + -p Index=my_index -p Type=my_type -o stdout -m '*' ``` ### Configuration File -In your main configuration file append the following _Input_ & _Output_ sections. You can visualize this configuration [here](https://link.calyptia.com/qhq) +In your main configuration file append the following: + +{% tabs %} +{% tab title="fluent-bit.yaml" %} -```python +```yaml +pipeline: + inputs: + - name: cpu + tag: cpu + + outputs: + - name: opensearch + match: '*' + host: 192.168.2.3 + port: 9200 + index: my_index + type: my_type +``` +{% endtab %} +{% tab title="fluent-bit.conf" %} + +```text [INPUT] - Name cpu - Tag cpu + Name cpu + Tag cpu [OUTPUT] - Name opensearch - Match * - Host 192.168.2.3 - Port 9200 - Index my_index - Type my_type + Name opensearch + Match * + Host 192.168.2.3 + Port 9200 + Index my_index + Type my_type ``` -![example configuration visualization from calyptia](../../.gitbook/assets/image%20%282%29.png) +{% endtab %} +{% endtabs %} ## About OpenSearch field names -Some input plugins may generate messages where the field names contains dots. 
This **opensearch** plugin replaces them with an underscore, e.g: +Some input plugins may generate messages where the field names contains dots. This **OpenSearch** plugin replaces them with an underscore, e.g: ```text {"cpu0.p_cpu"=>17.000000} @@ -133,18 +154,37 @@ becomes ### Logstash_Prefix_Key The following snippet demonstrates using the namespace name as extracted by the -`kubernetes` filter as logstash preifix: +`kubernetes` filter as logstash prefix: + +{% tabs %} +{% tab title="fluent-bit.yaml" %} + +```yaml +pipeline: + + outputs: + - name: opensearch + match: '*' + # ... + logstash_prefix: logstash + logstash_prefix_key: $kubernetes['namespace_name'] +``` +{% endtab %} +{% tab title="fluent-bit.conf" %} ```text [OUTPUT] - Name opensearch - Match * - # ... - Logstash_Prefix logstash - Logstash_Prefix_Key $kubernetes['namespace_name'] - # ... + Name opensearch + Match * + # ... + Logstash_Prefix logstash + Logstash_Prefix_Key $kubernetes['namespace_name'] + # ... ``` +{% endtab %} +{% endtabs %} + For records that do nor have the field `kubernetes.namespace_name`, the default prefix, `logstash` will be used. ### Fluent Bit + Amazon OpenSearch Service @@ -155,19 +195,42 @@ See [here](../../administration/aws-credentials.md) for details on how AWS crede Example configuration: +{% tabs %} +{% tab title="fluent-bit.yaml" %} + +```yaml +pipeline: + + outputs: + - name: opensearch + match: '*' + host: vpc-test-domain-ke7thhzoo7jawsrhmm6mb7ite7y.us-west-2.es.amazonaws.com + port: 443 + index: my_index + type: my_type + aws_auth: on + aws_region: us-west-2 + tls: on +``` +{% endtab %} +{% tab title="fluent-bit.conf" %} + ```text [OUTPUT] - Name opensearch - Match * - Host vpc-test-domain-ke7thhzoo7jawsrhmm6mb7ite7y.us-west-2.es.amazonaws.com - Port 443 - Index my_index - Type my_type - AWS_Auth On - AWS_Region us-west-2 - tls On + Name opensearch + Match * + Host vpc-test-domain-ke7thhzoo7jawsrhmm6mb7ite7y.us-west-2.es.amazonaws.com + Port 443 + Index my_index + Type my_type + AWS_Auth On + AWS_Region us-west-2 + tls On ``` +{% endtab %} +{% endtabs %} + Notice that the `Port` is set to `443`, `tls` is enabled, and `AWS_Region` is set. ### Action/metadata contains an unknown parameter type @@ -182,20 +245,53 @@ Without this you will see errors like: ### Fluent-Bit + Amazon OpenSearch Serverless Amazon OpenSearch Serverless is an offering that eliminates your need to manage OpenSearch clusters. All existing Fluent Bit OpenSearch output plugin options work with OpenSearch Serverless. 
For Fluent Bit, the only difference is that you must specify the service name as `aoss` (Amazon OpenSearch Serverless) when you enable `AWS_Auth`: + +{% tabs %} +{% tab title="fluent-bit.yaml" %} + +```yaml +pipeline: + + outputs: + - name: opensearch + match: '*' + host: vpc-test-domain-ke7thhzoo7jawsrhmm6mb7ite7y.us-west-2.es.amazonaws.com + port: 443 + index: my_index + type: my_type + aws_auth: on + aws_region: + aws_service_name: aoss ``` -AWS_Auth On -AWS_Region -AWS_Service_Name aoss +{% endtab %} +{% tab title="fluent-bit.conf" %} + +```text +[OUTPUT] + Name opensearch + Match * + Host vpc-test-domain-ke7thhzoo7jawsrhmm6mb7ite7y.us-west-2.es.amazonaws.com + Port 443 + Index my_index + Type my_type + AWS_Auth On + AWS_Region + AWS_Service_Name aoss ``` +{% endtab %} +{% endtabs %} + **Data Access Permissions** -When sending logs to OpenSearch Serverless, your AWS IAM entity needs [OpenSearch Serverless Data Access permisions](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-data-access.html). Give your IAM entity the following data access permissions to your serverless collection: -``` +When sending logs to OpenSearch Serverless, your AWS IAM entity needs [OpenSearch Serverless Data Access permissions](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-data-access.html). Give your IAM entity the following data access permissions to your serverless collection: + +```text aoss:CreateIndex aoss:UpdateIndex aoss:WriteDocument ``` + With data access permissions, IAM policies are not needed to access the collection. ### Issues with the OpenSearch cluster @@ -203,7 +299,9 @@ With data access permissions, IAM policies are not needed to access the collecti Occasionally the Fluent Bit service may generate errors without any additional detail in the logs to explain the source of the issue, even with the service's log_level attribute set to [Debug](https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/configuration-file). For example, in this scenario the logs show that a connection was successfully established with the OpenSearch domain, and yet an error is still returned: -``` + +```text +... [2023/07/10 19:26:00] [debug] [http_client] not using http_proxy for header [2023/07/10 19:26:00] [debug] [output:opensearch:opensearch.5] Signing request with AWS Sigv4 [2023/07/10 19:26:00] [debug] [aws_credentials] Requesting credentials from the EC2 provider.. @@ -213,17 +311,12 @@ For example, in this scenario the logs show that a connection was successfully e [2023/07/10 19:26:00] [debug] [task] task_id=2 reached retry-attempts limit 5/5 [2023/07/10 19:26:00] [error] [engine] chunk '7578-1689017013.184552017.flb' cannot be retried: task_id=2, input=tail.6 > output=opensearch.5 [2023/07/10 19:26:00] [debug] [task] destroy task=0x7fd1cc4d5ad0 (task_id=2) +... ``` -This behavior could be indicative of a hard-to-detect issue with index shard usage in the OpenSearch domain. +This behavior could be indicative of a hard-to-detect issue with index shard usage in the OpenSearch domain. While OpenSearch index shards and disk space are related, they are not directly tied to one another. OpenSearch domains are limited to 1000 index shards per data node, regardless of the size of the nodes. And, importantly, shard usage is not proportional to disk usage: an individual index shard can hold anywhere from a few kilobytes to dozens of gigabytes of data. 
-While OpenSearch index shards and disk space are related, they are not directly tied to one another. - -OpenSearch domains are limited to 1000 index shards per data node, regardless of the size of the nodes. And, importantly, shard usage is not proportional to disk usage: an individual index shard can hold anywhere from a few kilobytes to dozens of gigabytes of data. - -In other words, depending on the way index creation and shard allocation are configured in the OpenSearch domain, all of the available index shards could be used long before the data nodes run out of disk space and begin exhibiting disk-related performance issues (e.g. nodes crashing, data corruption, or the dashboard going offline). - -The primary issue that arises when a domain is out of available index shards is that new indexes can no longer be created (though logs can still be added to existing indexes). +In other words, depending on the way index creation and shard allocation are configured in the OpenSearch domain, all the available index shards could be used long before the data nodes run out of disk space and begin exhibiting disk-related performance issues (e.g. nodes crashing, data corruption, or the dashboard going offline). The primary issue that arises when a domain is out of available index shards is that new indexes can no longer be created (though logs can still be added to existing indexes). When that happens, the Fluent Bit OpenSearch output may begin showing confusing behavior. For example: - Errors suddenly appear (outputs were previously working and there were no changes to the Fluent Bit configuration when the errors began) @@ -232,16 +325,18 @@ When that happens, the Fluent Bit OpenSearch output may begin showing confusing If any of those symptoms are present, consider using the OpenSearch domain's API endpoints to troubleshoot possible shard issues. -Running this command will show both the shard count and disk usage on all of the nodes in the domain. -``` +Running this command will show both the shard count and disk usage on all the nodes in the domain. + +```text GET _cat/allocation?v ``` -Index creation issues will begin to appear if any hot data nodes have around 1000 shards OR if the total number of shards spread across hot and ultrawarm data nodes in the cluster is greater than 1000 times the total number of nodes (e.g., in a cluster with 6 nodes, the maximum shard count would be 6000). +Index creation issues will begin to appear if any hot data nodes have around 1000 shards OR if the total number of shards spread across hot and ultra warm data nodes in the cluster is greater than 1000 times the total number of nodes (e.g., in a cluster with 6 nodes, the maximum shard count would be 6000). Alternatively, running this command to manually create a new index will return an explicit error related to shard count if the maximum has been exceeded. -``` + +```text PUT ``` -There are multiple ways to resolve excessive shard usage in an OpenSearch domain such as deleting or combining indexes, adding more data nodes to the cluster, or updating the domain's index creation and sharding strategy. Consult the OpenSearch documentation for more information on how to use these strategies. +There are multiple ways to resolve excessive shard usage in an OpenSearch domain such as deleting or combining indexes, adding more data nodes to the cluster, or updating the domain's index creation and sharding strategy. Consult the OpenSearch documentation for more information on how to use these strategies. 
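+
+If the domain allows basic authentication, the same allocation check can be issued from a shell. This is only a sketch; the endpoint, user, and password below are placeholders for your own domain values:
+
+```shell
+# Placeholder endpoint and credentials; SigV4-secured domains need a signing tool or proxy instead.
+curl -s -u "admin:<password>" "https://<your-domain-endpoint>/_cat/allocation?v"
+```
+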
\ No newline at end of file diff --git a/pipeline/outputs/opentelemetry.md b/pipeline/outputs/opentelemetry.md index 4007955bd..89ddabf1d 100644 --- a/pipeline/outputs/opentelemetry.md +++ b/pipeline/outputs/opentelemetry.md @@ -8,42 +8,93 @@ The OpenTelemetry plugin allows you to take logs, metrics, and traces from Fluen Important Note: At the moment only HTTP endpoints are supported. -| Key | Description | Default | -| -------------------- | ------------------------------------------------------------ | --------- | -| host | IP address or hostname of the target HTTP Server | 127.0.0.1 | -| http_user | Basic Auth Username | | -| http_passwd | Basic Auth Password. Requires HTTP_user to be set | | -| port | TCP port of the target HTTP Server | 80 | -| proxy | Specify an HTTP Proxy. The expected format of this value is `http://HOST:PORT`. Note that HTTPS is **not** currently supported. It is recommended not to set this and to configure the [HTTP proxy environment variables](https://docs.fluentbit.io/manual/administration/http-proxy) instead as they support both HTTP and HTTPS. | | -| http2 | Defines whether HTTP/2 protocol is enabled. This setting also supports the `force` option, which forces HTTP/2 over a plaintext connection. | On | -| grpc | Enables gRPC over an HTTP/2 connection. This setting applies to HTTP/2 only. | off | -| metrics_uri | Specify an optional HTTP URI for the target web server listening for metrics, e.g: /v1/metrics | / | -| logs_uri | Specify an optional HTTP URI for the target web server listening for logs, e.g: /v1/logs | / | -| traces_uri | Specify an optional HTTP URI for the target web server listening for traces, e.g: /v1/traces | / | -| header | Add a HTTP header key/value pair. Multiple headers can be set. | | -| log_response_payload | Log the response payload within the Fluent Bit log | false | -| logs_body_key | The log body key to look up in the log events body/message. Sets the Body field of the opentelemtry logs data model. | message | -| logs_trace_id_message_key | The trace id key to look up in the log events body/message. Sets the TraceId field of the opentelemtry logs data model. | traceId | -| logs_span_id_message_key | The span id key to look up in the log events body/message. Sets the SpanId field of the opentelemtry logs data model. | spanId | -| logs_severity_text_message_key | The severity text id key to look up in the log events body/message. Sets the SeverityText field of the opentelemtry logs data model. | severityText | -| logs_severity_number_message_key | The severity number id key to look up in the log events body/message. Sets the SeverityNumber field of the opentelemtry logs data model. | severityNumber | -| add_label | This allows you to add custom labels to all metrics exposed through the OpenTelemetry exporter. You may have multiple of these fields | | -| compress | Set payload compression mechanism. Option available is 'gzip' | | -| logs_observed_timestamp_metadata_key | Specify an ObservedTimestamp key to look up in the metadata. | $ObservedKey | -| logs_timestamp_metadata_key |Specify a Timestamp key to look up in the metadata. 
| $Timestamp | -| logs_severity_key_metadata_key |Specify a SeverityText key to look up in the metadata.| $SeverityText | -| logs_severity_number_metadata_key | Specify a SeverityNumber key to look up in the metadata.| $SeverityNumber | -| logs_trace_flags_metadata_key |Specify a Flags key to look up in the metadata.| $Flags | -| logs_span_id_metadata_key |Specify a SpanId key to look up in the metadata.| $SpanId | -| logs_trace_id_metadata_key |Specify a TraceId key to look up in the metadata.| $TraceId | -| logs_attributes_metadata_key |Specify an Attributes key to look up in the metadata.| $Attributes | -| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` | +| Key | Description | Default | +|--------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------| +| host | IP address or hostname of the target HTTP Server | 127.0.0.1 | +| http_user | Basic Auth Username | | +| http_passwd | Basic Auth Password. Requires HTTP_user to be set | | +| port | TCP port of the target HTTP Server | 80 | +| proxy | Specify an HTTP Proxy. The expected format of this value is `http://HOST:PORT`. Note that HTTPS is **not** currently supported. It is recommended not to set this and to configure the [HTTP proxy environment variables](https://docs.fluentbit.io/manual/administration/http-proxy) instead as they support both HTTP and HTTPS. | | +| http2 | Defines whether HTTP/2 protocol is enabled. This setting also supports the `force` option, which forces HTTP/2 over a plaintext connection. | On | +| grpc | Enables gRPC over an HTTP/2 connection. This setting applies to HTTP/2 only. | off | +| metrics_uri | Specify an optional HTTP URI for the target web server listening for metrics, e.g: /v1/metrics | / | +| logs_uri | Specify an optional HTTP URI for the target web server listening for logs, e.g: /v1/logs | / | +| traces_uri | Specify an optional HTTP URI for the target web server listening for traces, e.g: /v1/traces | / | +| header | Add a HTTP header key/value pair. Multiple headers can be set. | | +| log_response_payload | Log the response payload within the Fluent Bit log | false | +| logs_body_key | The log body key to look up in the log events body/message. Sets the Body field of the opentelemtry logs data model. | message | +| logs_trace_id_message_key | The trace id key to look up in the log events body/message. Sets the TraceId field of the opentelemtry logs data model. | traceId | +| logs_span_id_message_key | The span id key to look up in the log events body/message. Sets the SpanId field of the opentelemtry logs data model. | spanId | +| logs_severity_text_message_key | The severity text id key to look up in the log events body/message. Sets the SeverityText field of the opentelemtry logs data model. | severityText | +| logs_severity_number_message_key | The severity number id key to look up in the log events body/message. Sets the SeverityNumber field of the opentelemtry logs data model. | severityNumber | +| add_label | This allows you to add custom labels to all metrics exposed through the OpenTelemetry exporter. 
You may have multiple of these fields | | +| compress | Set payload compression mechanism. Option available is 'gzip' | | +| logs_observed_timestamp_metadata_key | Specify an ObservedTimestamp key to look up in the metadata. | $ObservedKey | +| logs_timestamp_metadata_key | Specify a Timestamp key to look up in the metadata. | $Timestamp | +| logs_severity_key_metadata_key | Specify a SeverityText key to look up in the metadata. | $SeverityText | +| logs_severity_number_metadata_key | Specify a SeverityNumber key to look up in the metadata. | $SeverityNumber | +| logs_trace_flags_metadata_key | Specify a Flags key to look up in the metadata. | $Flags | +| logs_span_id_metadata_key | Specify a SpanId key to look up in the metadata. | $SpanId | +| logs_trace_id_metadata_key | Specify a TraceId key to look up in the metadata. | $TraceId | +| logs_attributes_metadata_key | Specify an Attributes key to look up in the metadata. | $Attributes | +| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` | ## Getting Started The OpenTelemetry plugin works with logs and only the metrics collected from one of the metric input plugins. In the following example, log records generated by the dummy plugin and the host metrics collected by the node exporter metrics plugin are exported by the OpenTelemetry output plugin. +{% tabs %} +{% tab title="fluent-bit.yaml" %} + +```yaml +# Dummy Logs & traces with Node Exporter Metrics export using OpenTelemetry output plugin +# ------------------------------------------- +# The following example collects host metrics on Linux and dummy logs & traces and delivers +# them through the OpenTelemetry plugin to a local collector : +# +service: + flush: 1 + log_level: info + +pipeline: + inputs: + - name: node_exporter_metrics + tag: node_metrics + scrape_interval: 2 + + - name: dummy + tag: dummy.log + rate: 3 + + - name: event_type + type: traces + + outputs: + - name: opentelemetry + match: '*' + host: localhost + port: 443 + metrics_uri: /v1/metrics + logs_uri: /v1/logs + traces_uri: /v1/traces + log_response_payload: true + tls: on + tls.verify: off + logs_body_key: $message + logs_span_id_message_key: span_id + logs_trace_id_message_key: trace_id + logs_severity_text_message_key: loglevel + logs_severity_number_message_key: lognum + # add user-defined labels + add_label: + - app fluent-bit + - color blue ``` + +{% endtab %} +{% tab title="fluent-bit.conf" %} + +```text # Dummy Logs & traces with Node Exporter Metrics export using OpenTelemetry output plugin # ------------------------------------------- # The following example collects host metrics on Linux and dummy logs & traces and delivers @@ -87,3 +138,6 @@ The OpenTelemetry plugin works with logs and only the metrics collected from one add_label app fluent-bit add_label color blue ``` + +{% endtab %} +{% endtabs %} \ No newline at end of file
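+
+To try the example, run Fluent Bit with the configuration file you saved; the `localhost:443` endpoint above is a placeholder for your OpenTelemetry-compatible collector:
+
+```shell
+# For YAML configuration.
+fluent-bit --config fluent-bit.yaml
+
+# For classic configuration.
+fluent-bit --config fluent-bit.conf
+```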