`pipeline/outputs/elasticsearch.md`

The following instructions assume that you have a fully operational Elasticsearch service running in your environment.

| Key | Description | Default |
| :--- | :--- | :--- |
| `Host` | IP address or hostname of the target Elasticsearch instance. | `127.0.0.1` |
| `Port` | TCP port of the target Elasticsearch instance. | `9200` |
| `Path` | Elasticsearch accepts new data on HTTP query path `/_bulk`. You can also serve Elasticsearch behind a reverse proxy on a sub-path. Define the path by adding a path prefix in the indexing HTTP POST URI. | Empty string |
| `compress` | Set payload compression mechanism. The only available option is `gzip`. | _none_ |
| `Buffer_Size` | Specify the buffer size used to read the response from the Elasticsearch HTTP service. Use for debugging purposes where it's required to read full responses. Response size grows depending on the number of records inserted. To use an unlimited amount of memory, set this value to `False`. Otherwise set the value according to the [Unit Size](../../administration/configuring-fluent-bit/unit-sizes.md). | `512KB` |
| `Pipeline` | Define which pipeline the database should use. For performance reasons, it's strongly suggested to do parsing and filtering on the Fluent Bit side and avoid pipelines. | _none_ |
| `AWS_Auth` | Enable AWS Sigv4 authentication for Amazon OpenSearch Service. | `Off` |
| `AWS_Region` | Specify the AWS region for Amazon OpenSearch Service. | _none_ |
| `AWS_STS_Endpoint` | Specify the custom STS endpoint to be used with STS API for Amazon OpenSearch Service. | _none_ |
| `AWS_Role_ARN` | AWS IAM role to assume in order to put records into your Amazon cluster. | _none_ |
| `AWS_External_ID` | External ID for the AWS IAM role specified with `aws_role_arn`. | _none_ |
| `AWS_Service_Name` | Service name to use in the AWS Sigv4 signature. For integration with Amazon OpenSearch Serverless, set to `aoss`. See the [FAQ](opensearch.md#faq) section on Amazon OpenSearch Serverless for more information. | `es` |
| `AWS_Profile` | AWS profile name. | `default` |
| `Cloud_ID` | If using Elastic's Elasticsearch Service, you can specify the `cloud_id` of the cluster running. The string has the format `<deployment_name>:<base64_info>`. Once decoded, the `base64_info` string has the format `<deployment_region>$<elasticsearch_hostname>$<kibana_hostname>`. | _none_ |
| `Cloud_Auth` | Specify the credentials to use to connect to Elastic's Elasticsearch Service running on Elastic Cloud. | _none_ |
| `HTTP_User` | Optional username credential for Elastic X-Pack access. | _none_ |
| `HTTP_Passwd` | Password for the user defined in `HTTP_User`. | _none_ |
| `Index` | Index name. | `fluent-bit` |
| `Type` | Type name. | `_doc` |
| `Logstash_Format` | Enable Logstash format compatibility. This option takes a Boolean value: `True/False`, `On/Off`. | `Off` |
| `Logstash_Prefix` | When `Logstash_Format` is enabled, the index name is composed using a prefix and the date. For example, if `Logstash_Prefix` is `mydata`, your index becomes `mydata-YYYY.MM.DD`. The appended string corresponds to the date when the data is being generated. | `logstash` |
| `Logstash_Prefix_Key` | When included, the value of the key in the record is evaluated as a key reference and overrides `Logstash_Prefix` for index generation. If the key/value isn't found in the record, the `Logstash_Prefix` option acts as a fallback. The parameter is expected to be a [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md). | _none_ |
| `Logstash_Prefix_Separator` | Set a separator between `Logstash_Prefix` and the date. | `-` |
| `Logstash_DateFormat` | Time format based on [strftime](http://man7.org/linux/man-pages/man3/strftime.3.html) to generate the second part of the index name. | `%Y.%m.%d` |
| `Time_Key` | When `Logstash_Format` is enabled, each record gets a new timestamp field. The `Time_Key` property defines the name of that field. | `@timestamp` |
| `Time_Key_Format` | When `Logstash_Format` is enabled, this property defines the format of the timestamp. | `%Y-%m-%dT%H:%M:%S` |
| `Time_Key_Nanos` | When `Logstash_Format` is enabled, enabling this property sends nanosecond-precision timestamps. | `Off` |
| `Include_Tag_Key` | When enabled, appends the tag name to the record. | `Off` |
| `Tag_Key` | When `Include_Tag_Key` is enabled, this property defines the key name for the tag. | `_flb-key` |
| `Generate_ID` | When enabled, generate `_id` for outgoing records. This prevents duplicate records when retrying Elasticsearch. | `Off` |
| `Id_Key` | If set, `_id` will be the value of the key from the incoming record, and the `Generate_ID` option is ignored. | _none_ |
| `Write_Operation` | `Write_operation` can be any of: `create`, `index`, `update`, `upsert`. | `create` |
| `Replace_Dots` | When enabled, replace field name dots with underscores. Required by Elasticsearch 2.0-2.3. | `Off` |
| `Trace_Output` | Print all Elasticsearch API request payloads to `stdout` for diagnostics. | `Off` |

To insert some information into an Elasticsearch service, you can run the plugin from the command line or through the configuration file.
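
For the configuration-file route, a minimal classic-mode sketch follows. The host, port, and index values here are placeholders rather than values taken from this page; only the parameter names come from the table above:

```text
[OUTPUT]
    # Send all matching records to a single Elasticsearch instance
    Name   es
    Match  *
    Host   192.168.2.3
    Port   9200
    Index  my_index
    Type   my_type
```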

The **es** plugin can read the parameters from the command line in two ways:

- Through the `-p` argument (property).
- Setting them directly through the service URI.

The URI format is the following:
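
```text
es://host:port/index/type
```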

Using the format specified, you could start Fluent Bit through:

```shell copy
fluent-bit -i cpu -t cpu -o es://192.168.2.3:9200/my_index/my_type \
    -o stdout -m '*'
```

Which is similar to the following command:

```shell copy
fluent-bit -i cpu -t cpu -o es -p Host=192.168.2.3 -p Port=9200 \
    -p Index=my_index -p Type=my_type -o stdout -m '*'
```
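
The same `-p` mechanism works for any parameter in the table above. As an illustrative sketch that isn't part of the original example, enabling Logstash-style index rotation from the command line could look like:

```shell copy
fluent-bit -i cpu -t cpu -o es -p Host=192.168.2.3 -p Port=9200 \
    -p Logstash_Format=On -p Logstash_Prefix=mydata -o stdout -m '*'
```

With `Logstash_Format` enabled, records land in a date-suffixed index such as `mydata-2024.03.01` instead of the fixed `fluent-bit` default.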