**`solutions/observability/cloud/monitor-amazon-web-services-aws-with-amazon-data-firehose.md`** (10 additions, 10 deletions)
@@ -48,19 +48,19 @@ For advanced use cases, source records can be transformed by invoking a custom Lambda function.
 
 ## Step 3: Specify the destination settings for your Firehose stream [firehose-step-three]
 
-1. From the **Destination settings** panel, specify the following settings:
+From the **Destination settings** panel, specify the following settings:
 
-* **Elastic endpoint URL**: Enter the Elastic endpoint URL of your Elasticsearch cluster. To find the Elasticsearch endpoint, go to the {{ecloud}} Console and select **Connection details**. Here is an example of what it looks like: `https://my-deployment.es.us-east-1.aws.elastic-cloud.com`.
-* **API key**: Enter the encoded Elastic API key. To create an API key, go to the {{ecloud}} Console, select **Connection details**, and click **Create and manage API keys**. If you are using an API key with **Restrict privileges**, make sure to review the Indices privileges to provide at least the `auto_configure` and `write` permissions for the indices you will be using with this delivery stream.
-* **Content encoding**: For better network efficiency, leave content encoding set to GZIP.
-* **Retry duration**: Determines how long Firehose continues retrying the request in the event of an error. A duration of 60-300 seconds should be suitable for most use cases.
-* **Parameters**:
+* **Elastic endpoint URL**: Enter the Elastic endpoint URL of your Elasticsearch cluster. To find the Elasticsearch endpoint, go to the {{ecloud}} Console and select **Connection details**. Make sure the endpoint is in the following format: `https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com`.
+* **API key**: Enter the encoded Elastic API key. This can be created in Kibana by following the instructions under [API Keys](../../../deploy-manage/api-keys.md). If you are using an API key with **Restrict privileges**, make sure to review the Indices privileges to provide at least the `auto_configure` and `write` permissions for the indices you will be using with this delivery stream.
+* **Content encoding**: To reduce data transfer costs, use GZIP encoding.
+* **Retry duration**: Determines how long Firehose continues retrying the request in the event of an error. A duration between 60 and 300 seconds should be suitable for most use cases.
+* **Parameters**:
 
-  * `es_datastream_name`: This parameter is optional and can be used to set the data stream in which documents will be stored. If this parameter is not specified, data is sent to the `logs-awsfirehose-default` data stream by default.
-  * `include_cw_extracted_fields`: This parameter is optional and can be set when using a CloudWatch logs subscription filter as the Firehose data source. When set to true, extracted fields generated by the filter pattern in the subscription filter will be collected. Setting this parameter can add many fields to each record and may significantly increase data volume in Elasticsearch. As such, use of this parameter should be carefully considered and used only when the extracted fields are required for specific filtering and/or aggregation.
-  * `set_es_document_id`: This parameter is optional and determines whether Elasticsearch uses a calculated unique ID or a random ID for each document. Default is true. When set to false, a random ID is used for each document, which helps indexing performance.
+  * `es_datastream_name`: This parameter is optional and can be used to set the data stream in which documents will be stored. When unspecified, logs are stored in the `logs-awsfirehose-default` data stream and metrics are stored in the `metrics-aws.cloudwatch-default` data stream.
+  * `include_cw_extracted_fields`: This parameter is optional and can be set when using a CloudWatch logs subscription filter as the Firehose data source. When set to true, extracted fields generated by the filter pattern in the subscription filter will be collected. Setting this parameter can add many fields to each record and may significantly increase data volume in Elasticsearch. As such, use of this parameter should be carefully considered and used only when the extracted fields are required for specific filtering and/or aggregation.
  * `set_es_document_id`: This parameter is optional and determines whether Elasticsearch uses a calculated unique ID or a random ID for each document. Default is true. When set to false, a random ID is used for each document, which helps indexing performance.
 
-1. In the **Backup settings** panel, it is recommended to configure S3 backup for failed records. It is then possible to configure workflows to automatically retry failed records, for example by using [Elastic Serverless Forwarder](asciidocalypse://docs/elastic-serverless-forwarder/docs/reference/ingestion-tools/esf/index.md).
+* **Backup settings**: It is recommended to configure S3 backup for failed records. These backups can be used to restore data whose ingestion failed because of unforeseen service outages.
@@ … @@
-Verify that your **Elasticsearch endpoint URL** includes `.es.` between the **deployment name** and **region**. Example: `https://my-deployment.es.us-east-1.aws.elastic-cloud.com`
-::::
+* Elastic endpoint URL: The URL that you copied in the previous step.
+* API key: The API key that you created in the previous step.
+* Content encoding: To reduce data transfer costs, use GZIP encoding.
+* Retry duration: A duration between 60 and 300 seconds should be suitable for most use cases.
+* Backup settings: It is recommended to configure S3 backup for failed records. These backups can then be used to restore data whose ingestion failed because of unforeseen service outages.
 
 The Firehose stream is ready to send logs to our Elastic Cloud deployment.
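
Firehose delivers to Elastic through its HTTP endpoint destination type, so the settings described in this file (endpoint URL, API key, content encoding, retry duration, parameters, and S3 backup) map onto a single API call. The following is a minimal, non-authoritative boto3 sketch of that mapping; the stream name, ARNs, endpoint URL, and data stream value are placeholders, not values from this change:

```python
# Illustrative sketch only: creates a Firehose stream that delivers to an
# Elastic endpoint with the settings described above. Names, ARNs, and the
# endpoint URL are placeholders.
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

firehose.create_delivery_stream(
    DeliveryStreamName="elastic-firehose-stream",  # hypothetical name
    DeliveryStreamType="DirectPut",
    HttpEndpointDestinationConfiguration={
        "EndpointConfiguration": {
            "Url": "https://my-deployment.es.us-east-1.aws.elastic-cloud.com",
            "AccessKey": "<encoded_elastic_api_key>",  # the encoded API key
            "Name": "Elastic",
        },
        "RequestConfiguration": {
            # GZIP encoding reduces data transfer costs.
            "ContentEncoding": "GZIP",
            # Optional parameters such as es_datastream_name are passed as
            # common attributes on each request.
            "CommonAttributes": [
                {
                    "AttributeName": "es_datastream_name",
                    "AttributeValue": "logs-awsfirehose-default",
                },
            ],
        },
        # A duration between 60 and 300 seconds suits most use cases.
        "RetryOptions": {"DurationInSeconds": 300},
        # Back up only records that fail delivery, so they can be replayed.
        "S3BackupMode": "FailedDataOnly",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-backup-role",
            "BucketARN": "arn:aws:s3:::my-firehose-backup-bucket",
        },
    },
)
```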
**`solutions/observability/cloud/monitor-cloudtrail-logs.md`** (4 additions, 8 deletions)
@@ -91,6 +91,7 @@ You now have a CloudWatch log group with events coming from CloudTrail. For more
 
 1. Go to the [Elastic Cloud](https://cloud.elastic.co/) console.
 2. Find your deployment in the **Hosted deployments** card and select **Manage**.
 3. Under **Applications** click **Copy endpoint** next to **Elasticsearch**.
+4. Make sure the endpoint is in the following format: `https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com`.
 
 * **To create the API key**:
@@ -102,14 +103,9 @@ You now have a CloudWatch log group with events coming from CloudTrail. For more
 
 * Elastic endpoint URL: The URL that you copied in the previous step.
 * API key: The API key that you created in the previous step.
-* Content encoding: gzip
-* Retry duration: 60 (default)
-* Backup settings: failed data only to s3 bucket
-
-
-::::{important}
-Verify that your **Elasticsearch endpoint URL** includes `.es.` between the **deployment name** and **region**. Example: `https://my-deployment.es.us-east-1.aws.elastic-cloud.com`
-::::
+* Content encoding: To reduce data transfer costs, use GZIP encoding.
+* Retry duration: A duration between 60 and 300 seconds should be suitable for most use cases.
+* Backup settings: It is recommended to configure S3 backup for failed records. These backups can then be used to restore data whose ingestion failed because of unforeseen service outages.
 
 You now have an Amazon Data Firehose delivery stream specified with:
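
The pages now describe the endpoint by its shape rather than by a literal URL, which is what the removed `{important}` callouts were checking. A reader who wants to sanity-check a copied endpoint against the `https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com` template could use a small illustrative check like the one below; the exact set of CSP labels is an assumption:

```python
# Illustrative sketch: checks that a copied Elastic Cloud endpoint matches
# https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com, in
# particular that ".es." separates the deployment name and region. The
# (aws|gcp|azure) CSP set is an assumption, not taken from this change.
import re

ENDPOINT_PATTERN = re.compile(
    r"^https://[a-z0-9-]+\.es\.[a-z0-9-]+\.(aws|gcp|azure)\.elastic-cloud\.com$"
)

def looks_like_es_endpoint(url: str) -> bool:
    return ENDPOINT_PATTERN.match(url) is not None

assert looks_like_es_endpoint(
    "https://my-deployment.es.us-east-1.aws.elastic-cloud.com"
)
assert not looks_like_es_endpoint(
    "https://my-deployment.kb.us-east-1.aws.elastic-cloud.com"  # Kibana, not ES
)
```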
**`solutions/observability/cloud/monitor-cloudwatch-logs.md`** (4 additions, 7 deletions)
@@ -114,21 +114,18 @@ Take note of the log group name for this Lambda function, as you will need it in
 
 1. Go to the [Elastic Cloud](https://cloud.elastic.co/) console.
 2. Find your deployment in the **Hosted deployments** card and select **Manage**.
 3. Under **Applications** click **Copy endpoint** next to **Elasticsearch**.
+4. Make sure the endpoint is in the following format: `https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com`.
 
 * **To create the API key**:
 
 1. Go to the [Elastic Cloud](https://cloud.elastic.co/) console.
 2. Select **Open Kibana**.
 3. Expand the left-hand menu, and under **Management** select **Stack Management > API Keys**, then click **Create API key**. If you are using an API key with **Restrict privileges**, make sure to review the Indices privileges to provide at least the `auto_configure` and `write` permissions for the indices you will be using with this delivery stream.
 
-* **Content encoding**: For better network efficiency, leave content encoding set to GZIP.
-* **Retry duration**: Determines how long Firehose continues retrying the request in the event of an error. A duration of 60-300 seconds should be suitable for most use cases.
+* **Content encoding**: To reduce data transfer costs, use GZIP encoding.
+* **Retry duration**: Determines how long Firehose continues retrying the request in the event of an error. A duration between 60 and 300 seconds should be suitable for most use cases.
 
-
-::::{important}
-Verify that your **Elasticsearch endpoint URL** includes `.es.` between the **deployment name** and **region**. Example: `https://my-deployment.es.us-east-1.aws.elastic-cloud.com`
-::::
+5. It is recommended to configure S3 backup for failed records from the **Backup settings** panel. These backups can be used to restore data whose ingestion failed because of unforeseen service outages.
 
 The Firehose stream is now ready to send logs to your Elastic Cloud deployment.
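
The `auto_configure` and `write` indices privileges called out in the API key step can also be granted when the key is created programmatically rather than through the Kibana UI. Here is a hedged sketch against the Elasticsearch create API key endpoint; the key name, index patterns, endpoint, and credentials are illustrative assumptions, not values from this change:

```python
# Illustrative sketch: creates an encoded Elastic API key restricted to the
# auto_configure and write privileges on the data streams this delivery
# stream will write to. Endpoint, credentials, key name, and index patterns
# are placeholders.
import requests

ES_ENDPOINT = "https://my-deployment.es.us-east-1.aws.elastic-cloud.com"

response = requests.post(
    f"{ES_ENDPOINT}/_security/api_key",
    auth=("elastic", "<password>"),  # any user allowed to manage API keys
    json={
        "name": "firehose-delivery-key",  # hypothetical key name
        "role_descriptors": {
            "firehose_writer": {
                "indices": [
                    {
                        "names": [
                            "logs-awsfirehose-*",
                            "metrics-aws.cloudwatch-*",
                        ],
                        "privileges": ["auto_configure", "write"],
                    }
                ]
            }
        },
    },
    timeout=30,
)
response.raise_for_status()

# Firehose expects the *encoded* form of the key.
print(response.json()["encoded"])
```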