
Commit 77229cc

improve firehose integration documents
Signed-off-by: Kavindu Dodanduwa <[email protected]>
1 parent 73555df commit 77229cc


4 files changed: +24 −36 lines changed


solutions/observability/cloud/monitor-amazon-web-services-aws-with-amazon-data-firehose.md

Lines changed: 10 additions & 10 deletions
@@ -48,19 +48,19 @@ For advanced use cases, source records can be transformed by invoking a custom L
 ## Step 3: Specify the destination settings for your Firehose stream [firehose-step-three]

-1. From the **Destination settings** panel, specify the following settings:
+From the **Destination settings** panel, specify the following settings:

-* **Elastic endpoint URL**: Enter the Elastic endpoint URL of your Elasticsearch cluster. To find the Elasticsearch endpoint, go to the {{ecloud}} Console and select **Connection details**. Here is an example of how it looks like: `https://my-deployment.es.us-east-1.aws.elastic-cloud.com`.
-* **API key**: Enter the encoded Elastic API key. To create an API key, go to the {{ecloud}} Console, select **Connection details** and click **Create and manage API keys**. If you are using an API key with **Restrict privileges**, make sure to review the Indices privileges to provide at least "auto_configure" & "write" permissions for the indices you will be using with this delivery stream.
-* **Content encoding**: For a better network efficiency, leave content encoding set to GZIP.
-* **Retry duration**: Determines how long Firehose continues retrying the request in the event of an error. A duration of 60-300s should be suitable for most use cases.
-* **Parameters**:
+* **Elastic endpoint URL**: Enter the Elastic endpoint URL of your Elasticsearch cluster. To find the Elasticsearch endpoint, go to the {{ecloud}} Console and select **Connection details**. Make sure the endpoint is in the following format: `https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com`.
+* **API key**: Enter the encoded Elastic API key. This can be created in Kibana by following the instructions under [API Keys](../../../deploy-manage/api-keys.md). If you are using an API key with **Restrict privileges**, make sure to review the Indices privileges to provide at least "auto_configure" & "write" permissions for the indices you will be using with this delivery stream.
+* **Content encoding**: To reduce data transfer costs, use GZIP encoding.
+* **Retry duration**: Determines how long Firehose continues retrying the request in the event of an error. A duration between 60 and 300 seconds should be suitable for most use cases.
+* **Parameters**:

-    * `es_datastream_name`: This parameter is optional and can be used to set which data stream documents will be stored. If this parameter is not specified, data is sent to the `logs-awsfirehose-default` data stream by default.
-    * `include_cw_extracted_fields`: This parameter is optional and can be set when using a CloudWatch logs subscription filter as the Firehose data source. When set to true, extracted fields generated by the filter pattern in the subscription filter will be collected. Setting this parameter can add many fields into each record and may significantly increase data volume in Elasticsearch. As such, use of this parameter should be carefully considered and used only when the extracted fields are required for specific filtering and/or aggregation.
-    * `set_es_document_id`: This parameter is optional and can be set to allow Elasticsearch to assign each document a random ID or use a calculated unique ID for each document. Default is true. When set to false, a random ID will be used for each document which will help indexing performance.
+    * `es_datastream_name`: This parameter is optional and can be used to set the data stream where documents will be stored. When unspecified, logs are stored in the `logs-awsfirehose-default` data stream and metrics are stored in the `metrics-aws.cloudwatch-default` data stream.
+    * `include_cw_extracted_fields`: This parameter is optional and can be set when using a CloudWatch logs subscription filter as the Firehose data source. When set to true, extracted fields generated by the filter pattern in the subscription filter will be collected. Setting this parameter can add many fields into each record and may significantly increase data volume in Elasticsearch. As such, use of this parameter should be carefully considered and used only when the extracted fields are required for specific filtering and/or aggregation.
+    * `set_es_document_id`: This parameter is optional and determines whether Elasticsearch uses a calculated unique ID or a random ID for each document. Default is true. When set to false, a random ID is used for each document, which helps indexing performance.

-1. In the **Backup settings** panel, it is recommended to configure S3 backup for failed records. It's then possible to configure workflows to automatically retry failed records, for example by using [Elastic Serverless Forwarder](asciidocalypse://docs/elastic-serverless-forwarder/docs/reference/ingestion-tools/esf/index.md).
+* **Backup settings**: It is recommended to configure S3 backup for failed records. These backups can be used to restore data losses caused by unforeseen service outages.
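The destination settings above map onto Firehose's HTTP endpoint destination. Here is a minimal sketch of that configuration, assuming the request shape accepted by boto3's `firehose.create_delivery_stream` (`HttpEndpointDestinationConfiguration`); the endpoint, API key, role, and bucket values are placeholders, not real credentials:

```python
# Sketch of the destination settings described above, expressed as the
# HttpEndpointDestinationConfiguration dict that boto3's
# firehose.create_delivery_stream accepts. All identifiers are placeholders.

def build_destination_config(endpoint_url, api_key, datastream=None):
    """Build HTTP endpoint destination settings for an Elastic-bound stream."""
    config = {
        "EndpointConfiguration": {
            "Url": endpoint_url,       # Elastic endpoint URL
            "AccessKey": api_key,      # encoded Elastic API key
            "Name": "Elastic",
        },
        "RequestConfiguration": {
            "ContentEncoding": "GZIP",  # reduces data transfer costs
            "CommonAttributes": [],     # optional parameters go here
        },
        "RetryOptions": {"DurationInSeconds": 300},  # 60-300 s is typical
        "S3BackupMode": "FailedDataOnly",  # back up failed records to S3
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-backup",
            "BucketARN": "arn:aws:s3:::my-firehose-backup",
        },
    }
    if datastream:  # optional es_datastream_name parameter
        config["RequestConfiguration"]["CommonAttributes"].append(
            {"AttributeName": "es_datastream_name", "AttributeValue": datastream}
        )
    return config

cfg = build_destination_config(
    "https://my-deployment.es.us-east-1.aws.elastic-cloud.com",
    "base64-encoded-api-key",
    datastream="logs-awsfirehose-default",
)
```

Passing the dict to `create_delivery_stream` additionally requires a stream name and type; the sketch only illustrates how the panel fields line up with the API shape.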

solutions/observability/cloud/monitor-aws-network-firewall-logs.md

Lines changed: 6 additions & 11 deletions
@@ -65,6 +65,7 @@ Creating a Network Firewall is not trivial and is beyond the scope of this guide
 1. Go to the [Elastic Cloud](https://cloud.elastic.co/) console
 2. Find your deployment in the **Hosted deployments** card and select **Manage**.
 3. Under **Applications** click **Copy endpoint** next to **Elasticsearch**.
+4. Make sure the endpoint is in the following format: `https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com`.

 * **To create the API key**:

@@ -74,17 +75,11 @@ Creating a Network Firewall is not trivial and is beyond the scope of this guide
 4. Set up the delivery stream by specifying the following data:

-    * Elastic endpoint URL
-    * API key
-    * Content encoding: gzip
-    * Retry duration: 60 (default)
-    * Parameter **es_datastream_name** = `logs-aws.firewall_logs-default`
-    * Backup settings: failed data only to S3 bucket
-
-::::{important}
-Verify that your **Elasticsearch endpoint URL** includes `.es.` between the **deployment name** and **region**. Example: `https://my-deployment.es.us-east-1.aws.elastic-cloud.com`
-::::
+    * Elastic endpoint URL: The URL that you copied in the previous step.
+    * API key: The API key that you created in the previous step.
+    * Content encoding: To reduce data transfer costs, use GZIP encoding.
+    * Retry duration: A duration between 60 and 300 seconds should be suitable for most use cases.
+    * Backup settings: It is recommended to configure S3 backup for failed records. These backups can then be used to restore failed data ingestion caused by unforeseen service outages.

 The Firehose stream is ready to send logs to our Elastic Cloud deployment.
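These guides repeatedly stress that the endpoint must follow `https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com`. A quick sketch of that check, assuming `aws`, `gcp`, and `azure` as the possible `<csp>` values (an illustrative assumption, not an official list):

```python
import re

# Sketch of the endpoint-format check described above. The <csp> set
# (aws, gcp, azure) is an assumption for illustration.
ENDPOINT_RE = re.compile(
    r"^https://(?P<deployment>[^.]+)\.es\."
    r"(?P<region>[^.]+)\.(?P<csp>aws|gcp|azure)\.elastic-cloud\.com$"
)

def is_valid_elastic_endpoint(url: str) -> bool:
    """Return True when the URL matches the expected Elastic Cloud format."""
    return ENDPOINT_RE.match(url) is not None

print(is_valid_elastic_endpoint("https://my-deployment.es.us-east-1.aws.elastic-cloud.com"))  # True
print(is_valid_elastic_endpoint("https://my-deployment.us-east-1.aws.elastic-cloud.com"))     # False: missing .es.
```

The second example is the failure mode the old `{important}` callouts warned about: an endpoint copied without the `.es.` segment between deployment name and region.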

solutions/observability/cloud/monitor-cloudtrail-logs.md

Lines changed: 4 additions & 8 deletions
@@ -91,6 +91,7 @@ You now have a CloudWatch log group with events coming from CloudTrail. For more
 1. Go to the [Elastic Cloud](https://cloud.elastic.co/) console
 2. Find your deployment in the **Hosted deployments** card and select **Manage**.
 3. Under **Applications** click **Copy endpoint** next to **Elasticsearch**.
+4. Make sure the endpoint is in the following format: `https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com`.

 * **To create the API key**:

@@ -102,14 +103,9 @@ You now have a CloudWatch log group with events coming from CloudTrail. For more
 * Elastic endpoint URL: The URL that you copied in the previous step.
 * API key: The API key that you created in the previous step.
-* Content encoding: gzip
-* Retry duration: 60 (default)
-* Backup settings: failed data only to s3 bucket
-
-::::{important}
-Verify that your **Elasticsearch endpoint URL** includes `.es.` between the **deployment name** and **region**. Example: `https://my-deployment.es.us-east-1.aws.elastic-cloud.com`
-::::
+* Content encoding: To reduce data transfer costs, use GZIP encoding.
+* Retry duration: A duration between 60 and 300 seconds should be suitable for most use cases.
+* Backup settings: It is recommended to configure S3 backup for failed records. These backups can then be used to restore failed data ingestion caused by unforeseen service outages.

 You now have an Amazon Data Firehose delivery specified with:

solutions/observability/cloud/monitor-cloudwatch-logs.md

Lines changed: 4 additions & 7 deletions
@@ -114,21 +114,18 @@ Take note of the log group name for this Lambda function, as you will need it in
 1. Go to the [Elastic Cloud](https://cloud.elastic.co/) console
 2. Find your deployment in the **Hosted deployments** card and select **Manage**.
 3. Under **Applications** click **Copy endpoint** next to **Elasticsearch**.
+4. Make sure the endpoint is in the following format: `https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com`.

 * **To create the API key**:

 1. Go to the [Elastic Cloud](https://cloud.elastic.co/) console
 2. Select **Open Kibana**.
 3. Expand the left-hand menu, under **Management** select **Stack management > API Keys** and click **Create API key**. If you are using an API key with **Restrict privileges**, make sure to review the Indices privileges to provide at least `auto_configure` and `write` permissions for the indices you will be using with this delivery stream.

-* **Content encoding**: For a better network efficiency, leave content encoding set to GZIP.
-* **Retry duration**: Determines how long Firehose continues retrying the request in the event of an error. A duration of 60-300s should be suitable for most use cases.
-* **es_datastream_name**: `logs-aws.generic-default`
+* **Content encoding**: To reduce data transfer costs, use GZIP encoding.
+* **Retry duration**: Determines how long Firehose continues retrying the request in the event of an error. A duration between 60 and 300 seconds should be suitable for most use cases.

-::::{important}
-Verify that your **Elasticsearch endpoint URL** includes `.es.` between the **deployment name** and **region**. Example: `https://my-deployment.es.us-east-1.aws.elastic-cloud.com`
-::::
+5. It is recommended to configure S3 backup for failed records from the **Backup settings** panel. These backups can be used to restore data losses caused by unforeseen service outages.

 The Firehose stream is now ready to send logs to your Elastic Cloud deployment.
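The restricted API key described in step 3 (at least `auto_configure` and `write` on the stream's indices) can also be expressed as a request body for Elasticsearch's Create API key endpoint (`POST /_security/api_key`). A sketch, where the key name and index pattern are illustrative placeholders:

```python
import json

# Sketch of a restricted-privileges API key request, following the shape of
# the Elasticsearch Create API key endpoint (POST /_security/api_key).
# Key name and index pattern are illustrative placeholders.
api_key_request = {
    "name": "firehose-ingest",
    "role_descriptors": {
        "firehose_writer": {
            "indices": [
                {
                    # At minimum, auto_configure and write on the indices
                    # this delivery stream will use.
                    "names": ["logs-awsfirehose-*"],
                    "privileges": ["auto_configure", "write"],
                }
            ]
        }
    },
}

print(json.dumps(api_key_request, indent=2))
```

The `encoded` field of the response should be what the Firehose **API key** setting expects, since these guides call for the encoded form of the key.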
