@@ -48,19 +48,19 @@ For advanced use cases, source records can be transformed by invoking a custom Lambda function
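
As context for that transformation hook: a Firehose processing Lambda receives base64-encoded records and must return each one with a `result` status. A minimal pass-through sketch in Python, assuming only the standard Firehose transformation event contract (the transformation logic itself is a hypothetical placeholder):

```python
import base64

def handler(event, context):
    """Return every record unchanged, marked as successfully processed."""
    output = []
    for record in event["records"]:
        payload = base64.b64decode(record["data"])
        # ... transform `payload` here for your advanced use case ...
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(payload).decode("utf-8"),
        })
    return {"records": output}
```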

## Step 3: Specify the destination settings for your Firehose stream [firehose-step-three]

-1. From the **Destination settings** panel, specify the following settings:
+From the **Destination settings** panel, specify the following settings:

-* **Elastic endpoint URL**: Enter the Elastic endpoint URL of your Elasticsearch cluster. To find the Elasticsearch endpoint, go to the {{ecloud}} Console and select **Connection details**. Here is an example of how it looks like: `https://my-deployment.es.us-east-1.aws.elastic-cloud.com`.
-* **API key**: Enter the encoded Elastic API key. To create an API key, go to the {{ecloud}} Console, select **Connection details** and click **Create and manage API keys**. If you are using an API key with **Restrict privileges**, make sure to review the Indices privileges to provide at least "auto_configure" & "write" permissions for the indices you will be using with this delivery stream.
-* **Content encoding**: For a better network efficiency, leave content encoding set to GZIP.
-* **Retry duration**: Determines how long Firehose continues retrying the request in the event of an error. A duration of 60-300s should be suitable for most use cases.
-* **Parameters**:
+* **Elastic endpoint URL**: Enter the Elastic endpoint URL of your Elasticsearch cluster. To find the Elasticsearch endpoint, go to the {{ecloud}} Console and select **Connection details**. Make sure the endpoint is in the following format: `https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com`.
+* **API key**: Enter the encoded Elastic API key. You can create one in Kibana by following the instructions under [API Keys](../../../deploy-manage/api-keys.md). If you are using an API key with **Restrict privileges**, make sure to review the Indices privileges to grant at least the `auto_configure` and `write` permissions for the indices you will be using with this delivery stream.
+* **Content encoding**: To reduce data transfer costs, use GZIP encoding.
+* **Retry duration**: Determines how long Firehose continues retrying the request in the event of an error. A duration between 60 and 300 seconds should be suitable for most use cases.
+* **Parameters**:

-* `es_datastream_name`: This parameter is optional and can be used to set which data stream documents will be stored. If this parameter is not specified, data is sent to the `logs-awsfirehose-default` data stream by default.
-* `include_cw_extracted_fields`: This parameter is optional and can be set when using a CloudWatch logs subscription filter as the Firehose data source. When set to true, extracted fields generated by the filter pattern in the subscription filter will be collected. Setting this parameter can add many fields into each record and may significantly increase data volume in Elasticsearch. As such, use of this parameter should be carefully considered and used only when the extracted fields are required for specific filtering and/or aggregation.
-* `set_es_document_id`: This parameter is optional and can be set to allow Elasticsearch to assign each document a random ID or use a calculated unique ID for each document. Default is true. When set to false, a random ID will be used for each document which will help indexing performance.
+* `es_datastream_name`: This parameter is optional and can be used to set the data stream where documents will be stored. If not specified, logs are stored in the `logs-awsfirehose-default` data stream and metrics in the `metrics-aws.cloudwatch-default` data stream.
+* `include_cw_extracted_fields`: This parameter is optional and can be set when using a CloudWatch logs subscription filter as the Firehose data source. When set to true, extracted fields generated by the filter pattern in the subscription filter will be collected. Setting this parameter can add many fields to each record and may significantly increase data volume in Elasticsearch. Therefore, use this parameter carefully and only when the extracted fields are required for specific filtering and/or aggregation.
+* `set_es_document_id`: This parameter is optional and determines whether Elasticsearch uses a calculated unique ID for each document or assigns a random one. The default is true. When set to false, a random ID is used for each document, which helps indexing performance.

-1. In the **Backup settings** panel, it is recommended to configure S3 backup for failed records. It’s then possible to configure workflows to automatically retry failed records, for example by using [Elastic Serverless Forwarder](asciidocalypse://docs/elastic-serverless-forwarder/docs/reference/index.md).
+* **Backup settings**: It is recommended to configure S3 backup for failed records. These backups can be used to recover data lost during unforeseen service outages. A scripted version of these settings is sketched after this list.
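
For reference, here is how those destination settings fit together when creating the stream programmatically. A minimal sketch with boto3 (the stream name, endpoint, API key, and ARNs are hypothetical placeholders):

```python
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

firehose.create_delivery_stream(
    DeliveryStreamName="elastic-firehose-stream",
    DeliveryStreamType="DirectPut",
    HttpEndpointDestinationConfiguration={
        "EndpointConfiguration": {
            "Url": "https://my-deployment.es.us-east-1.aws.elastic-cloud.com",
            "AccessKey": "<encoded-elastic-api-key>",
            "Name": "Elastic",
        },
        "RequestConfiguration": {
            "ContentEncoding": "GZIP",  # reduces data transfer costs
            "CommonAttributes": [
                # Optional parameters described above
                {"AttributeName": "es_datastream_name",
                 "AttributeValue": "logs-awsfirehose-default"},
            ],
        },
        "RetryOptions": {"DurationInSeconds": 300},  # 60-300 s suits most cases
        "S3BackupMode": "FailedDataOnly",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-backup-role",
            "BucketARN": "arn:aws:s3:::my-firehose-backup-bucket",
        },
    },
)
```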



@@ -65,6 +65,7 @@ Creating a Network Firewall is not trivial and is beyond the scope of this guide
1. Go to the [Elastic Cloud](https://cloud.elastic.co/) console
2. Find your deployment in the **Hosted deployments** card and select **Manage**.
3. Under **Applications** click **Copy endpoint** next to **Elasticsearch**.
+4. Make sure the endpoint is in the following format: `https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com`.

* **To create the API key**:

@@ -74,17 +74,11 @@

4. Set up the delivery stream by specifying the following data:

-* Elastic endpoint URL
-* API key
-* Content encoding: gzip
-* Retry duration: 60 (default)
-* Parameter **es_datastream_name** = `logs-aws.firewall_logs-default`
-* Backup settings: failed data only to S3 bucket

-::::{important}
-Verify that your **Elasticsearch endpoint URL** includes `.es.` between the **deployment name** and **region**. Example: `https://my-deployment.es.us-east-1.aws.elastic-cloud.com`
-::::
+* Elastic endpoint URL: The URL that you copied in the previous step (a quick format check is sketched after this list).
+* API key: The API key that you created in the previous step.
+* Content encoding: To reduce data transfer costs, use GZIP encoding.
+* Retry duration: A duration between 60 and 300 seconds should be suitable for most use cases.
+* Backup settings: It is recommended to configure S3 backup for failed records. These backups can then be used to recover data that failed to be ingested during unforeseen service outages.
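
Before pasting the endpoint into the Firehose console, you can sanity-check its shape. A small sketch that mirrors the `https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com` pattern described above (it assumes the usual `aws`/`gcp`/`azure` provider identifiers):

```python
import re

# Mirrors https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com
# Some endpoints may carry an explicit port (e.g. :443); extend if needed.
ENDPOINT_PATTERN = re.compile(
    r"^https://[a-z0-9-]+\.es\.[a-z0-9-]+\.(aws|gcp|azure)\.elastic-cloud\.com$"
)

def is_valid_elastic_endpoint(url: str) -> bool:
    """Return True if the URL looks like an Elastic Cloud Elasticsearch endpoint."""
    return ENDPOINT_PATTERN.match(url) is not None

print(is_valid_elastic_endpoint(
    "https://my-deployment.es.us-east-1.aws.elastic-cloud.com"))  # True
```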


The Firehose stream is ready to send logs to your Elastic Cloud deployment.
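
To smoke-test the stream end to end, you can push a single record and confirm it arrives in Elasticsearch shortly after. A sketch; the stream name is a hypothetical placeholder:

```python
import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# Hypothetical stream name -- use the one you created above.
firehose.put_record(
    DeliveryStreamName="elastic-firewall-stream",
    Record={"Data": json.dumps({"message": "firehose smoke test"}).encode() + b"\n"},
)
```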
12 changes: 4 additions & 8 deletions solutions/observability/cloud/monitor-cloudtrail-logs.md
@@ -91,6 +91,7 @@ You now have a CloudWatch log group with events coming from CloudTrail. For more
1. Go to the [Elastic Cloud](https://cloud.elastic.co/) console
2. Find your deployment in the **Hosted deployments** card and select **Manage**.
3. Under **Applications** click **Copy endpoint** next to **Elasticsearch**.
+4. Make sure the endpoint is in the following format: `https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com`.

* **To create the API key**:

@@ -102,14 +102,9 @@

* Elastic endpoint URL: The URL that you copied in the previous step.
* API key: The API key that you created in the previous step.
-* Content encoding: gzip
-* Retry duration: 60 (default)
-* Backup settings: failed data only to s3 bucket

-::::{important}
-Verify that your **Elasticsearch endpoint URL** includes `.es.` between the **deployment name** and **region**. Example: `https://my-deployment.es.us-east-1.aws.elastic-cloud.com`
-::::
+* Content encoding: To reduce data transfer costs, use GZIP encoding.
+* Retry duration: A duration between 60 and 300 seconds should be suitable for most use cases.
+* Backup settings: It is recommended to configure S3 backup for failed records. These backups can then be used to recover data that failed to be ingested during unforeseen service outages. A spot-check query to confirm delivery is sketched after this list.
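
Once records are flowing, a quick way to confirm CloudTrail events are landing is to query the default data stream with the Elasticsearch Python client. A sketch; the endpoint and API key are the hypothetical values from the steps above:

```python
from elasticsearch import Elasticsearch

# Hypothetical endpoint and API key -- use the values you copied earlier.
es = Elasticsearch(
    "https://my-deployment.es.us-east-1.aws.elastic-cloud.com",
    api_key="<encoded-elastic-api-key>",
)

# Count recent documents in the default Firehose data stream.
resp = es.search(
    index="logs-awsfirehose-default",
    size=1,
    query={"range": {"@timestamp": {"gte": "now-15m"}}},
)
print("hits in the last 15 minutes:", resp["hits"]["total"]["value"])
```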


You now have an Amazon Data Firehose delivery specified with:
11 changes: 4 additions & 7 deletions solutions/observability/cloud/monitor-cloudwatch-logs.md
@@ -114,21 +114,18 @@ Take note of the log group name for this Lambda function, as you will need it in
1. Go to the [Elastic Cloud](https://cloud.elastic.co/) console
2. Find your deployment in the **Hosted deployments** card and select **Manage**.
3. Under **Applications** click **Copy endpoint** next to **Elasticsearch**.
+4. Make sure the endpoint is in the following format: `https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com`.

* **To create the API key**:

1. Go to the [Elastic Cloud](https://cloud.elastic.co/) console
2. Select **Open Kibana**.
3. Expand the left-hand menu, under **Management** select **Stack management > API Keys** and click **Create API key**. If you are using an API key with **Restrict privileges**, make sure to review the Indices privileges to provide at least `auto_configure` and `write` permissions for the indices you will be using with this delivery stream.

-* **Content encoding**: For a better network efficiency, leave content encoding set to GZIP.
-* **Retry duration**: Determines how long Firehose continues retrying the request in the event of an error. A duration of 60-300s should be suitable for most use cases.
-* **es_datastream_name**: `logs-aws.generic-default`
+* **Content encoding**: To reduce data transfer costs, use GZIP encoding.
+* **Retry duration**: Determines how long Firehose continues retrying the request in the event of an error. A duration between 60 and 300 seconds should be suitable for most use cases.


-::::{important}
-Verify that your **Elasticsearch endpoint URL** includes `.es.` between the **deployment name** and **region**. Example: `https://my-deployment.es.us-east-1.aws.elastic-cloud.com`
-::::
+5. It is recommended to configure S3 backup for failed records from the **Backup settings** panel. These backups can be used to recover data lost during unforeseen service outages. The remaining wiring from the CloudWatch log group to the stream is sketched below.
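
After the stream exists, the CloudWatch log group still needs a subscription filter pointing at it. A minimal sketch with boto3 (the log group, stream ARN, and IAM role are hypothetical placeholders; the role must allow CloudWatch Logs to call `firehose:PutRecord`):

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Hypothetical names/ARNs -- substitute your log group, stream, and role.
logs.put_subscription_filter(
    logGroupName="/aws/lambda/my-function",
    filterName="firehose-to-elastic",
    filterPattern="",  # an empty pattern forwards every log event
    destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/elastic-firehose-stream",
    roleArn="arn:aws:iam::123456789012:role/cwl-to-firehose-role",
)
```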


The Firehose stream is now ready to send logs to your Elastic Cloud deployment.