docs/integrations/amazon-aws/amazon-opensearch.md
Amazon OpenSearch Service is a managed service that makes it easy to deploy, operate, and scale OpenSearch clusters in the AWS Cloud. An OpenSearch Service domain is synonymous with an OpenSearch cluster. Domains are clusters with the settings, instance types, instance counts, and storage resources that you specify.
The Sumo Logic app for Amazon OpenSearch collects CloudWatch logs, CloudWatch metrics, and CloudTrail logs in a unified logs and metrics app that provides insight into the operations and utilization of your OpenSearch service. The preconfigured dashboards help you monitor key metrics by domain name and node, view OpenSearch events and activities, and plan the capacity of your OpenSearch service.
## Collect logs and metrics for the Amazon OpenSearch app
This section has instructions for collecting logs and metrics for the Amazon OpenSearch Service app.
### Collect CloudWatch Logs
Sumo Logic supports several methods for collecting logs from Amazon CloudWatch:
- **AWS Kinesis Firehose for Logs**. Configure an [AWS Kinesis Firehose for Logs](/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-logs-source/#create-an-aws-kinesis-firehose-for-logssource) (Recommended); or
- **Lambda Log Forwarder**. Configure collection of Amazon CloudWatch Logs with our AWS Lambda function using a Sumo Logic-provided CloudFormation template, as described in [Amazon CloudWatch Logs](/docs/send-data/collect-from-other-data-sources/amazon-cloudwatch-logs/), or configure collection without CloudFormation, as described in [Collect Amazon CloudWatch Logs using a Lambda Function](/docs/send-data/collect-from-other-data-sources/amazon-cloudwatch-logs/collect-with-lambda-function/).<br/>
- While configuring the CloudWatch log source, the following fields can be added to the source:
    - Add an **account** field and assign it a friendly name/alias for the AWS account from which you are collecting logs. Logs can then be queried via the **account** field.
- Add a **region** field and assign it the value of the respective AWS region where the **OpenSearch** domain exists.
- Add an **accountId** field and assign it the value of the respective AWS account ID that is being used.
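Once these fields are set on the source, log searches can be scoped by them. A minimal sketch (the `prod` alias and `us-east-1` region below are placeholder values, not values from this guide):

```sql
// Placeholder field values; substitute the alias and region you configured above
account=prod region=us-east-1
| count by _sourceCategory
```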
* **Source Category**. Enter a source category. For example, enter `aws/observability/CloudTrail/logs`.
* **Fields**. Add an **account** field and assign it a friendly name/alias for the AWS account from which you are collecting logs. Logs can be queried using the **account** field.
* **Access Key ID and Secret Access Key**. Enter your Amazon [Access Key ID and Secret Access Key](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html). Learn how to use Role-based access to AWS [here](/docs/send-data/hosted-collectors/amazon-aws/aws-sources/).
* **Log File Discovery > Scan Interval**. Use the default of 5 minutes. Alternatively, enter the frequency. Sumo Logic will scan your S3 bucket for new data. Learn how to configure **Log File Discovery** [here](/docs/send-data/hosted-collectors/amazon-aws/aws-sources/).
* **Enable Timestamp Parsing**. Select the **Extract timestamp information from log file entries** check box.
* **Time Zone**. Select **Ignore time zone from the log file and instead use**, and select **UTC** from the dropdown.
* **Timestamp Format**. Select **Automatically detect the format**.
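After the CloudTrail S3 source is configured, you can verify that OpenSearch management events are arriving with a quick log search. A hedged sketch, assuming the example source category used above (`es.amazonaws.com` is the CloudTrail event source for the OpenSearch Service):

```sql
// Assumes the example source category from this guide
_sourceCategory=aws/observability/CloudTrail/logs "es.amazonaws.com"
| json "eventName", "eventSource"
| where eventSource = "es.amazonaws.com"
| count by eventName
```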
### Field in Field Schema
1. [New UI](/docs/get-started/sumo-logic-ui/). In the main Sumo Logic menu, select **Data Management**, and then under **Logs** select **Fields**. You can also click the **Go To...** menu at the top of the screen and select **Fields**. <br/>[Classic UI](/docs/get-started/sumo-logic-ui-classic/). In the main Sumo Logic menu, select **Manage Data > Logs > Fields**.
2. Search for the `domainname` field.
3. If not present, create it. Learn how to create and manage fields [here](/docs/manage/fields/#manage-fields).
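With the `domainname` field in place, incoming OpenSearch logs can be grouped by domain. A minimal sketch (the source category below is an assumed example, not a value from this guide):

```sql
// Assumed source category; substitute your own
_sourceCategory=aws/observability/cloudwatch/logs domainname=*
| count by domainname
```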
If you have a centralized collection of CloudTrail logs and ingest them from all accounts into a single Sumo Logic CloudTrail log source, create the following Field Extraction Rule to map a friendly name/alias to each AWS account. Create it if not already present, or update it as required.
* **Parse Expression**: Enter a parse expression to create an “account” field that maps to the alias you set for each sub-account. For example, if you used the “dev” alias for an AWS account with ID "528560886094" and the “prod” alias for an AWS account with ID "567680881046", your parse expression would look like:
```sql
| json "recipientAccountId"
// Manually map your AWS account ID with the AWS account alias you set up earlier for the individual child account
| "" as account
| if (recipientAccountId = "528560886094", "dev", account) as account
| if (recipientAccountId = "567680881046", "prod", account) as account
```
The **Amazon OpenSearch - Error Logs - Garbage Collection** dashboard provides an overview of garbage collection activity across your OpenSearch domains.
Use this dashboard to:
* Monitor the average garbage collection time, by overall and domain name, with a trend graph to track changes over time.
* Analyze average cleanup size and trends to understand the efficiency of the garbage collection process across different domains.
* Compare garbage collection counts across different nodes and domains, helping to identify any imbalances or potential issues in specific parts of the cluster.
* Visualize JVM memory usage before and after garbage collection, providing insights into the effectiveness of memory management and potential memory leaks.
## Create monitors for Amazon OpenSearch app

import CreateMonitors from '../../reuse/apps/create-monitors.md';

<CreateMonitors/>

| Name | Description | Trigger Condition | Resolution Condition |
|:--|:--|:--|:--|
| `Amazon OpenSearch - Cluster Write Operations Blocked` | This alert is triggered when the cluster is actively blocking write requests. This is a severe state that means applications cannot write new data to the cluster. | Count >= 1 | Count < 1 |
| `Amazon OpenSearch - High Indexing Latency` | This alert is triggered when the average indexing latency exceeds 1000 ms, indicating a potential bottleneck in data ingestion pipelines. | Count > 1000 | Count <= 1000 |
| `Amazon OpenSearch - High JVM Memory Pressure` | This alert is triggered when JVM Memory Pressure is consistently high, which can lead to long garbage collection pauses, slow performance, and potential out-of-memory errors. | Count >= 95 | Count < 95 |
| `Amazon OpenSearch - Sustained High CPU Utilization` | This alert is triggered when CPU utilization is consistently high (>= 80%) for 15 minutes, indicating an overloaded cluster that may lead to performance degradation. | Count >= 80 | Count < 80 |
| `Amazon OpenSearch - Unassigned Primary Shards Detected` | This alert is triggered when the cluster status is critical, indicating that at least one primary shard and its replicas are not allocated. This signifies a non-functional cluster and potential data loss. | Count >= 1 | Count < 1 |
| `Amazon OpenSearch - Unassigned Replica Shards Detected` | This alert is triggered when all primary shards are active but one or more replica shards are not allocated to a node. This compromises high availability. | Count >= 1 | Count < 1 |
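Before enabling a monitor, you can eyeball the metric it watches with an ad hoc metrics query. A hedged sketch for the JVM memory pressure condition, assuming metrics are collected from the `AWS/ES` CloudWatch namespace (metric and dimension names follow the CloudWatch metrics for the OpenSearch Service):

```sql
// Assumed namespace and metric names; adjust to match your metric source
Namespace=AWS/ES metric=JVMMemoryPressure | max by DomainName
```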
## Upgrade/Downgrade the Amazon OpenSearch app (Optional)
import AppUpdate from '../../reuse/apps/app-update.md';
import AppUninstall from '../../reuse/apps/app-uninstall.md';
<AppUninstall/>