From d440061733e86b7159167c9bbf10bf2f661a4815 Mon Sep 17 00:00:00 2001 From: Himanshu Sharma Date: Tue, 1 Oct 2024 22:21:26 +0530 Subject: [PATCH 1/7] AWS opensearch docs --- .../amazon-aws/amazon-opensearch-service.md | 349 +++++++++++++++++- 1 file changed, 337 insertions(+), 12 deletions(-) diff --git a/docs/integrations/amazon-aws/amazon-opensearch-service.md b/docs/integrations/amazon-aws/amazon-opensearch-service.md index b2a32ceedd..b865b8210c 100644 --- a/docs/integrations/amazon-aws/amazon-opensearch-service.md +++ b/docs/integrations/amazon-aws/amazon-opensearch-service.md @@ -8,20 +8,345 @@ import useBaseUrl from '@docusaurus/useBaseUrl'; Thumbnail icon -Amazon OpenSearch Service is a managed service that makes it easy to deploy, operate, and scale OpenSearch clusters in the AWS Cloud. Amazon OpenSearch Service supports OpenSearch and legacy Elasticsearch OSS (up to 7.10, the final open source version of the software). When you create a cluster, you have the option of which search engine to use. For more details, refer to the [AWS documentation](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/what-is.html). +Amazon OpenSearch Service is a managed service that makes it easy to deploy, operate, and scale OpenSearch clusters in the AWS Cloud. An OpenSearch Service domain is synonymous with an OpenSearch cluster. Domains are clusters with the settings, instance types, instance counts, and storage resources that you specify. 
-## Log and metric types -* [CloudWatch Metrics](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-cloudwatchmetrics.html) -* [CloudWatch Logs](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/createdomain-configure-slow-logs.html) -* [CloudTrail Logs](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-cloudtrailauditing.html) +The Sumo Logic app for Amazon OpenSearch collects CloudWatch logs, CloudWatch metrics and CloudTrail logs, provides a unified logs and metrics app that provides insights into the operations and utilization of your OpenSearch service. The preconfigured dashboards help you monitor the key metrics by domain names and nodes, view the OpenSearch events for activities, and help you plan the capacity of your OpenSearch service. -## Setup -You can collect the logs and metrics for Sumo Logic's Amazon OpenSearch Service integration by following the below steps. +## **Log and Metrics types[​](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/monitoring.html)** -### Configure metrics collection -* Collect **CloudWatch Metrics** with namespace `AWS/ES` using the [AWS Kinesis Firehose for Metrics](/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-metrics-source/) source. For `AWS/ES` metrics and dimensions, refer to [Amazon OpenSearch Service CloudWatch metrics](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-cloudwatchmetrics.html). +The Sumo Logic app for Amazon OpenSearch uses: -### Configure logs collection -* Collect [Amazon CloudWatch Logs](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/createdomain-configure-slow-logs.html) using [AWS Kinesis Firehose for Logs](/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-logs-source/) source. Amazon OpenSearch Service exposes Error logs, Slow logs and Audit logs through Amazon CloudWatch Logs. 
Search slow logs, indexing slow logs, and error logs are useful for troubleshooting performance and stability issues. [Audit logs](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/audit-logs.html) track user activity for compliance purposes. +* OpenSearch CloudWatch Logs. For details, see [here](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/createdomain-configure-slow-logs.html). +* OpenSearch CloudWatch Metrics. For details, see [here](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-cloudwatchmetrics.html). +* OpenSearch using AWS CloudTrail. For details, see [here](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-cloudtrailauditing.html). -* Collect [AWS CloudTrail Logs](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-cloudtrailauditing.html) using [AWS CloudTrail](/docs/send-data/hosted-collectors/amazon-aws/aws-cloudtrail-source/) source. Amazon OpenSearch Service integrates with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in OpenSearch Service. CloudTrail captures all configuration API calls for OpenSearch Service as events. The captured calls include calls from the OpenSearch Service console, AWS CLI, or an AWS SDK. If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for OpenSearch Service. Using the information collected by CloudTrail, you can determine the request that was made to OpenSearch Service, the IP address from which the request was made, who made the request, when it was made, and additional details. 
\ No newline at end of file +### **Sample cloudTrail log messages** + +``` + +{ + "eventVersion": "1.05", + "userIdentity": {...}, + "eventTime": "2018-08-21T22:00:05Z", + "eventSource": "es.amazonaws.com", + "eventName": "CreateDomain", + "awsRegion": "us-west-1", + "sourceIPAddress": "123.123.123.123", + "userAgent": "signin.amazonaws.com", + "requestParameters": { + "engineVersion": "OpenSearch_1.0", + "clusterConfig": { + "instanceType": "m4.large.search", + "instanceCount": 1 + }, + "snapshotOptions": { + "automatedSnapshotStartHour": 0 + }, + "domainName": "test-domain", + "encryptionAtRestOptions": {}, + "eBSOptions": { + "eBSEnabled": true, + "volumeSize": 10, + "volumeType": "gp2" + }, + "accessPolicies": {...}, + "responseElements": { + "domainStatus": { + "created": true, + "clusterConfig": { + "zoneAwarenessEnabled": false, + "instanceType": "m4.large.search", + "dedicatedMasterEnabled": false, + "instanceCount": 1 + }, + "cognitoOptions": { + "enabled": false + }, + "encryptionAtRestOptions": { + "enabled": false + }, + "advancedOptions": { + "rest.action.multi.allow_explicit_index": "true" + }, + "upgradeProcessing": false, + "snapshotOptions": { + "automatedSnapshotStartHour": 0 + }, + "eBSOptions": { + "eBSEnabled": true, + "volumeSize": 10, + "volumeType": "gp2" + }, + "engineVersion": "OpenSearch_1.0", + "processing": true, + "aRN": "arn:aws:es:us-west-1:123456789012:domain/test-domain", + "domainId": "123456789012/test-domain", + "deleted": false, + "domainName": "test-domain", + "accessPolicies": {...}, + "requestID": "12345678-1234-1234-1234-987654321098", + "eventID": "87654321-4321-4321-4321-987654321098", + "eventType": "AwsApiCall", + "recipientAccountId": "123456789012" +} + +``` + +### **Sample queries** + +Successful Events by Event Name + +``` + +account={{account}} region={{region}} namespace=aws/es "\"eventsource\":\"es.amazonaws.com\"" +| json "userIdentity", "eventSource", "eventName", "awsRegion", "sourceIPAddress", "userAgent", 
"eventType", "recipientAccountId", "requestParameters", "responseElements", "requestID", "errorCode", "errorMessage" as userIdentity, event_source, event_name, region, src_ip, user_agent, event_type, recipient_account_id, requestParameters, responseElements, request_id, error_code, error_message nodrop +| where event_source = "es.amazonaws.com" +| json field=userIdentity "accountId", "type", "arn", "userName" as accountid, type, arn, username nodrop +| parse field=arn ":assumed-role/*" as user nodrop +| parse field=arn "arn:aws:iam::*:*" as accountid, user nodrop +| json field=requestParameters "domainName" as domainname nodrop +| if (isBlank(accountid), recipient_account_id, accountid) as accountid +| where (tolowercase(domainname) matches tolowercase("{{domainname}}")) or isBlank(domainname) +| if (isEmpty(error_code), "Success", "Failure") as event_status +| if (isEmpty(username), user, username) as user +| count as event_count by event_name +| sort by event_count, event_name asc +``` + +Write Latency by Domain Name (Metrics-based) + +``` + +account={{account}} region={{region}} namespace=aws/es domainname={{domainname}} !nodeid=* metric=WriteLatency statistic = average | avg by domainname +``` + +## **Collecting logs and metrics for the Amazon OpenSearch app** + +### **Collecting Metrics for Amazon OpenSearch** + +1. Configure a [Hosted Collector](https://help.sumologic.com/docs/send-data/hosted-collectors/configure-hosted-collector/). +2. Configure an [Amazon CloudWatch Source for Metrics](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/amazon-cloudwatch-source-metrics/) or [AWS Kinesis Firehose for Metrics Source](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-metrics-source/) (Recommended). +3. Namespaces. Select aws/es. +4. Metadata. Add an account field to the source and assign it a value that is a friendly name/alias to your AWS account from which you are collecting metrics. 
The account field allows you to query metrics.
+   ![Metadata][image2]
+5. Click Save.
+
+### **Collecting Amazon OpenSearch Events using CloudTrail**
+
+1. Add an [AWS CloudTrail Source](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/aws-cloudtrail-source/) to your Hosted Collector.
+   * Name. Enter a name to display for the new Source.
+   * Description. Enter an optional description.
+   * S3 Region. Select the Amazon Region for your CloudTrail S3 bucket.
+   * Bucket Name. Enter the exact name of your CloudTrail S3 bucket.
+   * Path Expression. Enter the string that matches the S3 objects you'd like to collect. You can use a wildcard (\*) in this string.
+     * DO NOT use a [leading forward slash](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/amazon-path-expressions/).
+     * The S3 bucket name is not part of the path. Don’t include the bucket name when you are setting the Path Expression.
+   * Source Category. Enter a source category. For example, enter `aws/observability/CloudTrail/logs`.
+   * Fields. Add an account field and assign it a value that is a friendly name/alias to your AWS account from which you are collecting logs. Logs can be queried using the account field.
+     ![Fields][image3]
+   * Access Key ID and Secret Access Key. Enter your Amazon [Access Key ID and Secret Access Key](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html). Learn how to use Role-based access to AWS [here](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/aws-sources/).
+   * Log File Discovery \-\> Scan Interval. Use the default of 5 minutes. Alternatively, enter the frequency at which Sumo Logic should scan your S3 bucket for new data. Learn how to configure Log File Discovery [here](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/aws-sources/).
+   * Enable Timestamp Parsing. Select the Extract timestamp information from log file entries check box.
+   * Time Zone. 
Select Ignore time zone from the log file and instead use, and select UTC from the dropdown. + * Timestamp Format. Select Automatically detect the format. + * Enable Multiline Processing. Select the Detect messages spanning multiple lines check box, and select Infer Boundaries. +2. Click Save. + +### **Field in Field Schema** + +1. [Classic UI](https://help.sumologic.com/docs/get-started/sumo-logic-ui-classic/). In the main Sumo Logic menu, select Manage Data \> Logs \> Fields. + [New UI](https://help.sumologic.com/docs/get-started/sumo-logic-ui/). In the top menu select Configuration, and then under Logs select Fields. You can also click the Go To... menu at the top of the screen and select Fields. +2. Search for the `"domainname"` field. +3. If not present, create it. Learn how to create and manage fields [here](https://help.sumologic.com/docs/manage/fields/#manage-fields). + +### **Field Extraction Rule(s)** + +Create a Field Extraction Rule for CloudTrail Logs. Learn how to create a Field Extraction Rule [here](https://help.sumologic.com/docs/manage/field-extractions/create-field-extraction-rule/). 
+ +``` + +Rule Name: AwsObservabilityESCloudTrailLogsFER +Applied at: Ingest Time +Scope (Specific Data): account=* eventname eventsource \"es.amazonaws.com\" +``` + +Parse Expression: + +``` + +| json "userIdentity", "eventSource", "eventName", "awsRegion", "recipientAccountId", "requestParameters", "responseElements" as userIdentity, event_source, event_name, region, recipient_account_id, requestParameters, responseElements nodrop +| where event_source = "es.amazonaws.com" +| json field=userIdentity "accountId", "type", "arn", "userName" as accountid, type, arn, username nodrop +| parse field=arn ":assumed-role/*" as user nodrop +| parse field=arn "arn:aws:iam::*:*" as accountid, user nodrop +| json field=requestParameters "domainName" as domainname +| if (isBlank(accountid), recipient_account_id, accountid) as accountid +| toLowerCase(domainname) as domainname +| "aws/es" as namespace +| fields region, namespace, domainname, accountid +``` + +## **Centralized AWS CloudTrail Log Collection** + +In case, you have a centralized collection of CloudTrail logs and are ingesting them from all accounts into a single Sumo Logic CloudTrail log source, create the following Field Extraction Rule to map a proper AWS account(s) friendly name/alias. Create it if not already present or update it as required. + +* Rule Name: AWS Accounts +* Applied at: Ingest Time +* Scope (Specific Data): `_sourceCategory=aws/observability/cloudtrail/logs` +* Parse Expression: Enter a parse expression to create an “account” field that maps to the alias you set for each sub account. 
For example, if you used the “dev” alias for an AWS account with ID "528560886094" and the “prod” alias for an AWS account with ID "567680881046", your parse expression would look like: + +``` + +| json "recipientAccountId" +// Manually map your aws account id with the AWS account alias you setup earlier for individual child account +| "" as account +| if (recipientAccountId = "528560886094", "dev", account) as account +| if (recipientAccountId = "567680881046", "prod", account) as account +| fields account +``` + +## **Installing the Amazon OpenSearch app** + +Now that you have set up a collection for Amazon OpenSearch, install the Sumo Logic app to use the pre-configured searches and dashboards that provide visibility into your environment for real-time analysis of overall usage. + +To install the app: + +1. Select App Catalog. +2. In the 🔎 Search Apps field, run a search for your desired app, then select it. +3. Click Install App. + note + Sometimes this button says Add Integration. +4. On the next configuration page, under Select Data Source for your App, complete the following fields: + * Data Source. Select one of the following options: + * Choose Source Category and select a source category from the list; or + * Choose Enter a Custom Data Filter, and [enter a custom source category](https://help.sumologic.com/docs/get-started/apps-integrations/#custom-data-filters) beginning with an underscore. For example, `_sourceCategory=MyCategory`. + * Folder Name. You can retain the existing name or enter a custom name of your choice for the app. + * All Folders (optional). The default location is the Personal folder in your Library. If desired, you can choose a different location and/or click New Folder to add it to a new folder. +5. Click Next. +6. Look for the dialog confirming that your app was installed successfully. 
+   ![app-success.png][image4]
+
+Post-installation
+
+Once your app is installed, it will appear in your Personal folder or the folder that you specified. From here, you can share it with other users in your organization. Dashboard panels will automatically start to fill with data matching the time range query received since you created the panel. Results won't be available immediately, but within about 20 minutes, you'll see completed graphs and maps.
+
+## **Viewing Amazon OpenSearch dashboards**
+
+### **Amazon OpenSearch \- Overview**
+
+The OpenSearch \- Overview dashboard provides a comprehensive view of the OpenSearch cluster's health, performance, and resource utilization. It offers real-time insights into cluster status, CPU and memory usage, storage metrics, document management, and read/write latencies across different domains.
+
+Use this dashboard to:
+
+* Monitor the overall health of OpenSearch clusters with color-coded status indicators (green, yellow, red) and quickly identify the number of clusters in each state.
+* Track resource utilization, including average CPU and JVM memory usage, both overall and by individual domain names.
+* Analyze storage trends and capacity, with graphs showing free storage space and total storage used over time for different domains.
+* Keep tabs on document management activities, including the number of searchable documents and deleted documents per domain.
+* Assess system performance by observing read and write latencies across various domain names, helping to identify potential bottlenecks or areas for optimization.
+
+![][image5]
+
+### **Amazon OpenSearch \- Audit Overview**
+
+The Amazon OpenSearch \- Audit Overview dashboard provides insights into CloudTrail events across location, status, and topic names.
+
+Use this dashboard to:
+
+* Monitor successful and failed events by location.
+* Get trends of events by status and type.
+* Monitor successful and error events with error code in detail. 
+* Get details of domain names and users of both successful and error events.
+
+![][image6]
+
+### **Amazon OpenSearch \- Domain Name (Cluster)**
+
+The OpenSearch \- Domain Name (Cluster) dashboard provides a comprehensive view of cluster performance and resource utilization across different domains. It offers insights into node count, CPU and memory usage, request patterns, and storage metrics for OpenSearch clusters.
+
+Use this dashboard to:
+* Monitor the total node count and assess resource utilization with average CPU and JVM memory usage gauges, providing a quick overview of cluster health.
+* Compare CPU and JVM memory utilization across different domains (sumo, sumo-es, aws-test) using hexagon visualizations, helping to identify potential resource imbalances.
+* Track CPU and system memory utilization trends over time for each domain, allowing for the detection of performance anomalies or resource constraints.
+* Analyze OpenSearch request patterns and invalid host header requests by domain, which can help in identifying potential security issues or misconfigurations.
+* Keep an eye on cluster health indicators such as index write blocks and automated snapshot failures, ensuring data integrity and backup processes are functioning correctly.
+
+![][image7]
+
+### **Amazon OpenSearch \- Nodes**
+
+The OpenSearch \- Nodes dashboard provides a detailed view of node-level performance metrics for OpenSearch clusters across different domains. It offers insights into search and indexing operations, threadpool activities, and overall cluster health, allowing for granular monitoring and troubleshooting of OpenSearch nodes.
+
+Use this dashboard to:
+* Compare search and indexing performance across different nodes and domains, with visualizations for search/indexing rates and latencies, helping identify potential bottlenecks or underperforming nodes. 
+* Monitor thread pool activities, including search queue times, rejected requests, and write queue metrics, which are crucial for understanding cluster load and capacity issues.
+* Track OpenSearch Dashboard health metrics, such as max response time, heap utilization, request totals, and concurrent connections, to ensure optimal performance of the user interface.
+* Analyze trends in search and indexing rates over time, allowing for the detection of patterns or anomalies that may impact cluster performance.
+* Assess overall cluster health by comparing metrics across different domains, enabling quick identification of domain-specific issues or imbalances.
+
+![][image8]
+
+### **Amazon OpenSearch \- EBS Volume**
+
+The OpenSearch \- EBS Volume dashboard provides a comprehensive view of the performance metrics for Amazon Elastic Block Store (EBS) volumes associated with OpenSearch clusters. It displays various key performance indicators such as read and write latency, I/O operations per second (IOPS), throughput, burst balance, and disk queue depth.
+
+Use this dashboard to:
+* Monitor read and write latency of EBS volumes to ensure optimal response times for OpenSearch operations.
+* Track read and write IOPS to understand the I/O demand on your EBS volumes and identify any performance constraints.
+* Analyze read and write throughput to assess the data transfer rates and capacity utilization of your EBS volumes.
+* Keep an eye on the burst balance to ensure your EBS volumes have sufficient performance credits for handling sudden spikes in workload.
+* Observe the disk queue depth to identify potential I/O congestion and optimize your storage configuration for better performance.
+
+![][image9]
+
+### **Amazon OpenSearch \- Cache**
+
+The OpenSearch \- Cache dashboard provides insights into cache performance, evictions, capacity, and memory usage, which are crucial for maintaining optimal performance of OpenSearch clusters. 
+
+Use this dashboard to:
+* Tune the performance of OpenSearch clusters.
+* Plan capacity for cache and memory resources.
+* Troubleshoot cache-related issues.
+* Correlate cache metrics with overall system performance.
+
+![][image10]
+
+### **Amazon OpenSearch \- Queries**
+
+The Amazon OpenSearch \- Queries dashboard provides a comprehensive view of query performance and behavior within an OpenSearch environment.
+
+Use this dashboard to:
+* Monitor and analyze slow query performance in OpenSearch.
+* Visualize the distribution of queries over time by log type, helping to identify patterns or spikes in slow query occurrences.
+* Track query hits and shard usage over time, providing insights into overall system load and resource utilization.
+* Identify the top 10 slowest queries, including details such as index name, node ID, execution time, and query source for targeted optimization.
+
+![][image11]
+
+### **Amazon OpenSearch \- Failed Login and Connections**
+
+The OpenSearch \- Failed Login and Connections dashboard provides a comprehensive view of login activities, focusing on failed login attempts and authentication errors. It offers insights into the geographical distribution of failed logins, user-specific login failures, cluster-based login issues, and detailed authentication error logs.
+
+Use this dashboard to:
+* Monitor the total number of failed user logins at a glance, with a prominent display of the count.
+* Visualize the geographical distribution of failed login attempts on a map, helping identify potential security threats or unusual activity patterns from specific regions.
+* Analyze the distribution of login request methods.
+* Track failed logins by specific users and clusters, allowing for quick identification of problematic accounts or system components.
+* Review detailed authentication error logs. 
+
+![][image12]
+
+### **Amazon OpenSearch \- Garbage Collection**
+
+The OpenSearch \- Garbage Collection dashboard provides a comprehensive view of garbage collection (GC) activities in AWS OpenSearch Service. It offers insights into GC performance, memory cleanup, and JVM memory usage across different domains. The dashboard helps monitor and optimize the garbage collection process, which is crucial for maintaining the performance and stability of OpenSearch clusters.
+
+Use this dashboard to:
+
+* Monitor the average garbage collection time overall and by domain name, with a trend graph to track changes over time.
+* Analyze average cleanup size and trends to understand the efficiency of the garbage collection process across different domains.
+* Compare garbage collection counts across different nodes and domains, helping to identify any imbalances or potential issues in specific parts of the cluster.
+* Visualize JVM memory usage before and after garbage collection, providing insights into the effectiveness of memory management and potential memory leaks. 
+ +![][image13] \ No newline at end of file From 79722f02ee901b36b9fca630835c01d76d58d4a7 Mon Sep 17 00:00:00 2001 From: Himanshu Sharma Date: Sat, 19 Oct 2024 22:46:22 +0530 Subject: [PATCH 2/7] Updating Opensearch app docs --- .../amazon-aws/amazon-opensearch-service.md | 329 +++++++++++------- 1 file changed, 208 insertions(+), 121 deletions(-) diff --git a/docs/integrations/amazon-aws/amazon-opensearch-service.md b/docs/integrations/amazon-aws/amazon-opensearch-service.md index b865b8210c..359bf92b61 100644 --- a/docs/integrations/amazon-aws/amazon-opensearch-service.md +++ b/docs/integrations/amazon-aws/amazon-opensearch-service.md @@ -12,7 +12,7 @@ Amazon OpenSearch Service is a managed service that makes it easy to deploy, ope The Sumo Logic app for Amazon OpenSearch collects CloudWatch logs, CloudWatch metrics and CloudTrail logs, provides a unified logs and metrics app that provides insights into the operations and utilization of your OpenSearch service. The preconfigured dashboards help you monitor the key metrics by domain names and nodes, view the OpenSearch events for activities, and help you plan the capacity of your OpenSearch service. -## **Log and Metrics types[​](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/monitoring.html)** +## **Log and Metrics types** The Sumo Logic app for Amazon OpenSearch uses: @@ -20,10 +20,63 @@ The Sumo Logic app for Amazon OpenSearch uses: * OpenSearch CloudWatch Metrics. For details, see [here](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-cloudwatchmetrics.html). * OpenSearch using AWS CloudTrail. For details, see [here](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-cloudtrailauditing.html). -### **Sample cloudTrail log messages** +### **Sample OpenSearch CloudWatch Logs** +
+Click to expand + +```json title="Amazon OpenSearch - Audit Logs - Failed Logins" +{ + "audit_cluster_name":"123456789012:domain", + "audit_node_name":"52f5539f7ec0ff32d343cf6ccfe", + "audit_rest_request_method":"GET", + "audit_category":"FAILED_LOGIN", + "audit_request_origin":"REST", + "audit_node_id":"sSem4l5mS62GrF_16hJG", + "audit_request_layer":"REST", + "audit_rest_request_path":"/_plugins/_security/authinfo", + "@timestamp":"2024-09-20T16:31:50.748+00:00", + "audit_request_effective_user_is_admin":false, + "audit_format_version":4, + "audit_request_remote_address":"123.123.123.123", + "audit_rest_request_headers": + { + "x-opensearch-product-origin":["opensearch-dashboards"], + "Connection":["keep-alive"], + "x-opaque-id":["034d8a88-ec0b-4b89-acbb-62ceb6cb53d9"], + "Host":["localhost:9200"], + "Content-Length":["0"], + "NO_REDACT":["false"] + }, + "audit_request_effective_user":"golfer" + } +``` + +```json title="Amazon OpenSearch - Error Logs - Garbage Collection" +{ + "timestamp":1682935339000, + "message":"[2024-09-21T18:28:01,096][WARN ][o.o.m.j.JvmGcMonitorService] [0cd02afc6f6d01969107ab4daab135b5] [gc][young][552854][3753] duration [1s], collections [1]__PATH__[1.8s], total [1s]__PATH__[21.3m], memory [758.4mb]->[331.7mb]__PATH__[1gb], all_pools {[young] [427mb]->[0b]__PATH__[0b]}{[old] [328.8mb]->[328.8mb]__PATH__[1gb]}{[survivor] [2.6mb]->[2.8mb]__PATH__[0b]}", + "logStream":"flights", + "logGroup":"/aws/OpenSearchService/domains/flights/application-logs" +} ``` +```json title="Amazon OpenSearch - Slow Logs - Queries" +{ + "timestamp":1716444593813, + "message":"[2024-09-20T17:12:48,050][WARN ][index.search.slowlog.query] [0cd02afc6f6d01969107ab4daab135b5] [opensearch_dashboards_sample_data_ecommerce][0] took[208.8micros], took_millis[0], total_hits[0 hits], stats[], search_type[QUERY_THEN_FETCH], total_shards[1], 
source[{"size":0,"timeout":"60000ms","query":{"match_none":{"boost":1.0}},"_source":{"includes":[],"excludes":[]},"stored_fields":"*","docvalue_fields":[{"field":"customer_birth_date","format":"date_time"},{"field":"order_date","format":"date_time"},{"field":"products.created_on","format":"date_time"}],"script_fields":{},"track_total_hits":2147483647,"aggregations":{"2":{"date_histogram":{"field":"order_date","time_zone":"Asia/Calcutta","fixed_interval":"3h","offset":0,"order":{"_key":"asc"},"keyed":false,"min_doc_count":1},"aggregations":{"3":{"terms":{"field":"category.keyword","size":5,"min_doc_count":1,"shard_min_doc_count":0,"show_term_doc_count_error":false,"order":[{"1":"desc"},{"_key":"asc"}]},"aggregations":{"1":{"sum":{"field":"total_quantity"}}}}}}}}], id[1eb71e9e-8cb5-4fcd-a96f-7bd063e92068],", + "logStream":"flights", + "logGroup":"/aws/OpenSearchService/domains/flights/search-logs" +} +``` + +
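The garbage-collection warning in the error-log sample above has a regular structure that the dashboard queries rely on. As a rough illustration outside of Sumo Logic (the field names here are our own, and `__PATH__` stands in for the `/` separator exactly as it appears in the sample message), the same fields can be pulled out with a regular expression:

```python
import re

# Pattern for the JvmGcMonitorService warning shape shown in the sample above.
GC_RE = re.compile(
    r"\[(?P<ts>[^\]]+)\]\[WARN \]\[o\.o\.m\.j\.JvmGcMonitorService\] "
    r"\[(?P<node>[^\]]+)\] \[gc\]\[(?P<gen>[^\]]+)\]\[\d+\]\[\d+\] "
    r"duration \[(?P<duration>[^\]]+)\], collections \[(?P<collections>[^\]]+)\]__PATH__\[[^\]]+\], "
    r"total \[[^\]]+\]__PATH__\[[^\]]+\], "
    r"memory \[(?P<before>[^\]]+)\]->\[(?P<after>[^\]]+)\]__PATH__\[(?P<total>[^\]]+)\]"
)

def parse_gc_warning(message: str):
    """Return GC fields from a JvmGcMonitorService warning, or None if it doesn't match."""
    m = GC_RE.search(message)
    return m.groupdict() if m else None

sample = ("[2024-09-21T18:28:01,096][WARN ][o.o.m.j.JvmGcMonitorService] "
          "[0cd02afc6f6d01969107ab4daab135b5] [gc][young][552854][3753] "
          "duration [1s], collections [1]__PATH__[1.8s], total [1s]__PATH__[21.3m], "
          "memory [758.4mb]->[331.7mb]__PATH__[1gb], all_pools {...}")

print(parse_gc_warning(sample))  # extracted fields, e.g. duration='1s', before='758.4mb'
```

This mirrors the `parse` anchors used by the "Average GC Time" query; a line that does not match the warning shape simply yields `None`.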
+ +### **Sample OpenSearch CloudTrail Logs** + +
+Click to expand +```json title="CloudTrail" { "eventVersion": "1.05", "userIdentity": {...}, @@ -42,7 +95,7 @@ The Sumo Logic app for Amazon OpenSearch uses: "snapshotOptions": { "automatedSnapshotStartHour": 0 }, - "domainName": "test-domain", + "domainName": "flights", "encryptionAtRestOptions": {}, "eBSOptions": { "eBSEnabled": true, @@ -79,26 +132,48 @@ The Sumo Logic app for Amazon OpenSearch uses: }, "engineVersion": "OpenSearch_1.0", "processing": true, - "aRN": "arn:aws:es:us-west-1:123456789012:domain/test-domain", - "domainId": "123456789012/test-domain", + "aRN": "arn:aws:es:us-west-1:123456789012:domain/domainName", + "domainId": "123456789012/flights", "deleted": false, - "domainName": "test-domain", + "domainName": "flights", "accessPolicies": {...}, "requestID": "12345678-1234-1234-1234-987654321098", "eventID": "87654321-4321-4321-4321-987654321098", "eventType": "AwsApiCall", "recipientAccountId": "123456789012" } - ``` +
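The "Successful Events by Event Name" query below classifies a CloudTrail record as a success when it carries no `errorCode`. A minimal Python sketch of that classification (the event records are hypothetical and trimmed to just the fields the logic uses):

```python
from collections import Counter

def event_status(event: dict) -> str:
    # Mirrors the query's logic: if (isEmpty(error_code), "Success", "Failure")
    return "Failure" if event.get("errorCode") else "Success"

def successful_events_by_name(events: list) -> Counter:
    """Count successful OpenSearch Service API calls by eventName."""
    return Counter(
        e.get("eventName", "unknown")
        for e in events
        if e.get("eventSource") == "es.amazonaws.com"
        and event_status(e) == "Success"
    )

# Hypothetical events shaped like the CloudTrail sample above.
events = [
    {"eventSource": "es.amazonaws.com", "eventName": "CreateDomain"},
    {"eventSource": "es.amazonaws.com", "eventName": "CreateDomain",
     "errorCode": "AccessDenied"},
    {"eventSource": "ec2.amazonaws.com", "eventName": "RunInstances"},
]
print(successful_events_by_name(events))  # Counter({'CreateDomain': 1})
```

Note that, as in the query, events from other sources (here the `ec2.amazonaws.com` record) are filtered out before counting.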
### **Sample queries** -Successful Events by Event Name - +```sql title="Average GC Time (Cloud Watch Error Log)" +account=* region=* namespace=aws/es domainname=* "[WARN ][o.o.m.j.JvmGcMonitorService]" +| parse "[*][WARN ][o.o.m.j.JvmGcMonitorService] [*] [gc][young][*][*] duration [*s], collections [*]__PATH__[*], total [*]__PATH__[*], memory [*mb]->[*mb]__PATH__[*], all_pools {*}" as timestamp, node_id, gc_event, gc_event_id, duration, collections, total_duration, total_duration1, total_duration2, memory_before_gc, memory_after_gc, memory_total, pool_details +| parse field=pool_details "[*] [*]->[*]__PATH__[*]" as pool_name, pool_memory_before, pool_memory_after, pool_memory_total +| num(duration) +| sum(duration) as Total_Time, avg(duration) as Avg_Time, max(duration) as Max_Time +| fields Avg_Time ``` - -account={{account}} region={{region}} namespace=aws/es "\"eventsource\":\"es.amazonaws.com\"" +```sql title="Top 5 Slow Queries by Index (Cloud Watch Slow Log)" +account=* region=* namespace=aws/es domainname=* "[index.search.slowlog.query]" +| parse "[*][*][*] [*] [*][*] took[*], took_millis[*], total_hits[*], stats[], search_type[*], total_shards[*], source[*], id[*]" as timestamp,log_level,log_type, node_id, index_name, shard_number, execution_time, execution_time_millis, total_hits, search_type, total_shards, source, id +| where log_type = "index.search.slowlog.query" +| num(execution_time_millis) as execution_time_millis +| count as frequency by domainname, index_name, node_id, execution_time_millis , source +| topk(5, execution_time_millis) by index_name +``` +```sql title="Failed Login by User (Cloud Watch Audit Log)" +account=* region=* namespace=aws/es domainname=* FAILED_LOGIN +| json "audit_cluster_name", "audit_node_id","audit_category","audit_request_origin", "audit_request_remote_address", "audit_request_layer","audit_request_effective_user", "audit_rest_request_path" +| parse field= audit_cluster_name "*:*" as account_id, domain_name +| where 
(tolowercase(domain_name) matches tolowercase("*")) +| where audit_category = "FAILED_LOGIN" and audit_rest_request_path matches "*plugins/_security/authinfo" +| count as freq by domainname, audit_request_effective_user +| sort by freq, domainname asc, audit_request_effective_user asc +``` +```sql title="Successful Events by Event Name (Cloud Trail Logs)" +account=* region=* namespace=aws/es "\"eventsource\":\"es.amazonaws.com\"" | json "userIdentity", "eventSource", "eventName", "awsRegion", "sourceIPAddress", "userAgent", "eventType", "recipientAccountId", "requestParameters", "responseElements", "requestID", "errorCode", "errorMessage" as userIdentity, event_source, event_name, region, src_ip, user_agent, event_type, recipient_account_id, requestParameters, responseElements, request_id, error_code, error_message nodrop | where event_source = "es.amazonaws.com" | json field=userIdentity "accountId", "type", "arn", "userName" as accountid, type, arn, username nodrop @@ -106,32 +181,38 @@ account={{account}} region={{region}} namespace=aws/es "\"eventsource\":\"es.ama | parse field=arn "arn:aws:iam::*:*" as accountid, user nodrop | json field=requestParameters "domainName" as domainname nodrop | if (isBlank(accountid), recipient_account_id, accountid) as accountid -| where (tolowercase(domainname) matches tolowercase("{{domainname}}")) or isBlank(domainname) | if (isEmpty(error_code), "Success", "Failure") as event_status | if (isEmpty(username), user, username) as user | count as event_count by event_name | sort by event_count, event_name asc ``` +```sql title="Write Latency by Domain Name (Metrics-based)" +account=* region=* namespace=aws/es domainname=* !nodeid=* metric=WriteLatency statistic = average | avg by domainname +``` -Write Latency by Domain Name (Metrics-based) +## **Collect logs and metrics for the Amazon OpenSearch app** -``` +### **Collect Amazon OpenSearch CloudWatch Logs** -account={{account}} region={{region}} namespace=aws/es 
domainname={{domainname}} !nodeid=* metric=WriteLatency statistic = average | avg by domainname -``` +To enable Amazon OpenSearch CloudWatch Logs, follow the steps mentioned in [AWS Documentation](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/createdomain-configure-slow-logs.html) -## **Collecting logs and metrics for the Amazon OpenSearch app** +:::note +Ensure that when configuring `CloudWatch Logs`, the log group name follows the pattern `/aws/OpenSearchService/domains/DOMAIN_NAME/LOG_TYPE`. +::: -### **Collecting Metrics for Amazon OpenSearch** +Sumo Logic supports several methods for collecting logs from Amazon CloudWatch. You can choose either of them to collect logs: -1. Configure a [Hosted Collector](https://help.sumologic.com/docs/send-data/hosted-collectors/configure-hosted-collector/). -2. Configure an [Amazon CloudWatch Source for Metrics](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/amazon-cloudwatch-source-metrics/) or [AWS Kinesis Firehose for Metrics Source](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-metrics-source/) (Recommended). -3. Namespaces. Select aws/es. -4. Metadata. Add an account field to the source and assign it a value that is a friendly name/alias to your AWS account from which you are collecting metrics. The account field allows you to query metrics. - ![Metadata][image2] -5. Click Save. +- **AWS Kinesis Firehose for Logs**. Configure an [AWS Kinesis Firehose for Logs](/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-logs-source/#create-an-aws-kinesis-firehose-for-logssource) (Recommended); or +- **Lambda Log Forwarder**. 
Configure collection of Amazon CloudWatch Logs through our AWS Lambda function, deployed with a Sumo Logic-provided CloudFormation template, as described in [Amazon CloudWatch Logs](/docs/send-data/collect-from-other-data-sources/amazon-cloudwatch-logs/). To configure collection without CloudFormation, see [Collect Amazon CloudWatch Logs using a Lambda Function](/docs/send-data/collect-from-other-data-sources/amazon-cloudwatch-logs/collect-with-lambda-function/).
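Whichever collection method you use, the log group path noted above carries the domain name and log type. As an illustration only (this helper is hypothetical, not part of the Sumo Logic or AWS setup), the `/aws/OpenSearchService/domains/DOMAIN_NAME/LOG_TYPE` convention can be split like this:

```python
# Illustrative sketch: split an OpenSearch CloudWatch log group path into its
# domain name and log type, following the documented naming convention.
PREFIX = "/aws/OpenSearchService/domains/"

def parse_log_group(path: str) -> tuple[str, str]:
    if not path.startswith(PREFIX):
        raise ValueError(f"unexpected log group path: {path}")
    domain_name, _, log_type = path[len(PREFIX):].partition("/")
    # The field extraction rules lowercase the domain name, so do the same here.
    return domain_name.lower(), log_type

print(parse_log_group("/aws/OpenSearchService/domains/Flights/search-slow-logs"))
# → ('flights', 'search-slow-logs')
```

This is the same split the CloudWatch logs field extraction rule performs at ingest time with its `parse` on `_sourceHost`.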
-### **Collecting Amazon OpenSearch Events using CloudTrail** +- While configuring the CloudWatch log source, following fields can be added in the source: + - Add an **account** field and assign it a value which is a friendly name/alias to your AWS account from which you are collecting logs. Logs can be queried via the **account** field. + - Add a **region** field and assign it the value of the respective AWS region where the **OpenSearch** domain exists. + - Add an **accountId** field and assign it the value of the respective AWS account id which is being used. + + Fields + +### **Collect Amazon OpenSearch CloudTrail Logs** 1. Add an [AWS CloudTrail Source](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/aws-cloudtrail-source/) to your Hosted Collector. * Name. Enter a name to display for the new Source. @@ -152,6 +233,15 @@ account={{account}} region={{region}} namespace=aws/es domainname={{domainname}} * Enable Multiline Processing. Select the Detect messages spanning multiple lines check box, and select Infer Boundaries. 2. Click Save. +### **Collect Amazon OpenSearch CloudWatch Metrics** + +1. Configure a [Hosted Collector](https://help.sumologic.com/docs/send-data/hosted-collectors/configure-hosted-collector/). +2. Configure an [Amazon CloudWatch Source for Metrics](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/amazon-cloudwatch-source-metrics/) or [AWS Kinesis Firehose for Metrics Source](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-metrics-source/) (Recommended). +3. Namespaces. Select aws/es. +4. Metadata. Add an account field to the source and assign it a value that is a friendly name/alias to your AWS account from which you are collecting metrics. The account field allows you to query metrics. + ![Metadata][image2] +5. Click Save. + ### **Field in Field Schema** 1. [Classic UI](https://help.sumologic.com/docs/get-started/sumo-logic-ui-classic/). 
In the main Sumo Logic menu, select Manage Data \> Logs \> Fields. @@ -163,30 +253,38 @@ account={{account}} region={{region}} namespace=aws/es domainname={{domainname}} Create a Field Extraction Rule for CloudTrail Logs. Learn how to create a Field Extraction Rule [here](https://help.sumologic.com/docs/manage/field-extractions/create-field-extraction-rule/). -``` - -Rule Name: AwsObservabilityESCloudTrailLogsFER +```sql +Rule Name: AwsObservabilityOpenSearchCloudTrailLogsFER Applied at: Ingest Time -Scope (Specific Data): account=* eventname eventsource \"es.amazonaws.com\" -``` - -Parse Expression: - +Scope (Specific Data): account=* eventName eventSource "es.amazonaws.com" ``` -| json "userIdentity", "eventSource", "eventName", "awsRegion", "recipientAccountId", "requestParameters", "responseElements" as userIdentity, event_source, event_name, region, recipient_account_id, requestParameters, responseElements nodrop +```sql title="Parse Expression" +| json "eventSource", "awsRegion", "recipientAccountId", "requestParameters.domainName" as event_source, region, accountid, domainname nodrop | where event_source = "es.amazonaws.com" -| json field=userIdentity "accountId", "type", "arn", "userName" as accountid, type, arn, username nodrop -| parse field=arn ":assumed-role/*" as user nodrop -| parse field=arn "arn:aws:iam::*:*" as accountid, user nodrop -| json field=requestParameters "domainName" as domainname -| if (isBlank(accountid), recipient_account_id, accountid) as accountid | toLowerCase(domainname) as domainname | "aws/es" as namespace | fields region, namespace, domainname, accountid ``` -## **Centralized AWS CloudTrail Log Collection** +#### Create/Update Field Extraction Rule(s) for OpenSearch CloudWatch logs + +```sql +Rule Name: AwsObservabilityOpenSearchCloudWatchLogsFER +Applied at: Ingest Time +Scope (Specific Data): +account=* region=* _sourceHost=/aws/OpenSearchService/* +``` + +```sql title="Parse Expression" +| if 
(isEmpty(namespace),"unknown",namespace) as namespace +| if (_sourceHost matches "/aws/OpenSearchService/*", "aws/es", namespace) as namespace +| parse field=_sourceHost "/aws/OpenSearchService/domains/*/*" as domainname,logType nodrop +| tolowercase(domainname) as domainname +| fields namespace, domainname +``` + +### **Centralized AWS CloudTrail Log Collection** In case, you have a centralized collection of CloudTrail logs and are ingesting them from all accounts into a single Sumo Logic CloudTrail log source, create the following Field Extraction Rule to map a proper AWS account(s) friendly name/alias. Create it if not already present or update it as required. @@ -195,8 +293,7 @@ In case, you have a centralized collection of CloudTrail logs and are ingesting * Scope (Specific Data): `_sourceCategory=aws/observability/cloudtrail/logs` * Parse Expression: Enter a parse expression to create an “account” field that maps to the alias you set for each sub account. For example, if you used the “dev” alias for an AWS account with ID "528560886094" and the “prod” alias for an AWS account with ID "567680881046", your parse expression would look like: -``` - +```sql | json "recipientAccountId" // Manually map your aws account id with the AWS account alias you setup earlier for individual child account | "" as account @@ -209,32 +306,26 @@ In case, you have a centralized collection of CloudTrail logs and are ingesting Now that you have set up a collection for Amazon OpenSearch, install the Sumo Logic app to use the pre-configured searches and dashboards that provide visibility into your environment for real-time analysis of overall usage. -To install the app: +import AppInstall from '../../reuse/apps/app-install-v2.md'; + + -1. Select App Catalog. -2. In the 🔎 Search Apps field, run a search for your desired app, then select it. -3. Click Install App. - note - Sometimes this button says Add Integration. -4. 
On the next configuration page, under Select Data Source for your App, complete the following fields: - * Data Source. Select one of the following options: - * Choose Source Category and select a source category from the list; or - * Choose Enter a Custom Data Filter, and [enter a custom source category](https://help.sumologic.com/docs/get-started/apps-integrations/#custom-data-filters) beginning with an underscore. For example, `_sourceCategory=MyCategory`. - * Folder Name. You can retain the existing name or enter a custom name of your choice for the app. - * All Folders (optional). The default location is the Personal folder in your Library. If desired, you can choose a different location and/or click New Folder to add it to a new folder. -5. Click Next. -6. Look for the dialog confirming that your app was installed successfully. - ![app-success.png][image4] +## **Viewing Amazon OpenSearch dashboards** -Post-installation +### **01. Amazon OpenSearch \- Overview** -Once your app is installed, it will appear in your Personal folder or the folder that you specified. From here, you can share it with other users in your organization. Dashboard panels will automatically start to fill with data matching the time range query received since you created the panel. Results won't be available immediately, but within about 20 minutes, you'll see completed graphs and maps. +The Amazon OpenSearch \- Overview dashboard provides a comprehensive overview of Amazon OpenSearch performance and operational metrics. It displays key information about cluster utilization, user activity, query performance, error logs, and system events. The dashboard is designed to help administrators monitor and optimize their OpenSearch deployment across different domains and regions. -## **Viewing Amazon OpenSearch dashboards** +Use this dashboard to: +* Monitor cluster health and resource utilization by tracking CPU and memory usage across different domain names. 
+* Identify security issues by analyzing failed user logins and their distribution across domains. +* Optimize query performance by examining slow query statistics and execution times for different index types and domains. + +Fields -### **Amazon Opensearch \- Overview** +### **02. Amazon Opensearch \- Performance Overview** -The OpenSearch \- Overview dashboard provides a comprehensive view of the OpenSearch cluster's health, performance, and resource utilization. It offers real-time insights into cluster status, CPU and memory usage, storage metrics, document management, and read/write latencies across different domains. +The Amazon OpenSearch \- Performance Overview dashboard provides a comprehensive view of the OpenSearch cluster's health, performance, and resource utilization. It offers real-time insights into cluster status, CPU and memory usage, storage metrics, document management, and read/write latencies across different domains. Use this dashboard to: @@ -244,11 +335,11 @@ Use this dashboard to: * Keep tabs on document management activities, including the number of searchable documents and deleted documents per domain. * Assess system performance by observing read and write latencies across various domain names, helping to identify potential bottlenecks or areas for optimization. -![][image5] +Fields -### **Amazon Opensearch \- Audit Overview** +### **03. Amazon OpenSearch \- CloudTrail Audit Events** -The Amazon Opensearch \- Audit Overview​ dashboard provides insights across CloudTrail events across location, status, and topic names. +The Amazon Opensearch \- CloudTrail Audit Overview​ dashboard provides insights across CloudTrail events across location, status, and topic names. Use this dashboard to: @@ -257,13 +348,49 @@ Use this dashboard to: * Monitor successful and error events with error code in detail. * Get details of domain names and users of both successful and error events. -![][image6] +Fields + +### **04. 
Amazon OpenSearch \- Audit Logs \- Failed Logins** + +The Amazon OpenSearch \- Audit Logs \- Failed Logins dashboard provides a comprehensive view of login activities, focusing on failed login attempts and authentication errors. It offers insights into the geographical distribution of failed logins, user-specific login failures, cluster-based login issues, and detailed authentication error logs. + +Use this dashboard to: +* Monitor the total number of failed user logins at a glance, with a prominent display of the count. +* Visualize the geographical distribution of failed login attempts on map, helping identify potential security threats or unusual activity patterns from specific regions. +* Analyze the distribution of login request methods +* Track failed logins by specific users and clusters, allowing for quick identification of problematic accounts or system components. +* Review detailed authentication error logs. + +Fields + +### **05. Amazon OpenSearch \- Error Logs \- Garbage Collection** -### +The Amazon OpenSearch \- Error Logs \- Garbage Collection dashboard provides a comprehensive view of garbage collection (GC) activities in AWS OpenSearch Service. It offers insights into GC performance, memory cleanup, and JVM memory usage across different domains. The dashboard helps monitor and optimize the garbage collection process, which is crucial for maintaining the performance and stability of OpenSearch clusters. + +Use this dashboard to: + +* Monitor the average garbage collection time overall and by domain name, with a trend graph to track changes over time. +* Analyze average cleanup size and trends, to understand the efficiency of the garbage collection process across different domains. +* Compare garbage collection counts across different nodes and domains, helping to identify any imbalances or potential issues in specific parts of the cluster. 
+* Visualize JVM memory usage before and after garbage collection, providing insights into the effectiveness of memory management and potential memory leaks. -### **Amazon Opensearch \- Domain Name (Cluster)** +Fields -The OpenSearch \- Domain Name (Cluster) dashboard provides a comprehensive view of cluster performance and resource utilization across different domains. It offers insights into node count, CPU and memory usage, request patterns, and storage metrics for OpenSearch clusters. +### **06. Amazon OpenSearch \- Slow Logs \- Queries** + +The Amazon Opensearch \- Slow Logs \- Queries dashboard provides a comprehensive view of query performance and behavior within an OpenSearch environment. + +Use this dashboard to: +* Monitor and analyze slow query performance in OpenSearch +* Visualize the distribution of queries over time by log type, helping to identify patterns or spikes in slow query occurrences. +* Track query hits and shard usage over time, providing insights into overall system load and resource utilization. +* Identify the top 10 slowest queries, including details such as index name, node ID, execution time, and query source for targeted optimization. + +Fields + +### **07. Amazon OpenSearch \- Domain Name (Cluster) Performance** + +The Amazon OpenSearch \- Domain Name (Cluster) Performance dashboard provides a comprehensive view of cluster performance and resource utilization across different domains. It offers insights into node count, CPU and memory usage, request patterns, and storage metrics for OpenSearch clusters. Use this dashboard to: * Monitor the total node count and assess resource utilization with average CPU and JVM memory usage gauges, providing a quick overview of cluster health. @@ -271,12 +398,13 @@ Use this dashboard to: * Track CPU and system memory utilization trends over time for each domain, allowing for the detection of performance anomalies or resource constraints. 
* Analyze OpenSearch request patterns and invalid host header requests by domain, which can help in identifying potential security issues or misconfigurations. * Keep an eye on cluster health indicators such as index write blocks and automated snapshot failures, ensuring data integrity and backup processes are functioning correctly. -![][image7] -### **Amazon Opensearch \- Nodes** +Fields + +### **08. Amazon OpenSearch \- Nodes Performance** Summary: -The OpenSearch \- Nodes dashboard provides a detailed view of node-level performance metrics for OpenSearch clusters across different domains. It offers insights into search and indexing operations, threadpool activities, and overall cluster health, allowing for granular monitoring and troubleshooting of OpenSearch nodes. +The Amazon OpenSearch \- Nodes Performance dashboard provides a detailed view of node-level performance metrics for OpenSearch clusters across different domains. It offers insights into search and indexing operations, threadpool activities, and overall cluster health, allowing for granular monitoring and troubleshooting of OpenSearch nodes. Use this dashboard to: * Compare search and indexing performance across different nodes and domains, with visualizations for search/indexing rates and latencies, helping identify potential bottlenecks or underperforming nodes. * Monitor thread pool activities, including search queue times, rejected requests, and write queue metrics, which are crucial for understanding cluster load and capacity issues. @@ -284,26 +412,23 @@ Use this dashboard to: * Analyze trends in search and indexing rates over time, allowing for the detection of patterns or anomalies that may impact cluster performance. * Assess overall cluster health by comparing metrics across different domains, enabling quick identification of domain-specific issues or imbalances. -![][image8] +Fields -### +### **09. 
Amazon OpenSearch \- EBS Volume Performance** -### - -### **Amazon Opensearch \- EBS Volume** - -The OpenSearch \- EBS Volume dashboard provides a comprehensive view of the performance metrics for Amazon Elastic Block Store (EBS) volumes associated with OpenSearch clusters. It displays various key performance indicators such as read and write latency, I/O operations per second (IOPS), throughput, burst balance, and disk queue depth. +The Amazon OpenSearch \- EBS Volume Performance dashboard provides a comprehensive view of the performance metrics for Amazon Elastic Block Store (EBS) volumes associated with OpenSearch clusters. It displays various key performance indicators such as read and write latency, I/O operations per second (IOPS), throughput, burst balance, and disk queue depth. Use this dashboard to: * Monitor read and write latency of EBS volumes to ensure optimal response times for OpenSearch operations. * Track read and write IOPS to understand the I/O demand on your EBS volumes and identify any performance constraints. * Analyze read and write throughput to assess the data transfer rates and capacity utilization of your EBS volumes. * Keep an eye on the burst balance to ensure your EBS volumes have sufficient performance credits for handling sudden spikes in workload. * Observe the disk queue depth to identify potential I/O congestion and optimize your storage configuration for better performance. -![][image9] -### **Amazon Opensearch \- Cache** +Fields + +### **10. Amazon OpenSearch \- Cache Performance** -The OpenSearch \- Cache dashboard provides insights into cache performance, evictions, capacity, and memory usage, which are crucial for maintaining optimal performance of OpenSearch clusters. +The Amazon OpenSearch \- Cache Performance dashboard provides insights into cache performance, evictions, capacity, and memory usage, which are crucial for maintaining optimal performance of OpenSearch clusters. 
Use this dashboard to: * Performance tuning of OpenSearch clusters @@ -311,42 +436,4 @@ Use this dashboard to: * Troubleshooting cache-related issues * Ability to correlate cache metrics with overall system performance -![][image10] - -### **Amazon Opensearch \- Queries** - -The Amazon Opensearch \- Queries provides a comprehensive view of query performance and behavior within an OpenSearch environment. - -Use this dashboard to: -* Monitor and analyze slow query performance in OpenSearch -* Visualize the distribution of queries over time by log type, helping to identify patterns or spikes in slow query occurrences. -* Track query hits and shard usage over time, providing insights into overall system load and resource utilization. -* Identify the top 10 slowest queries, including details such as index name, node ID, execution time, and query source for targeted optimization. - -![][image11] - -### **Amazon Opensearch \- Failed Login and Connections** - -The OpenSearch \- Failed Login and Connections dashboard provides a comprehensive view of login activities, focusing on failed login attempts and authentication errors. It offers insights into the geographical distribution of failed logins, user-specific login failures, cluster-based login issues, and detailed authentication error logs. - -Use this dashboard to: -* Monitor the total number of failed user logins at a glance, with a prominent display of the count. -* Visualize the geographical distribution of failed login attempts on map, helping identify potential security threats or unusual activity patterns from specific regions. -* Analyze the distribution of login request methods -* Track failed logins by specific users and clusters, allowing for quick identification of problematic accounts or system components. -* Review detailed authentication error logs. 
- -![][image12] - -### **Amazon Opensearch \- Garbage collection** - -The OpenSearch \- Garbage Collection dashboard provides a comprehensive view of garbage collection (GC) activities in AWS OpenSearch Service. It offers insights into GC performance, memory cleanup, and JVM memory usage across different domains. The dashboard helps monitor and optimize the garbage collection process, which is crucial for maintaining the performance and stability of OpenSearch clusters. - -Use this dashboard to: - -* Monitor the average garbage collection time overall and by domain name, with a trend graph to track changes over time. -* Analyze average cleanup size and trends, to understand the efficiency of the garbage collection process across different domains. -* Compare garbage collection counts across different nodes and domains, helping to identify any imbalances or potential issues in specific parts of the cluster. -* Visualize JVM memory usage before and after garbage collection, providing insights into the effectiveness of memory management and potential memory leaks. - -![][image13] \ No newline at end of file +Fields \ No newline at end of file From 2b905cb4ec77ed112781a85ae8e9cbfa11a6fe4f Mon Sep 17 00:00:00 2001 From: Himanshu Sharma Date: Sat, 19 Oct 2024 22:54:58 +0530 Subject: [PATCH 3/7] updating dashboard name --- docs/integrations/amazon-aws/amazon-opensearch-service.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/integrations/amazon-aws/amazon-opensearch-service.md b/docs/integrations/amazon-aws/amazon-opensearch-service.md index 359bf92b61..567ef8c0ec 100644 --- a/docs/integrations/amazon-aws/amazon-opensearch-service.md +++ b/docs/integrations/amazon-aws/amazon-opensearch-service.md @@ -339,7 +339,7 @@ Use this dashboard to: ### **03. Amazon OpenSearch \- CloudTrail Audit Events** -The Amazon Opensearch \- CloudTrail Audit Overview​ dashboard provides insights across CloudTrail events across location, status, and topic names. 
+The Amazon OpenSearch \- CloudTrail Audit Events dashboard provides insights into CloudTrail events across location, status, and topic names.

Use this dashboard to:

From c896cf7604f51cb1ad9a1f1b2d50610913fe5b62 Mon Sep 17 00:00:00 2001
From: Jagadisha V <129049263+JV0812@users.noreply.github.com>
Date: Mon, 21 Oct 2024 11:03:11 +0530
Subject: [PATCH 4/7] Update amazon-opensearch-service.md

---
 .../amazon-aws/amazon-opensearch-service.md   | 75 ++++++++++---------
 1 file changed, 40 insertions(+), 35 deletions(-)

diff --git a/docs/integrations/amazon-aws/amazon-opensearch-service.md b/docs/integrations/amazon-aws/amazon-opensearch-service.md
index 567ef8c0ec..828c9b936c 100644
--- a/docs/integrations/amazon-aws/amazon-opensearch-service.md
+++ b/docs/integrations/amazon-aws/amazon-opensearch-service.md
@@ -16,11 +16,11 @@ The Sumo Logic app for Amazon OpenSearch collects CloudWatch logs, CloudWatch me

 The Sumo Logic app for Amazon OpenSearch uses:

-* OpenSearch CloudWatch Logs. For details, see [here](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/createdomain-configure-slow-logs.html).
-* OpenSearch CloudWatch Metrics. For details, see [here](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-cloudwatchmetrics.html).
-* OpenSearch using AWS CloudTrail. For details, see [here](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-cloudtrailauditing.html).
+* [OpenSearch CloudWatch Logs](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/createdomain-configure-slow-logs.html)
+* [OpenSearch CloudWatch Metrics](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-cloudwatchmetrics.html)
+* [OpenSearch using AWS CloudTrail](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-cloudtrailauditing.html)

-### **Sample OpenSearch CloudWatch Logs**
+### Sample OpenSearch CloudWatch Logs
Click to expand @@ -72,7 +72,7 @@ The Sumo Logic app for Amazon OpenSearch uses:
-### **Sample OpenSearch CloudTrail Logs** +### Sample OpenSearch CloudTrail Logs
Click to expand @@ -145,7 +145,7 @@ The Sumo Logic app for Amazon OpenSearch uses: ```
-### **Sample queries** +### Sample queries ```sql title="Average GC Time (Cloud Watch Error Log)" account=* region=* namespace=aws/es domainname=* "[WARN ][o.o.m.j.JvmGcMonitorService]" @@ -155,6 +155,7 @@ account=* region=* namespace=aws/es domainname=* "[WARN ][o.o.m.j.JvmGcMonitorSe | sum(duration) as Total_Time, avg(duration) as Avg_Time, max(duration) as Max_Time | fields Avg_Time ``` + ```sql title="Top 5 Slow Queries by Index (Cloud Watch Slow Log)" account=* region=* namespace=aws/es domainname=* "[index.search.slowlog.query]" | parse "[*][*][*] [*] [*][*] took[*], took_millis[*], total_hits[*], stats[], search_type[*], total_shards[*], source[*], id[*]" as timestamp,log_level,log_type, node_id, index_name, shard_number, execution_time, execution_time_millis, total_hits, search_type, total_shards, source, id @@ -163,6 +164,7 @@ account=* region=* namespace=aws/es domainname=* "[index.search.slowlog.query]" | count as frequency by domainname, index_name, node_id, execution_time_millis , source | topk(5, execution_time_millis) by index_name ``` + ```sql title="Failed Login by User (Cloud Watch Audit Log)" account=* region=* namespace=aws/es domainname=* FAILED_LOGIN | json "audit_cluster_name", "audit_node_id","audit_category","audit_request_origin", "audit_request_remote_address", "audit_request_layer","audit_request_effective_user", "audit_rest_request_path" @@ -172,6 +174,7 @@ account=* region=* namespace=aws/es domainname=* FAILED_LOGIN | count as freq by domainname, audit_request_effective_user | sort by freq, domainname asc, audit_request_effective_user asc ``` + ```sql title="Successful Events by Event Name (Cloud Trail Logs)" account=* region=* namespace=aws/es "\"eventsource\":\"es.amazonaws.com\"" | json "userIdentity", "eventSource", "eventName", "awsRegion", "sourceIPAddress", "userAgent", "eventType", "recipientAccountId", "requestParameters", "responseElements", "requestID", "errorCode", "errorMessage" as userIdentity, event_source, event_name, 
region, src_ip, user_agent, event_type, recipient_account_id, requestParameters, responseElements, request_id, error_code, error_message nodrop @@ -186,13 +189,14 @@ account=* region=* namespace=aws/es "\"eventsource\":\"es.amazonaws.com\"" | count as event_count by event_name | sort by event_count, event_name asc ``` + ```sql title="Write Latency by Domain Name (Metrics-based)" account=* region=* namespace=aws/es domainname=* !nodeid=* metric=WriteLatency statistic = average | avg by domainname ``` -## **Collect logs and metrics for the Amazon OpenSearch app** +## Collect logs and metrics for the Amazon OpenSearch app -### **Collect Amazon OpenSearch CloudWatch Logs** +### Collect Amazon OpenSearch CloudWatch Logs To enable Amazon OpenSearch CloudWatch Logs, follow the steps mentioned in [AWS Documentation](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/createdomain-configure-slow-logs.html) @@ -212,18 +216,18 @@ Sumo Logic supports several methods for collecting logs from Amazon CloudWatch. Fields -### **Collect Amazon OpenSearch CloudTrail Logs** +### Collect Amazon OpenSearch CloudTrail Logs 1. Add an [AWS CloudTrail Source](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/aws-cloudtrail-source/) to your Hosted Collector. - * Name. Enter a name to display for the new Source. - * Description. Enter an optional description. - * S3 Region. Select the Amazon Region for your cloudTrail S3 bucket. - * Bucket Name. Enter the exact name of your cloudTrail S3 bucket. - * Path Expression. Enter the string that matches the S3 objects you'd like to collect. You can use a wildcard (\*) in this string. + * **Name**. Enter a name to display for the new Source. + * **Description**. Enter an optional description. + * **S3 Region**. Select the Amazon Region for your CloudTrail S3 bucket. + * **Bucket Name**. Enter the exact name of your CloudTrail S3 bucket. + * **Path Expression**. 
Enter the string that matches the S3 objects you'd like to collect. You can use a wildcard (\*) in this string. * DO NOT use a [leading forward slash](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/amazon-path-expressions/). * The S3 bucket name is not part of the path. Don’t include the bucket name when you are setting the Path Expression. - * Source Category. Enter a source category. For example, enter `aws/observability/CloudTrail/logs`. - * Fields. Add an account field and assign it a value that is a friendly name/alias to your AWS account from which you are collecting logs. Logs can be queried using the account field. + * **Source Category**. Enter a source category. For example, enter `aws/observability/CloudTrail/logs`. + * **Fields**. Add an account field and assign it a value that is a friendly name/alias to your AWS account from which you are collecting logs. Logs can be queried using the account field. ![Fields][image3] * Access Key ID and Secret Access Key. Enter your Amazon [Access Key ID and Secret Access Key](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html). Learn how to use Role-based access to AWS [here](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/aws-sources/). * Log File Discovery \-\> Scan Interval. Use the default of 5 minutes. Alternately, enter the frequency. Sumo Logic will scan your S3 bucket for new data. Learn how to configure Log File Discovery [here](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/aws-sources/). @@ -231,9 +235,9 @@ Sumo Logic supports several methods for collecting logs from Amazon CloudWatch. * Time Zone. Select Ignore time zone from the log file and instead use, and select UTC from the dropdown. * Timestamp Format. Select Automatically detect the format. * Enable Multiline Processing. Select the Detect messages spanning multiple lines check box, and select Infer Boundaries. -2. Click Save. +2. 
Click **Save**. -### **Collect Amazon OpenSearch CloudWatch Metrics** +### Collect Amazon OpenSearch CloudWatch Metrics 1. Configure a [Hosted Collector](https://help.sumologic.com/docs/send-data/hosted-collectors/configure-hosted-collector/). 2. Configure an [Amazon CloudWatch Source for Metrics](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/amazon-cloudwatch-source-metrics/) or [AWS Kinesis Firehose for Metrics Source](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-metrics-source/) (Recommended). @@ -312,7 +316,7 @@ import AppInstall from '../../reuse/apps/app-install-v2.md'; ## **Viewing Amazon OpenSearch dashboards** -### **01. Amazon OpenSearch \- Overview** +### Overview The Amazon OpenSearch \- Overview dashboard provides a comprehensive overview of Amazon OpenSearch performance and operational metrics. It displays key information about cluster utilization, user activity, query performance, error logs, and system events. The dashboard is designed to help administrators monitor and optimize their OpenSearch deployment across different domains and regions. @@ -323,7 +327,7 @@ Use this dashboard to: Fields -### **02. Amazon Opensearch \- Performance Overview** +### Performance Overview The Amazon OpenSearch \- Performance Overview dashboard provides a comprehensive view of the OpenSearch cluster's health, performance, and resource utilization. It offers real-time insights into cluster status, CPU and memory usage, storage metrics, document management, and read/write latencies across different domains. @@ -337,7 +341,7 @@ Use this dashboard to: Fields -### **03. Amazon OpenSearch \- CloudTrail Audit Events** +### CloudTrail Audit Events The Amazon Opensearch \- CloudTrail Audit Events dashboard provides insights across CloudTrail events across location, status, and topic names. @@ -350,7 +354,7 @@ Use this dashboard to: Fields -### **04. 
Amazon OpenSearch \- Audit Logs \- Failed Logins** +### Audit Logs - Failed Logins The Amazon OpenSearch \- Audit Logs \- Failed Logins dashboard provides a comprehensive view of login activities, focusing on failed login attempts and authentication errors. It offers insights into the geographical distribution of failed logins, user-specific login failures, cluster-based login issues, and detailed authentication error logs. @@ -363,7 +367,7 @@ Use this dashboard to: Fields -### **05. Amazon OpenSearch \- Error Logs \- Garbage Collection** +### Error Logs - Garbage Collection The Amazon OpenSearch \- Error Logs \- Garbage Collection dashboard provides a comprehensive view of garbage collection (GC) activities in AWS OpenSearch Service. It offers insights into GC performance, memory cleanup, and JVM memory usage across different domains. The dashboard helps monitor and optimize the garbage collection process, which is crucial for maintaining the performance and stability of OpenSearch clusters. @@ -376,7 +380,7 @@ Use this dashboard to: Fields -### **06. Amazon OpenSearch \- Slow Logs \- Queries** +### Slow Logs - Queries The Amazon Opensearch \- Slow Logs \- Queries dashboard provides a comprehensive view of query performance and behavior within an OpenSearch environment. @@ -388,7 +392,7 @@ Use this dashboard to: Fields -### **07. Amazon OpenSearch \- Domain Name (Cluster) Performance** +### Domain Name (Cluster) Performance The Amazon OpenSearch \- Domain Name (Cluster) Performance dashboard provides a comprehensive view of cluster performance and resource utilization across different domains. It offers insights into node count, CPU and memory usage, request patterns, and storage metrics for OpenSearch clusters. @@ -401,10 +405,10 @@ Use this dashboard to: Fields -### **08. 
Amazon OpenSearch \- Nodes Performance** +### Nodes Performance -Summary: The Amazon OpenSearch \- Nodes Performance dashboard provides a detailed view of node-level performance metrics for OpenSearch clusters across different domains. It offers insights into search and indexing operations, threadpool activities, and overall cluster health, allowing for granular monitoring and troubleshooting of OpenSearch nodes. + Use this dashboard to: * Compare search and indexing performance across different nodes and domains, with visualizations for search/indexing rates and latencies, helping identify potential bottlenecks or underperforming nodes. * Monitor thread pool activities, including search queue times, rejected requests, and write queue metrics, which are crucial for understanding cluster load and capacity issues. @@ -414,9 +418,10 @@ Use this dashboard to: Fields -### **09. Amazon OpenSearch \- EBS Volume Performance** +### EBS Volume Performance + +The Amazon OpenSearch \- EBS Volume Performance dashboard provides a comprehensive view of the performance metrics for Amazon Elastic Block Store (EBS) volumes associated with OpenSearch clusters. It displays various key performance indicators such as read and write latency, I/O operations per second (IOPS), throughput, burst balance, and disk queue depth. -The Amazon OpenSearch \- EBS Volume Performance dashboard provides a comprehensive view of the performance metrics for Amazon Elastic Block Store (EBS) volumes associated with OpenSearch clusters. It displays various key performance indicators such as read and write latency, I/O operations per second (IOPS), throughput, burst balance, and disk queue depth. Use this dashboard to: * Monitor read and write latency of EBS volumes to ensure optimal response times for OpenSearch operations. * Track read and write IOPS to understand the I/O demand on your EBS volumes and identify any performance constraints. @@ -426,14 +431,14 @@ Use this dashboard to: Fields -### **10. 
Amazon OpenSearch \- Cache Performance** +### Cache Performance The Amazon OpenSearch \- Cache Performance dashboard provides insights into cache performance, evictions, capacity, and memory usage, which are crucial for maintaining optimal performance of OpenSearch clusters. Use this dashboard to: -* Performance tuning of OpenSearch clusters -* Capacity planning for cache and memory resources -* Troubleshooting cache-related issues -* Ability to correlate cache metrics with overall system performance +* Performance tuning of OpenSearch clusters. +* Capacity planning for cache and memory resources. +* Troubleshooting cache-related issues. +* Ability to correlate cache metrics with overall system performance. -Fields \ No newline at end of file +Fields From 58a977c6a926fb36b385685e96124b03f475301e Mon Sep 17 00:00:00 2001 From: Jagadisha V <129049263+JV0812@users.noreply.github.com> Date: Mon, 21 Oct 2024 12:37:22 +0530 Subject: [PATCH 5/7] formatting --- .../amazon-aws/amazon-opensearch-service.md | 109 +++++++++--------- 1 file changed, 53 insertions(+), 56 deletions(-) diff --git a/docs/integrations/amazon-aws/amazon-opensearch-service.md b/docs/integrations/amazon-aws/amazon-opensearch-service.md index 828c9b936c..97421c85ab 100644 --- a/docs/integrations/amazon-aws/amazon-opensearch-service.md +++ b/docs/integrations/amazon-aws/amazon-opensearch-service.md @@ -12,7 +12,7 @@ Amazon OpenSearch Service is a managed service that makes it easy to deploy, ope The Sumo Logic app for Amazon OpenSearch collects CloudWatch logs, CloudWatch metrics and CloudTrail logs, provides a unified logs and metrics app that provides insights into the operations and utilization of your OpenSearch service. The preconfigured dashboards help you monitor the key metrics by domain names and nodes, view the OpenSearch events for activities, and help you plan the capacity of your OpenSearch service. 
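The metrics-based sample query earlier in this patch averages `WriteLatency` by `domainname`. The same aggregation can be sketched offline in Python — a minimal illustration only; the domain names and latency values below are invented, not real datapoints:

```python
from collections import defaultdict

# Hypothetical CloudWatch-style datapoints as (domainname, WriteLatency in ms).
# Domain names and values are invented for illustration.
datapoints = [
    ("logs-prod", 4.2), ("logs-prod", 5.8),
    ("search-dev", 1.1), ("search-dev", 0.9),
]

def avg_write_latency_by_domain(points):
    """Mimic `metric=WriteLatency statistic=average | avg by domainname`."""
    totals = defaultdict(lambda: [0.0, 0])  # domain -> [sum, count]
    for domain, latency in points:
        totals[domain][0] += latency
        totals[domain][1] += 1
    return {domain: s / n for domain, (s, n) in totals.items()}

print(avg_write_latency_by_domain(datapoints))
# roughly {'logs-prod': 5.0, 'search-dev': 1.0}
```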
-## **Log and Metrics types**
+## Log and metric types

The Sumo Logic app for Amazon OpenSearch uses:

@@ -20,7 +20,7 @@ The Sumo Logic app for Amazon OpenSearch uses:
* [OpenSearch CloudWatch Metrics](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-cloudwatchmetrics.html)
* [OpenSearch using AWS CloudTrail](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-cloudtrailauditing.html)

-### Sample OpenSearch CloudWatch Logs
+### Sample CloudWatch log messages

<details>
Click to expand @@ -72,7 +72,7 @@ The Sumo Logic app for Amazon OpenSearch uses:
-### Sample OpenSearch CloudTrail Logs
+### Sample CloudTrail log messages

<details>
Click to expand

@@ -196,7 +196,9 @@ account=* region=* namespace=aws/es domainname=* !nodeid=* metric=WriteLatency s

## Collect logs and metrics for the Amazon OpenSearch app

-### Collect Amazon OpenSearch CloudWatch Logs
+This section has instructions for collecting logs and metrics for the Amazon OpenSearch Service app.
+
+### Collect CloudWatch Logs

To enable Amazon OpenSearch CloudWatch Logs, follow the steps mentioned in [AWS Documentation](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/createdomain-configure-slow-logs.html)

@@ -216,44 +218,40 @@ Sumo Logic supports several methods for collecting logs from Amazon CloudWatch.

Fields

-### Collect Amazon OpenSearch CloudTrail Logs
+### Collect CloudTrail Logs

1. Add an [AWS CloudTrail Source](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/aws-cloudtrail-source/) to your Hosted Collector.
   * **Name**. Enter a name to display for the new Source.
   * **Description**. Enter an optional description.
   * **S3 Region**. Select the Amazon Region for your CloudTrail S3 bucket.
   * **Bucket Name**. Enter the exact name of your CloudTrail S3 bucket.
-   * **Path Expression**. Enter the string that matches the S3 objects you'd like to collect. You can use a wildcard (\*) in this string.
-     * DO NOT use a [leading forward slash](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/amazon-path-expressions/).
-     * The S3 bucket name is not part of the path. Don’t include the bucket name when you are setting the Path Expression.
+   * **Path Expression**. Enter the string that matches the S3 objects you'd like to collect. You can use a wildcard (\*) in this string. (DO NOT use a leading forward slash. See [Amazon Path Expressions](/docs/send-data/hosted-collectors/amazon-aws/amazon-path-expressions)). The S3 bucket name is not part of the path. Don’t include the bucket name when you are setting the Path Expression.
   * **Source Category**. Enter a source category.
For example, enter `aws/observability/CloudTrail/logs`. - * **Fields**. Add an account field and assign it a value that is a friendly name/alias to your AWS account from which you are collecting logs. Logs can be queried using the account field. - ![Fields][image3] - * Access Key ID and Secret Access Key. Enter your Amazon [Access Key ID and Secret Access Key](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html). Learn how to use Role-based access to AWS [here](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/aws-sources/). - * Log File Discovery \-\> Scan Interval. Use the default of 5 minutes. Alternately, enter the frequency. Sumo Logic will scan your S3 bucket for new data. Learn how to configure Log File Discovery [here](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/aws-sources/). - * Enable Timestamp Parsing. Select the Extract timestamp information from log file entries check box. - * Time Zone. Select Ignore time zone from the log file and instead use, and select UTC from the dropdown. - * Timestamp Format. Select Automatically detect the format. - * Enable Multiline Processing. Select the Detect messages spanning multiple lines check box, and select Infer Boundaries. + * **Fields**. Add an account field and assign it a value that is a friendly name/alias to your AWS account from which you are collecting logs. Logs can be queried using the account field. + * **Access Key ID and Secret Access Key**. Enter your Amazon [Access Key ID and Secret Access Key](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html). Learn how to use Role-based access to AWS [here](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/aws-sources/). + * **Log File Discovery > Scan Interval**. Use the default of 5 minutes. Alternately, enter the frequency. Sumo Logic will scan your S3 bucket for new data. 
Learn how to configure **Log File Discovery** [here](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/aws-sources/). + * **Enable Timestamp Parsing**. Select the **Extract timestamp information from log file entries** check box. + * **Time Zone**. Select **Ignore time zone from the log file and instead use**, and select **UTC** from the dropdown. + * **Timestamp Format**. Select **Automatically detect the format**. + * **Enable Multiline Processing**. Select the **Detect messages spanning multiple lines** check box, and select **Infer Boundaries**. 2. Click **Save**. -### Collect Amazon OpenSearch CloudWatch Metrics +### Collect CloudWatch Metrics 1. Configure a [Hosted Collector](https://help.sumologic.com/docs/send-data/hosted-collectors/configure-hosted-collector/). 2. Configure an [Amazon CloudWatch Source for Metrics](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/amazon-cloudwatch-source-metrics/) or [AWS Kinesis Firehose for Metrics Source](https://help.sumologic.com/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-metrics-source/) (Recommended). -3. Namespaces. Select aws/es. -4. Metadata. Add an account field to the source and assign it a value that is a friendly name/alias to your AWS account from which you are collecting metrics. The account field allows you to query metrics. - ![Metadata][image2] -5. Click Save. +3. **Namespaces**. Select **aws/es**. +4. **Metadata**. Add an account field to the source and assign it a value that is a friendly name/alias to your AWS account from which you are collecting metrics. The account field allows you to query metrics. +5. Click **Save**. -### **Field in Field Schema** +### Field in Field Schema -1. [Classic UI](https://help.sumologic.com/docs/get-started/sumo-logic-ui-classic/). In the main Sumo Logic menu, select Manage Data \> Logs \> Fields. - [New UI](https://help.sumologic.com/docs/get-started/sumo-logic-ui/). 
In the top menu select Configuration, and then under Logs select Fields. You can also click the Go To... menu at the top of the screen and select Fields. -2. Search for the `"domainname"` field. +1. [Classic UI](https://help.sumologic.com/docs/get-started/sumo-logic-ui-classic/). In the main Sumo Logic menu, select **Manage Data** \> **Logs** \> **Fields**. + [New UI](https://help.sumologic.com/docs/get-started/sumo-logic-ui/). In the top menu select **Configuration**, and then under **Logs** select **Fields**. You can also click the **Go To...** menu at the top of the screen and select **Fields**. +2. Search for the `domainname` field. 3. If not present, create it. Learn how to create and manage fields [here](https://help.sumologic.com/docs/manage/fields/#manage-fields). -### **Field Extraction Rule(s)** +### Field Extraction Rule(s) Create a Field Extraction Rule for CloudTrail Logs. Learn how to create a Field Extraction Rule [here](https://help.sumologic.com/docs/manage/field-extractions/create-field-extraction-rule/). @@ -288,14 +286,14 @@ account=* region=* _sourceHost=/aws/OpenSearchService/* | fields namespace, domainname ``` -### **Centralized AWS CloudTrail Log Collection** +### Centralized AWS CloudTrail Log Collection In case, you have a centralized collection of CloudTrail logs and are ingesting them from all accounts into a single Sumo Logic CloudTrail log source, create the following Field Extraction Rule to map a proper AWS account(s) friendly name/alias. Create it if not already present or update it as required. -* Rule Name: AWS Accounts -* Applied at: Ingest Time -* Scope (Specific Data): `_sourceCategory=aws/observability/cloudtrail/logs` -* Parse Expression: Enter a parse expression to create an “account” field that maps to the alias you set for each sub account. 
For example, if you used the “dev” alias for an AWS account with ID "528560886094" and the “prod” alias for an AWS account with ID "567680881046", your parse expression would look like: +* **Rule Name**: AWS Accounts +* **Applied at**: Ingest Time +* **Scope (Specific Data)**: `_sourceCategory=aws/observability/cloudtrail/logs` +* **Parse Expression**: Enter a parse expression to create an “account” field that maps to the alias you set for each sub account. For example, if you used the “dev” alias for an AWS account with ID "528560886094" and the “prod” alias for an AWS account with ID "567680881046", your parse expression would look like: ```sql | json "recipientAccountId" @@ -306,7 +304,7 @@ In case, you have a centralized collection of CloudTrail logs and are ingesting | fields account ``` -## **Installing the Amazon OpenSearch app** +## Installing the Amazon OpenSearch app Now that you have set up a collection for Amazon OpenSearch, install the Sumo Logic app to use the pre-configured searches and dashboards that provide visibility into your environment for real-time analysis of overall usage. @@ -314,11 +312,11 @@ import AppInstall from '../../reuse/apps/app-install-v2.md'; -## **Viewing Amazon OpenSearch dashboards** +## Viewing Amazon OpenSearch dashboards ### Overview -The Amazon OpenSearch \- Overview dashboard provides a comprehensive overview of Amazon OpenSearch performance and operational metrics. It displays key information about cluster utilization, user activity, query performance, error logs, and system events. The dashboard is designed to help administrators monitor and optimize their OpenSearch deployment across different domains and regions. +The **Amazon OpenSearch - Overview** dashboard provides a comprehensive overview of Amazon OpenSearch performance and operational metrics. It displays key information about cluster utilization, user activity, query performance, error logs, and system events. 
This dashboard is designed to help administrators monitor and optimize their OpenSearch deployment across different domains and regions.

Use this dashboard to:
* Monitor cluster health and resource utilization by tracking CPU and memory usage across different domain names.
@@ -329,7 +327,7 @@ Use this dashboard to:

### Performance Overview

-The Amazon OpenSearch \- Performance Overview dashboard provides a comprehensive view of the OpenSearch cluster's health, performance, and resource utilization. It offers real-time insights into cluster status, CPU and memory usage, storage metrics, document management, and read/write latencies across different domains.
+The **Amazon OpenSearch - Performance Overview** dashboard provides a comprehensive view of the OpenSearch cluster's health, performance, and resource utilization. It offers real-time insights into cluster status, CPU and memory usage, storage metrics, document management, and read/write latencies across multiple domains.

Use this dashboard to:
-* Monitor the overall health of OpenSearch clusters with color-coded status indicators (green, yellow, red) and quickly identify the number of clusters in each state.
-* Track resource utilization, including average CPU and JVM memory usage, both overall and by individual domain names.
+* Monitor the overall health of OpenSearch clusters with color-coded status indicators (green, yellow, or red) and quickly identify the number of clusters in each state.
+* Track resource utilization, including average CPU and JVM memory usage, both overall and by individual domain names.
* Analyze storage trends and capacity, with graphs showing free storage space and total storage used over time for different domains.
-* Keep tabs on document management activities, including the number of searchable documents and deleted documents per domain.
-* Assess system performance by observing read and write latencies across various domain names, helping to identify potential bottlenecks or areas for optimization.
+* Keep tabs on document management activities, including the number of searchable and deleted documents per domain.
+* Assess system performance by observing read and write latencies across various domain names, helping you to identify potential bottlenecks or areas for optimization.

Fields

### CloudTrail Audit Events

-The Amazon Opensearch \- CloudTrail Audit Events dashboard provides insights across CloudTrail events across location, status, and topic names.
+The **Amazon OpenSearch - CloudTrail Audit Events** dashboard provides insights into CloudTrail events across location, status, and topic names.

Use this dashboard to:
-
* Monitor successful and failed events by location.
-* Get trends of events by status, type.
-* Monitor successful and error events with error code in detail.
+* Get trends of events by status and type.
+* Monitor successful and error events with error codes in detail.
* Get details of domain names and users of both successful and error events.

Fields

### Audit Logs - Failed Logins

-The Amazon OpenSearch \- Audit Logs \- Failed Logins dashboard provides a comprehensive view of login activities, focusing on failed login attempts and authentication errors. It offers insights into the geographical distribution of failed logins, user-specific login failures, cluster-based login issues, and detailed authentication error logs.
+The **Amazon OpenSearch - Audit Logs - Failed Logins** dashboard provides a comprehensive view of login activities, focusing on failed login attempts and authentication errors. It offers insights into the geographical distribution of failed logins, user-specific login failures, cluster-based login issues, and detailed authentication error logs.
Use this dashboard to:
* Monitor the total number of failed user logins at a glance, with a prominent display of the count.
-* Visualize the geographical distribution of failed login attempts on map, helping identify potential security threats or unusual activity patterns from specific regions.
-* Analyze the distribution of login request methods
+* Visualize the geographical distribution of failed login attempts on the map, helping identify potential security threats or unusual activity patterns from specific regions.
+* Analyze the distribution of login request methods.
* Track failed logins by specific users and clusters, allowing for quick identification of problematic accounts or system components.
* Review detailed authentication error logs.

@@ -369,11 +366,11 @@ Use this dashboard to:

### Error Logs - Garbage Collection

-The Amazon OpenSearch \- Error Logs \- Garbage Collection dashboard provides a comprehensive view of garbage collection (GC) activities in AWS OpenSearch Service. It offers insights into GC performance, memory cleanup, and JVM memory usage across different domains. The dashboard helps monitor and optimize the garbage collection process, which is crucial for maintaining the performance and stability of OpenSearch clusters.
+The **Amazon OpenSearch - Error Logs - Garbage Collection** dashboard provides a comprehensive view of Garbage Collection (GC) activities in AWS OpenSearch Service. It offers insights into GC performance, memory cleanup, and JVM memory usage across different domains. The dashboard helps monitor and optimize the garbage collection process, which is crucial for maintaining the performance and stability of OpenSearch clusters.

Use this dashboard to:

-* Monitor the average garbage collection time overall and by domain name, with a trend graph to track changes over time.
+* Monitor the average garbage collection time overall and by domain name, with a trend graph to track changes over time.
* Analyze average cleanup size and trends, to understand the efficiency of the garbage collection process across different domains.
* Compare garbage collection counts across different nodes and domains, helping to identify any imbalances or potential issues in specific parts of the cluster.
* Visualize JVM memory usage before and after garbage collection, providing insights into the effectiveness of memory management and potential memory leaks.
@@ -382,10 +379,10 @@ Use this dashboard to:

### Slow Logs - Queries

-The Amazon Opensearch \- Slow Logs \- Queries dashboard provides a comprehensive view of query performance and behavior within an OpenSearch environment.
+The **Amazon OpenSearch - Slow Logs - Queries** dashboard provides a comprehensive view of query performance and behavior within an OpenSearch environment.

Use this dashboard to:
-* Monitor and analyze slow query performance in OpenSearch
+* Monitor and analyze slow query performance in OpenSearch.
* Visualize the distribution of queries over time by log type, helping to identify patterns or spikes in slow query occurrences.
* Track query hits and shard usage over time, providing insights into overall system load and resource utilization.
* Identify the top 10 slowest queries, including details such as index name, node ID, execution time, and query source for targeted optimization.
@@ -394,7 +391,7 @@ Use this dashboard to:

### Domain Name (Cluster) Performance

-The Amazon OpenSearch \- Domain Name (Cluster) Performance dashboard provides a comprehensive view of cluster performance and resource utilization across different domains. It offers insights into node count, CPU and memory usage, request patterns, and storage metrics for OpenSearch clusters.
+The **Amazon OpenSearch - Domain Name (Cluster) Performance** dashboard provides a comprehensive view of cluster performance and resource utilization across different domains.
It offers insights into node count, CPU and memory usage, request patterns, and storage metrics for OpenSearch clusters. Use this dashboard to: * Monitor the total node count and assess resource utilization with average CPU and JVM memory usage gauges, providing a quick overview of cluster health. @@ -407,12 +404,12 @@ Use this dashboard to: ### Nodes Performance -The Amazon OpenSearch \- Nodes Performance dashboard provides a detailed view of node-level performance metrics for OpenSearch clusters across different domains. It offers insights into search and indexing operations, threadpool activities, and overall cluster health, allowing for granular monitoring and troubleshooting of OpenSearch nodes. +The **Amazon OpenSearch - Nodes Performance** dashboard provides a detailed view of node-level performance metrics for OpenSearch clusters across different domains. It offers insights into search and indexing operations, threadpool activities, and overall cluster health, allowing for granular monitoring and troubleshooting of OpenSearch nodes. Use this dashboard to: * Compare search and indexing performance across different nodes and domains, with visualizations for search/indexing rates and latencies, helping identify potential bottlenecks or underperforming nodes. * Monitor thread pool activities, including search queue times, rejected requests, and write queue metrics, which are crucial for understanding cluster load and capacity issues. -* Track OpenSearch Dashboard health metrics, such as max response time, heap utilization, request totals, and concurrent connections, to ensure optimal performance of the user interface. +* Track OpenSearch dashboard health metrics, such as maximum response time, heap utilization, request totals, and concurrent connections, to ensure optimal performance of the user interface. * Analyze trends in search and indexing rates over time, allowing for the detection of patterns or anomalies that may impact cluster performance. 
* Assess overall cluster health by comparing metrics across different domains, enabling quick identification of domain-specific issues or imbalances. @@ -420,20 +417,20 @@ Use this dashboard to: ### EBS Volume Performance -The Amazon OpenSearch \- EBS Volume Performance dashboard provides a comprehensive view of the performance metrics for Amazon Elastic Block Store (EBS) volumes associated with OpenSearch clusters. It displays various key performance indicators such as read and write latency, I/O operations per second (IOPS), throughput, burst balance, and disk queue depth. +The **Amazon OpenSearch - EBS Volume Performance** dashboard provides a comprehensive view of the performance metrics for Amazon Elastic Block Store (EBS) volumes associated with OpenSearch clusters. It displays various key performance indicators such as read and write latency, I/O operations per second (IOPS), throughput, burst balance, and disk queue depth. Use this dashboard to: * Monitor read and write latency of EBS volumes to ensure optimal response times for OpenSearch operations. * Track read and write IOPS to understand the I/O demand on your EBS volumes and identify any performance constraints. * Analyze read and write throughput to assess the data transfer rates and capacity utilization of your EBS volumes. * Keep an eye on the burst balance to ensure your EBS volumes have sufficient performance credits for handling sudden spikes in workload. -* Observe the disk queue depth to identify potential I/O congestion and optimize your storage configuration for better performance. +* Observe the disk queue depth to identify potential I/O congestion and optimize your storage configuration for better performance. Fields ### Cache Performance -The Amazon OpenSearch \- Cache Performance dashboard provides insights into cache performance, evictions, capacity, and memory usage, which are crucial for maintaining optimal performance of OpenSearch clusters. 
+The **Amazon OpenSearch - Cache Performance** dashboard provides insights into cache performance, evictions, capacity, and memory usage, which are crucial for maintaining the optimal performance of OpenSearch clusters. Use this dashboard to: * Performance tuning of OpenSearch clusters. From ef07230d70e9efa1601312762ab3e5ae03662031 Mon Sep 17 00:00:00 2001 From: Jagadisha V <129049263+JV0812@users.noreply.github.com> Date: Mon, 21 Oct 2024 15:56:32 +0530 Subject: [PATCH 6/7] renamed the app --- cid-redirects.json | 2 +- .../{amazon-opensearch-service.md => amazon-opensearch.md} | 6 +++--- docs/integrations/amazon-aws/index.md | 7 +++++++ sidebars.ts | 2 +- 4 files changed, 12 insertions(+), 5 deletions(-) rename docs/integrations/amazon-aws/{amazon-opensearch-service.md => amazon-opensearch.md} (98%) diff --git a/cid-redirects.json b/cid-redirects.json index e0986e4762..a6179f4401 100644 --- a/cid-redirects.json +++ b/cid-redirects.json @@ -2529,7 +2529,7 @@ "/cid/20152": "/docs/integrations/amazon-aws/amazon-emr", "/cid/20153": "/docs/integrations/amazon-aws/amazon-eventbridge", "/cid/20154": "/docs/integrations/amazon-aws/amazon-gamelift", - "/cid/20155": "/docs/integrations/amazon-aws/amazon-opensearch-service", + "/cid/20155": "/docs/integrations/amazon-aws/amazon-opensearch", "/cid/20156": "/docs/integrations/amazon-aws/aws-elastic-beanstalk", "/cid/20157": "/docs/integrations/amazon-aws/aws-global-accelerator", "/cid/20158": "/docs/integrations/amazon-aws/aws-ground-station", diff --git a/docs/integrations/amazon-aws/amazon-opensearch-service.md b/docs/integrations/amazon-aws/amazon-opensearch.md similarity index 98% rename from docs/integrations/amazon-aws/amazon-opensearch-service.md rename to docs/integrations/amazon-aws/amazon-opensearch.md index 97421c85ab..fda0825331 100644 --- a/docs/integrations/amazon-aws/amazon-opensearch-service.md +++ b/docs/integrations/amazon-aws/amazon-opensearch.md @@ -1,6 +1,6 @@ --- -id: amazon-opensearch-service -title: 
Amazon OpenSearch Service +id: amazon-opensearch +title: Amazon OpenSearch description: Learn about the collection process for the Amazon OpenSearch Service. --- @@ -306,7 +306,7 @@ In case, you have a centralized collection of CloudTrail logs and are ingesting ## Installing the Amazon OpenSearch app -Now that you have set up a collection for Amazon OpenSearch, install the Sumo Logic app to use the pre-configured searches and dashboards that provide visibility into your environment for real-time analysis of overall usage. +Now that you have set up a collection for **Amazon OpenSearch**, install the Sumo Logic app to use the [pre-configured dashboards](#viewing-amazon-opensearch-dashboards) that provide visibility into your environment for real-time analysis of overall usage. import AppInstall from '../../reuse/apps/app-install-v2.md'; diff --git a/docs/integrations/amazon-aws/index.md b/docs/integrations/amazon-aws/index.md index abd89a331d..d5577a61b7 100644 --- a/docs/integrations/amazon-aws/index.md +++ b/docs/integrations/amazon-aws/index.md @@ -163,6 +163,13 @@ This guide has documentation for all of the apps that Sumo provides for Amazon a

A guide to our app for Amazon Kinesis - Streams.

+
+
+ Thumbnail icon +

Amazon OpenSearch

+

Learn about the collection process for the Amazon OpenSearch Service.

+
+
 Thumbnail icon
diff --git a/sidebars.ts b/sidebars.ts
index 3ee96f1b72..f42630e19d 100644
--- a/sidebars.ts
+++ b/sidebars.ts
@@ -2028,7 +2028,7 @@ integrations: [
   'integrations/amazon-aws/inspector',
   'integrations/amazon-aws/inspector-classic',
   'integrations/amazon-aws/kinesis-streams',
-  'integrations/amazon-aws/amazon-opensearch-service',
+  'integrations/amazon-aws/amazon-opensearch',
   'integrations/amazon-aws/rds',
   'integrations/amazon-aws/redshift-ulm',
   'integrations/amazon-aws/route-53-resolver-security',

From 9bf4fc9cdc2cad6d4669631e80445b98ee9a60b7 Mon Sep 17 00:00:00 2001
From: Jagadisha V <129049263+JV0812@users.noreply.github.com>
Date: Mon, 21 Oct 2024 16:29:38 +0530
Subject: [PATCH 7/7] path fix

---
 blog-service/2023/12-31.md                         | 2 +-
 docs/integrations/product-list/product-list-a-l.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/blog-service/2023/12-31.md b/blog-service/2023/12-31.md
index 8630dc2ded..992bf1bcd7 100644
--- a/blog-service/2023/12-31.md
+++ b/blog-service/2023/12-31.md
@@ -267,7 +267,7 @@ The new setup guides for AWS services are:
 - [Amazon EventBridge](/docs/integrations/amazon-aws/amazon-eventbridge/)
 - [Amazon GameLift](/docs/integrations/amazon-aws/amazon-gamelift/)
 - [Amazon MSK Prometheus](/docs/send-data/collect-from-other-data-sources/amazon-msk-prometheus-metrics-collection)
-- [Amazon OpenSearch Service](/docs/integrations/amazon-aws/amazon-opensearch-service/)
+- [Amazon OpenSearch Service](/docs/integrations/amazon-aws/amazon-opensearch/)
 - [AWS Amplify](/docs/integrations/amazon-aws/aws-amplify/)
 - [AWS Application Migration Service](/docs/integrations/amazon-aws/aws-application-migration-service/)
 - [AWS App Runner](/docs/integrations/amazon-aws/aws-apprunner/)
diff --git a/docs/integrations/product-list/product-list-a-l.md b/docs/integrations/product-list/product-list-a-l.md
index 665a76dd6b..2ecd975c99 100644
--- a/docs/integrations/product-list/product-list-a-l.md
+++ b/docs/integrations/product-list/product-list-a-l.md
@@ -55,7 +55,7 @@ For descriptions of the different types of integrations Sumo Logic offers, see [
 | Thumbnail icon | [Amazon Inspector](https://aws.amazon.com/inspector/) | Apps:<br/>- [Amazon Inspector](/docs/integrations/amazon-aws/inspector/)<br/>- [Amazon Inspector Classic](/docs/integrations/amazon-aws/inspector-classic/)<br/>Automation integration: [AWS Inspector](/docs/platform-services/automation-service/app-central/integrations/aws-inspector/)<br/>Cloud SIEM integration: [Amazon AWS - Inspector](https://github.com/SumoLogic/cloud-siem-content-catalog/blob/master/products/ab4056ab-305e-4362-add8-c15c1f7b8afc.md) |
 | Thumbnail icon | [Amazon Kinesis](https://aws.amazon.com/kinesis/) | App: [Amazon Kinesis - Streams](/docs/integrations/amazon-aws/kinesis-streams/)<br/>Collectors:<br/>- [AWS Kinesis Firehose for Logs Source](/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-logs-source/)<br/>- [AWS Kinesis Firehose for Metrics Source](/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-metrics-source/) |
 | Thumbnail icon | [Amazon Prometheus](https://aws.amazon.com/prometheus/) | Collector: [Amazon MSK Prometheus metrics collection](/docs/send-data/collect-from-other-data-sources/amazon-msk-prometheus-metrics-collection/) |
-| Thumbnail icon | [Amazon OpenSearch Service](https://aws.amazon.com/opensearch-service/) | App: [Amazon OpenSearch Service](/docs/integrations/amazon-aws/amazon-opensearch-service/) |
+| Thumbnail icon | [Amazon OpenSearch Service](https://aws.amazon.com/opensearch-service/) | App: [Amazon OpenSearch Service](/docs/integrations/amazon-aws/amazon-opensearch/) |
 | Thumbnail icon | [Amazon RDS](https://aws.amazon.com/rds/) | App: [Amazon RDS](/docs/integrations/amazon-aws/rds/)<br/>Community app: [Sumo Logic for RDS Enhanced Monitoring](https://github.com/SumoLogic/sumologic-content/tree/master/Amazon_Web_Services/AWS_RDS/Enhanced-Monitoring) |
 | Thumbnail icon | [Amazon Redshift](https://aws.amazon.com/pm/redshift/) | App: [Amazon Redshift ULM](/docs/integrations/amazon-aws/redshift-ulm/) |
 | Thumbnail icon | [Amazon Route53](https://aws.amazon.com/route53/) | App: [Amazon Route53 Resolver Security](/docs/integrations/amazon-aws/route-53-resolver-security/)<br/>Automation integration: [AWS Route53](/docs/platform-services/automation-service/app-central/integrations/aws-route53/)<br/>Cloud SIEM integration: [Amazon AWS - Route53](https://github.com/SumoLogic/cloud-siem-content-catalog/blob/master/products/e2393771-bda2-414a-8661-0a57069287ad.md) |
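The rename in this patch series is kept backward compatible by updating the `cid-redirects.json` entry, which maps the legacy `/cid/20155` shortlink to the renamed page. As a minimal sketch of why that one-line change is sufficient — not the actual Docusaurus redirect plumbing, and `resolve()` is a hypothetical helper — resolving such a map is a plain dictionary lookup:

```python
import json

# Reproduces the cid-redirects.json entry updated in this patch series.
redirects = json.loads("""
{
  "/cid/20155": "/docs/integrations/amazon-aws/amazon-opensearch"
}
""")

def resolve(path: str) -> str:
    """Return the redirect target for a known shortlink, else the path unchanged."""
    return redirects.get(path, path)

print(resolve("/cid/20155"))  # -> /docs/integrations/amazon-aws/amazon-opensearch
print(resolve("/cid/99999"))  # unknown shortlinks pass through unchanged
```

Because old `/cid/20155` links resolve to the new `amazon-opensearch` path, the remaining patches only need to fix direct references (sidebar entry, blog post, and product list) rather than every historical link.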