Commit 613ef52

Merge branch 'main' into 404s

2 parents: c7a6eaa + 80a6d26

30 files changed: +542, -12 lines changed

cid-redirects.json

Lines changed: 8 additions & 1 deletion

@@ -1768,7 +1768,10 @@
  "/cid/10321": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/windows",
  "/cid/10322": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/docker",
  "/cid/10323": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/nginx",
- "/cid/10324": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/kafka",
+ "/cid/10340": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/kafka",
+ "/cid/10341": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/postgresql",
+ "/cid/10342": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/mysql",
+ "/cid/10343": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/elasticsearch",
  "/cid/10325": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/apache/changelog",
  "/cid/10326": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/linux/changelog",
  "/cid/10327": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/localfile/changelog",
@@ -1780,6 +1783,10 @@
  "/cid/10337": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/docker/changelog",
  "/cid/10338": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/nginx/changelog",
  "/cid/10339": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/kafka/changelog",
+ "/cid/10344": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/postgresql/changelog",
+ "/cid/10345": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/mysql/changelog",
+ "/cid/10346": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/elasticsearch/changelog",
+ "/cid/10347": "/docs/send-data/opentelemetry-collector/remote-management/source-templates/st-with-secrets",
  "/cid/10822": "/docs/manage/manage-subscription/create-manage-orgs-flex",
  "/cid/10817": "/docs/integrations/sumo-apps/cse",
  "/cid/10818": "/docs/integrations/sumo-apps/cse",

docs/integrations/databases/opentelemetry/postgresql-opentelemetry.md

Lines changed: 2 additions & 2 deletions

@@ -19,7 +19,7 @@ This app supports PostgreSQL version 9.6+.

  We use the OpenTelemetry collector for PostgreSQL metric collection and for collecting PostgreSQL logs.

- The diagram below illustrates the components of the PostgreSQL collection for each database server. OpenTelemetry collector runs on the same host as PostgreSQL, and uses the [PostgreSQL receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/postgresqlreceiver) to obtain PostgreSQL metrics, and the [Sumo Logic OpenTelemetry Exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/sumologicexporter) to send the metrics to Sumo Logic. MySQL logs are sent to Sumo Logic through a [filelog receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/filelogreceiver).
+ The diagram below illustrates the components of the PostgreSQL collection for each database server. OpenTelemetry collector runs on the same host as PostgreSQL, and uses the [PostgreSQL receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/postgresqlreceiver) to obtain PostgreSQL metrics, and the [Sumo Logic OpenTelemetry Exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/sumologicexporter) to send the metrics to Sumo Logic. PostgreSQL logs are sent to Sumo Logic through a [filelog receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/filelogreceiver).

  <img src='https://sumologic-app-data-v2.s3.amazonaws.com/dashboards/Postgresql-OpenTelemetry/PostgreSQL-Schematics.png' alt="Schematics" />

@@ -107,7 +107,7 @@ import SetupColl from '../../../reuse/apps/opentelemetry/set-up-collector.md';

  ### Step 2: Configure integration

- In this step, you will configure the yaml file required for Mysql collection.
+ In this step, you will configure the yaml file required for PostgreSQL collection.

  Below is the required input:

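As context for this fix, the collection flow the corrected paragraph describes (PostgreSQL receiver for metrics, filelog receiver for logs, Sumo Logic exporter to ship both) typically maps onto a collector configuration along the following lines. This is a minimal illustrative sketch, not the YAML that ships with the app or source template; the endpoint, credentials, environment variable names, and log path are placeholders.

```yaml
# Illustrative sketch only -- not the template's actual configuration.
# Endpoint, credentials, env var names, and log path are placeholders.
receivers:
  postgresql:
    endpoint: localhost:5432              # PostgreSQL host:port (placeholder)
    username: ${env:POSTGRES_USERNAME}    # monitoring user (placeholder)
    password: ${env:POSTGRES_PASSWORD}
    tls:
      insecure: true                      # assumes a local, non-TLS connection
  filelog:
    include:
      - /var/log/postgresql/*.log         # placeholder path to PostgreSQL logs

exporters:
  sumologic:                              # exact exporter settings vary by collector distribution
    endpoint: ${env:SUMO_HTTP_ENDPOINT}

service:
  pipelines:
    metrics:
      receivers: [postgresql]
      exporters: [sumologic]
    logs:
      receivers: [filelog]
      exporters: [sumologic]
```
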
docs/platform-services/automation-service/automation-service-playbooks.md

Lines changed: 47 additions & 1 deletion

@@ -200,7 +200,7 @@ A filter node filters results from the preceding action based on the condition y
  1. [Add an action node](#add-an-action-node-to-a-playbook).
  1. Hover your mouse over an action node and click the **+** button. The available nodes are displayed. <br/><img src={useBaseUrl('img/platform-services/automation-service/automations-add-filter-node.png')} alt="Add filter node" style={{border:'1px solid gray'}} width="500"/>
  1. Click **Filter**. The filter node configuration dialog displays. <br/><img src={useBaseUrl('img/platform-services/automation-service/automations-add-filter-node-conditions.png')} alt="Add filter node conditions" style={{border:'1px solid gray'}} width="500"/>
- 1. (Optional) Use **Split by** to select an output if it is a list (array) and you want to evaluate each item separately. Each item in the list is checked against the filter condition. If the condition is true for an item, the item is passed to the next node. (If you do not use the **Split by** field on an output that is a list, then if the condition is true for any item in the list, the entire list moves forward to the next node.)
+ 1. (Optional) Use **Split by** to select an output if it is a list (array) and you want to evaluate each item separately. See ["Split by" field in a filter node](#split-by-field-in-a-filter-node) for more information.
  1. Configure the conditions you want to use for filtering.
  1. Deselect the **Cartesian product** checkbox.
  :::warning
@@ -699,6 +699,52 @@ Following are examples of payloads from different trigger types:
  }
  ```

+ ## Handling arrays in playbooks
+
+ An array is a collection of related data values grouped together. When you are handling output from a playbook action, you may want to treat the entire array as a single item you want to pass to the next action, or you may want to treat each element in the array as a separate item. In playbooks, you can do either.
+
+ ### Arrays in text areas
+
+ When you create an action, sometimes you are presented with a text area that includes an "Insert placeholder" icon <img src={useBaseUrl('img/platform-services/automation-service/playbook-insert-placeholder-icon.png')} style={{border:'1px solid gray'}} alt="Insert placeholder icon" width="20"/>. When you click the icon, it allows you to add placeholders to the text area for input or output.
+
+ Perform the following steps to add a placeholder to a text area to handle an array in output from a previous action. This allows you to process an array as a single element or multiple elements.
+ 1. [Create a playbook](#create-a-new-playbook) and [add action nodes](#add-an-action-node-to-a-playbook).
+ 1. Edit an action node that displays a text area.
+ 1. In the following example, the **Send Email** action shows text areas for the email's subject, body, and HTML. Click an "Insert placeholder" icon <img src={useBaseUrl('img/platform-services/automation-service/playbook-insert-placeholder-icon.png')} style={{border:'1px solid gray'}} alt="Insert placeholder icon" width="20"/> for one of the fields, for example, **HTML Content**.<br/><img src={useBaseUrl('img/platform-services/automation-service/playbook-variables-in-text-boxes.png')} style={{border:'1px solid gray'}} alt="Insert placeholder icon" width="600"/>
+ 1. Select a value from a previous action. In this example, we'll choose **Get Insight**.<br/><img src={useBaseUrl('img/platform-services/automation-service/playbook-get-value-from-previous-action.png')} style={{border:'1px solid gray'}} alt="Get value from previous action" width="500"/>
+ 1. Select **Outputs**. Only the arrays in the output show these icons: <img src={useBaseUrl('img/platform-services/automation-service/playbooks-output-arrays-icons.png')} style={{border:'1px solid gray'}} alt="Icons on arrays in output" width="60"/> <br/><img src={useBaseUrl('img/platform-services/automation-service/playbook-get-value-from-previous-action-2.png')} style={{border:'1px solid gray'}} alt="Get value from previous action outputs" width="500"/>
+ 1. Click the icon for how you want the array to be handled by the action:
+ * <img src={useBaseUrl('img/platform-services/automation-service/array-icon-loop.png')} style={{border:'1px solid gray'}} alt="Loop through elements in the array" width="30"/> **Loop**. Loops through the array so that the action is run for each item in the array.
+ * <img src={useBaseUrl('img/platform-services/automation-service/array-icon-combine.png')} style={{border:'1px solid gray'}} alt="Combine all elements in the array" width="30"/> **Combine**. Combines all items in the array into a single value run by the action.
+ 1. The variable is inserted into the text area preceded by the icon for whether the contents of the array are looped or combined.<br/><img src={useBaseUrl('img/platform-services/automation-service/playbook-array-looped-example.png')} style={{border:'1px solid gray'}} alt="Example of looped array variable" width="700"/>
+
+ In this example, the action will be run for each item in the array ("looped").
+
+ :::note
+ The [**Cartesian Product**](#cartesian-product) checkbox is disabled if any variable is selected using the loop feature in the text area.
+ <img src={useBaseUrl('img/platform-services/automation-service/playbook-cartesian-product-disabled.png')} style={{border:'1px solid gray'}} alt="Cartesian Product checkbox disabled" width="500"/>
+ :::
+
+ ### Cartesian product
+
+ The **Cartesian product** checkbox appears on nodes you add to playbooks. Clicking this checkbox causes the node to use the [Cartesian product](https://en.wikipedia.org/wiki/Cartesian_product) method to loop through items in arrays processed by the node.
+
+ <img src={useBaseUrl('img/platform-services/automation-service/playbooks-cartesian-product-checkbox.png')} style={{border:'1px solid gray'}} alt="Cartesian product checkbox" width="150"/>
+
+ For example, suppose one input field is for signal name, and another is for signal value. If you have 2 arrays like this, and each array has 3 items, the Cartesian product evaluation pairs each item from the first set with each item from the second set, which will produce 9 pairs (3x3). Without Cartesian product evaluation, only matching position items are paired, which will produce 3 pairs (equal to the number of items).
+
+ :::warning
+ Use the **Cartesian product** checkbox with caution. For most cases, deselect the **Cartesian product** checkbox when creating playbooks. Large array fields in the input can result in the action being called many times, causing the action to exceed the [actions limit](/docs/platform-services/automation-service/about-automation-service/#actions-limit). Only select this checkbox if you want to evaluate data from array input fields using the Cartesian product method.
+ :::
+
+ ### "Split by" field in a filter node
+
+ When you [add a filter node](#add-a-filter-node-to-a-playbook), use the **Split by** field to evaluate each item separately in arrays (lists).
+
+ <img src={useBaseUrl('img/platform-services/automation-service/playbook-split-by.png')} style={{border:'1px solid gray'}} alt="Split by field" width="700"/>
+
+ Each item in arrays is checked against the filter condition. If the condition is true for an item, the item is passed to the next node. If you do not use the **Split by** field on an output that is a list, then if the condition is true for any item in the list, the entire list moves forward to the next node.
+
  ## Troubleshoot playbooks

  You can run playbooks in automations for [monitors](/docs/alerts/monitors/use-playbooks-with-monitors/), [Cloud SIEM](/docs/cse/automation/automations-in-cloud-siem/), or [Cloud SOAR](/docs/cloud-soar/automation/). If a playbook has a problem when it runs in an automation, an error message often displays in the playbook providing information about the problem.
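To make the 3x3 example in the new **Cartesian product** section concrete, here is a hypothetical pair of three-item arrays and the pairings each mode produces. The field names and values are invented purely for illustration.

```yaml
# Hypothetical outputs from a previous action (names and values invented for illustration)
signal_names:  [login_failure, port_scan, malware_hit]
signal_values: [12, 7, 3]

# Cartesian product selected: every name is paired with every value -> 9 action runs (3 x 3)
#   (login_failure, 12)  (login_failure, 7)  (login_failure, 3)
#   (port_scan, 12)      (port_scan, 7)      (port_scan, 3)
#   (malware_hit, 12)    (malware_hit, 7)    (malware_hit, 3)

# Cartesian product deselected: items are paired by position -> 3 action runs
#   (login_failure, 12)  (port_scan, 7)  (malware_hit, 3)
```

Because the number of runs grows multiplicatively with the size of each array, the warning added in this commit recommends leaving the checkbox deselected in most cases.
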

docs/reuse/cartesian-product.md

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
- Use the **Cartesian product** checkbox with caution. Large array fields in the input can result in the action being called many times, causing the action to exceed the [actions limit](/docs/platform-services/automation-service/about-automation-service/#actions-limit). Only select this checkbox if you want to evaluate data from array input fields using the [Cartesian product](https://en.wikipedia.org/wiki/Cartesian_product) method. For example, suppose one input field is for signal name, and another is for signal value. If you have 2 arrays like this, and each array has 3 items, the Cartesian product evaluation pairs each item from the first set with each item from the second set, which will produce 9 pairs (3x3). Without Cartesian product evaluation, only matching position items are paired, which will produce 3 pairs (equal to the number of items).
+ Use the **Cartesian product** checkbox with caution. In most cases, you should deselect this checkbox. For more information, see [Cartesian product](/docs/platform-services/automation-service/automation-service-playbooks/#cartesian-product).

docs/search/get-started-with-search/search-page/log-level.md

Lines changed: 3 additions & 1 deletion

@@ -54,7 +54,9 @@ Sumo Logic detects five log levels out of the box: FATAL, ERROR, WARN, INFO, and

  Log-Level pattern detection is automatic, meaning you do not need to parse log levels manually or write specific queries to see your distribution of error logs.

- If the log message is in JSON format, the log level detection method searches for the presence of keys such as "level", "Level", "loglevel", "logLevel", "Loglevel", "LogLevel", "log_level", "log-level", "Log_Level", "Log_level", "severity", or "_loglevel." If any of these keys are identified in the log message, their corresponding values will be considered and displayed in the results. And if the log message is in a non-JSON format, the log level detection method looks for keywords such as "debug", "info/information", "warn/warning", and "error." If any of these keywords are found in the log message, their corresponding values will be considered and displayed in the results.
+ If the log message is in JSON format, the log level detection method searches for the presence of keys such as "level", "Level", "loglevel", "logLevel", "Loglevel", "LogLevel", "log_level", "log-level", "Log_Level", "Log_level", "severity", or "_loglevel". If any of these keys are identified in the log message, their corresponding values will be considered and displayed in the results. If any of these specified log level keys are not found in JSON log messages, the log level detection method falls back to a plain text search for terms like "debug", "info/information", "warn/warning", and "error." But this fallback mechanism can result in false positives, especially when these terms appear in other contexts like encoded data fields.
+
+ And if the log message is in a non-JSON format, the log level detection method looks for keywords such as "debug", "info/information", "warn/warning", and "error". If any of these keywords are found in the log message, their corresponding values will be considered and displayed in the results.

  :::info
  If multiple log levels are detected in the message, they will be prioritized in the following order: ERROR > WARN > INFO > DEBUG.
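The difference between the JSON key lookup and the plain-text fallback described in the new wording is easier to see with a few invented sample messages (timestamps and message text are placeholders):

```yaml
# Invented sample log messages, for illustration only
json_log_with_level_key: '{"timestamp": "2025-01-30T12:00:01Z", "level": "ERROR", "msg": "connection refused"}'
#   -> the recognized "level" key is present, so ERROR is used directly

json_log_without_level_key: '{"timestamp": "2025-01-30T12:00:02Z", "msg": "user info updated"}'
#   -> no recognized key, so the plain-text fallback matches "info" and may report INFO (a false positive)

plain_text_log: '2025-01-30 12:00:03 WARN disk usage at 91%'
#   -> the keyword search finds "warn", so WARN is reported
```
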

docs/send-data/hosted-collectors/cloud-to-cloud-integration-framework/lastpass-source.md

Lines changed: 2 additions & 0 deletions

@@ -44,6 +44,7 @@ To configure the LastPass Source:
  * ![orange exclamation point.png](/img/reuse/orange-exclamation-point.png) An orange triangle with an exclamation point is shown when the field doesn't exist in the Fields table schema. In this case, an option to automatically add the nonexistent fields to the Fields table schema is provided. If a field is sent to Sumo Logic that does not exist in the Fields schema it is ignored, known as dropped.
  1. In **CID (Account Number)**, enter your CID account number collected from the LastPass platfrorm.
  1. In **API Secret**, enter your API Secret ID collected from the LastPass platfrorm.
+ 1. In **TimeZone**, enter the timezone of admin LastPass account.
  1. **Polling Interval**. You have the option to select how often to poll for base entry events. Default is 5 minutes.
  1. When you are finished configuring the source, click **Save**.

@@ -67,6 +68,7 @@ Sources can be configured using UTF-8 encoded JSON files with the Collector Ma
  | fields | JSON Object | No | `null` | JSON map of key-value fields (metadata) to apply to the Collector or Source. Use the boolean field _siemForward to enable forwarding to SIEM.|`{"_siemForward": false, "fieldA": "valueA"}` |
  | cid | Integer | Yes | `null` | The CID account number collected from the LastPass platform. | |
  | apiSecret | String | Yes | `null` | The API Secret ID collected from the LastPass platform. | |
+ | timeZone | String | No | `null` | Timezone of admin LastPass account. |
  | pollingIntervalMinutes | Integer | No | 5 | How frequently the integration should poll to LastPass. <br /> **Options**: 5m, 10m, 15m, 30m, 1h, or 24h. | |

  ### JSON example
Lines changed: 11 additions & 0 deletions

@@ -0,0 +1,11 @@
+ ---
+ id: changelog
+ title: Changelog
+ sidebar_label: Changelog
+ description: Changelog for Elasticsearch source template for OpenTelemetry.
+ ---
+
+ ## [1.0.0] - 2025-01-30
+
+ ### Added
+ - Initial version of Elasticsearch source template.