
Commit 51c2918

Merge branch 'main' into DOCS-467

2 parents: 9834cd6 + b32a898

1,174 files changed: +1,427 additions, -1,058 deletions


.clabot

Lines changed: 3 additions & 1 deletion

@@ -167,7 +167,9 @@
 "sachin-sumologic",
 "Andrew-L-Johnson",
 "Ayah-Saleh",
-"ishaanahuja29"
+"ishaanahuja29",
+"raunakmandaokar",
+"bradtho"
 ],
 "message": "Thank you for your contribution! As this is an open source project, we require contributors to sign our Contributor License Agreement and do not have yours on file. To proceed with your PR, please [sign your name here](https://forms.gle/YgLddrckeJaCdZYA6) and we'll add you to our approved list of contributors.",
 "label": "cla-signed",

blog-cse/2024-10-04-content.md

Lines changed: 221 additions & 0 deletions
Large diffs are not rendered by default.

blog-service/2022/12-31.md

Lines changed: 1 addition & 1 deletion
@@ -663,7 +663,7 @@ Update - We’ve eased the process of offboarding Sumo Logic users. Now, when yo
 ---
 ## February 18, 2022 (Monitors)
 
-Update - The [Monitors page](/docs/alerts/monitors) has a new shortcut to quickly view triggered alerts from a Monitor. Hover your cursor over the Status column of a Monitor and click the icon to open [Alert List](/docs/alerts/monitors/alert-response/#alerts-list).
+Update - The [Monitors page](/docs/alerts/monitors) has a new shortcut to quickly view triggered alerts from a Monitor. Hover your cursor over the Status column of a Monitor and click the icon to open [Alert List](/docs/alerts/monitors/alert-response/#alert-list).
 
 ---
 ## February 12, 2022 (Apps)

blog-service/2023/12-31.md

Lines changed: 1 addition & 1 deletion
@@ -423,7 +423,7 @@ We're excited to introduce a new addition to Sumo Logic account management. Org
 Here's how to export detailed child usages:
 
 1. In the left navigation bar, select **Administration > Account**. The Account Overview tab is shown by default.
-1. Click on the kebab button and select **Download Detailed Child Usages**, to export/dowload the detailed child usages.<br/><img src={useBaseUrl('img/account/download-detailed-child-usages.png')} alt="download-detailed-child-usages" width="650" style={{border: '1px solid gray'}}/>
+1. Click on the kebab button and select **Download Detailed Child Usages**, to export/dowload the detailed child usages.<br/><img src={useBaseUrl('img/manage/account/download-detailed-child-usages.png')} alt="download-detailed-child-usages" width="650" style={{border: '1px solid gray'}}/>
 
 
 ---

blog-service/2024-10-03-manage.md

Lines changed: 2 additions & 2 deletions
@@ -1,5 +1,5 @@
 ---
-title: Forward raw log data to S3 - Beta (Manage)
+title: Forward raw log data to S3 (Manage)
 image: https://help.sumologic.com/img/sumo-square.png
 keywords:
 - data forwarding

@@ -13,6 +13,6 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
 
 We are happy to announce that you can now configure the schema and format of log data forwarded from Sumo Logic to an S3 destination. Previously, forwarding was limited to raw log data along with its metadata and enriched fields. Now, you have the flexibility to choose between forwarding only log data, log data with metadata, or log data with metadata and enriched fields, in either CSV or JSON format. This enhanced flexibility enables you to perform more precise analytics on the data using your preferred tools.
 
-<img src={useBaseUrl('img/data-forwarding/forward-raw-data.png')} alt="Options to forward raw data" style={{border: '1px solid gray'}} width="450"/>
+<img src={useBaseUrl('img/manage/data-forwarding/forward-raw-data.png')} alt="Options to forward raw data" style={{border: '1px solid gray'}} width="450"/>
 
 To learn more, see the *Forward data to an S3 forwarding destination* section in our article [Forward Data from Sumo Logic to S3](/docs/manage/data-forwarding/amazon-s3-bucket).
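To make the three payload shapes described in that release note concrete, here is a minimal sketch. The record and field names are hypothetical illustrations, not Sumo Logic's actual forwarding schema:

```python
import csv
import io
import json

# Hypothetical log record; these field names are illustrative only.
raw = {"message": "GET /health 200"}
metadata = {"_sourceCategory": "prod/api", "_collector": "api-collector"}
enriched = {"threat_level": "none"}

# The three forwarding choices: log only, log + metadata, or
# log + metadata + enriched fields.
log_only = dict(raw)
log_with_metadata = {**raw, **metadata}
log_full = {**raw, **metadata, **enriched}

# JSON format: one object per record.
print(json.dumps(log_full))

# CSV format: a header row plus one row per record.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(log_full))
writer.writeheader()
writer.writerow(log_full)
print(buf.getvalue().strip())
```

Either serialization can then be written to the S3 destination; the choice only affects how downstream tools parse the forwarded objects.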

docs/alerts/monitors/alert-grouping.md

Lines changed: 16 additions & 16 deletions
@@ -10,7 +10,7 @@ Alert grouping gives you the flexibility to customize how your alerts and notifi
 
 You could group by `_collector` field, for example, and one alert would be generated per `_collector`. You can also have a monitor generate and resolve more than one alert based on specific conditions. For this example below, let's say you're monitoring the ErrorRate for all of your services and want to get an alert for each service that breaches a specific error threshold. Rather than creating multiple monitors for each service, you can create one single monitor that does this.
 
-<img src={useBaseUrl('img/monitors/alert_grouping.png')} alt="alert-grouping" />
+<img src={useBaseUrl('img/alerts/monitors/alert_grouping.png')} alt="alert-grouping" />
 
 
 ## Setup

@@ -26,7 +26,7 @@ Alert grouping works for both logs and metrics monitors.
 4. Enter your metrics query, then select your desired alert grouping option.
    * **One alert per monitor**. If you only want to receive a single alert for the entire monitor.
    * **One alert per time series**. To receive a single alert for each time-series that is present in the metric query
-   * **One alert per [group]**. Allows you to receive one notification per each unique value of the grouping field(s). You can pick more than one field for the grouping condition. In the example below, user will receive one notification when CPU utilization is higher than the threshold for each unique AWS namespace within an account.<br/><img src={useBaseUrl('img/monitors/setup-metrics.png')} alt="setup-metrics.png" />
+   * **One alert per [group]**. Allows you to receive one notification per each unique value of the grouping field(s). You can pick more than one field for the grouping condition. In the example below, user will receive one notification when CPU utilization is higher than the threshold for each unique AWS namespace within an account.<br/><img src={useBaseUrl('img/alerts/monitors/setup-metrics.png')} alt="setup-metrics.png" />
 5. Configure the rest of your alert condition per standard procedure. Refer to [Monitors](/docs/alerts/monitors) for more details.
 
 

@@ -37,7 +37,7 @@ Alert grouping works for both logs and metrics monitors.
 3. Select **Logs** as the type of monitor.
 4. Enter your logs query, then select your desired alert grouping option:
    * **One alert per monitor**. Choose this option if you want to only receive a single alert for the entire monitor.
-   * **One alert per [group]**. Allows you to receive one notification per each unique value of the grouping field(s). You can pick more than one field for the grouping condition. In the example below, you would receive one alert for each `service` that has error count greater than 50. The input field has an auto-completion dropdown that allows you to select all the applicable fields from your query.<br/><img src={useBaseUrl('img/monitors/setup-logs.png')} alt="setup-logs.png" style={{border: '1px solid gray'}} width="800" />
+   * **One alert per [group]**. Allows you to receive one notification per each unique value of the grouping field(s). You can pick more than one field for the grouping condition. In the example below, you would receive one alert for each `service` that has error count greater than 50. The input field has an auto-completion dropdown that allows you to select all the applicable fields from your query.<br/><img src={useBaseUrl('img/alerts/monitors/setup-logs.png')} alt="setup-logs.png" style={{border: '1px solid gray'}} width="800" />
 5. Configure the rest of your alert condition per standard procedure. Refer to [Monitors](/docs/alerts/monitors) for more details.
 
 The input field has an auto-completion dropdown that allows you to select all the applicable fields from your query.

@@ -56,12 +56,12 @@ Notifications will not be sent for alert groups that already have an active aler
 A user wants to create a monitor to track CPU across services, and wants to get notified if any node within a service has CPU > 60%.
 
 * **Query**. `metric=CPU_sys`.
-* **Group Condition** service <br/><img src={useBaseUrl('img/monitors/usecase1.png')} alt="alert-grouping" style={{border: '1px solid gray'}} width="800" />
+* **Group Condition** service <br/><img src={useBaseUrl('img/alerts/monitors/usecase1.png')} alt="alert-grouping" style={{border: '1px solid gray'}} width="800" />
 * **Alert Evaluation Logic**. If `CPU_sys` for any node within a service is greater than `60`, then an alert notification will be generated for that service (if it was not already generated).
 * **Recovery Evaluation Logic**.
   * If `CPU_sys` for all the nodes within a service is less than equal to `60`, then recover the alert for that particular service.
   * Chart below shows how the alert and recovery notification would have fired for some hypothetical services under various times (t0–t3).
-  * Red boxes show that triggered the alert, and green boxes show what resolved the alerts.<br/><img src={useBaseUrl('img/monitors/usecase1x.png')} alt="alert-grouping" />
+  * Red boxes show that triggered the alert, and green boxes show what resolved the alerts.<br/><img src={useBaseUrl('img/alerts/monitors/usecase1x.png')} alt="alert-grouping" />
 
 
 

@@ -70,49 +70,49 @@ A user wants to create a monitor to track CPU across services, and wants to get
 A user wants to create a monitor to track CPU and be notified if any node within a service has CPU > 60%, for a given env.
 
 * **Query**. `metric=CPU_sys`.
-* **Group Condition**. service, env <br/><img src={useBaseUrl('img/monitors/usecase2.png')} alt="alert-grouping" style={{border: '1px solid gray'}} width="800" />
+* **Group Condition**. service, env <br/><img src={useBaseUrl('img/alerts/monitors/usecase2.png')} alt="alert-grouping" style={{border: '1px solid gray'}} width="800" />
 * **Alert Evaluation Logic**. If `CPU_sys` for any node within a service,env is greater than `60`, then an alert notification will be generated for that service within a given environment (if it was not already generated).
 * **Recovery Evaluation Logic**.
   * If `CPU_sys` for all the nodes within a service,env is less than equal to `60`, then recover the alert for that particular service within a given environment.
   * Chart below shows how the alert and recovery notification would have fired for some hypothetical service, env under various times (T0 -T3).
-  * Red boxes shows that triggered the alert, and green boxes shows what resolved the alerts.<br/><img src={useBaseUrl('img/monitors/usecase2x.png')} alt="alert-grouping" />
+  * Red boxes shows that triggered the alert, and green boxes shows what resolved the alerts.<br/><img src={useBaseUrl('img/alerts/monitors/usecase2x.png')} alt="alert-grouping" />
 
 ### Logs monitor with multiple alert group fields
 
 A user wants to create a monitor to track errors and be notified if any service in a given env has more than 100 errors.
 
 * **Query**. `error`
-* **Group Condition**. service, env<br/><img src={useBaseUrl('img/monitors/usecase3.png')} alt="alert-grouping" style={{border: '1px solid gray'}} width="800" />
+* **Group Condition**. service, env<br/><img src={useBaseUrl('img/alerts/monitors/usecase3.png')} alt="alert-grouping" style={{border: '1px solid gray'}} width="800" />
 * **Alert Evaluation Logic**. If count of `errors` for any service,env is greater than `100`, then an alert notification will be generated for that service within a given environment (if it was not already generated).
 * **Recovery Evaluation Logic**.
   * If count of errors for any service is less than or equal to `100`, then recover the alert for that particular service within a given environment.
   * Chart below shows how the alert and recovery notification would have fired for some hypothetical services under various times (t0–t3).
-  * Red boxes show what triggered the alert, and green boxes show what resolved the alerts.<br/><img src={useBaseUrl('img/monitors/usecase3x.png')} alt="alert-grouping" />
+  * Red boxes show what triggered the alert, and green boxes show what resolved the alerts.<br/><img src={useBaseUrl('img/alerts/monitors/usecase3x.png')} alt="alert-grouping" />
 
 
 ### Logs monitor on a field with alert group
 
 A user wants to create a monitor to track latency from log messages, and wants to get notified if any service has more than 2-second latency.
 
 * **Query**. `* | parse Latency:*s as latency` (parse out latency field from logs)
-* **Group Condition**. service <br/><img src={useBaseUrl('img/monitors/usecase4.png')} alt="alert-grouping" style={{border: '1px solid gray'}} width="800" />
+* **Group Condition**. service <br/><img src={useBaseUrl('img/alerts/monitors/usecase4.png')} alt="alert-grouping" style={{border: '1px solid gray'}} width="800" />
 * **Alert Evaluation Logic**. If Latency field for any service is greater than 2 seconds, then an alert notification will be generated for that service (if it was not already generated).
 * **Recovery Evaluation Logic**.
   * If the latency field for any service is less than 2 seconds, then recover the alert for that particular service.
   * Chart below shows how the alert and recovery notification would have fired for some hypothetical services under various times (t0–t3)
-  * Red boxes show what triggered the alert, and green boxes show what resolved the alerts.<br/><img src={useBaseUrl('img/monitors/usecase4x.png')} alt="alert-grouping" />
+  * Red boxes show what triggered the alert, and green boxes show what resolved the alerts.<br/><img src={useBaseUrl('img/alerts/monitors/usecase4x.png')} alt="alert-grouping" />
 
 
 ### Missing data metrics monitor with alert group
 
 A user wants to get an alert if all hosts from a given service has stopped sending data. User wants one part per service.
 
 * **Query**. `metric=CPU_sys`
-* **Group Condition**. service <br/><img src={useBaseUrl('img/monitors/usecase5.png')} alt="alert-grouping" style={{border: '1px solid gray'}} width="800" />
+* **Group Condition**. service <br/><img src={useBaseUrl('img/alerts/monitors/usecase5.png')} alt="alert-grouping" style={{border: '1px solid gray'}} width="800" />
 * **Alert Evaluation Logic**. If all the hosts stop sending data (`CPU_sys` metric is not being sent) then generate an alert for a given service, then an alert notification will be generated for that service (if it was not already generated). The list of hosts for a service will be computed and updated on a periodic basis.
 * **Recovery Evaluation Logic**.
   * If any of the hosts for a given service start sending the data, then resolve the alert.
-  * If a host stops sending data for more than 24 hours, then remove that host from the list of hosts for a service. Evaluate again if `missingData` is resolved based on the remaining hosts. If yes, then resolve; if not, then keep it open.<br/><img src={useBaseUrl('img/monitors/usecase5x.png')} alt="alert-grouping" />
+  * If a host stops sending data for more than 24 hours, then remove that host from the list of hosts for a service. Evaluate again if `missingData` is resolved based on the remaining hosts. If yes, then resolve; if not, then keep it open.<br/><img src={useBaseUrl('img/alerts/monitors/usecase5x.png')} alt="alert-grouping" />
 
 
 ## Sumo Logic recommended monitors

@@ -128,7 +128,7 @@ This alert can be useful if you suspect that one of your collectors has stopped
 | round(total_bytes / 1024 / 1024) as total_mbytes
 | fields total_mbytes, collector
 ```
-* **Group Condition**. `collector` <br/><img src={useBaseUrl('img/monitors/Suggested-Monitors.png')} alt="alert-grouping" style={{border: '1px solid gray'}} width="800" />
+* **Group Condition**. `collector` <br/><img src={useBaseUrl('img/alerts/monitors/Suggested-Monitors.png')} alt="alert-grouping" style={{border: '1px solid gray'}} width="800" />
 
 ## FAQ
 

@@ -138,11 +138,11 @@ You can select up to a maximum of 10 fields. This applies to both logs and metri
 
 #### My field is not appearing under "One alert per [group]" fields dropdown. Why is that?
 
-This scenario, which is only applicable for logs monitors (not for metrics), can happen if you have [dynamically parsed fields](/docs/search/get-started-with-search/build-search/dynamic-parsing) in your query. The auto-complete system uses a 15-minute time range to parse out all the dynamically parsed fields. If those fields are not present in the last 15-minute query, they will not show up in the dropdown. To resolve this, you could manually type in the name of the field, and it should work fine at runtime.<br/><img src={useBaseUrl('img/monitors/alertsdropdown.png')} alt="alert-grouping" width="350" />
+This scenario, which is only applicable for logs monitors (not for metrics), can happen if you have [dynamically parsed fields](/docs/search/get-started-with-search/build-search/dynamic-parsing) in your query. The auto-complete system uses a 15-minute time range to parse out all the dynamically parsed fields. If those fields are not present in the last 15-minute query, they will not show up in the dropdown. To resolve this, you could manually type in the name of the field, and it should work fine at runtime.<br/><img src={useBaseUrl('img/alerts/monitors/alertsdropdown.png')} alt="alert-grouping" width="350" />
 
 #### How does "One alert per [group]" impact alert audit logs?
 
-Each alert generated in Sumo Logic generates an **Alert Created** audit log entry. When an alert is generated for specific grouping condition, the grouping information is captured in the audit log under the **alertingGroup** > **groupKey**.<br/><img src={useBaseUrl('img/monitors/alertauditlogs.png')} alt="alert-grouping" />
+Each alert generated in Sumo Logic generates an **Alert Created** audit log entry. When an alert is generated for specific grouping condition, the grouping information is captured in the audit log under the **alertingGroup** > **groupKey**.<br/><img src={useBaseUrl('img/alerts/monitors/alertauditlogs.png')} alt="alert-grouping" />
 
 
 #### What happens if field(s) used in my alert grouping condition don’t exist or have null values?
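The "one alert per [group]" behavior this file documents can be sketched in a few lines. The records and field names below are hypothetical; this illustrates the evaluation logic described above, not Sumo Logic's implementation:

```python
from collections import defaultdict

def evaluate_groups(records, group_fields, threshold):
    """Count records per unique combination of group_fields and return the
    groups whose count exceeds threshold -- one alert per such group."""
    counts = defaultdict(int)
    for rec in records:
        key = tuple(rec.get(f) for f in group_fields)
        counts[key] += 1
    return {key: n for key, n in counts.items() if n > threshold}

# Hypothetical error records, grouped by service and env as in the
# "Logs monitor with multiple alert group fields" use case.
records = [
    {"service": "checkout", "env": "prod"},
    {"service": "checkout", "env": "prod"},
    {"service": "search", "env": "prod"},
]
print(evaluate_groups(records, ["service", "env"], threshold=1))
# → {('checkout', 'prod'): 2}
```

Each key in the result corresponds to one alert, matching the documented behavior of one notification per unique value of the grouping field(s).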

docs/alerts/monitors/alert-response-faq.md

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ For example, in Slack, you can add the following section to the **Alert Payload*
 },
 ```
 
-![alertResponseURLExample.png](/img/monitors/alertResponseURLExample.png)
+![alertResponseURLExample.png](/img/alerts/monitors/alertResponseURLExample.png)
 
 Learn more about [Alert Variables](/docs/alerts/monitors/alert-variables).
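Most hunks in this commit are mechanical rewrites of image paths, e.g. `img/monitors/` to `img/alerts/monitors/` and `img/account/` to `img/manage/account/`. A migration of that shape can be sketched as follows (a hypothetical helper, not the script actually used for this commit):

```python
import tempfile
from pathlib import Path

def migrate_paths(root: Path, old: str, new: str) -> int:
    """Rewrite every occurrence of `old` to `new` in Markdown files under
    `root`; return the number of files modified."""
    changed = 0
    for md in root.rglob("*.md"):
        text = md.read_text(encoding="utf-8")
        if old in text:
            md.write_text(text.replace(old, new), encoding="utf-8")
            changed += 1
    return changed

# Demo on a throwaway docs tree mirroring one line from this commit.
root = Path(tempfile.mkdtemp())
sample = root / "alert-grouping.md"
sample.write_text("<img src={useBaseUrl('img/monitors/alert_grouping.png')} />\n")
n = migrate_paths(root, "img/monitors/", "img/alerts/monitors/")
print(n, sample.read_text())
```

A plain substring replace like this only works because the old prefix is unambiguous; paths that are prefixes of other paths would need anchored matching.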
4747
