`.clabot` — 3 additions, 1 deletion
```diff
@@ -167,7 +167,9 @@
     "sachin-sumologic",
     "Andrew-L-Johnson",
     "Ayah-Saleh",
-    "ishaanahuja29"
+    "ishaanahuja29",
+    "raunakmandaokar",
+    "bradtho"
   ],
   "message": "Thank you for your contribution! As this is an open source project, we require contributors to sign our Contributor License Agreement and do not have yours on file. To proceed with your PR, please [sign your name here](https://forms.gle/YgLddrckeJaCdZYA6) and we'll add you to our approved list of contributors.",
```
`blog-service/2022/12-31.md` — 1 addition, 1 deletion
```diff
@@ -663,7 +663,7 @@ Update - We’ve eased the process of offboarding Sumo Logic users. Now, when yo
 ---
 ## February 18, 2022 (Monitors)
 
-Update - The [Monitors page](/docs/alerts/monitors) has a new shortcut to quickly view triggered alerts from a Monitor. Hover your cursor over the Status column of a Monitor and click the icon to open [Alert List](/docs/alerts/monitors/alert-response/#alerts-list).
+Update - The [Monitors page](/docs/alerts/monitors) has a new shortcut to quickly view triggered alerts from a Monitor. Hover your cursor over the Status column of a Monitor and click the icon to open [Alert List](/docs/alerts/monitors/alert-response/#alert-list).
```
`blog-service/2023/12-31.md` — 1 addition, 1 deletion
```diff
@@ -423,7 +423,7 @@ We're excited to introduce a new addition to Sumo Logic account management. Org
 Here's how to export detailed child usages:
 
 1. In the left navigation bar, select **Administration > Account**. The Account Overview tab is shown by default.
-1. Click on the kebab button and select **Download Detailed Child Usages**, to export/dowload the detailed child usages.<br/><img src={useBaseUrl('img/account/download-detailed-child-usages.png')} alt="download-detailed-child-usages" width="650" style={{border: '1px solid gray'}}/>
+1. Click on the kebab button and select **Download Detailed Child Usages**, to export/dowload the detailed child usages.<br/><img src={useBaseUrl('img/manage/account/download-detailed-child-usages.png')} alt="download-detailed-child-usages" width="650" style={{border: '1px solid gray'}}/>
```
```diff
@@ -13,6 +13,6 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
 We are happy to announce that you can now configure the schema and format of log data forwarded from Sumo Logic to an S3 destination. Previously, forwarding was limited to raw log data along with its metadata and enriched fields. Now, you have the flexibility to choose between forwarding only log data, log data with metadata, or log data with metadata and enriched fields, in either CSV or JSON format. This enhanced flexibility enables you to perform more precise analytics on the data using your preferred tools.
 
-<img src={useBaseUrl('img/data-forwarding/forward-raw-data.png')} alt="Options to forward raw data" style={{border: '1px solid gray'}} width="450"/>
+<img src={useBaseUrl('img/manage/data-forwarding/forward-raw-data.png')} alt="Options to forward raw data" style={{border: '1px solid gray'}} width="450"/>
 
 To learn more, see the *Forward data to an S3 forwarding destination* section in our article [Forward Data from Sumo Logic to S3](/docs/manage/data-forwarding/amazon-s3-bucket).
```
`docs/alerts/monitors/alert-grouping.md` — 16 additions, 16 deletions
```diff
@@ -10,7 +10,7 @@ Alert grouping gives you the flexibility to customize how your alerts and notifi
 You could group by `_collector` field, for example, and one alert would be generated per `_collector`. You can also have a monitor generate and resolve more than one alert based on specific conditions. For this example below, let's say you're monitoring the ErrorRate for all of your services and want to get an alert for each service that breaches a specific error threshold. Rather than creating multiple monitors for each service, you can create one single monitor that does this.
```
```diff
@@ -26,7 +26,7 @@ Alert grouping works for both logs and metrics monitors.
 4. Enter your metrics query, then select your desired alert grouping option.
    * **One alert per monitor**. If you only want to receive a single alert for the entire monitor.
    * **One alert per time series**. To receive a single alert for each time-series that is present in the metric query
-   * **One alert per [group]**. Allows you to receive one notification per each unique value of the grouping field(s). You can pick more than one field for the grouping condition. In the example below, user will receive one notification when CPU utilization is higher than the threshold for each unique AWS namespace within an account.<br/><img src={useBaseUrl('img/monitors/setup-metrics.png')} alt="setup-metrics.png" />
+   * **One alert per [group]**. Allows you to receive one notification per each unique value of the grouping field(s). You can pick more than one field for the grouping condition. In the example below, user will receive one notification when CPU utilization is higher than the threshold for each unique AWS namespace within an account.<br/><img src={useBaseUrl('img/alerts/monitors/setup-metrics.png')} alt="setup-metrics.png" />
 5. Configure the rest of your alert condition per standard procedure. Refer to [Monitors](/docs/alerts/monitors) for more details.
```
```diff
@@ -37,7 +37,7 @@ Alert grouping works for both logs and metrics monitors.
 3. Select **Logs** as the type of monitor.
 4. Enter your logs query, then select your desired alert grouping option:
    * **One alert per monitor**. Choose this option if you want to only receive a single alert for the entire monitor.
-   * **One alert per [group]**. Allows you to receive one notification per each unique value of the grouping field(s). You can pick more than one field for the grouping condition. In the example below, you would receive one alert for each `service` that has error count greater than 50. The input field has an auto-completion dropdown that allows you to select all the applicable fields from your query.<br/><img src={useBaseUrl('img/monitors/setup-logs.png')} alt="setup-logs.png" style={{border: '1px solid gray'}} width="800" />
+   * **One alert per [group]**. Allows you to receive one notification per each unique value of the grouping field(s). You can pick more than one field for the grouping condition. In the example below, you would receive one alert for each `service` that has error count greater than 50. The input field has an auto-completion dropdown that allows you to select all the applicable fields from your query.<br/><img src={useBaseUrl('img/alerts/monitors/setup-logs.png')} alt="setup-logs.png" style={{border: '1px solid gray'}} width="800" />
 5. Configure the rest of your alert condition per standard procedure. Refer to [Monitors](/docs/alerts/monitors) for more details.
 
 The input field has an auto-completion dropdown that allows you to select all the applicable fields from your query.
```
```diff
@@ -56,12 +56,12 @@ Notifications will not be sent for alert groups that already have an active aler
 A user wants to create a monitor to track CPU across services, and wants to get notified if any node within a service has CPU > 60%.
 * **Alert Evaluation Logic**. If `CPU_sys` for any node within a service is greater than `60`, then an alert notification will be generated for that service (if it was not already generated).
 * **Recovery Evaluation Logic**.
   * If `CPU_sys` for all the nodes within a service is less than equal to `60`, then recover the alert for that particular service.
   * Chart below shows how the alert and recovery notification would have fired for some hypothetical services under various times (t0–t3).
-  * Red boxes show that triggered the alert, and green boxes show what resolved the alerts.<br/><img src={useBaseUrl('img/monitors/usecase1x.png')} alt="alert-grouping" />
+  * Red boxes show that triggered the alert, and green boxes show what resolved the alerts.<br/><img src={useBaseUrl('img/alerts/monitors/usecase1x.png')} alt="alert-grouping" />
```
```diff
@@ -70,49 +70,49 @@ A user wants to create a monitor to track CPU across services, and wants to get
 A user wants to create a monitor to track CPU and be notified if any node within a service has CPU > 60%, for a given env.
 * **Alert Evaluation Logic**. If `CPU_sys` for any node within a service,env is greater than `60`, then an alert notification will be generated for that service within a given environment (if it was not already generated).
 * **Recovery Evaluation Logic**.
   * If `CPU_sys` for all the nodes within a service,env is less than equal to `60`, then recover the alert for that particular service within a given environment.
   * Chart below shows how the alert and recovery notification would have fired for some hypothetical service, env under various times (T0 -T3).
-  * Red boxes shows that triggered the alert, and green boxes shows what resolved the alerts.<br/><img src={useBaseUrl('img/monitors/usecase2x.png')} alt="alert-grouping" />
+  * Red boxes shows that triggered the alert, and green boxes shows what resolved the alerts.<br/><img src={useBaseUrl('img/alerts/monitors/usecase2x.png')} alt="alert-grouping" />
 
 ### Logs monitor with multiple alert group fields
 
 A user wants to create a monitor to track errors and be notified if any service in a given env has more than 100 errors.
 * **Alert Evaluation Logic**. If count of `errors` for any service,env is greater than `100`, then an alert notification will be generated for that service within a given environment (if it was not already generated).
 * **Recovery Evaluation Logic**.
   * If count of errors for any service is less than or equal to `100`, then recover the alert for that particular service within a given environment.
   * Chart below shows how the alert and recovery notification would have fired for some hypothetical services under various times (t0–t3).
-  * Red boxes show what triggered the alert, and green boxes show what resolved the alerts.<br/><img src={useBaseUrl('img/monitors/usecase3x.png')} alt="alert-grouping" />
+  * Red boxes show what triggered the alert, and green boxes show what resolved the alerts.<br/><img src={useBaseUrl('img/alerts/monitors/usecase3x.png')} alt="alert-grouping" />
 
 ### Logs monitor on a field with alert group
 
 A user wants to create a monitor to track latency from log messages, and wants to get notified if any service has more than 2-second latency.
 
 * **Query**. `* | parse Latency:*s as latency` (parse out latency field from logs)
 * **Alert Evaluation Logic**. If Latency field for any service is greater than 2 seconds, then an alert notification will be generated for that service (if it was not already generated).
 * **Recovery Evaluation Logic**.
   * If the latency field for any service is less than 2 seconds, then recover the alert for that particular service.
   * Chart below shows how the alert and recovery notification would have fired for some hypothetical services under various times (t0–t3)
-  * Red boxes show what triggered the alert, and green boxes show what resolved the alerts.<br/><img src={useBaseUrl('img/monitors/usecase4x.png')} alt="alert-grouping" />
+  * Red boxes show what triggered the alert, and green boxes show what resolved the alerts.<br/><img src={useBaseUrl('img/alerts/monitors/usecase4x.png')} alt="alert-grouping" />
 
 ### Missing data metrics monitor with alert group
 
 A user wants to get an alert if all hosts from a given service has stopped sending data. User wants one part per service.
 * **Alert Evaluation Logic**. If all the hosts stop sending data (`CPU_sys` metric is not being sent) then generate an alert for a given service, then an alert notification will be generated for that service (if it was not already generated). The list of hosts for a service will be computed and updated on a periodic basis.
 * **Recovery Evaluation Logic**.
   * If any of the hosts for a given service start sending the data, then resolve the alert.
-  * If a host stops sending data for more than 24 hours, then remove that host from the list of hosts for a service. Evaluate again if `missingData` is resolved based on the remaining hosts. If yes, then resolve; if not, then keep it open.<br/><img src={useBaseUrl('img/monitors/usecase5x.png')} alt="alert-grouping" />
+  * If a host stops sending data for more than 24 hours, then remove that host from the list of hosts for a service. Evaluate again if `missingData` is resolved based on the remaining hosts. If yes, then resolve; if not, then keep it open.<br/><img src={useBaseUrl('img/alerts/monitors/usecase5x.png')} alt="alert-grouping" />
 
 ## Sumo Logic recommended monitors
```
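The use cases in the hunk above all follow the same pattern: evaluate a condition per unique value of the grouping field(s) and raise one alert per group. A toy sketch of that evaluation logic (not Sumo Logic's implementation; the records, field names, and threshold are illustrative, modeled on the "more than 100 errors per service,env" example):

```python
from collections import Counter

def alerting_groups(records, group_fields=("service", "env"), threshold=100):
    """Return the (service, env) groups whose record count exceeds threshold.

    Mirrors "one alert per [group]": each unique combination of the
    grouping fields is evaluated independently.
    """
    counts = Counter(tuple(r[f] for f in group_fields) for r in records)
    return sorted(group for group, n in counts.items() if n > threshold)

# Hypothetical error logs: one service breaches the threshold, one does not.
records = (
    [{"service": "checkout", "env": "prod"}] * 150
    + [{"service": "search", "env": "prod"}] * 40
)
print(alerting_groups(records))  # [('checkout', 'prod')]
```

A resolver would apply the inverse check per group (count at or below the threshold), matching the recovery logic described in each use case.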
```diff
@@ -128,7 +128,7 @@ This alert can be useful if you suspect that one of your collectors has stopped
 | round(total_bytes / 1024 / 1024) as total_mbytes
```
```diff
@@ -138,11 +138,11 @@ You can select up to a maximum of 10 fields. This applies to both logs and metri
 #### My field is not appearing under "One alert per [group]" fields dropdown. Why is that?
 
-This scenario, which is only applicable for logs monitors (not for metrics), can happen if you have [dynamically parsed fields](/docs/search/get-started-with-search/build-search/dynamic-parsing) in your query. The auto-complete system uses a 15-minute time range to parse out all the dynamically parsed fields. If those fields are not present in the last 15-minute query, they will not show up in the dropdown. To resolve this, you could manually type in the name of the field, and it should work fine at runtime.<br/><img src={useBaseUrl('img/monitors/alertsdropdown.png')} alt="alert-grouping" width="350" />
+This scenario, which is only applicable for logs monitors (not for metrics), can happen if you have [dynamically parsed fields](/docs/search/get-started-with-search/build-search/dynamic-parsing) in your query. The auto-complete system uses a 15-minute time range to parse out all the dynamically parsed fields. If those fields are not present in the last 15-minute query, they will not show up in the dropdown. To resolve this, you could manually type in the name of the field, and it should work fine at runtime.<br/><img src={useBaseUrl('img/alerts/monitors/alertsdropdown.png')} alt="alert-grouping" width="350" />
 
 #### How does "One alert per [group]" impact alert audit logs?
 
-Each alert generated in Sumo Logic generates an **Alert Created** audit log entry. When an alert is generated for specific grouping condition, the grouping information is captured in the audit log under the **alertingGroup** > **groupKey**.<br/><img src={useBaseUrl('img/monitors/alertauditlogs.png')} alt="alert-grouping" />
+Each alert generated in Sumo Logic generates an **Alert Created** audit log entry. When an alert is generated for specific grouping condition, the grouping information is captured in the audit log under the **alertingGroup** > **groupKey**.<br/><img src={useBaseUrl('img/alerts/monitors/alertauditlogs.png')} alt="alert-grouping" />
 
 #### What happens if field(s) used in my alert grouping condition don’t exist or have null values?
```