**docs/manage/ingestion-volume/log-ingestion.md** (1 addition, 1 deletion)
@@ -29,7 +29,7 @@ Log data may not be kept when sent via HTTP Sources or Cloud Syslog Sources, as
* Sumo Logic accounts can be upgraded at any time to allow for additional quota. Contact [Sumo Logic Sales](mailto:[email protected]) to customize your account to meet your organization's needs.

:::important
- Compressed files are decompressed before they are ingested, so they are ingested at the decompressed file size rate.
+ [Compressed files](/docs/send-data/hosted-collectors/http-source/logs-metrics/#compressed-data) are decompressed before they are ingested, so they are ingested at the decompressed file size rate.

**docs/search/optimize-search-performance.md** (188 additions, 0 deletions)
@@ -70,3 +70,191 @@ Here's a quick look at how to choose the right indexed search optimization tool.
As data enters Sumo Logic, it is first routed to any Partitions for indexing. It is then checked against Scheduled Views, and any data that matches the Scheduled Views is indexed.
Data can be in both a Partition and a Scheduled View because the two tools are used differently (and are indexed separately). Although Partitions are indexed first, the process does not slow the indexing of Scheduled Views.

## Additional methods to optimize Search performance

### Use the smallest time range

Always set the search time range to the smallest window your use case requires; scanning less data makes the query more efficient. When you need a long time range, build and test the search over a shorter range first, and extend it to the full analysis period only once the query is finalized and validated.
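
The time range itself is set in the time range picker rather than in the query text, but the workflow can be sketched as follows; the source category and parse pattern here are illustrative only, and this assumes query comments with `//` are available in your log search:

```
// Build and validate over a short range first (for example, Last 15 Minutes).
// Once the results look correct, rerun the unchanged query over the full
// analysis period (for example, Last 7 Days).
_sourceCategory=Prod/User/Eventlog
| parse "completed * action in * ms" as actionName, duration
| pct(duration, 95) by actionName
```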

### Use fields extracted by FERs

Instead of relying on the `where` operator, filter the data on fields that are already extracted through Field Extraction Rules (FERs), directly in the source expression. The filtering then happens up front, which is more efficient and improves query performance.

**Not recommended approach:**

```
_sourceCategory=foo
| where field_a="value_a"
```

**Recommended approach:**

```
_sourceCategory=foo and field_a=value_a
```

### Move terms from parse statement to source expression

Adding the parse terms to the source expression improves search performance. A parse statement without `nodrop` drops any log in which the desired field cannot be parsed; for example, `parse "completed * action" as actionName` removes logs that do not contain the terms **completed** and **action**. Putting those terms in the source expression filters the same logs out before the rest of the query runs.

While filtering the data, reduce the result set to the smallest possible size before performing aggregate operations such as sum, min, max, and average. Also, prefer a subquery in the source expression over the `if` or `where` search operators; a sketch follows the example below.

**Not recommended approach:**

```
_sourceCategory=Prod/User/Eventlog
| parse "userName: *, " as user
| count by user
| where user="john"
```

**Recommended approach:**

```
_sourceCategory=Prod/User/Eventlog userName
| parse "userName: *, " as user
| where user="john"
| count by user
```
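
Here is a minimal sketch of the subquery alternative mentioned above; the failed-login search term, the `topk` threshold, and the use of the `keywords` option are assumptions for illustration, not part of the original. The inner query computes the users of interest, and `compose ... keywords` turns them into plain keyword filters for the outer source expression, so excluded logs are dropped before any parsing runs:

```
_sourceCategory=Prod/User/Eventlog
[subquery:_sourceCategory=Prod/User/Eventlog "login failed"
| parse "userName: *, " as user
| count by user
| topk(10, _count)
| compose user keywords]
| parse "userName: *, " as user
| count by user
```

This way the outer query parses only the logs that mention one of the users surfaced by the inner query.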

### Remove redundant operators

Remove any search operators that are not required for the desired results. For example, a `sort` before an aggregation makes no difference to the aggregated output, and a `parse` that extracts a field the query never uses, as in the example below, only adds an extra pass over the data.

**Not recommended approach:**

```
_sourceCategory=Prod/User/Eventlog
| parse "userName: *, " as user
| parse "eventName: *, " as event
| count by user
```

**Recommended approach:**

```
_sourceCategory=Prod/User/Eventlog
| parse "userName: *, " as user
| count by user
```

### Merge operators

If the same operator appears several times at different points in a query, merge those occurrences where possible, and do not invoke the same operator repeatedly to compute the same value. Fewer operator invocations mean fewer passes over the data, which improves search performance.

**Example 1:**

**Not recommended approach:**

```
_sourceCategory=Prod/User/Eventlog
| parse "completed * action" as actionName
| parse "action in * ms" as duration
| pct(duration, 95) by actionName
```

**Recommended approach:**

```
_sourceCategory=Prod/User/Eventlog
| parse "completed * action in * ms" as actionName, duration
| pct(duration, 95) by actionName
```

**Example 2:**

**Not recommended approach:**

```
_sourceCategory=Prod/User/Eventlog
| parse "completed * action" as actionName
| where toLowerCase(actionName) = "login" or toLowerCase(actionName) matches "abc*" or toLowerCase(actionName) contains "xyz"
```

**Recommended approach:**

```
_sourceCategory=Prod/User/Eventlog
| parse "completed * action" as actionName
| toLowerCase(actionName) as actionNameLowered
| where actionNameLowered = "login" or actionNameLowered matches "abc*" or actionNameLowered contains "xyz"
```

### Use lookup on the smallest possible dataset

Minimize the data processed by the `lookup` operator in the query, as lookup is an expensive operation. You can do this in two ways:

- Place the lookup as late as possible in the query, so that the preceding clauses have already filtered the data.
- Move the lookup after an aggregation; aggregated results are generally far smaller than raw data, which drastically reduces the rows the lookup has to process.

**Not recommended approach:**

```
_sourceCategory=Prod/User/Eventlog
| parse "completed * action in * ms" as actionName, duration
| lookup actionType from path://"/Library/Users/[email protected]/actionTypes" on actionName
| where actionName in ("login", "logout")
| count by actionName, actionType
```

**Recommended approach (Option 1):**

```
_sourceCategory=Prod/User/Eventlog
| parse "completed * action in * ms" as actionName, duration
| where actionName in ("login", "logout")
| count by actionName
| lookup actionType from path://"/Library/Users/[email protected]/actionTypes" on actionName
```

**Recommended approach (Option 2):**

```
_sourceCategory=Prod/User/Eventlog
| parse "completed * action in * ms" as actionName, duration
| where actionName in ("login", "logout")
| lookup actionType from path://"/Library/Users/[email protected]/actionTypes" on actionName
| count by actionName, actionType
```

### Avoid multiple parse multi statements

A parse `multi` statement causes a single log to produce multiple rows in the results. If one parse `multi` statement is followed by another, the row count multiplies, which can lead to a data explosion; the query may never finish, and even if it does, the results may not be what you expect.

For example, consider the following query, which assumes that a single log line contains multiple user names and multiple event names.

```
_sourceCategory=Prod/User/Eventlog
| parse regex "userName: (?<user>[a-zA-Z]+), " multi
| parse regex "eventName: (?<event>[a-zA-Z]+), " multi
```

Written this way, the query generates a row for every combination of `userName` and `eventName` values. If you then count by `eventName`, you will not get the desired result, because each `eventName` is duplicated for every `userName` in the same log. The better query is:

```
_sourceCategory=Prod/User/Eventlog
| parse regex "userName: (?<user>[a-zA-Z]+), eventName: (?<event>[a-zA-Z]+), " multi
```
0 commit comments