OpenObserve does not restrict the number of records returned or the length of the time range in a query. You can query a few minutes or several months of data.
However, queries that return a large number of results, especially those with small histogram intervals, breakdown fields, or large text fields, can overload the browser.
This may cause the UI to become unresponsive or crash, particularly on the **Logs** page or in **dashboard panels**.

Two scenarios where this risk is significant:

## 1. Long Duration Queries with Small Intervals

> **Where this issue can occur**: Logs Search and Dashboards

When you use histogram queries such as `histogram(_timestamp, interval)`, the `interval` defines how the data is grouped over time. For example, an `interval` of `5m` groups logs into 5-minute buckets. The longer the time range and the smaller the interval, the more buckets the query returns.

Each bucket becomes a row in the query result. If the query returns a large number of rows, the browser must load and render all of them. This can slow down the UI or cause it to crash.

Example:

```sql linenums="1"
SELECT histogram(_timestamp, '5m') AS log_time_interval,
       COUNT(*) AS total_logs
FROM "default"
GROUP BY log_time_interval
ORDER BY log_time_interval ASC
```

The above query returns the number of logs collected every 5 minutes. When run over a 7-day period, it generates 2,016 time buckets (10,080 minutes ÷ 5), each representing a row that the browser must load and render. This volume of data can cause the UI to become unresponsive or crash.

> **Note:** <br>
> **In dashboard panels, breakdown fields multiply the problem**<br>
> If you add a breakdown field to the query (for example, `log.level`), it multiplies the number of rows. For example:
>
> - You already have 2,000 time buckets
> - `log.level` has 5 unique values: `INFO`, `ERROR`, `DEBUG`, `WARN`, `TRACE`
>
> The result becomes: 2,000 time buckets × 5 breakdown values = 10,000 rows.
>
> That is 5X more data than without the breakdown. All of it must be fetched, loaded, and rendered in your browser, which can crash the UI.

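As a concrete sketch of the multiplication described above, the following query adds a breakdown to the earlier histogram example, grouping on a hypothetical `log_level` field (the field name in your stream may differ):

```sql linenums="1"
-- Each 5-minute bucket now yields one row per distinct log_level value.
-- Over 7 days: 2,016 buckets × 5 levels = 10,080 rows for the browser to render.
SELECT histogram(_timestamp, '5m') AS log_time_interval,
       log_level,
       COUNT(*) AS total_logs
FROM "default"
GROUP BY log_time_interval, log_level
ORDER BY log_time_interval ASC
```
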
## 2. Tables that Include Large Text Fields

> **Where this issue can occur**: Dashboards

When you display logs in a table on a dashboard, avoid including large text fields such as `log.body`.

Over longer time ranges, such as several days, these fields can significantly increase the size of the response. This results in a larger payload, which can cause the UI to become unresponsive or crash.

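One way to keep the payload small is to select only the compact fields the table actually needs. A sketch, using hypothetical field names (`log_level`, `k8s_pod_name`); substitute the fields present in your stream:

```sql linenums="1"
-- Project only small, structured fields; omit large text fields such as log.body.
SELECT _timestamp,
       log_level,
       k8s_pod_name
FROM "default"
ORDER BY _timestamp DESC
LIMIT 100
```
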
## Best Practices to Avoid UI Crashes

To keep the user interface responsive and avoid crashes, follow these recommendations when working with large datasets:

**For long-duration queries with small intervals**

- Increase the interval when querying longer time ranges. Example: Use `8h` or `12h` instead of `5m` if you are querying several days of logs.
- Limit the time range to reduce the number of records returned by the query.
- Avoid or limit breakdown fields on large datasets. Breakdown fields multiply the number of results significantly.

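For example, applying the first recommendation to the earlier 7-day query: raising the interval from `5m` to `12h` reduces the result from 2,016 rows to 14.

```sql linenums="1"
-- 7 days ÷ 12-hour buckets = 14 rows instead of 2,016.
SELECT histogram(_timestamp, '12h') AS log_time_interval,
       COUNT(*) AS total_logs
FROM "default"
GROUP BY log_time_interval
ORDER BY log_time_interval ASC
```
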
**For tables that include large text fields**

- Avoid including large fields such as `log.body` in dashboard tables unless absolutely necessary.
- Preview the field before including it in a large table.
- Use shorter time ranges when you need to view logs with large messages.