|
# WASP - Dashboard

> [!WARNING]
> The API used to check and create alerts is unstable, and the information related to them may be out of date.

WASP comes with a built-in dashboard that allows you to monitor test runs in real time.
The dashboard includes several built-in metrics that integrate seamlessly with the `AlertChecker` component.

It is built using the [Grabana](https://pkg.go.dev/github.com/K-Phoen/grabana) library, which you can use to further customize and extend the dashboard by adding your own rows and panels.

> [!NOTE]
> To create new dashboards, you need to set certain dashboard-specific variables as described in the [Configuration](../configuration.md) section.

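If you want to extend the dashboard, a custom row and panel can be described with Grabana's builder. The sketch below uses only Grabana itself; the row and panel titles, the datasource name, and the `Span` value are placeholders invented for this example, and how you pass the resulting option to the WASP dashboard constructor depends on your WASP version.

```go
package dashexample

import (
	"github.com/K-Phoen/grabana/dashboard"
	"github.com/K-Phoen/grabana/row"
	"github.com/K-Phoen/grabana/timeseries"
)

// CustomRow returns a Grabana dashboard option that adds one extra row
// with a single timeseries panel. All names here are hypothetical
// placeholders; a Loki (or other) target option would be added to the
// panel to make it show real data.
func CustomRow() dashboard.Option {
	return dashboard.Row(
		"My custom row",
		row.WithTimeSeries(
			"My custom panel",
			timeseries.DataSource("Loki"),
			timeseries.Span(6),
		),
	)
}
```
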
---

### Predefined Alerts

WASP comes with predefined alerts for:
* **99th percentile of the response time**
* **Response errors**
* **Response timeouts**

You can use these predefined metrics to add simple alerts to your dashboard for conditions such as:
* Values above or below a threshold
* Averages above or below a threshold
* Percentages of the total above or below a threshold

For a complete list of available conditions, refer to Grabana's [ConditionEvaluator](https://pkg.go.dev/github.com/K-Phoen/grabana/alert#ConditionEvaluator).
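
As an illustration, the simple conditions above map onto Grabana's reducers and condition evaluators roughly as follows. This is only a sketch: the query ref `"A"` and the thresholds are placeholders chosen for this example, not defaults shipped with WASP.

```go
package dashexample

import "github.com/K-Phoen/grabana/alert"

// ExampleConditions expresses the kinds of simple conditions listed above
// with Grabana's alert helpers. The query ref "A" and the thresholds are
// illustrative placeholders.
func ExampleConditions() []alert.Option {
	return []alert.Option{
		// fire when the latest value of query A goes above a threshold
		alert.If(alert.Last, "A", alert.IsAbove(200)),
		// fire when the average of query A drops below a threshold
		alert.If(alert.Avg, "A", alert.IsBelow(0.99)),
	}
}
```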

---

### Custom Alerts

Custom alerts can be composed of:
* The simple conditions mentioned above
* Arbitrary Loki queries

Custom alerts use Grabana's [timeseries.Alert](https://pkg.go.dev/github.com/K-Phoen/grabana/timeseries#Alert) and must be timeseries-based.

> [!NOTE]
> Adding a built-in alert will also add a new row to the dashboard to display the monitored metric.
> In contrast, custom alerts do not automatically add rows to the dashboard; this avoids clutter.

Each generator has its own metrics, matched by the generator name.

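A custom, timeseries-based alert driven by an arbitrary Loki query could look roughly like the sketch below. It assumes Grabana's `alert.WithLokiQuery` helper is available in the version you use; the panel title, the Loki query, the threshold, and the durations are placeholders invented for this example, and the `gen_name` label value should match the name of your generator.

```go
package dashexample

import (
	"github.com/K-Phoen/grabana/alert"
	"github.com/K-Phoen/grabana/timeseries"
)

// CustomAlert attaches an alert driven by an arbitrary Loki query to a
// timeseries panel, since custom alerts must be timeseries-based. The
// query, label values, threshold, and durations are placeholders.
func CustomAlert() timeseries.Option {
	return timeseries.Alert(
		"gen1: too many failed responses",
		// ref "A" ties this Loki query to the condition below
		alert.WithLokiQuery("A", `sum(count_over_time({gen_name="gen1"} |= "failed" [1m]))`),
		alert.If(alert.Last, "A", alert.IsAbove(10)),
		alert.For("1m"),
	)
}
```
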
---

### Default Dashboard Panels

The default dashboard includes the following panels:
* Current RPS/VUs (depending on the generator)
* Responses per second
* Total successful requests
* Total failed requests
* Total timeout requests
* RPS/VUs per schedule segment
* Responses per second
* Latency quantiles over groups (p99, p90, p50)
* Response latencies over time
* Logs size per second
* Sampling statistics
* Failed & timed-out responses
* Logs of the statistics-pushing service

Where applicable, these panels group results by generator name (`gen_name` label) and call group (`call_group` label).

> [!NOTE]
> You can read more about using labels in the [Use labels](../how-to/use_labels.md) section.

---

### Creating a New Dashboard with Alerts

For a practical example of how to create a new dashboard with alerts, see the [Testing Alerts](../testing_alerts.md) section.
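
WASP builds and uploads its dashboard for you based on the dashboard-specific variables mentioned in the [Configuration](../configuration.md) section. Purely for illustration, the sketch below shows what creating and deploying a Grabana-built dashboard looks like when done by hand; the Grafana URL, API token, folder name, and dashboard title are placeholders, and this is not WASP's own deployment code.

```go
package main

import (
	"context"
	"net/http"

	"github.com/K-Phoen/grabana"
	"github.com/K-Phoen/grabana/dashboard"
)

func main() {
	ctx := context.Background()

	// Placeholders: point the client at your own Grafana instance.
	client := grabana.NewClient(&http.Client{}, "http://localhost:3000", grabana.WithAPIToken("my-token"))

	// Extra rows, panels, and alerts (e.g. the sketches above) would be
	// passed as additional options here.
	builder, err := dashboard.New("WASP - my custom dashboard")
	if err != nil {
		panic(err)
	}

	folder, err := client.FindOrCreateFolder(ctx, "wasp")
	if err != nil {
		panic(err)
	}
	if _, err := client.UpsertDashboard(ctx, folder, builder); err != nil {
		panic(err)
	}
}
```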