# BenchSpy - Standard Report

`StandardReport` comes with built-in support for three types of data sources:
* `WASP Generator`
* `Loki`
* `Prometheus`

Each of them allows you to either use pre-defined metrics or define your own.

## Pre-defined (standard) metrics

### WASP generator and Loki
Both query executors focus on the characteristics of the load generated with WASP.
The datasets they work on are almost identical, because the former queries load-specific
data before it's sent to Loki. The latter offers richer querying options (via `LogQL`) and access
to the actual load profile (as opposed to the configured one).

Both query executors provide the following pre-defined metrics:
* median latency
* 95th percentile latency
* error rate

Latency is understood as the round-trip time from sending a request to receiving a response
from the Application Under Test.

Error rate is the ratio of failed responses to the total number of responses. Failed responses include
both requests that timed out and those that returned an error from the `Gun` or `Vu` implementation.

### Prometheus
These standard metrics, on the other hand, focus on the resources consumed by the application under test
rather than on load generation.

They include the following:
* median CPU usage
* 95th percentile of CPU usage
* median memory usage
* 95th percentile of memory usage

In both cases the queries measure `total` consumption, i.e. the sum of what the underlying system and
your application use.

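For intuition, a "total consumption" query has roughly the shape below. This is an illustrative sketch only, not the exact PromQL that BenchSpy issues; `name=~"node[^0]"` stands in for the container-name regex you pass to `NewPrometheusConfig` in the example that follows:
```go
// Illustrative PromQL only -- the exact queries BenchSpy issues may differ.
const (
    // median CPU usage: 5m rate of consumed CPU seconds, sampled each minute
    // over a 30m window, then reduced to its 50th percentile
    medianCPUUsage = `quantile_over_time(0.5, rate(container_cpu_usage_seconds_total{name=~"node[^0]"}[5m])[30m:1m])`

    // 95th percentile of memory usage over the same window
    p95MemoryUsage = `quantile_over_time(0.95, container_memory_usage_bytes{name=~"node[^0]"}[30m])`
)
```
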
### How to use
As mentioned in the examples, to use pre-defined metrics you should use the `NewStandardReport` method:
```go
report, err := benchspy.NewStandardReport(
    "91ee9e3c903d52de12f3d0c1a07ac3c2a6d141fb",
    // query executor types for which standard metrics should be generated
    benchspy.WithStandardQueries(benchspy.StandardQueryExecutor_Prometheus, benchspy.StandardQueryExecutor_Loki),
    // Prometheus configuration is required when using standard Prometheus metrics
    benchspy.WithPrometheusConfig(benchspy.NewPrometheusConfig("node[^0]")),
    // WASP generators
    benchspy.WithGenerators(gen),
)
require.NoError(t, err, "failed to create the report")
```
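
Creating the report only defines the queries; they still need to be executed. A minimal follow-up sketch, assuming the `FetchData` and `Store` methods used throughout the BenchSpy examples:
```go
fetchCtx, cancelFn := context.WithTimeout(context.Background(), 60*time.Second)
defer cancelFn()

// execute all pre-defined queries and collect their results
fetchErr := report.FetchData(fetchCtx)
require.NoError(t, fetchErr, "failed to fetch data for the report")

// optionally persist the report, so a later run can load it as a baseline
path, storeErr := report.Store()
require.NoError(t, storeErr, "failed to store the report at %s", path)
```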

## Custom metrics
### WASP Generator
Since `WASP` stores the AUT's responses in each generator, you can create custom metrics that leverage them. Here's an example
of adding a function that returns the ratio of responses that timed out:
```go
var generator *wasp.Generator

var timeouts = func(responses *wasp.SliceBuffer[wasp.Response]) (float64, error) {
    if len(responses.Data) == 0 {
        return 0, nil
    }

    // count responses that timed out and divide by the total
    timeoutCount := 0.0
    for _, response := range responses.Data {
        if response.Timeout {
            timeoutCount++
        }
    }

    return timeoutCount / float64(len(responses.Data)), nil
}

generatorExecutor, err := benchspy.NewGeneratorQueryExecutor(generator, map[string]benchspy.GeneratorQueryFn{
    "timeout_ratio": timeouts,
})
require.NoError(t, err, "failed to create WASP Generator query executor")
```
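
The keys of the query map (here `timeout_ratio`) become the metric names under which the results appear in the report.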

### Loki
Using custom `LogQL` queries is even simpler: all you need to do is create a new `LokiQueryExecutor`
with a map of the desired queries.
```go
var generator *wasp.Generator

// `label` is assumed to be defined elsewhere in the test
lokiQueryExecutor := benchspy.NewLokiQueryExecutor(
    map[string]string{
        "responses_over_time": fmt.Sprintf("sum(count_over_time({my_label=~\"%s\", test_data_type=~\"responses\", gen_name=~\"%s\"} [1s])) by (node_id, go_test_name, gen_name)", label, generator.Cfg.GenName),
    },
    generator.Cfg.LokiConfig,
)
```
> [!NOTE]
> To write effective `LogQL` queries for WASP, you need to be familiar with how to label
> your generators and which `test_data_types` WASP uses.

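For example, the `my_label` stream selector used in the query above only works if the generator was created with that label. A minimal sketch, assuming the standard `wasp.Config` fields (`my_gen` and `some_value` are purely illustrative):
```go
gen, err := wasp.NewGenerator(&wasp.Config{
    GenName: "my_gen",
    // labels are attached to the log streams the generator pushes to Loki,
    // so they can be matched in LogQL selectors such as {my_label=~"..."}
    Labels: map[string]string{
        "my_label": "some_value",
    },
    // Gun/VU implementation, schedule and LokiConfig omitted for brevity
})
require.NoError(t, err, "failed to create generator")
```
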
### Prometheus
Adding custom `PromQL` queries is equally straightforward:
```go
promConfig := benchspy.NewPrometheusConfig()

prometheusExecutor, err := benchspy.NewPrometheusQueryExecutor(
    map[string]string{
        "cpu_rate_by_container": "rate(container_cpu_usage_seconds_total{name=~\"chainlink.*\"}[5m])[30m:1m]",
    },
    *promConfig,
)
require.NoError(t, err)
```
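
Assuming the executor surfaces raw Prometheus results as `model.Value` (from `github.com/prometheus/common/model`), consuming them typically means switching on the concrete type. A hedged sketch, where `raw` stands for a single query's result read from the report:
```go
var raw model.Value // obtained from the executed report; the name is illustrative

switch v := raw.(type) {
case model.Matrix: // range queries, like the [30m:1m] subquery above
    for _, series := range v {
        fmt.Printf("%s: %d samples\n", series.Metric, len(series.Values))
    }
case model.Vector: // instant queries
    for _, sample := range v {
        fmt.Printf("%s = %v\n", sample.Metric, sample.Value)
    }
}
```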

### How to use with StandardReport
Using custom queries with a `StandardReport` is simple: instead of passing `StandardQueryExecutorType` via the
`WithStandardQueries` functional option, pass the `QueryExecutors` created above with the `WithQueryExecutors` option:
```go
report, err := benchspy.NewStandardReport(
    "2d1fa3532656c51991c0212afce5f80d2914e34e",
    benchspy.WithQueryExecutors(generatorExecutor, lokiQueryExecutor, prometheusExecutor),
    benchspy.WithGenerators(gen),
)
require.NoError(t, err, "failed to create baseline report")
```
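
As with the pre-defined metrics, the report still needs to fetch its data (see the `FetchData` sketch above) before its results can be stored or compared against a previous run.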