
Commit af2a330

one more doc

1 parent a189b41
File tree

4 files changed: +25 -9 lines

book/src/SUMMARY.md

Lines changed: 1 addition & 0 deletions
@@ -77,6 +77,7 @@
   - [Custom Loki metrics](./libs/wasp/benchspy/loki_custom.md)
   - [Standard Prometheus metrics](./libs/wasp/benchspy/prometheus_std.md)
   - [Custom Prometheus metrics](./libs/wasp/benchspy/prometheus_custom.md)
+  - [To Loki or not to Loki?](./libs/wasp/benchspy/loki_dillema.md)
   - [Reports](./libs/wasp/benchspy/reports/overview.md)
   - [Standard Report](./libs/wasp/benchspy/reports/standard_report.md)
   - [Adding new QueryExecutor](./libs/wasp/benchspy/reports/new_executor.md)

book/src/libs/wasp/benchspy/loki_dillema.md

Lines changed: 15 additions & 0 deletions

@@ -0,0 +1,15 @@
+# BenchSpy - To Loki or not to Loki?
+
+You might be asking yourself whether you should use the `Loki` or the `Generator` query executor if all you
+need are basic latency metrics.
+
+As a rule of thumb, if all you need is a single number that describes the median latency or error rate,
+and you are not interested in directly comparing time series, minimum or maximum values, or any kind
+of more advanced calculation on raw data, then you should go with the `Generator`.
+
+Why?
+
+Because it returns a single value for each of the standard metrics, using the same raw data that Loki would use
+(it accesses the data stored in `WASP`'s generator that would later be pushed to Loki).
+This way you can run your load test without a Loki instance and save yourself the need to calculate the
+median and 95th percentile latency or the error ratio.
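
For readers who want to see what that recommendation looks like in a test, here is a minimal sketch of building a report with the `Generator` executor. It assumes a `*wasp.Generator` named `gen` that has already run, a hypothetical `runLoad` helper, and the `benchspy` identifiers used in this book's examples (including the import path); verify the exact signatures against the library before relying on them.

```go
package benchspy_sketch_test

import (
	"context"
	"testing"
	"time"

	"github.com/smartcontractkit/chainlink-testing-framework/wasp/benchspy" // assumed import path
	"github.com/stretchr/testify/require"
)

// Sketch only: runLoad is a hypothetical helper that configures and runs a
// WASP generator, returning it once the load phase has finished.
func TestGeneratorExecutorSketch(t *testing.T) {
	gen := runLoad(t)

	fetchCtx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	// The Generator executor computes the standard metrics (median latency,
	// 95th percentile latency, error rate) from data WASP already keeps in
	// memory, so no Loki instance is required.
	report, err := benchspy.NewStandardReport(
		"v1.0.0", // commit or tag identifying this run
		benchspy.WithStandardQueries(benchspy.StandardQueryExecutor_Generator),
		benchspy.WithGenerators(gen),
	)
	require.NoError(t, err)
	require.NoError(t, report.FetchData(fetchCtx))

	// If you later need LogQL, raw time series, or min/max-style calculations,
	// switch the executor to benchspy.StandardQueryExecutor_Loki and point the
	// generator at a Loki instance.
}
```
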
Lines changed: 8 additions & 8 deletions

@@ -1,15 +1,15 @@
 # BenchSpy
 
 BenchSpy (short for benchmark spy) is a WASP-coupled tool that allows for easy comparison of various performance metrics.
-It supports three types of data sources:
-* `Loki`
-* `Prometheus`
-* `WASP generators`
 
-And can be easily extended to support additional ones.
-
-Since it's main goal is comparison of performance between various releases or versions of applications (for example, to catch performance degradation)
-it is `Git`-aware and is able to automatically find the latest relevant performance report.
+Its main characteristics are:
+* three built-in data sources:
+  * `Loki`
+  * `Prometheus`
+  * `WASP generator`
+* standard/pre-defined metrics for each data source
+* ease of extensibility with custom metrics
+* ability to load the latest performance report based on Git history
 
 It doesn't come with any comparison logic, other than making sure that performance reports are comparable (e.g. they measure the same metrics in the same way),
 leaving total freedom to the user.
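
The last bullet above (loading the latest report via Git history) is worth a small illustration. The sketch below reuses `fetchCtx` and `gen` from the previous example and assumes the `FetchNewStandardReportAndLoadLatestPrevious` helper named in the BenchSpy examples; the exact signature is an assumption, not a guarantee.

```go
// Sketch only: assumes the Git-aware helper described in the BenchSpy docs.
// It creates a report for the current commit/tag and, in one call, walks the
// Git history backwards to load the most recent previously stored report.
currentReport, previousReport, err := benchspy.FetchNewStandardReportAndLoadLatestPrevious(
	fetchCtx,
	"v2.0.0", // current commit or tag
	benchspy.WithStandardQueries(benchspy.StandardQueryExecutor_Generator),
	benchspy.WithGenerators(gen),
)
require.NoError(t, err)

// Both reports are guaranteed to be comparable (same metrics, measured the
// same way); what you do with the numbers is left entirely to you.
_ = currentReport
_ = previousReport
```
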

book/src/libs/wasp/benchspy/simplest_metrics.md

Lines changed: 1 addition & 1 deletion
@@ -49,6 +49,6 @@ compareValues(string(benchspy.ErrorRate), 1.0)
 
 And that's it! You have written your first test that uses `WASP` to generate the load and `BenchSpy` to make sure that neither median latency nor 95th latency percentile
 nor error rate has changed significantly between the runs. You did that without even needing a Loki instance, but what if you wanted to leverage the power
-of LogQL? We will look at that in the [next chapter](./using_loki.md).
+of `LogQL`? We will look at that in the [next chapter](./using_loki.md).
 
 You can find the full example [here](...).
