Now that we've covered all possible usages, you might wonder how to write a test that compares performance between different releases of your application. Here’s a practical example to guide you through the process.

## Typical Workflow

1. Write a performance test.
2. At the end of the test, generate a performance report, store it, and commit it to Git.
3. Modify the test so that it both loads the latest stored report and creates a new one.
4. Write assertions to validate your performance metrics.

---

## Writing the Performance Test

We'll use a simple mock for the application under test. This mock waits for `50 ms` before returning a 200 response code.
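
The original mock isn't reproduced in this excerpt; a minimal sketch using only the standard library could look like the following (the `newMockServer` helper name is an assumption):

```go
package mock

import (
	"net/http"
	"net/http/httptest"
	"time"
)

// newMockServer starts a test HTTP server that waits 50 ms before
// answering every request with a 200 status code, mimicking the
// application under test.
func newMockServer() *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(50 * time.Millisecond) // simulated processing time
		w.WriteHeader(http.StatusOK)
	}))
}
```

In the full example, a WASP generator fires requests at this server's URL and records the latencies that BenchSpy later compares.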

---

## Generating the First Report

Here, we'll generate a performance report for version `v1.0.0` using the `Direct` query executor. The report will be saved to a custom directory named `test_reports`. This report will later be used to compare the performance of new versions.

```go
require.NoError(t, storeErr, "failed to store current report", path)
```
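
Only the final assertion of that snippet is shown above. As a hedged sketch, the full flow of creating, fetching, and storing the baseline report could look roughly like this; the `WithReportDirectory` option, the `gen` generator variable, and the timeout value are assumptions rather than the verbatim original:

```go
fetchCtx, cancelFn := context.WithTimeout(context.Background(), 60*time.Second)
defer cancelFn()

// Build a report for v1.0.0 that uses the Direct query executor and
// writes its output to the custom `test_reports` directory.
baseLineReport, err := benchspy.NewStandardReport(
	"v1.0.0",
	benchspy.WithStandardQueries(benchspy.StandardQueryExecutor_Direct),
	benchspy.WithReportDirectory("test_reports"), // assumed option for the custom directory
	benchspy.WithGenerators(gen),                 // `gen` is the WASP generator used by the test
)
require.NoError(t, err, "failed to create the report")

// Collect the metrics recorded by the generator.
fetchErr := baseLineReport.FetchData(fetchCtx)
require.NoError(t, fetchErr, "failed to fetch data for the report")

// Persist the report so it can be committed to Git as the baseline.
path, storeErr := baseLineReport.Store()
require.NoError(t, storeErr, "failed to store current report", path)
```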

---

## Modifying Report Generation

With the baseline report for `v1.0.0` stored, we'll modify the test to support future releases. The code from the previous step will change as follows:

```go
currentVersion := os.Getenv("CURRENT_VERSION")
require.NotEmpty(t, currentVersion, "No current version provided")

// ...fetch the current report and load the latest previous one (see the sketch below)...
require.NoError(t, err, "failed to fetch current report or load the previous one")
```

This call fetches the current report (for the version passed via the `CURRENT_VERSION` environment variable) while loading the latest stored report from the `test_reports` directory.
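
That call is elided in the snippet above. A hedged sketch of it, assuming BenchSpy's `FetchNewStandardReportAndLoadLatestPrevious` helper and the same assumed custom-directory option as in the baseline step, might look like this:

```go
fetchCtx, cancelFn := context.WithTimeout(context.Background(), 60*time.Second)
defer cancelFn()

// Fetch a fresh report for the version under test and, in the same call,
// load the latest report previously stored in `test_reports`.
currentReport, previousReport, err := benchspy.FetchNewStandardReportAndLoadLatestPrevious(
	fetchCtx,
	currentVersion,
	benchspy.WithStandardQueries(benchspy.StandardQueryExecutor_Direct),
	benchspy.WithReportDirectory("test_reports"), // assumed option, as in the baseline step
	benchspy.WithGenerators(gen),
)
require.NoError(t, err, "failed to fetch current report or load the previous one")
```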

---

## Adding Assertions

Let's assume you want to ensure that none of the performance metrics degrade by more than **1%** between releases (and that the error rate has not changed at all). Here's how you can write assertions using a convenience function for the `Direct` query executor.
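
A hedged sketch of those assertions, assuming the `Direct` executor ships a `CompareDirectWithThresholds` convenience function that takes percentage thresholds for the median, 95th percentile, and max latencies plus the error rate (the exact signature is an assumption):

```go
// Allow at most a 1% regression in the latency metrics and no change
// at all in the error rate between the previous and the current report.
hasErrors, errs := benchspy.CompareDirectWithThresholds(
	1.0, // median latency threshold, %
	1.0, // 95th percentile latency threshold, %
	1.0, // max latency threshold, %
	0.0, // error rate threshold, %
	currentReport,
	previousReport,
)
require.False(t, hasErrors, fmt.Sprintf("performance degraded: %v", errs))
```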

---

## Conclusion

You’re now ready to use `BenchSpy` to ensure that your application’s performance does not degrade below your specified thresholds!

> [!NOTE]
> [Here](https://github.com/smartcontractkit/chainlink-testing-framework/tree/main/wasp/examples/benchspy/direct_query_executor/direct_query_real_case.go) you can find an example test where performance has degraded significantly,
> because the mock's latency has been increased from `50ms` to `60ms`.
>
> **This test passes because it is designed to expect performance degradation. Of course, in a real application, your goal should be to prevent such regressions.** 😊

And that's it! You've written your first test that uses `WASP` to generate load and `BenchSpy` to ensure that the median latency, 95th percentile latency, max latency and error rate haven't changed significantly between runs. You accomplished this without even needing a Loki instance. But what if you wanted to leverage the power of `LogQL`? We'll explore that in the [next chapter](./loki_std.md).