src/data/markdown/translated-guides/en/07 Testing Guides/02 Automated performance testing.md
26 additions & 9 deletions
@@ -52,7 +52,7 @@ Automation often refers to running tests with pass/fail conditions as part of a
 The first step in the process is reviewing your existing or planned tests and understanding each test's purpose. Can the test serve additional purposes if executed regularly? Some common goals are:
 
 - Comparing current performance against an existing performance baseline.
-- Understanding the overall trend in key performance metrics.
+- Understanding variances over time in key performance metrics. Observing flat or changing trends.
 - Detecting regressions of new releases.
 - Testing Service Level Objectives (SLOs) on a regular basis.
 - Testing critical areas during the release process.
@@ -64,9 +64,8 @@ When considering a consistent and ongoing purpose for each test, you discover wh
 
 Performance tests can generally be divided into two aspects:
 
-- Test scenario: What is the test verifying?
-- Test workload: How does the system respond when handling certain traffic?
-
+- Test scenario (test case): What is the test verifying?
+- Test workload (test load): How much traffic and which traffic pattern?
 
 Your test suite should incorporate a diverse range of tests that can verify critical areas of your system using distinct [load test types](/test-types/load-test-types/).
 
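The same split shows up directly in a k6 script: the workload is declared in `options`, and the scenario is the code the virtual users run. A minimal sketch, assuming a hypothetical endpoint and illustrative stage values:

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

// Test workload: how much traffic and which traffic pattern (a ramping VU profile here).
export const options = {
  stages: [
    { duration: '5m', target: 100 }, // ramp up to 100 virtual users
    { duration: '30m', target: 100 }, // hold the plateau
    { duration: '5m', target: 0 }, // ramp down
  ],
};

// Test scenario: what the test verifies (a single endpoint and one check, as an example).
export default function () {
  const res = http.get('https://example.com/api/products'); // hypothetical endpoint
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```

Because the two aspects are separate, the same scenario can be reused across the distinct load test types by swapping only the workload.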
@@ -81,8 +80,8 @@ When planning test coverage or automation, consider starting with tests that:
 
 - Verify the core functionality crucial to the product and business.
 - Evaluate the performance in scenarios with high traffic.
-- Provide key performance metrics to track trends and compare against baselines.
-- Validate reliability goals or SLOs with [pass/fail criteria](/using-k6/thresholds/).
+- Track key performance metrics to observe their trends and compare against their baselines.
+- Validate reliability goals or SLOs with Pass/Fail criteria.
 
 ## Model the scenarios and workload
 
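Pass/Fail criteria in k6 are expressed as thresholds on metrics; besides a simple limit, a threshold can also abort the run early when it is clearly breached. A minimal sketch with placeholder SLO values:

```javascript
export const options = {
  thresholds: {
    // SLO: at least 99% of functional checks must pass
    checks: ['rate>0.99'],
    // SLO: 95th percentile latency under 500ms; abort the run early if badly breached,
    // but wait one minute before evaluating to avoid aborting on startup noise
    http_req_duration: [{ threshold: 'p(95)<500', abortOnFail: true, delayAbortEval: '1m' }],
  },
};
```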
@@ -147,11 +146,29 @@ Run all the available smoke tests: end-to-end, integration, and unit test types.
 
 These environments are available to test upcoming releases, with each organization using them differently as part of their unique release process.
 
-As a general rule on pre-release environments, we should run larger tests with quality gates, [Pass/Fail criteria](/using-k6/thresholds/) that validate our SLOs or reliability goals. However, for major releases or changes, do not rely only on quality gates to guarantee the reliability of the entire system.
+As a general rule on pre-release environments, we should run our larger tests with quality gates, Pass/Fail criteria that validate SLOs or reliability goals. In k6, you can use [Thresholds](/using-k6/thresholds/) in `options` as follows:
+
+```javascript
+export const options = {
+  thresholds: {
+    // http errors should be less than 1%
+    http_req_failed: ['rate<0.01'],
+    // 90% of requests should be below 600ms
+    http_req_duration: ['p(90)<600'],
+    // 95% of requests tagged as static content should be below 200ms
+    'http_req_duration{type:staticContent}': ['p(95)<200'],
+    // the error rate of my custom metric should be below 5%
+    my_custom_metric: ['rate<0.05']
+  },
+};
+```
+
+
+However, it can be challenging to effectively assess all reliability goals. Frequently, you’ll encounter “false positives” and “true negatives” when testing with distinct types of load.
 
-It can be challenging to effectively assess all the reliability goals. Frequently, you’ll encounter “false positives” and “true negatives” during your performance testing journey. Only relying on quality gates leads to a wrong sense of security in your release process.
+For larger tests, verifying the release based “only” on a Pass/Fail status can create a false sense of security in your performance testing and release process.
 
-In major releases, we recommend having these environments available for a few hours or days to properly test the status of the release. Our recommendations include:
+We recommend keeping the pre-release environment available for a few hours or days to thoroughly test the entire system. Our recommendations include:
 
 - Allocating a period of one to several days for validating the release.
 - Executing all the existing average-load, stress, and spike tests.
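For the last point, the average-load, stress, and spike tests usually share the same scenario and differ only in workload; a sketch of a spike-style workload, with illustrative durations and VU targets:

```javascript
export const options = {
  stages: [
    { duration: '2m', target: 2000 }, // fast ramp-up to a very high load
    { duration: '1m', target: 2000 }, // short plateau at the spike
    { duration: '2m', target: 0 }, // quick ramp-down
  ],
};
```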