src/data/markdown/translated-guides/en/07 Testing Guides/02 Automated performance testing.md
4 additions & 5 deletions
@@ -146,7 +146,7 @@ Run all the available smoke tests: end-to-end, integration, and unit test types.
These environments are available to test upcoming releases, with each organization using them differently as part of their unique release process.
- As a general rule on pre-release environments, we should run our larger tests with quality gates, Pass/Fail criteria that validate SLOs or reliability goals. In k6, you can use [Thresholds](/using-k6/thresholds/) in `options` as follows:
+ As a general rule on pre-release environments, we should run our larger tests with quality gates, Pass/Fail criteria that validate SLOs or reliability goals. In k6, you can do that by using [Thresholds](/using-k6/thresholds/) in `options` as follows:
```javascript
export const options = {
@@ -163,7 +163,6 @@ export const options = {
};
```
-
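The body of the `options` block is collapsed in this diff. As a point of reference, a minimal sketch of such a thresholds-based quality gate might look like the following; the metric limits are illustrative assumptions, not the values from the original file:

```javascript
export const options = {
  thresholds: {
    // Quality gates: the test run fails if any expression is not met.
    http_req_failed: ['rate<0.01'],    // illustrative SLO: less than 1% of requests may fail
    http_req_duration: ['p(95)<500'],  // illustrative SLO: 95% of requests must finish below 500ms
  },
};
```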
However, it can be challenging to effectively assess all reliability goals. Frequently, you’ll encounter “false positives” and “false negatives” when testing with distinct types of load.
For larger tests, verifying the release based “only” on a Pass/Fail status can create a false sense of security in your performance testing and release process.
@@ -183,19 +182,19 @@ The staging environment is always available and consistently updated with the la
In this case, we should choose the tests that assess key performance indicators and schedule them for consistent execution to collect metrics over a period. Start by selecting a few tests and scheduling their runs two to three times per week.
- Like in the pre-release environment, we suggest executing each test at least twice consecutively; doing so allows us to ignore unreliable tests.
+ Like in the pre-release environment, we suggest executing each test at least twice consecutively, allowing us to ignore unreliable tests.
As we aim to find performance changes, consider scaling the workload of the test according to the staging infrastructure, which often does not match the scale of the production environment.
### Production
Typically, the previous testing environments do not perfectly mirror the production environment, with differences in test data, infrastructure resources, and scalability policies.
- Testing in production provides real-world insights that cannot be achieved in other environments. However, production testing requires a careful approach to handling and storing test data in production and avoiding impacting the actual users.
+ Testing in production provides real-world insights that cannot be achieved in other environments. However, production testing requires a careful approach to handling and storing test data in production and avoiding impacting real users.
A common low-risk practice is to use smoke tests for synthetic testing, also called synthetic monitoring. Testing production with minimal load is safe. Schedule smoke tests every five minutes, establishing Pass/Fail test conditions and an effective alerting mechanism. For instance, if six consecutive test runs fail, send an alert.
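To illustrate the kind of script involved, a smoke test used for synthetic monitoring could look roughly like the sketch below; the endpoint, check, and threshold values are assumptions for illustration:

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 1,          // minimal load: a single virtual user
  duration: '1m',
  thresholds: {
    checks: ['rate>0.99'],             // Pass/Fail condition for the scheduled run
    http_req_duration: ['p(95)<800'],  // illustrative latency gate
  },
};

export default function () {
  // k6's public demo site stands in for the real endpoint here
  const res = http.get('https://test.k6.io');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```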
- If release strategies like Blue/Green or Canary deployments are in place, run load tests against the Green or new version to validate the release. It is an ideal moment to see how SLOs behave in production.
+ If release strategies like Blue/Green or Canary deployments are in place, run load tests against the Green or new version to validate the release. It's an ideal moment to see how SLOs behave in production.
Also, consider scheduling tests nightly or when the system handles less traffic. The goal is not to stress the system, but to consistently gather performance results to compare changes and analyze performance trends. For instance, schedule tests with half of the average traffic level on a weekly basis.
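For example, a scheduled trend-collection run at roughly half the average traffic could be modeled with a constant arrival rate; the figures below assume an average of 100 requests per second and are purely illustrative:

```javascript
export const options = {
  scenarios: {
    weekly_trend: {
      executor: 'constant-arrival-rate',
      rate: 50,              // assumed ~half of an average 100 requests/s
      timeUnit: '1s',
      duration: '30m',
      preAllocatedVUs: 100,  // VUs reserved up front to sustain the arrival rate
    },
  },
};
```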