Commit 11685cd

Small updates to APT guide
1 parent 2f6df21 commit 11685cd

1 file changed: +4 -5 lines changed

src/data/markdown/translated-guides/en/07 Testing Guides/02 Automated performance testing.md

Lines changed: 4 additions & 5 deletions
@@ -146,7 +146,7 @@ Run all the available smoke tests: end-to-end, integration, and unit test types.

These environments are available to test upcoming releases, with each organization using them differently as part of their unique release process.

-As a general rule on pre-release environments, we should run our larger tests with quality gates, Pass/Fail criteria that validate SLOs or reliability goals. In k6, you can use [Thresholds](/using-k6/thresholds/) in `options` as follows:
+As a general rule on pre-release environments, we should run our larger tests with quality gates, Pass/Fail criteria that validate SLOs or reliability goals. In k6, you can do that by using [Thresholds](/using-k6/thresholds/) in `options` as follows:

```javascript
export const options = {
@@ -163,7 +163,6 @@ export const options = {
};
```

-
However, it can be challenging to effectively assess all reliability goals. Frequently, you’ll encounter “false positives” and “true negatives” when testing with distinct types of load.

For larger tests, verifying the release based “only” on a Pass/Fail status can create a false sense of security in your performance testing and release process.
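A minimal sketch of the kind of threshold-based quality gate described above; the metric names and limits here are illustrative placeholders, not necessarily the guide's own `options` example:

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  // Quality gates expressed as Thresholds: the run fails if any condition is missed.
  // Metric choices and limits are illustrative placeholders.
  thresholds: {
    http_req_failed: ['rate<0.01'],   // error rate stays under 1%
    http_req_duration: ['p(95)<500'], // 95th-percentile latency under 500 ms
  },
};

export default function () {
  http.get('https://test.k6.io'); // placeholder target
  sleep(1);
}
```

When a threshold is crossed, k6 marks the run as failed and exits with a non-zero code, which is what lets a CI pipeline treat the test as a gate.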
@@ -183,19 +182,19 @@ The staging environment is always available and consistently updated with the la

In this case, we should choose the tests that assess key performance indicators and schedule them for consistent execution to collect metrics over a period. Start by selecting a few tests and scheduling their runs two to three times per week.

-Like in the pre-release environment, we suggest executing each test at least twice consecutively; doing so allows us to ignore unreliable tests.
+Like in the pre-release environment, we suggest executing each test at least twice consecutively, allowing us to ignore unreliable tests.

As we aim to find performance changes, consider scaling the workload of the test according to the staging infrastructure, which often does not match the scale of the production environment.

### Production

Typically, the previous testing environments do not perfectly mirror the production environment, with differences in test data, infrastructure resources, and scalability policies.

-Testing in production provides real-world insights that cannot be achieved in other environments. However, production testing requires a careful approach to handling and storing test data in production and avoiding impacting the actual users.
+Testing in production provides real-world insights that cannot be achieved in other environments. However, production testing requires a careful approach to handling and storing test data in production and avoiding impacting real users.

A low-risk common practice is to utilize smoke tests for synthetic testing, also called synthetic monitoring. Testing production with minimal load is safe. Schedule smoke tests every five minutes, establishing Pass/Fail test conditions and an effective alerting mechanism. For instance, if six consecutive test runs fail, send an alert.

-If release strategies like Blue/Green or Canary deployments are in place, run load tests against the Green or new version to validate the release. It is an ideal moment to see how SLOs behave in production.
+If release strategies like Blue/Green or Canary deployments are in place, run load tests against the Green or new version to validate the release. It's an ideal moment to see how SLOs behave in production.

Also, consider scheduling nightly tests or when the system handles less traffic. The goal is not to stress the system, but to consistently gather performance results to compare changes and analyze performance trends. For instance, schedule tests with half of the average traffic level on a weekly basis.
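A minimal sketch of the kind of smoke test the synthetic-monitoring paragraph above describes, assuming an illustrative health endpoint; the Pass/Fail conditions live in `thresholds`, while the five-minute schedule and the "six consecutive failures" alert would be configured in the scheduler or monitoring tool, not in the script:

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  // Keep the load minimal so the run is safe against production.
  vus: 1,
  duration: '1m',
  // Pass/Fail conditions evaluated on every scheduled run.
  thresholds: {
    http_req_failed: ['rate<0.01'],
    checks: ['rate>0.99'],
  },
};

export default function () {
  const res = http.get('https://example.com/health'); // placeholder endpoint
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(5);
}
```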
