Commit c55d7e7

Merge branch 'main' into patch-1
2 parents 4c70c40 + 7ed08f2

File tree: 1 file changed

src/data/markdown/translated-guides/en/07 Testing Guides/02 Automated performance testing.md

Lines changed: 30 additions & 13 deletions
@@ -1,15 +1,15 @@
 ---
 title: 'Automated performance testing'
 head_title: 'How to Automate Performance Testing: The k6 Guide'
-excerpt: 'Performance testing automation is about establishing a repeatable and consistent process that checks reliability issues at distinct phases of the release cycle.'
+excerpt: 'Performance testing automation is about establishing a repeatable and consistent process that checks reliability issues at different stages of the development and release cycle.'
 ---


-Performance testing automation is about establishing **a repeatable and consistent process that checks reliability issues** at different stages of development and release cycle.
+Performance testing automation is about establishing **a repeatable and consistent process that checks reliability issues** at different stages of the development and release cycle. For instance, you could run performance tests from CI/CD pipelines and nightly jobs, or manually trigger load tests and monitor their impact in real-time.

-Performance testing automation does not remove the need to run tests manually. For instance, you could run performance tests from CI/CD pipelines and nightly jobs, or manually trigger load tests and monitor their impact in real-time.
+In performance testing, automation does not remove the need to run tests manually. It’s about planning performance tests as part of your Software Development Life Cycle (SDLC) for **continuous performance testing**.

-This guide provides general recommendations to help you plan and define a strategy for running automated performance tests as part of your Software Development Life Cycle (SDLC) for **continuous performance testing**:
+This guide provides general recommendations to help you plan and define a strategy for running automated performance tests:

 - Which tests to automate?
 - Which environment to test?
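
For illustration, here is a minimal sketch of the kind of k6 script a CI/CD pipeline or nightly job could execute with `k6 run`; the target URL, virtual-user count, and error budget are hypothetical placeholders, not part of this commit:

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 5, // a small, constant number of virtual users (placeholder)
  duration: '1m', // short enough to run on every build (placeholder)
  thresholds: {
    // fail the run on more than 1% request errors
    http_req_failed: ['rate<0.01'],
  },
};

export default function () {
  http.get('https://test.k6.io'); // hypothetical system under test
  sleep(1);
}
```

Because k6 exits with a non-zero code when a threshold fails, the pipeline step fails with it, which is what turns a script like this into an automated check.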
@@ -52,7 +52,7 @@ Automation often refers to running tests with pass/fail conditions as part of a
 The first step in the process is reviewing your existing or planned tests and understanding each test's purpose. Can the test serve additional purposes if executed regularly? Some common goals are:

 - Comparing current performance against an existing performance baseline.
-- Understanding the overall trend in key performance metrics.
+- Understanding variances over time in key performance metrics. Observing flat or changing trends.
 - Detecting regressions of new releases.
 - Testing Service Level Objectives (SLOs) on a regular basis.
 - Testing critical areas during the release process.
@@ -64,9 +64,8 @@ When considering a consistent and ongoing purpose for each test, you discover wh

 Performance tests can generally be divided into two aspects:

-- Test scenario: What is the test verifying?
-- Test workload: How does the system respond when handling certain traffic?
-
+- Test scenario (test case): What is the test verifying?
+- Test workload (test load): How much traffic and which traffic pattern?

 Your test suite should incorporate a diverse range of tests that can verify critical areas of your system using distinct [load test types](/test-types/load-test-types/).

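To make the two aspects concrete, here is an illustrative k6 sketch (the stages and endpoint are placeholder values, not prescribed by the guide): the workload is declared in `options`, while the scenario is the logic in the default function:

```javascript
import http from 'k6/http';
import { check } from 'k6';

// Test workload: how much traffic and which traffic pattern.
export const options = {
  stages: [
    { duration: '5m', target: 100 }, // ramp up to 100 virtual users
    { duration: '10m', target: 100 }, // sustain the load
    { duration: '5m', target: 0 }, // ramp down
  ],
};

// Test scenario: what the test verifies.
export default function () {
  const res = http.get('https://test.k6.io/news.php'); // placeholder endpoint
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```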
@@ -81,8 +80,8 @@ When planning test coverage or automation, consider starting with tests that:

 - Verify the core functionality crucial to the product and business.
 - Evaluate the performance in scenarios with high traffic.
-- Provide key performance metrics to track trends and compare against baselines.
-- Validate reliability goals or SLOs with [pass/fail criteria](/using-k6/thresholds/).
+- Track key performance metrics to observe their trends and compare against their baselines.
+- Validate reliability goals or SLOs with Pass/Fail criteria.

 ## Model the scenarios and workload

@@ -147,11 +146,29 @@ Run all the available smoke tests: end-to-end, integration, and unit test types.

 These environments are available to test upcoming releases, with each organization using them differently as part of their unique release process.

-As a general rule on pre-release environments, we should run larger tests with quality gates, [Pass/Fail criteria](/using-k6/thresholds/) that validate our SLOs or reliability goals. However, for major releases or changes, do not rely only on quality gates to guarantee the reliability of the entire system.
+As a general rule on pre-release environments, we should run our larger tests with quality gates, Pass/Fail criteria that validate SLOs or reliability goals. In k6, you can use [Thresholds](/using-k6/thresholds/) in `options` as follows:
+
+```javascript
+export const options = {
+  thresholds: {
+    // http errors should be less than 1%
+    http_req_failed: ['rate<0.01'],
+    // 90% of requests should be below 600ms
+    http_req_duration: ['p(90)<600'],
+    // 99% of requests tagged as static content should be below 250ms
+    'http_req_duration{type:staticContent}': ['p(99)<250'],
+    // the error rate of my custom metric should be below 5%
+    my_custom_metric: ['rate<0.05']
+  },
+};
+```
+
+
+However, it can be challenging to effectively assess all reliability goals. Frequently, you’ll encounter “false positives” and “true negatives” when testing with distinct types of load.

-It can be challenging to effectively assess all the reliability goals. Frequently, you’ll encounter “false positives” and “true negatives” during your performance testing journey. Only relying on quality gates leads to a wrong sense of security in your release process.
+For larger tests, verifying the release based “only” on a Pass/Fail status can create a false sense of security in your performance testing and release process.

-In major releases, we recommend having these environments available for a few hours or days to properly test the status of the release. Our recommendations include:
+We recommend keeping the pre-release environment available for a few hours or days to thoroughly test the entire system. Our recommendations include:

 - Allocating a period of one to several days for validating the release.
 - Executing all the existing average-load, stress, and spike tests.
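
Those three test types differ mostly in workload shape, so one way to reuse a single scenario across them, sketched here with placeholder durations and targets that are not part of this commit, is to switch the `stages` per run:

```javascript
import http from 'k6/http';

// Placeholder workload shapes; tune durations and targets to your system.
const workloads = {
  // Average-load: ramp to typical traffic and hold.
  average: [
    { duration: '10m', target: 100 },
    { duration: '30m', target: 100 },
    { duration: '5m', target: 0 },
  ],
  // Stress: push beyond the expected peak.
  stress: [
    { duration: '10m', target: 300 },
    { duration: '30m', target: 300 },
    { duration: '5m', target: 0 },
  ],
  // Spike: a sudden, short burst of traffic.
  spike: [
    { duration: '1m', target: 500 },
    { duration: '2m', target: 0 },
  ],
};

export const options = {
  // Select the shape per run, e.g. `k6 run -e WORKLOAD=spike script.js`.
  stages: workloads[__ENV.WORKLOAD || 'average'],
};

export default function () {
  http.get('https://test.k6.io'); // placeholder system under test
}
```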
