
Commit e3b9de2

small updates for consistency/clarity
1 parent 8d04561 commit e3b9de2


4 files changed: +53 / -43 lines changed


src/data/markdown/docs/05 Examples/02 Tutorials/01 Get started with k6/100 Test-for-functional-behavior.md

Lines changed: 19 additions & 3 deletions
@@ -66,13 +66,23 @@ After the test finishes, k6 reports the [default result summary](/results-output
 ...
 ```
 
-To make sure you're getting the right response, you could log the response body to the console as follows:
+As an optional step, you can log the response body to the console to make sure you're getting the right response.
+
+<CodeGroup labels={["api-test.js"]} lineNumbers={[]} showCopyButton={[true]}>
 
 ```javascript
-const res = http.post(url, payload, params);
-console.log(res.body);
+export default function () {
+  ...
+
+  const res = http.post(url, payload, params);
+
+  // Log the response body
+  console.log(res.body);
+}
 ```
 
+</CodeGroup>
+
 ## Add response checks
 
 Once you're sure the request is well-formed, add a [check](/using-k6/checks) that validates whether the system responds with the expected status code.
@@ -111,6 +121,12 @@ Once you're sure the request is well-formed, add a [check](/using-k6/checks) tha
 
 </CodeGroup>
 
+1. Run the script again.
+
+   ```bash
+   k6 run api-test.js
+   ```
+
 1. Inspect the result output for your check.
    It should look something like this.
 
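The check itself is defined earlier in the tutorial file and isn't visible in these hunks. As a rough illustration of what a k6 status-code check looks like, here is a minimal sketch; the endpoint, payload, and field names are placeholders, not taken from the commit:

```javascript
import http from "k6/http";
import { check } from "k6";

export default function () {
  // Placeholder endpoint and payload, for illustration only
  const url = "https://example.test/api/login";
  const payload = JSON.stringify({ username: "test_user", password: "secret" });
  const params = { headers: { "Content-Type": "application/json" } };

  const res = http.post(url, payload, params);

  // Validate that the system responds with the expected status code
  check(res, {
    "response code was 200": (r) => r.status === 200,
  });
}
```

When the check passes, the end-of-test summary lists it with a ✓ next to its name.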

src/data/markdown/docs/05 Examples/02 Tutorials/01 Get started with k6/200 Test for performance.md

Lines changed: 19 additions & 17 deletions
@@ -11,7 +11,7 @@ In this tutorial, learn how to:
 - Use [thresholds](/using-k6/thresholds) to assert for performance criteria
 - Configure load increases through [scenarios](/using-k6/scenarios)
 
-These examples build on the script from the previous section.
+These examples build on the script from the [previous section](/examples/tutorials/get-started-with-k6/test-for-functional-behavior/).
 
 ## Context: meet service-level objectives
 
@@ -24,7 +24,8 @@ The service must meet these SLOs under different types of usual traffic.
 ## Assert for performance with thresholds
 
 To codify the SLOs, add [_thresholds_](/using-k6/thresholds) to test that your system performs to its goal criteria.
-Thresholds are set in options.
+
+Thresholds are set in the [`options`](/using-k6/k6-options/) object.
 
 
 ```javascript
@@ -37,7 +38,7 @@ export const options = {
 };
 ```
 
-Add this [`options`](/using-k6/k6-options/) object with thresholds to your script `api-test.js`.
+Add this `options` object with thresholds to your script `api-test.js`.
 
 
 <CodeGroup labels={["api-test.js"]} lineNumbers={[true]} showCopyButton={[true]}
@@ -83,13 +84,12 @@ export default function () {
 
 </CodeGroup>
 
-Run the test as usual:
+Run the test.
 
 ```bash
 k6 run api-test.js
 ```
 
-
 Inspect the console output to determine whether performance crossed a threshold.
 
 ```
@@ -98,7 +98,7 @@ Inspect the console output to determine whether performance crossed a threshold.
  ✗ http_req_failed................: 0.00% ✓ 0 ✗ 1
 ```
 
-
+The ✓ and ✗ symbols indicate whether the performance thresholds passed or failed.
 
 ## Test performance under increasing load
 
@@ -110,7 +110,7 @@ Scenarios schedule load according to the number of VUs, number of iterations, VU
 
 ### Run a smoke test
 
-Start small. Run a [smoke test](/test-types/smoke-testing "a small test to confirm the script works properly") to see your script can handle minimal load.
+Start small. Run a [smoke test](/test-types/smoke-testing "a small test to confirm the script works properly") to check that your script can handle a minimal load.
 
 To do so, use the [`--iterations`](/using-k6/k6-options/reference/#iterations) flag with an argument of 10 or fewer.
 
@@ -123,15 +123,11 @@ Good thing you ran the test early!
 
 ### Run a test against an average load
 
-Now increase the load.
-
 Generally, traffic doesn't arrive all at once.
 Rather, it gradually increases to a peak load.
 To simulate this, testers increase the load in _stages_.
 
-Since this is a learning environment, the stages are still quite short.
-Add the following _scenario_ to your options `object` and rerun the test.
-Where the smoke test defined the load in terms of iterations, this configuration uses the [`ramping-vus` executor](/using-k6/scenarios/executors/ramping-vus/) to express load through virtual users and duration.
+Add the following `scenarios` property to your `options` object and rerun the test.
 
 ```javascript
 export const options = {
@@ -158,6 +154,9 @@ export const options = {
 };
 ```
 
+Since this is a learning environment, the stages are still quite short.
+Where the smoke test defined the load in terms of iterations, this configuration uses the [`ramping-vus` executor](/using-k6/scenarios/executors/ramping-vus/) to express load through virtual users and duration.
+
 Run the test with no command-line flags:
 
 ```bash
@@ -167,7 +166,6 @@ k6 run api-test.js
 The load is small, so the server should perform within thresholds.
 However, this test server may be under load by many k6 learners, so the results are unpredictable.
 
-
 <Blockquote mod="note" title="To visualize results...">
 
 At this point, it'd be nice to have a graphical interface to visualize metrics as they occur.
@@ -184,13 +182,13 @@ In this case, run the test until the availability (error rate) threshold is cros
 
 To do this:
 
-1. Configure the threshold to abort when it fails.
+1. Add the `abortOnFail` property to `http_req_failed`.
 
    ```javascript
   http_req_failed: [{ threshold: "rate<0.01", abortOnFail: true }], // http errors should be less than 1%, otherwise abort the test
   ```
 
-1. Configure the load to ramp the test up until it fails.
+1. Update the `scenarios` property to ramp the test up until it fails.
 
   ```javascript
   export const options = {
@@ -219,7 +217,6 @@ To do this:
 ```
 
 Here is the full script.
-Copy and run it with `k6 run api-test.js`.
 
 <CodeGroup labels={["api-test.js"]} lineNumbers={[true]} showCopyButton={[true]}
 heightTogglers={[true]}>
@@ -281,13 +278,18 @@ export default function () {
 
 </CodeGroup>
 
+Run the test.
+
+```bash
+k6 run api-test.js
+```
+
 Did the threshold fail? If not, add another stage with a higher target and try again. Repeat until the threshold aborts the test:
 
 ```bash
 ERRO[0010] thresholds on metrics 'http_req_duration, http_req_failed' were breached; at least one has abortOnFail enabled, stopping test prematurely
 ```
 
-
 ## Next steps
 
 In this tutorial, you used [thresholds](/using-k6/thresholds/) to assert performance and [Scenarios](/using-k6/scenarios) to schedule different load patterns. To learn more about the usual load patterns and their goals, read [Load Test Types](/test-types/load-test-types/)
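Taken together, the hunks above describe thresholds with `abortOnFail` and a `ramping-vus` scenario. A rough sketch of how such an `options` object could look is below; the scenario name, threshold values, stage targets, and durations are illustrative, not the commit's exact values:

```javascript
export const options = {
  thresholds: {
    // Example SLO-style thresholds; abortOnFail stops the test when the error-rate threshold is breached
    http_req_duration: ["p(99)<1000"],
    http_req_failed: [{ threshold: "rate<0.01", abortOnFail: true }],
  },
  scenarios: {
    // ramping-vus expresses load through virtual users and stage durations
    breaking: {
      executor: "ramping-vus",
      stages: [
        { duration: "10s", target: 20 },
        { duration: "50s", target: 20 },
        { duration: "50s", target: 40 },
        { duration: "50s", target: 60 },
      ],
    },
  },
};
```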

src/data/markdown/docs/05 Examples/02 Tutorials/01 Get started with k6/300 Analyze results.md

Lines changed: 14 additions & 22 deletions
@@ -45,16 +45,14 @@ vus_max........................: 20 min=20 max=20
 
 For simplicity to learn about [k6 metric results](/using-k6/metrics/reference/), this tutorial uses the [JSON output](/results-output/real-time/json) and [jq](https://jqlang.github.io/jq/) to filter results.
 
+For other options to analyze test results such as storage and time-series visualizations in real-time, check out:
 
-For other options to analyze test results such as storage and time-series visualizations in real-time, review:
 - [Results output](/results-output/overview/)
 - [Ways to visualize k6 results](https://k6.io/blog/ways-to-visualize-k6-results/)
 
-
-
 ## Write time-series results to a JSON file
 
-To output results as JSON lines, use the `--out` flag.
+To output results to a JSON file, use the `--out` flag.
 
 ```bash
 k6 run --out json=results.json api-test.js
@@ -66,14 +64,13 @@ Then run this `jq` command to filter the latency results; `http_req_duration` me
 jq '. | select(.type == "Point" and .metric == "http_req_duration")' results.json
 ```
 
-k6 results have a number of [built-in tags](/using-k6/tags-and-groups/#system-tags).
-For example, filter results to only results where the status is 200:
+k6 results have a number of [built-in tags](/using-k6/tags-and-groups/#system-tags). For example, filter results to only results where the status is 200.
 
 ```bash
 jq '. | select(.type == "Point" and .data.tags.status == "200")' results.json
 ```
 
-And calculate the aggregated value of any metric with any particular tags. For example,
+Or calculate the aggregated value of any metric with any particular tags.
 
 <CodeGroup labels={["Average", "Min", "Max"]} lineNumbers={[false]} showCopyButton={[true]} heightTogglers={[true]}>
 
@@ -93,12 +90,9 @@ jq '. | select(.type == "Point" and .metric == "http_req_duration") | .data.valu
 
 ## Apply custom tags
 
-You can also apply [_Tags_](/using-k6/tags-and-groups/#tags) to requests or code blocks.
-To do so:
+You can also apply [_Tags_](/using-k6/tags-and-groups/#tags) to requests or code blocks. For example, this is how you can add a [`tags`](/using-k6/tags-and-groups/#tags) object to the [request params](/javascript-api/k6-http/params/).
 
-Add a [`tags`](/using-k6/tags-and-groups/#tags) object in the [request params](/javascript-api/k6-http/params/). Give the tag a key and value.
-
-```javascript
+```javascript
 const params = {
   headers: {
     "Content-Type": "application/json",
@@ -107,9 +101,9 @@ Add a [`tags`](/using-k6/tags-and-groups/#tags) object in the [request params](/
     "my-custom-tag": "auth-api",
   },
 };
-```
+```
 
-Pass `params` to the `http` request. Create the file `tagged-login.js`:
+Create a new script named "tagged-login.js", and add a custom tag to it.
 
 <CodeGroup labels={["tagged-login.js"]} showCopyButton={[true]} heightTogglers={[true]}>
 
@@ -153,9 +147,8 @@ jq '. | select(.type == "Point" and .metric == "http_req_duration" and .data.tag
 
 ## Organize requests in groups
 
-You can also organize your test logic into [Groups](/using-k6/tags-and-groups#groups), test logic inside a `group` tags to all requests and metrics within its block.
-Groups can help you to organize the test as a series of logical transactions or blocks.
-
+You can also organize your test logic into [Groups](/using-k6/tags-and-groups#groups). Test logic inside a `group` adds a tag to all requests and metrics within its block.
+Groups can help you organize the test as a series of logical transactions or blocks.
 
 ### Context: a new test to group test logic
 
@@ -275,8 +268,7 @@ As an example, create a metric that collects latency results for each group:
 1. Create two duration trend metric functions.
 1. In each group, add the `duration` time to the trend for requests to `contacts` and the `coin_flip` endpoints.
 
-
-<CodeGroup labels={["Adding custom metrics"]} lineNumbers={["false"]} showCopyButton={[true]} heightTogglers={[true]}>
+<CodeGroup labels={["multiple-flows.js"]} lineNumbers={["false"]} showCopyButton={[true]} heightTogglers={[true]}>
 
 ```javascript
 //import necessary modules
@@ -326,20 +318,20 @@ export default function () {
 
 </CodeGroup>
 
-Run the test with small number of iterations and output the results to `results.json`:
+Run the test with a small number of iterations and output the results to `results.json`.
 
 ```bash
 k6 run multiple-flows.js --out json=results.json --iterations 10
 ```
 
-Look for the custom trend metrics in the end-of-test console summary:
+Look for the custom trend metrics in the end-of-test console summary.
 
 ```bash
 coinflip_duration..............: avg=119.6438 min=116.481 med=118.4755 max=135.498 p(90)=121.8459 p(95)=123.89565
 contacts_duration..............: avg=125.76985 min=116.973 med=120.6735 max=200.507 p(90)=127.9271 p(95)=153.87245
 ```
 
-You can also query custom metric results from the JSON results. For example, to get the aggregated results as:
+You can also query custom metric results from the JSON results. For example, you can get the aggregated results as follows.
 
 
 <CodeGroup labels={["Average", "Min", "Max"]} lineNumbers={[false]} showCopyButton={[true]} heightTogglers={[true]}>

src/data/markdown/docs/05 Examples/02 Tutorials/01 Get started with k6/400 Reuse and re-run tests.md

Lines changed: 1 addition & 1 deletion
@@ -175,7 +175,7 @@ To do so, follow these steps:
   // Put visits to contact page in one group
   contacts(baseUrl);
   // Coinflip players in another group
-  contacts(baseUrl);
+  coinflip(baseUrl);
 }
 
 //define configuration
