Merged
2 changes: 1 addition & 1 deletion exercises/01.setup/README.mdx
Original file line number Diff line number Diff line change
@@ -4,7 +4,7 @@

<callout-success>Proficiency with any tool starts from the proper configuration.</callout-success>

- Let's kick things off by getting your more productive with Vitest. Specifically, focusing on the following areas:
+ Let's kick things off by getting you more productive with Vitest. Specifically, focusing on the following areas:

1. Make you write, iterate, and debug tests faster;
1. Reuse Vitest for testing different code with different requirements;
4 changes: 2 additions & 2 deletions exercises/02.context/01.solution.custom-fixtures/README.mdx
@@ -99,7 +99,7 @@ export const test = testBase.extend<Fixtures>({
})
```

- Here, I am maping over the given `items` and making sure that each cart item has complete values. I am using the `faker` object to generate random values so I don't have to describe the entire cart item if my test case is interested only in some of its properties, like `price` and `quantity`, for example.
+ Here, I am mapping over the given `items` and making sure that each cart item has complete values. I am using the `faker` object to generate random values so I don't have to describe the entire cart item if my test case is interested only in some of its properties, like `price` and `quantity`, for example.

Finally, to use this custom test context and my fixture, I'll go to the `src/cart-utils.test.ts` test file and import the custom `test` function I've created:

@@ -154,7 +154,7 @@ The `cart` fixture effectively becomes a _shared state_. If you change its value

<callout-success>Use fixtures to _help create values_ but always make the values _known in the context of the test_. Everything the test needs has to be known and controlled within that test.</callout-success>

- Once exception to this rule is _resused value that is never going to change_ within the same test run. For example, if you're testing against different locales of your application, you might want to set the `locale` before the test run and expose its value as a fixture:
+ One exception to this rule is a _reused value that is never going to change_ within the same test run. For example, if you're testing against different locales of your application, you might want to set the `locale` before the test run and expose its value as a fixture:

```ts highlight=1
test('...', ({ locale }) => {})
@@ -52,7 +52,7 @@ Once you access that fixture from the test context object, Vitest will know that

But what about the tests that _don't_ use that fixture?

- Since they never reference it, _Vitest will skip its initalization_. That makes sense. If you don't need a temporary file for this test, there's no need to create and delete the temporary directory. Nothing is going to use it.
+ Since they never reference it, _Vitest will skip its initialization_. That makes sense. If you don't need a temporary file for this test, there's no need to create and delete the temporary directory. Nothing is going to use it.

<callout-info>In other words, all fixtures are _lazy_ by default. Their implementation won't be called unless you reference that fixture in your test.</callout-info>
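Conceptually, you can picture lazy fixtures like this. This is a deliberately simplified model for illustration, not Vitest's actual implementation:

```ts
type FixtureFactory = () => unknown

// A context object whose fixtures are only created on first access.
function createLazyContext(factories: Record<string, FixtureFactory>) {
  const cache = new Map<string, unknown>()
  return new Proxy({} as Record<string, unknown>, {
    get(_target, key) {
      const name = String(key)
      if (!cache.has(name)) {
        // The factory runs lazily, on the first property access.
        cache.set(name, factories[name]())
      }
      return cache.get(name)
    },
  })
}

let tempDirInitialized = false

const context = createLazyContext({
  tempDir: () => {
    tempDirInitialized = true
    return '/tmp/example'
  },
})

// A test that never references `tempDir` never pays for its setup:
console.log(tempDirInitialized) // false
void context.tempDir
console.log(tempDirInitialized) // true
```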

@@ -57,7 +57,7 @@ While this testing strategy works, there are two issues with it:
1. **It's quite verbose**. Imagine employing this strategy to verify dozens of scenarios. You are paying 3 LOC for what is, conceptually, a single assertion;
1. **It's distracting**. Parsing the object and validating the parsed result are _technical details_ exclusive to the intention. It's not the intention itself. It has nothing to do with the `fetchUser()` behaviors you're testing.

- Luckily, there are ways to redesign this approach to be more declartive and expressive by using a _custom matcher_.
+ Luckily, there are ways to redesign this approach to be more declarative and expressive by using a _custom matcher_.
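The general shape of a custom matcher is a function that returns a `pass` flag and a lazy failure `message`. As a rough sketch (the Zod-style `safeParse()` schema is an assumption for illustration; the matcher the exercise builds may differ):

```ts
// Anything with a Zod-like `safeParse()` works for this sketch.
interface Schema {
  safeParse(input: unknown): { success: boolean }
}

// The actual value arrives first, followed by any matcher arguments.
function toMatchSchema(received: unknown, schema: Schema) {
  const result = schema.safeParse(received)
  return {
    pass: result.success,
    message: () =>
      result.success
        ? 'expected value not to match the schema'
        : 'expected value to match the schema',
  }
}

// It would then be registered with: expect.extend({ toMatchSchema })
```

Once registered, the three lines of parse-and-validate collapse into a single `expect(user).toMatchSchema(userSchema)` assertion.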

## Your task

@@ -2,7 +2,7 @@

<EpicVideo url="https://www.epicweb.dev/workshops/advanced-vitest-patterns/assertions/03-02-problem" />

- _Assymetric matcher_ is the one where the `actual` value is literal while the `expected` value is an _expression_.
+ _Asymmetric matcher_ is the one where the `actual` value is literal while the `expected` value is an _expression_.

```ts nonumber
// 👇 Literal string
@@ -48,7 +48,7 @@ expect(user).toEqual({
})
```

- > Here, the `user` object is expected to literally match the object with the `id` and `posts` properties. While the expectation toward the `id` property is literal, the `posts` proprety is described as an abstract `Array<{ id: string }>` object.
+ > Here, the `user` object is expected to literally match the object with the `id` and `posts` properties. While the expectation toward the `id` property is literal, the `posts` property is described as an abstract `Array<{ id: string }>` object.

## `.toMatchSchema()`

@@ -16,7 +16,7 @@ It's a bit harder if those two are different.
expect(new Measurement(1, 'in')).toEqual(new Measurement(2.54, 'cm')) // ❌
```

- Semantically, these two measurements _are_ equal. 1 inch is, indeed, 2.54 centimeters. But syntanctically these two measurements produce different class instances that cannot be compared literally:
+ Semantically, these two measurements _are_ equal. 1 inch is, indeed, 2.54 centimeters. But syntactically these two measurements produce different class instances that cannot be compared literally:

```ts nonumber
// If you unwrap measurements, you can imagine them as plain objects.
@@ -90,7 +90,7 @@ Let's iterate over the difference between _equality testers_ and _matchers_ to h
| ------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------- |
| Extends the `.toEqual()` logic. | Implement entirely custom logic. |
| Automatically applied recursively (e.g. if your measurement is nested in an object). | Always applied explicitly. Nested usage is enabled through asymmetric matchers (`{ value: expect.myMatcher() }`). |
- | Must always be _synchronous_. | Can be both synchronous and asynchronous, utilizing the `.resolves.` and `.rejects.` chaning. |
+ | Must always be _synchronous_. | Can be both synchronous and asynchronous, utilizing the `.resolves.` and `.rejects.` chaining. |

Custom equality testers, as the name implies, are your go-to choice to help Vitest compare values that cannot otherwise be compared by sheer serialization (like our `Measurement`, or, for example, `Response` instances).
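A hedged sketch of such a tester, assuming a `Measurement` class similar to the one in the exercise (the class and helper names here are illustrative). Note it is fully synchronous, as the table above requires:

```ts
class Measurement {
  constructor(
    public value: number,
    public unit: 'in' | 'cm',
  ) {}
}

const CM_PER_INCH = 2.54

// An equality tester returns `undefined` when the values are not its
// concern, letting `.toEqual()` fall back to its default comparison.
function areMeasurementsEqual(a: unknown, b: unknown): boolean | undefined {
  const aIsMeasurement = a instanceof Measurement
  const bIsMeasurement = b instanceof Measurement
  if (aIsMeasurement && bIsMeasurement) {
    // Normalize both sides to centimeters before comparing.
    const toCm = (m: Measurement) =>
      m.unit === 'in' ? m.value * CM_PER_INCH : m.value
    return toCm(a) === toCm(b)
  }
  if (aIsMeasurement || bIsMeasurement) {
    return false
  }
  return undefined
}

// It would then be registered with:
// expect.addEqualityTesters([areMeasurementsEqual])
```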

@@ -11,7 +11,7 @@ const response = await fetch('/api/songs')
await expect(response.json()).resolves.toEqual(favoriteSongs)
```

- > While fetching the list of songs takes time, that eventuality is represented as a Promise that you can `await`. This guaratees that your test will not continue until the data is fetched. Quite the same applies to reading the response body stream.
+ > While fetching the list of songs takes time, that eventuality is represented as a Promise that you can `await`. This guarantees that your test will not continue until the data is fetched. Quite the same applies to reading the response body stream.

But not all systems are designed like that. And even the systems that _are_ designed like that may not expose you the right Promises to await.
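One common answer is a polling assertion: keep re-reading the value until it matches or a timeout expires. Vitest ships this as `expect.poll()`; below is a rough, hand-rolled model of the same idea (the timings and names are arbitrary, not Vitest's internals):

```ts
// Re-run `getValue` until `predicate` passes or the timeout elapses.
async function poll<T>(
  getValue: () => T,
  predicate: (value: T) => boolean,
  { timeout = 1000, interval = 50 } = {},
): Promise<T> {
  const deadline = Date.now() + timeout
  while (true) {
    const value = getValue()
    if (predicate(value)) {
      return value
    }
    if (Date.now() > deadline) {
      throw new Error(`Polling timed out after ${timeout}ms`)
    }
    // Give the system under test time to progress before re-checking.
    await new Promise((resolve) => setTimeout(resolve, interval))
  }
}
```

With Vitest itself, the equivalent reads `await expect.poll(() => getValue()).toBe(expected)`.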

@@ -61,7 +61,7 @@ This information is available each time you run your tests and is not exclusive
- `environment`, the time it took to set up and tear down your test environment;
- `prepare`, the time it took for Vitest to prepare the test runner.

- This overview is a fantastic starting point in indentifying which areas of your test run are the slowest. For instance, I can see that Vitest spent most time _running the tests_:
+ This overview is a fantastic starting point in identifying which areas of your test run are the slowest. For instance, I can see that Vitest spent most time _running the tests_:

```txt nonumber highlight=4
transform 18ms,
@@ -72,7 +72,7 @@ environment 0ms,
prepare 32ms
```

- > Your test duration summary will likely be different. See which phases took the most time to know where you should direct your further investigation. For example, if the `setup` phase is too slow, it may be because your test setup is too heavy and should be refactored. If `collect` is lagging behind, it may mean that Vitest has trouble scrapping your large monorepo and you should help it locate the test files by providing explicit `include` and `exclude` options in your Vitest configuration.
+ > Your test duration summary will likely be different. See which phases took the most time to know where you should direct your further investigation. For example, if the `setup` phase is too slow, it may be because your test setup is too heavy and should be refactored. If `collect` is lagging behind, it may mean that Vitest has trouble scraping your large monorepo and you should help it locate the test files by providing explicit `include` and `exclude` options in your Vitest configuration.
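For reference, such an `include`/`exclude` setup might look like this (the glob patterns are illustrative, not a recommendation for any particular project):

```ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // Only scan these locations for test files...
    include: ['src/**/*.test.ts'],
    // ...and never descend into these directories.
    exclude: ['**/node_modules/**', '**/dist/**'],
  },
})
```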

With this covered, let's move on to the `vitest-profiler` report.

@@ -83,18 +83,18 @@ With this covered, let's move on to the `vitest-profiler` report.
- **Main thread**, which is a Node.js process that spawned Vitest. This roughly corresponds to the `prepare`, `collect`, `transform`, and `environment` phases from Vitest's time metrics;
- **Tests**, which is individual threads/forks that ran your test files. This roughly corresponds to the `tests` time metric.

- These separate profiles allows you to take a peek behind the curtain of your test runtime. You can get an idea of what your testing framework is doing and what your tests are doing, and, hopefully, spot the root cause for that nasty parformance degradation.
+ These separate profiles allow you to take a peek behind the curtain of your test runtime. You can get an idea of what your testing framework is doing and what your tests are doing, and, hopefully, spot the root cause for that nasty performance degradation.

CPU and memory profiles reflect different aspects of your test run:

- **CPU profile** shows you the CPU consumption. This will generally point you toward code that takes too much time to run;
- - **Memory (or heap) profile** shows you the memory consumption. This is handy to watch for memory leaks and heaps that can also negavtively impact your test performance.
+ - **Memory (or heap) profile** shows you the memory consumption. This is handy to watch for memory leaks and heaps that can also negatively impact your test performance.

Next, I will explore each individual profile in more detail.

### Main thread profiles

- One of the firts things the profiler reports is a CPU profile for the main thread:
+ One of the first things the profiler reports is a CPU profile for the main thread:

```txt nonumber highlight=4
Test profiling complete! Generated the following profiles:
@@ -115,7 +115,7 @@ Here's how the CPU profile for the main thread looks like:

> Now, if this looks intimidating, don't worry. Profiles will often contain a big chunk of pointers and stack traces you don't know or understand because they reflect the state of the _entire process_.

- In these profiles, I am interested in spotting _abnormally long execution times_. Luckily, this report is sortred by "Total Time" automatically for me! That being said, I see nothing suspicious in the main thread so I proceed to the other profiles.
+ In these profiles, I am interested in spotting _abnormally long execution times_. Luckily, this report is sorted by "Total Time" automatically for me! That being said, I see nothing suspicious in the main thread so I proceed to the other profiles.

### Test profiles

@@ -144,5 +144,5 @@ What I can also do is give you a rough idea about approaching issues based on th
| CPU | Memory |
| --------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Analyze your expensive logic and refactor it where appropriate. | Analyze the problematic logic to see why it leaks memory. |
- | Take advantage of asynchronicity and parallelization. | Fix improper memory management (e.g. rougue child processes, unterminated loops, forgotten timers and intervals, etc). |
+ | Take advantage of asynchronicity and parallelization. | Fix improper memory management (e.g. rogue child processes, unterminated loops, forgotten timers and intervals, etc). |
| Use caching where appropriate. | In your test setup, be cautious about closing test servers or databases. Prefer scoping mocks to individual tests and deleting them completely once the test is done. |
14 changes: 7 additions & 7 deletions exercises/04.performance/02.solution.concurrency/README.mdx
@@ -12,11 +12,11 @@ test.concurrent(`${i}`, async () => {})

By making our test cases concurrent, we can switch from a test waterfall to a flat test execution:

- ![A diagram illustrating a test case waterfal without concurrency and a simultaneous test case execution with concurrency enabled](/assets/05-02-with-concurrency.png)
+ ![A diagram illustrating a test case waterfall without concurrency and a simultaneous test case execution with concurrency enabled](/assets/05-02-with-concurrency.png)

- Now that our test run at the same time, it is absolutely crucial we provision proper _test isolation_. We don't want tests to be stepping on each other's toes and becoming flaky as a result. This often comes down to eliminating or replacing any shared (or global) state in test cases.
+ Now that our tests run at the same time, it is absolutely crucial we provision proper _test isolation_. We don't want tests to be stepping on each other's toes and becoming flaky as a result. This often comes down to eliminating or replacing any shared (or global) state in test cases.

- For example, the `expect()` function that you use to make assertions _might contain state_. It is essential we scoped that function to individual test case by accessing it from the test context object:
+ For example, the `expect()` function that you use to make assertions _might contain state_. It is essential we scope that function to each individual test case by accessing it from the test context object:

```ts diff remove=1 add=3
test.concurrent(`${i}`, async () => {
@@ -55,7 +55,7 @@ export default defineConfig({

> 🦉 Bigger doesn't necessarily mean better with concurrency. There is a physical limit to any concurrency dictated by your hardware. If you set a `maxConcurrency` value higher than that limit, concurrent tests will be _queued_ until they have a slot to run.

- By fine-tunning `maxConcurrency`, we are able to improve the test performance even further to the whooping 123ms!
+ By fine-tuning `maxConcurrency`, we are able to improve the test performance even further to a whopping 123ms!

```bash remove=1 add=2
Duration 271ms
@@ -66,14 +66,14 @@

While concurrency may improve performance, it can also make your tests _flaky_. Keep in mind that the main price you pay for concurrency is _writing isolated tests_.

- Here's a few guidelines on how to keep your tests concurrency-friendly:
+ Here are a few guidelines on how to keep your tests concurrency-friendly:

- - **Do not rely on _shared state_ of any kind**. Never have multiple test modify the same data (even the `expect` function can become a shared state!). In practice, you achieve this through actions like:
+ - **Do not rely on _shared state_ of any kind**. Never have multiple tests modify the same data (even the `expect` function can become a shared state!). In practice, you achieve this through actions like:
- Striving toward self-contained tests (never have one test rely on the result of another);
- Keeping the test setup next to the test itself;
- Binding mocks (e.g. databases or network) to the test.
- **Isolate side effects**. If your test absolutely must perform a side effect, like writing to a file, guarantee that those side effects are isolated and bound to the test (e.g. create a temporary file accessed only by this particular test).
- - **Abolish hard-coded values**. If two tests try to establish a mock server at the same port, one is destined to fail. Once again, procude test-specific fixtures (e.g. spawn a mock server at port `0` to get a random available port number).
+ - **Abolish hard-coded values**. If two tests try to establish a mock server at the same port, one is destined to fail. Once again, produce test-specific fixtures (e.g. spawn a mock server at port `0` to get a random available port number).
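The port-`0` trick from that last point can be sketched with Node's built-in `http` module (the helper name and response body are illustrative):

```ts
import * as http from 'node:http'
import type { AddressInfo } from 'node:net'

// Listening on port 0 lets the OS pick any free port, so two
// concurrent tests can never collide on a hard-coded port number.
function startMockServer(): Promise<{ port: number; close: () => void }> {
  return new Promise((resolve) => {
    const server = http.createServer((_request, response) => {
      response.end('ok')
    })
    server.listen(0, () => {
      // Read back the port the OS actually assigned.
      const { port } = server.address() as AddressInfo
      resolve({ port, close: () => server.close() })
    })
  })
}
```

Each test can then start its own server and read the assigned `port` from the returned value instead of hard-coding one.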

It is worth mentioning that due to these considerations, not all test cases can be flipped to concurrent and call it a day. Concurrency can, however, be a useful factor to stress your tests and highlight the shortcomings in their design. You can then address those shortcomings in planned, incremental refactors to benefit from concurrency (among other things) once your tests are properly isolated.

2 changes: 1 addition & 1 deletion exercises/04.performance/04.solution.sharding/README.mdx
@@ -25,7 +25,7 @@ Then, to run all the tests I will run that script in my terminal:
./run-tests.sh
```

- This script will split the test suite in four shard and run them in parallel (`&`), producing isolated test reports.
+ This script will split the test suite into four shards and run them in parallel (`&`), producing isolated test reports.
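For reference, a minimal version of such a script might look like this. This is a sketch: `--shard` is a real Vitest CLI flag, but the workshop's actual script may differ, for instance in how it collects the per-shard reports:

```bash
#!/bin/bash

# Run four shards of the test suite in parallel.
# `--shard=i/n` runs the i-th of n equal groups of test files.
vitest run --shard=1/4 &
vitest run --shard=2/4 &
vitest run --shard=3/4 &
vitest run --shard=4/4 &

# Wait for all background shards to finish before exiting.
wait
```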

<callout-success>Sharding is particularly useful in CI. If you have a large test suite that takes 8s to complete, splitting it into four groups and four parallel CI jobs can speed up the tests up to _four times_.</callout-success>

2 changes: 1 addition & 1 deletion exercises/04.performance/README.mdx
@@ -2,7 +2,7 @@

<EpicVideo url="https://www.epicweb.dev/workshops/advanced-vitest-patterns/performance/04-00-introduction" />

- The second most common complain about automated tests (after them being flaky, of course) is that they are _slow_. And while flakiness often has a more unpredictable nature, performance issues with tests are more similar to a snowball. They might be there from the start but you won't notice them while having ten tests. They will become painfully apparent once you have a hundred.
+ The second most common complaint about automated tests (after them being flaky, of course) is that they are _slow_. And while flakiness often has a more unpredictable nature, performance issues with tests are more similar to a snowball. They might be there from the start but you won't notice them while having ten tests. They will become painfully apparent once you have a hundred.

**I don't want you to be punished for improving the test coverage of your software**. Neither do I want you to be left in the dark when you stare at a 10-minute long test run on CI and begin to question the rationale of your life's choices.
