This vignette is a quick reference guide for testing challenging functions. It's organized by problem type rather than technique, so you can quickly skim the whole vignette, spot the problem you're facing, and then learn more about useful tools for solving it. In it, you'll learn how to overcome the following challenges:
* Functions that depend on options and environment variables.
* Random number generators.
What happens if you want to test a function that relies on randomness in some way?
Under the hood, random number generators generate different numbers each time you call them because they update a special `.Random.seed` variable stored in the global environment. You can temporarily set this seed to a known value to make your random numbers reproducible with `withr::local_seed()`, making random numbers a special case of test fixtures (`vignette("test-fixtures")`).
Here's a simple example showing how you might test the basic operation of a function that rolls a die:
```{r}
#| label: random-local-seed
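# A minimal sketch, assuming a hypothetical roll_die() that returns a single
# value from 1 to 6.
roll_die <- function() sample(6, 1)

test_that("roll_die() is reproducible with a known seed", {
  withr::local_seed(42)
  first_roll <- roll_die()
  expect_true(first_roll %in% 1:6)

  # Setting the same seed again makes the "random" result repeat exactly
  withr::local_seed(42)
  expect_equal(roll_die(), first_roll)
})
```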
You can skip a test without it passing or failing if it's not possible to run it.
## HTTP requests
If you're trying to test functions that rely on HTTP requests, we recommend using {vcr} or {httptest2}. These packages both allow you to interactively record HTTP responses and then later replay them in tests. This is a specialized type of mocking that allows your tests to run without the internet and isolates your tests from failures in the underlying service.
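For example, a test using {httptest2} might look roughly like this. It's only a sketch: `gh_user()` is a hypothetical {httr2}-based wrapper around the GitHub API, and the mock directory name is arbitrary. The idea is that {httptest2} records the real response the first time the test runs and replays it afterwards, so later runs never touch the network.

```{r}
gh_user <- function(username) {
  httr2::request("https://api.github.com") |>
    httr2::req_url_path("users", username) |>
    httr2::req_perform() |>
    httr2::resp_body_json()
}

test_that("gh_user() returns the requested login", {
  httptest2::with_mock_dir("gh", {
    user <- gh_user("hadley")
    expect_equal(user$login, "hadley")
  })
})
```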
If your package is going to CRAN, you **must** either use one of these packages or use `skip_on_cran()` for internet-facing tests. Otherwise, you are at high risk of failing `R CMD check` if the API you are binding is temporarily down. This sort of failure causes extra work for the CRAN maintainers and extra hassle for you.
## User interaction
```{r}
test_that("user must respond y or n", {
  # ...
})
```
If you were testing the behavior of some function that used `continue()`, you might choose to mock that function instead. For example, the function below requires user confirmation before overwriting an existing file. In order to focus our tests on the behavior of just this function, we mock `continue()` to return either `TRUE` or `FALSE` without any user messaging.
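As a rough sketch of that pattern, suppose `save_plan()` (a made-up function, not the one from the vignette) asks for confirmation before overwriting an existing file:

```{r}
save_plan <- function(plan, path) {
  if (file.exists(path) && !continue("Overwrite existing file?")) {
    return(invisible(FALSE))
  }
  writeLines(plan, path)
  invisible(TRUE)
}

test_that("save_plan() only overwrites when the user agrees", {
  path <- withr::local_tempfile(lines = "old plan")

  # Pretend the user said no: the file must be left alone
  local_mocked_bindings(continue = function(...) FALSE)
  save_plan("new plan", path)
  expect_equal(readLines(path), "old plan")

  # Pretend the user said yes: the file should be replaced
  local_mocked_bindings(continue = function(...) TRUE)
  save_plan("new plan", path)
  expect_equal(readLines(path), "new plan")
})
```

Because `continue()` is mocked, the test never blocks waiting for input, and both branches of the confirmation logic are exercised.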
Errors, warnings, and other user-facing text should be tested to ensure they're consistent and actionable. Obviously, you can't test this 100% automatically, but you can use snapshots (`vignette("snapshotting")`) to ensure that user-facing messages are clearly shown in PRs and easily reviewed by another human.
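For example, a snapshot test for an error message might look like this (`check_positive()` is a made-up function used only to illustrate the pattern):

```{r}
check_positive <- function(x) {
  if (!is.numeric(x) || length(x) != 1 || x <= 0) {
    stop("`x` must be a single positive number.", call. = FALSE)
  }
  invisible(x)
}

test_that("check_positive() gives an informative error", {
  expect_snapshot(check_positive(-1), error = TRUE)
})
```

The recorded snapshot contains the full error text, so any change to the wording is surfaced in code review.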
This vignette shows you how to write your own expectations. Custom expectations allow you to extend testthat to meet your own specialized testing needs, creating new `expect_*` functions that work exactly the same way as the built-ins. Custom expectations are particularly useful if you want to produce expectations tailored for domain-specific data structures, combine multiple checks into a single expectation, or create more actionable feedback when an expectation fails. You can use them within your package by putting them in a helper file, or share them with others by exporting them from your package.
In this vignette, you'll learn about the three-part structure of expectations, how to test your custom expectations, see a few examples, and, if you're writing a lot of expectations, learn how to reduce repeated code.
## Expectation basics
The first step in any expectation is to use `quasi_label()` to capture a "labeled value": a list containing both the value of the object (`act$val`) and a label for it (`act$lab`) to use in failure messages.
Next you need to check each way that `object` could violate the expectation. In this case, there's only one check, but in more complicated cases there can be multiple checks. In most cases, it's easier to check for violations one by one, using early returns to `fail()`. This makes it easier to write informative failure messages that first describe what was expected and then what was actually seen.
Note that you need to use `return(fail())` here. If you don't, your expectation might end up failing multiple times or both failing and succeeding. You won't see these problems when interactively testing your expectation, but forgetting to `return()` can lead to incorrect fail and pass counts in typical usage. In the next section, you'll learn how to test your expectation to avoid this issue.
Finally, if the object is as expected, call `pass()` with `act$val`. This is good practice because expectation functions are called primarily for their side-effects (triggering a failure), and returning the value allows expectations to be piped together:
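For example, because each expectation invisibly returns its input, you can chain several checks on the same object (a small illustration using a built-in data frame):

```{r}
mtcars |>
  expect_type("list") |>
  expect_s3_class("data.frame") |>
  expect_length(11)
```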
Note the variety of error messages. We always print what was expected, and where possible, also display what was actually received:
* When `object` isn't an object, we can only say what we expected.
* When `object` is an S4 object, we can report that.
* When `inherits()` is `FALSE`, we provide the actual class, since that's most informative.
The general principle is to tailor error messages to what the user can act on based on what you know about the input.
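Here's a sketch of how those branches might be implemented. It deliberately shadows testthat's built-in `expect_s3_class()` for illustration; the checks and messages are illustrative rather than testthat's actual source:

```{r}
expect_s3_class <- function(object, class) {
  if (!is.character(class) || length(class) != 1) {
    stop("`class` must be a single string.", call. = FALSE)
  }

  act <- quasi_label(rlang::enquo(object), arg = "object")

  if (!is.object(act$val)) {
    # We know nothing useful about the input, so just say what we expected
    return(fail(sprintf(
      "Expected %s to be an S3 object with class <%s>.", act$lab, class
    )))
  }
  if (isS4(act$val)) {
    # We can at least report that it's an S4 object
    return(fail(sprintf("%s is an S4 object, not an S3 object.", act$lab)))
  }
  if (!inherits(act$val, class)) {
    # We know the actual class, which is the most actionable detail
    return(fail(sprintf(
      "%s has class <%s>, not <%s>.",
      act$lab, paste(class(act$val), collapse = "/"), class
    )))
  }

  pass(act$val)
}
```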
Also note that I check that the `class` argument is a string. If it's not a string, you get an informative error:
```{r}
expect_s3_class(x1, 1)
```
## Repeated code
As you write more expectations, you might discover repeated code that you want to extract into a helper. Unfortunately, creating 100% correct helper functions is not straightforward in testthat because `fail()` captures the calling environment in order to give useful tracebacks, and testthat's own expectations don't expose this as an argument. Fortunately, getting this right is not critical (you'll just get a slightly suboptimal traceback in the case of failure), so we don't recommend bothering in most cases. We document it here, however, because it's important to get it right in testthat itself.
The key challenge is that `fail()` captures a `trace_env`, which should be the execution environment of the expectation. This usually works because the default value of `trace_env` is `caller_env()`. But when you introduce a helper, you'll need to explicitly pass it along:
```{r}
expect_length_ <- function(act, n, trace_env = caller_env()) {
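  # A sketch of how the body might look. The key point is passing trace_env
  # on to fail() so tracebacks point at the calling expectation, not at this
  # helper.
  act_n <- length(act$val)
  if (act_n != n) {
    msg <- sprintf("%s has length %i, not length %i.", act$lab, act_n, n)
    return(fail(msg, trace_env = trace_env))
  }
  pass(act$val)
}

expect_length <- function(object, n) {
  act <- quasi_label(rlang::enquo(object), arg = "object")
  # The helper's default trace_env = caller_env() is this expectation's
  # execution environment, which is exactly what fail() needs.
  expect_length_(act, n)
}
```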
* The helper shouldn't be user-facing, so we give it a `_` suffix to make that clear.
* It's typically easiest for a helper to take the labeled value produced by `quasi_label()`.
* Your helper should usually call both `fail()` and `pass()` and be returned from the wrapping expectation.
Again, you're probably not writing so many expectations that it makes sense for you to go to this effort, but it is important for testthat to get it right.