NEWS.md: 2 additions & 2 deletions
# testthat (development version)

* New `vignette("mocking")` explains mocking in detail (#1265).

* New `vignette("challenging-functions")` provides an index to other documentation organised by testing challenges (#1265).

* When running a test interactively, testthat now reports the number of successes. The results should also be more useful if you are using nested tests.

* The hints generated by `expect_snapshot()` and `expect_snapshot_file()` now include the path to the package, if it's not in the current working directory (#1577).

* `expect_snapshot_file()` now clearly errors if the `path` doesn't exist (#2191).
This vignette is a quick reference guide for testing challenging functions. It's organized by problem type rather than technique, so you can quickly skim the whole vignette, spot the problem you're facing, and then learn more about useful tools for solving it. In it, you'll learn how to overcome the following challenges:

* Functions with implicit inputs, like options and environment variables.
* Random number generators.
* Tests that can't be run in some environments.
* Testing web APIs.
* Testing graphical output.
* User interaction.
* User-facing text.
* Repeated code.

## Options and environment variables

If your function depends on options or environment variables, first try refactoring the function to make the [inputs explicit](https://design.tidyverse.org/inputs-explicit.html). If that's not possible, use functions like `withr::local_options()` or `withr::local_envvar()` to temporarily change options and environment values within a test. Learn more in `vignette("test-fixtures")`.
<!-- FIXME: Consider adding a brief example showing the difference between implicit and explicit approaches - this would make the recommendation more concrete -->
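To make the contrast concrete, here's a minimal sketch (the function name and its behaviour are hypothetical): an implicit version that silently reads a global option, and an explicit version that takes the same value as an argument, with the option only as a default.

```r
# Implicit input: behaviour silently depends on a global option.
summarise_scores <- function(x) {
  digits <- getOption("digits")
  round(mean(x), digits)
}

# Explicit input: the option is only a default, so tests can pass the
# value directly without touching global state.
summarise_scores <- function(x, digits = getOption("digits")) {
  round(mean(x), digits)
}

expect_equal(summarise_scores(c(1, 1.2345), digits = 2), 1.12)
```

If you're stuck with the implicit version, `withr::local_options(digits = 2)` inside the test achieves the same isolation, at the cost of relying on global state.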
## Random numbers

What happens if you want to test a function that relies on randomness in some way? If you're writing a random number generator, you probably want to generate a large quantity of random numbers and then apply some statistical test. But what if your function just happens to use a little bit of pre-existing randomness? How do you make your tests repeatable and reproducible? Under the hood, random number generators generate different numbers each time you call them because they update a special `.Random.seed` variable stored in the global environment. You can temporarily set this seed to a known value to make your random numbers reproducible with `withr::local_seed()`, making random numbers a special case of test fixtures (`vignette("test-fixtures")`).

Here's a simple example showing how you might test the basic operation of a function that rolls a die:
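A sketch along those lines, assuming a hypothetical `roll_die()` built on `sample()`:

```r
roll_die <- function() {
  sample(6, 1)
}

test_that("roll_die() returns a value between 1 and 6", {
  # Fix the seed for the duration of this test only
  withr::local_seed(1234)

  rolls <- replicate(100, roll_die())
  expect_true(all(rolls %in% 1:6))
})
```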

When should you set the seed and when should you use mocking?
## Some tests can't be run in some circumstances

You can skip a test without it passing or failing if you can't or don't want to run it (e.g., it's OS dependent, it only works interactively, or it shouldn't be tested on CRAN). Learn more in `vignette("skipping")`.

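For example, a sketch combining the common skip helpers (the test body and the {clipr} dependency are purely illustrative):

```r
test_that("copy to clipboard works", {
  skip_on_cran()                   # clipboard is unavailable on CRAN machines
  skip_on_os("linux")              # e.g. headless CI without a display server
  skip_if_not_installed("clipr")   # optional dependency

  clipr::write_clip("hello")
  expect_equal(clipr::read_clip(), "hello")
})
```

Skips are reported separately from passes and failures, so you can spot tests that are being skipped unexpectedly.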
## HTTP requests

If you're trying to test functions that rely on HTTP requests, we recommend using {vcr} or {httptest2}. These packages both allow you to interactively record HTTP responses and then later replay them in tests. This is a specialized type of mocking (`vignette("mocking")`) that works with {httr} and {httr2} to isolate your tests from failures in the underlying API.

If your package is going to CRAN, you **must** either use one of these packages or use `skip_on_cran()` for all internet-facing tests. Otherwise, you are at high risk of failing `R CMD check` if the underlying API is temporarily down. This sort of failure causes extra work for the CRAN maintainers and extra hassle for you.

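With {vcr}, for instance, the first run records the real HTTP response to a "cassette" file and later runs replay it (a sketch; `gh_user()` is a hypothetical {httr2}-based function):

```r
test_that("gh_user() returns the user's login", {
  # Records to tests/testthat/_vcr/gh-user.yml on first run, replays after
  vcr::use_cassette("gh-user", {
    user <- gh_user("hadley")
  })
  expect_equal(user$login, "hadley")
})
```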
## Graphics

The only type of testing you can use for graphics is snapshot testing (`vignette("snapshotting")`) via `expect_snapshot_file()`. Graphical snapshot testing is surprisingly challenging because you need pixel-perfect rendering across multiple versions of multiple operating systems, and this is hard, mostly due to imperceptible differences in font rendering. Fortunately we've needed to overcome these challenges in order to test ggplot2, and you can benefit from our experience by using {vdiffr} when testing graphical output.

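For example, with {vdiffr} a plot becomes an SVG snapshot that is only compared on platforms where rendering is reproducible (a sketch using a ggplot2 plot):

```r
test_that("mpg histogram looks as expected", {
  p <- ggplot2::ggplot(mtcars, ggplot2::aes(mpg)) +
    ggplot2::geom_histogram(bins = 10)

  # Stores an SVG snapshot named after the title on first run,
  # then fails if future renders differ
  vdiffr::expect_doppelganger("mpg-histogram", p)
})
```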
## User interaction

If you're testing a function that relies on user feedback (e.g. from `readline()`, `utils::menu()`, or `utils::askYesNo()`), you can use mocking (`vignette("mocking")`) to return fixed values within the test. For example, imagine that you've written the following function that asks the user if they want to continue:

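A minimal version of such a function might look like this (a sketch; the exact definition is an assumption):

```r
continue <- function(prompt) {
  answer <- readline(paste0(prompt, " (y/n): "))
  if (!answer %in% c("y", "n")) {
    stop("You must respond y or n.", call. = FALSE)
  }
  answer == "y"
}
```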
You could test its behavior by mocking `readline()` and using a snapshot test:

```{r}
#| label: mock-readline
test_that("user must respond y or n", {
  local_mocked_bindings(readline = function(prompt = "") "x")
  expect_snapshot(continue("Continue?"), error = TRUE)
})
```

If you were testing the behavior of some function that uses `continue()`, you might want to mock `continue()` instead of `readline()`. For example, the function below requires user confirmation before overwriting an existing file. In order to focus our tests on the behavior of just this function, we mock `continue()` to return either `TRUE` or `FALSE` without any user messaging.

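A sketch of what that might look like (`save_file()` and its exact behaviour are hypothetical):

```r
save_file <- function(data, path) {
  if (file.exists(path) && !continue(paste0("Overwrite ", path, "?"))) {
    stop("Must confirm overwrite.", call. = FALSE)
  }
  saveRDS(data, path)
}

test_that("save_file() only overwrites with confirmation", {
  path <- withr::local_tempfile()
  saveRDS(1, path)

  # User declines: the file is left alone
  local_mocked_bindings(continue = function(prompt) FALSE)
  expect_error(save_file(2, path), "confirm")
  expect_equal(readRDS(path), 1)

  # User accepts: the file is overwritten
  local_mocked_bindings(continue = function(prompt) TRUE)
  save_file(2, path)
  expect_equal(readRDS(path), 2)
})
```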

## User-facing text

Errors, warnings, and other user-facing text should be tested to ensure they're both actionable and consistent across the package. Obviously, it's not possible to test this automatically, but you can use snapshots (`vignette("snapshotting")`) to ensure that user-facing messages are clearly shown in PRs and easily reviewed by another human.
vignettes/custom-expectation.Rmd: 37 additions & 0 deletions
Also note that I check that the `class` argument is a string. If it's not a string, the expectation errors:

```{r}
#| error: true
expect_s3_class(x1, 1)
```

### Optional `class`

A common pattern in testthat's own expectations is to use arguments to control the level of detail in the test. Here it would be nice if we could check that an object is an S3 object without checking for a specific class. We could do that by renaming `expect_s3_class()` to `expect_s3_object()`: now `expect_s3_object(x)` would verify that `x` is an S3 object, and `expect_s3_object(x, class = "foo")` would verify that `x` is an S3 object with the given class. The implementation is straightforward: we also allow `class` to be `NULL`, and only verify inheritance when it's non-`NULL`.

```{r}
expect_s3_object <- function(object, class = NULL) {
  if (!is.null(class) && !rlang::is_string(class)) {
    rlang::abort("`class` must be a string or NULL.")
  }

  act <- quasi_label(rlang::enquo(object))

  if (!is.object(act$val)) {
    msg <- sprintf("Expected %s to be an object.", act$lab)
    return(fail(msg))
  }

  if (isS4(act$val)) {
    msg <- c(
      sprintf("Expected %s to be an S3 object.", act$lab),
      "Actual OO type: S4"
    )
    return(fail(msg))
  }

  if (!is.null(class) && !inherits(act$val, class)) {
    msg <- c(
      sprintf("Expected %s to inherit from %s.", act$lab, class),
      sprintf("Actual class: %s", paste(class(act$val), collapse = "/"))
    )
    return(fail(msg))
  }

  pass(act$val)
}
```
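Assuming that definition, usage might look like this (a sketch; the failure messages come from the `fail()` calls above):

```r
expect_s3_object(factor("a"))                    # passes: an S3 object of any class
expect_s3_object(factor("a"), class = "factor")  # passes: S3 object with the right class
expect_s3_object(1:3)                            # fails: not an OO object
```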
## Repeated code
As you write more expectations, you might discover repeated code that you want to extract into a helper. Unfortunately, creating 100% correct helper functions is not straightforward in testthat because `fail()` captures the calling environment in order to give useful tracebacks, and testthat's own expectations don't expose this as an argument. Fortunately, getting this right is not critical (you'll just get a slightly suboptimal traceback in the case of failure), so we don't recommend bothering in most cases. We document it here, however, because it's important to get it right in testthat itself.