
Commit 73eb396

Final human proofreading
1 parent c0b9f07 commit 73eb396

6 files changed: +77 −40 lines changed

NEWS.md

Lines changed: 2 additions & 2 deletions
@@ -1,7 +1,7 @@
# testthat (development version)

-* New `vignette("mocking")` explains mocking in more detail (#1265).
-* New `vignette("challenging-functions")` provides an index to other documentation based on the type of challenge you are facing (#1265).
+* New `vignette("mocking")` explains mocking in detail (#1265).
+* New `vignette("challenging-functions")` provides an index to other documentation organised by testing challenges (#1265).
* When running a test interactively, testthat now reports the number of successes. The results should also be more useful if you are using nested tests.
* The hints generated by `expect_snapshot()` and `expect_snapshot_file()` now include the path to the package, if it's not in the current working directory (#1577).
* `expect_snapshot_file()` now clearly errors if the `path` doesn't exist (#2191).

vignettes/challenging-tests.Rmd

Lines changed: 15 additions & 12 deletions
@@ -22,25 +22,24 @@ Sys.setenv(TESTTHAT_PKG = "testthat")

This vignette is a quick reference guide for testing challenging functions. It's organized by problem type rather than technique, so you can quickly skim the whole vignette, spot the problem you're facing, and then learn more about useful tools for solving it. In it, you'll learn how to overcome the following challenges:

-* Functions that depend on options and environment variables.
+* Functions with implicit inputs, like options and environment variables.
* Random number generators.
* Tests that can't be run in some environments.
* Testing web APIs.
+* Testing graphical output.
* User interaction.
* User-facing text.
* Repeated code.

## Options and environment variables

-If your function depends on options or environment variables, first try refactoring the function to make the [inputs explicit](https://design.tidyverse.org/inputs-explicit.html). If that's not possible, then you can use functions like `withr::local_options()` or `withr::local_envvar()` to temporarily change options and environment values within a test. Learn more in `vignette("test-fixtures")`.
+If your function depends on options or environment variables, first try refactoring the function to make the [inputs explicit](https://design.tidyverse.org/inputs-explicit.html). If that's not possible, use functions like `withr::local_options()` or `withr::local_envvar()` to temporarily change options and environment values within a test. Learn more in `vignette("test-fixtures")`.

<!-- FIXME: Consider adding a brief example showing the difference between implicit and explicit approaches - this would make the recommendation more concrete -->

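To make the contrast concrete, here's the kind of sketch that FIXME is asking for (`greeting()` and the `POLITE` environment variable are hypothetical, invented purely for illustration):

```r
# Implicit input: behaviour silently depends on an environment variable.
greeting <- function() {
  if (Sys.getenv("POLITE", "false") == "true") "Good day to you!" else "Hi!"
}

# Explicit input: the environment variable is only a default, so callers
# (and tests) can override it directly.
greeting <- function(polite = Sys.getenv("POLITE", "false") == "true") {
  if (polite) "Good day to you!" else "Hi!"
}

test_that("greeting() respects the POLITE environment variable", {
  withr::local_envvar(POLITE = "true")  # unset again when the test ends
  expect_equal(greeting(), "Good day to you!")
})
```
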
## Random numbers

-What happens if you want to test a function that relies on randomness in some way? If you're writing a random number generator, you probably want to generate large quantities of random numbers and apply some statistical test. But what if your function just happens to use a little bit of pre-existing randomness? How do you make your tests repeatable and reproducible?
-
-Under the hood, random number generators generate different numbers each time you call them because they update a special `.Random.seed` variable stored in the global environment. You can temporarily set this seed to a known value to make your random numbers reproducible with `withr::local_seed()`, making random numbers a special case of test fixtures (`vignette("test-fixtures")`).
+What happens if you want to test a function that relies on randomness in some way? If you're writing a random number generator, you probably want to generate a large quantity of random numbers and then apply some statistical test. But what if your function just happens to use a little bit of pre-existing randomness? How do you make your tests repeatable and reproducible? Under the hood, random number generators generate different numbers because they update a special `.Random.seed` variable stored in the global environment. You can temporarily set this seed to a known value to make your random numbers reproducible with `withr::local_seed()`, making random numbers a special case of test fixtures (`vignette("test-fixtures")`).

Here's a simple example showing how you might test the basic operation of a function that rolls a die:

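The die-rolling example itself falls outside this hunk; a minimal sketch of the pattern could look like this (`roll_die()` is a hypothetical stand-in):

```r
roll_die <- function() sample(6, 1)

test_that("roll_die() is reproducible once the seed is fixed", {
  withr::local_seed(42)  # fixes .Random.seed; restored when the test exits
  first <- roll_die()

  withr::local_seed(42)  # same seed again, so the roll repeats
  expect_equal(roll_die(), first)
  expect_true(first %in% 1:6)
})
```
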
@@ -78,17 +77,21 @@ When should you set the seed and when should you use mocking? As a general rule

## Some tests can't be run in some circumstances

-You can skip a test without it passing or failing if it's not possible to run it in the current environment (e.g., it's OS dependent, it only works interactively, or it shouldn't be tested on CRAN). Learn more in `vignette("skipping")`.
+You can skip a test without it passing or failing if you can't or don't want to run it (e.g., it's OS dependent, it only works interactively, or it shouldn't be tested on CRAN). Learn more in `vignette("skipping")`.

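For instance, a test can guard itself with one or more skips before doing any work (the conditions here are arbitrary examples of the pattern):

```r
test_that("clipboard integration works", {
  skip_on_cran()         # never run on CRAN
  skip_on_os("windows")  # behaviour is OS dependent
  skip_if_not(interactive(), "requires an interactive session")

  # ... actual assertions, reached only if nothing above skipped ...
})
```
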
## HTTP requests

-If you're trying to test functions that rely on HTTP requests, we recommend using {vcr} or {httptest2}. These packages both allow you to interactively record HTTP responses and then later replay them in tests. This is a specialized type of mocking that allows your tests to run without the internet and isolates your tests from failures in the underlying service.
+If you're trying to test functions that rely on HTTP requests, we recommend using {vcr} or {httptest2}. These packages both allow you to interactively record HTTP responses and then later replay them in tests. This is a specialized type of mocking (`vignette("mocking")`) that works with {httr} and {httr2} to isolate your tests from failures in the underlying API.
+
+If your package is going to CRAN, you **must** either use one of these packages or use `skip_on_cran()` for all internet-facing tests. Otherwise, you are at high risk of failing `R CMD check` if the underlying API is temporarily down. This sort of failure causes extra work for the CRAN maintainers and extra hassle for you.
+
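As a rough sketch of the {vcr} approach (`get_user()` and the asserted value are hypothetical; the first run records a cassette, later runs replay it offline):

```r
test_that("get_user() parses the user's login", {
  vcr::use_cassette("get-user", {
    user <- get_user("jane")
    expect_equal(user$login, "jane")
  })
})
```
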
+## Graphics

-If your package is going to CRAN, you **must** either use one of these packages or use `skip_on_cran()` for internet-facing tests. Otherwise, you are at high risk of failing `R CMD check` if the API you are binding is temporarily down. This sort of failure causes extra work for the CRAN maintainers and extra hassle for you.
+The only type of testing you can use for graphics is snapshot testing (`vignette("snapshotting")`) via `expect_snapshot_file()`. Graphical snapshot testing is surprisingly challenging because you need pixel-perfect rendering across multiple versions of multiple operating systems, and this is hard, mostly due to imperceptible differences in font rendering. Fortunately, we've needed to overcome these challenges in order to test ggplot2, and you can benefit from our experience by using {vdiffr} when testing graphical output.

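With {vdiffr}, a graphical expectation is a one-liner wrapped around ordinary plotting code, for example (an arbitrary ggplot2 figure, purely for illustration):

```r
test_that("mpg histogram renders consistently", {
  p <- ggplot2::ggplot(mtcars, ggplot2::aes(mpg)) +
    ggplot2::geom_histogram(bins = 10)
  vdiffr::expect_doppelganger("mpg-histogram", p)
})
```
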
## User interaction

-If you're testing a function that relies on user feedback from `readline()`, `utils::menu()`, or `utils::askYesNo()` or similar, you can use mocking (`vignette("mocking")`) to have those functions return fixed values within the test. For example, imagine that you have the following function that asks the user if they want to continue:
+If you're testing a function that relies on user feedback (e.g. from `readline()`, `utils::menu()`, or `utils::askYesNo()`), you can use mocking (`vignette("mocking")`) to return fixed values within the test. For example, imagine that you've written the following function that asks the user if they want to continue:

```{r}
#| label: continue
@@ -108,7 +111,7 @@ continue <- function(prompt) {
readline <- NULL
```

-You can test its behavior by mocking `readline()` and using a snapshot test:
+You could test its behavior by mocking `readline()` and using a snapshot test:

```{r}
#| label: mock-readline
@@ -131,7 +134,7 @@ test_that("user must respond y or n", {
})
```

-If you were testing the behavior of some function that used `continue()`, you might choose to mock that function instead. For example, the function below requires user confirmation before overwriting an existing file. In order to focus our tests on the behavior of just this function, we mock `continue()` to return either `TRUE` or `FALSE` without any user messaging.
+If you were testing the behavior of some function that uses `continue()`, you might want to mock `continue()` instead of `readline()`. For example, the function below requires user confirmation before overwriting an existing file. In order to focus our tests on the behavior of just this function, we mock `continue()` to return either `TRUE` or `FALSE` without any user messaging.

```{r}
#| label: mock-continue
@@ -159,7 +162,7 @@ test_that("save_file() requires confirmation to overwrite file", {

## User-facing text

-Errors, warnings, and other user-facing text should be tested to ensure they're consistent and actionable. Obviously, you can't test this 100% automatically, but you can use snapshots (`vignette("snapshotting")`) to ensure that user-facing messages are clearly shown in PRs and easily reviewed by another human.
+Errors, warnings, and other user-facing text should be tested to ensure they're both actionable and consistent across the package. Obviously, it's not possible to test this automatically, but you can use snapshots (`vignette("snapshotting")`) to ensure that user-facing messages are clearly shown in PRs and easily reviewed by another human.

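For example, a snapshot test can capture an error message verbatim so a reviewer sees exactly what users will (`add_one()` is a hypothetical function):

```r
test_that("add_one() errors informatively for non-numeric input", {
  expect_snapshot(add_one("a"), error = TRUE)
})
```
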
## Repeated code

vignettes/custom-expectation.Rmd

Lines changed: 37 additions & 0 deletions
@@ -199,6 +199,43 @@ Also note that I check that the `class` argument is a string. If it's not a stri
expect_s3_class(x1, 1)
```

+### Optional `class`
+
+A common pattern in testthat's own expectations is to use arguments to control the level of detail in the test. Here it would be nice if we could check that an object is an S3 object without checking for a specific class. I think we could do that by renaming `expect_s3_class()` to `expect_s3_object()`: `expect_s3_object(x)` would verify that `x` is an S3 object, and `expect_s3_object(x, class = "foo")` would verify that `x` is an S3 object with the given class. The implementation is straightforward: we also allow `class` to be `NULL` and only verify inheritance when `class` is non-`NULL`.
+
+```{r}
+expect_s3_object <- function(object, class = NULL) {
+  # `class` is optional: NULL checks only S3-ness, a string also checks inheritance.
+  if (!is.null(class) && !rlang::is_string(class)) {
+    rlang::abort("`class` must be a string or NULL.")
+  }
+
+  act <- quasi_label(rlang::enquo(object))
+
+  if (!is.object(act$val)) {
+    msg <- sprintf("Expected %s to be an object.", act$lab)
+    return(fail(msg))
+  }
+
+  if (isS4(act$val)) {
+    msg <- c(
+      sprintf("Expected %s to be an S3 object.", act$lab),
+      "Actual OO type: S4"
+    )
+    return(fail(msg))
+  }
+
+  if (!is.null(class) && !inherits(act$val, class)) {
+    msg <- c(
+      sprintf("Expected %s to inherit from %s.", act$lab, class),
+      sprintf("Actual class: %s", paste(class(act$val), collapse = "/"))
+    )
+    return(fail(msg))
+  }
+
+  pass(act$val)
+}
+```
+
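A quick sanity check of how the new expectation would behave (hypothetical calls, not part of the vignette):

```r
expect_s3_object(factor("a"))                    # passes: factors are S3 objects
expect_s3_object(factor("a"), class = "factor")  # passes: inherits from "factor"
expect_s3_object(1:10)                           # fails: not an OO object
```
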
## Repeated code

As you write more expectations, you might discover repeated code that you want to extract into a helper. Unfortunately, creating 100% correct helper functions is not straightforward in testthat because `fail()` captures the calling environment in order to give useful tracebacks, and testthat's own expectations don't expose this as an argument. Fortunately, getting this right is not critical (you'll just get a slightly suboptimal traceback in the case of failure), so we don't recommend bothering in most cases. We document it here, however, because it's important to get it right in testthat itself.
