
Commit 39ddd7c: ✨ More proofreading ✨
1 parent 4a091c4

6 files changed (+19, -17 lines)

vignettes/challenging-tests.Rmd

Lines changed: 4 additions & 3 deletions
@@ -20,7 +20,7 @@ snapper$start_file("snapshotting.Rmd", "test")
 Sys.setenv(TESTTHAT_PKG = "testthat")
 ```
 
-This vignette is a quick reference guide for testing challenging functions. It's organised by the problem, rather than technique used to solve it, so you can quickly skim the whole vignette, spot the problem you're facing, then learn more about useful tools for solving it.
+This vignette is a quick reference guide for testing challenging functions. It's organised by the problem, rather than the technique used to solve it, so you can quickly skim the whole vignette, spot the problem you're facing, then learn more about useful tools for solving it.
 
 ## Options and environment variables
 
@@ -67,7 +67,8 @@ You can skip a test without passing or failing if it's not possible to run it in
 
 ## HTTP requests
 
-If you're trying to test functions that rely on HTTP requests, we recommend using {vcr} or {httptest2}. These packages provide the ability to record and then later reply HTTP requests so that you can test without an active internet connection. If your package is going to CRAN, we highly recommend either using one of these packages or using `skip_on_cran()` for your internet-facing tests. This ensures that your package won't break on CRAN just because the service you're using is temporarily down.
+If you're trying to test functions that rely on HTTP requests, we recommend using {vcr} or {httptest2}.
+These packages provide the ability to record and then later replay HTTP requests so that you can test without an active internet connection. If your package is going to CRAN, we highly recommend either using one of these packages or using `skip_on_cran()` for your internet-facing tests. This ensures that your package won't break on CRAN just because the service you're using is temporarily down.
 
 ## User interaction
 
@@ -144,4 +145,4 @@ Learn more in `vignette("mocking")`.
 
 ## User-facing text
 
-Errors, warnings, and other user-facing text should be tested to ensure they're consistent and actionable. Obviously you can't test this 100% automatically, but you can ensure that such messaging is clearly shown in PRs so another human can take a look. This is point of snapshot tests; learn more in `vignette("snapshotting")`.
+Errors, warnings, and other user-facing text should be tested to ensure they're consistent and actionable. Obviously you can't test this 100% automatically, but you can ensure that such messaging is clearly shown in PRs so another human can take a look. This is the point of snapshot tests; learn more in `vignette("snapshotting")`.
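
For illustration, the HTTP-requests advice in the hunk above might look like the following in a test file. This is a sketch, not part of the commit; `fetch_user()` is a hypothetical internet-facing function:

```r
test_that("fetch_user() parses the API response", {
  skip_on_cran()     # don't let a flaky web service break CRAN checks
  skip_if_offline()  # also skip when the test machine has no internet

  user <- fetch_user("hadley")  # hypothetical internet-facing function
  expect_equal(user$login, "hadley")
})
```

With {vcr} or {httptest2} the request would instead be replayed from a recorded fixture, so the skips are only needed for tests that genuinely hit the network.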

vignettes/custom-expectation.Rmd

Lines changed: 1 addition & 1 deletion
@@ -193,7 +193,7 @@ expect_s3_class(x1, 1)
 
 ## Repeated code
 
-As you write more expectations, you might discover repeated code that you want to extract out into a helper. Unfortunately, creating helper functions is not straightforward in testthat because every `fail()` captures the calling environment in order to give maximally useful tracebacks. Because getting this right is not critical (you'll just get a slightly suboptimal traceback in the case of failure), we don't recommend bothering. However, we document it here because it's important to get it right in testthat itself.
+As you write more expectations, you might discover repeated code that you want to extract out into a helper. Unfortunately, creating helper functions is not straightforward in testthat because every `fail()` captures the calling environment in order to give maximally useful tracebacks. Because getting this right is not critical (you'll just get a slightly suboptimal traceback in the case of failure), we don't recommend bothering in most cases. However, we document it here because it's important to get it right in testthat itself.
 
 The key challenge is that `fail()` captures a `trace_env` which should be the execution environment of the expectation. This usually works, because the default value of `trace_env` is `caller_env()`. But when you introduce a helper, you'll need to explicitly pass it along:
 

vignettes/mocking.Rmd

Lines changed: 4 additions & 4 deletions
@@ -51,7 +51,7 @@ check_installed <- function(pkg, min_version = NULL) {
 }
 ```
 
-Now that we've written this function, we want to test it. There a lot of ways we might tackle this, but I think it's reasonable to start by testing the case without `min_version`. To do this we need to come up with a package we know is installed, and a package we know isn't installed:
+Now that we've written this function, we want to test it. There are a lot of ways we might tackle this, but I think it's reasonable to start by testing the case without `min_version`. To do this we need to come up with a package we know is installed, and a package we know isn't installed:
 
 ```{r}
 test_that("check_installed() checks package is installed", {
@@ -84,7 +84,7 @@ test_that("check_installed() checks minimum version", {
 })
 ```
 
-But it's starting to feel like we've accumulated a bunch of potentially fragile hacks. So let's see how we could could make these tests more robust with mocking. First we need to add `requireNamspace` and `packageVersion` bindings in our package. This is needed because `requireNamespace` and `packageVersion` are base functions:
+But it's starting to feel like we've accumulated a bunch of potentially fragile hacks. So let's see how we could make these tests more robust with mocking. First we need to add `requireNamespace` and `packageVersion` bindings in our package. This is needed because `requireNamespace` and `packageVersion` are base functions:
 ```{r}
 requireNamespace <- NULL
 packageVersion <- NULL
@@ -102,7 +102,7 @@ test_that("check_installed() checks package is installed", {
 })
 ```
 
-For the second test, we mock `requireNamepace()` to return `TRUE`, and then `packageVersion()` to return a fixed version number. Together this simulates that version 2.0.0 of any package is installed and makes our snapshot rely only on the state set within the test.
+For the second test, we mock `requireNamespace()` to return `TRUE`, and then `packageVersion()` to return a fixed version number. Together this simulates that version 2.0.0 of any package is installed and makes our snapshot rely only on the state set within the test.
 
 ```{r}
 test_that("check_installed() checks minimum version", {
@@ -158,7 +158,7 @@ Here we pretend that there are no reverse dependencies (revdeps) for the package
 
 ### Managing time
 
-`httr2::req_throttle()` prevents multiple requests from being made too quickly, using a tool called a leaky token bucket. This tool is inextricably tied to real time because you want to allow more requests as time elapses. So how do you test this? I started by using `Sys.sleep()` but this either made my tests both slow (because I'd sleep for a second or two) and unreliable (because sometime more time elapsed than I expected). Eventually I figured out that I could "manually control" time by using a [mocked function](https://github.com/r-lib/httr2/blob/main/tests/testthat/test-req-throttle.R) that returns the value of a variable I control. This allows me to manually advance time and carefully test the implications.
+`httr2::req_throttle()` prevents multiple requests from being made too quickly, using a tool called a leaky token bucket. This tool is inextricably tied to real time because you want to allow more requests as time elapses. So how do you test this? I started by using `Sys.sleep()` but this made my tests both slow (because I'd sleep for a second or two) and unreliable (because sometimes more time elapsed than I expected). Eventually I figured out that I could "manually control" time by using a [mocked function](https://github.com/r-lib/httr2/blob/main/tests/testthat/test-req-throttle.R) that returns the value of a variable I control. This allows me to manually advance time and carefully test the implications.
 
 You can see the basic idea with a simpler example. Let's first begin with a function that returns the "unix time", the number of seconds elapsed since midnight on Jan 1 1970. This is easy to compute, but will make some computations simpler later as well as providing a convenient function to mock.
 

vignettes/skipping.Rmd

Lines changed: 1 addition & 1 deletion
@@ -76,7 +76,7 @@ skip_if_dangerous <- function() {
 
 ## Embedding `skip()` in package functions
 
-Another useful technique that can sometimes be useful is to build a `skip()` directly into a package function.
+Another potentially useful technique is to build a `skip()` directly into a package function.
 For example take a look at [`pkgdown:::convert_markdown_to_html()`](https://github.com/r-lib/pkgdown/blob/v2.0.7/R/markdown.R#L95-L106), which absolutely, positively cannot work if the Pandoc tool is unavailable:
 
 ```{r eval = FALSE}
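
The embedded-`skip()` pattern loosely follows this shape (a simplified sketch, not pkgdown's actual code; see the link above for the real implementation):

```r
convert_markdown_to_html <- function(path) {
  if (!rmarkdown::pandoc_available()) {
    if (testthat::is_testing()) {
      testthat::skip("Pandoc is not available")  # skip, don't fail, in tests
    } else {
      stop("Pandoc is not available")            # hard error for real users
    }
  }
  # ... perform the actual conversion ...
}
```

Because the skip lives in the function itself, every test that exercises it is automatically skipped on machines without Pandoc, with no per-test boilerplate.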

vignettes/snapshotting.Rmd

Lines changed: 5 additions & 4 deletions
@@ -42,7 +42,7 @@ snapper$start_file("snapshotting.Rmd", "test")
 
 ## Basic workflow
 
-We'll illustrate the basic workflow with a simple function that generates an HTML heading.
+We'll illustrate the basic workflow with a simple function that generates HTML bullets.
 It can optionally include an `id` attribute, which allows you to construct a link directly to that heading.
 
 ```{r}
@@ -237,7 +237,8 @@ test_that("actionable feedback if some or all arguments named", {
 
 Sometimes part of the output varies in ways that you can't easily control. In many cases, it's convenient to use mocking (`vignette("mocking")`) to ensure that every run of the function always produces the same output. In other cases, it's easier to manipulate the text output with a regular expression or similar. That's the job of the `transform` argument which should be passed a function that takes a character vector of lines, and returns a modified vector.
 
-This type of problem often crops up when you are testing a function that gives feedback about a path. In your tests, you'll typically use a temporary path (e.g. from `withr::local_tempfile()`) so if you display the path in a snapshot, it will be different every time. For example, consider this "safe" version of `writeLines()` that requires to explicitly opt-in to overwriting an existing file:
+This type of problem often crops up when you are testing a function that gives feedback about a path. In your tests, you'll typically use a temporary path (e.g. from `withr::local_tempfile()`) so if you display the path in a snapshot, it will be different every time.
+For example, consider this "safe" version of `writeLines()` that requires you to explicitly opt-in to overwriting an existing file:
 
 ```{r}
 safe_write_lines <- function(lines, path, overwrite = FALSE) {
@@ -314,7 +315,7 @@ These are sound defaults that we have found useful to minimise spurious differen
 
 ### Snapshotting graphics
 
-If you need to test graphical output, {vdiffr}. vdiffr is used to test ggplot2, and incorporates everything we know about high-quality graphics tests that minimise false positives.
+If you need to test graphical output, use {vdiffr}. vdiffr is used to test ggplot2, and incorporates everything we know about high-quality graphics tests that minimise false positives. Graphics testing is still often fragile, but using vdiffr means you will avoid all the problems we know how to avoid.
 
 ### Snapshotting values
 
@@ -384,7 +385,7 @@ This section describes some of the previous attempts and why we believe the new
 
 - It's relatively coarse grained, which means tests that use it tend to keep growing and growing.
 
-- `expect_known_output()` is finer grained version of `verify_output()` that captures output from a single function.
+- `expect_known_output()` is a finer grained version of `verify_output()` that captures output from a single function.
 The requirement to produce a path for each individual expectation makes it even more painful to use.
 
 - `expect_known_value()` and `expect_known_hash()` have all the disadvantages of `expect_known_output()`, but also produce binary output meaning that you can't easily review test differences in pull requests.
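
The `transform` argument described above might be used like this to scrub a temporary path from a snapshot. A sketch only, assuming the `safe_write_lines()` from the diff context errors when the file already exists and includes the path in its message:

```r
test_that("safe_write_lines() won't overwrite by default", {
  path <- withr::local_tempfile(lines = "old contents")
  # Replace the machine-specific temp path with a stable placeholder.
  scrub_path <- function(lines) gsub(path, "<tempfile>", lines, fixed = TRUE)

  expect_snapshot(
    safe_write_lines("new contents", path),
    error = TRUE,
    transform = scrub_path
  )
})
```

Because `transform` receives the snapshot as a character vector of lines and returns a modified vector, any deterministic rewrite (regexes for timestamps, version numbers, random ids) fits the same slot.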

vignettes/test-fixtures.Rmd

Lines changed: 4 additions & 4 deletions
@@ -71,7 +71,7 @@ test_that("print() respects digits option", {
 })
 ```
 
-If you write a lot of code like this in your tests, you might decide you want a helper function or **test fixture** that reduces the duplication. Fortunately withr's local functions us to solve this problem by providing an `.local_envir` or `envir` argument that controls when cleanup occurs. The exact details of how this works are rather complicated, but fortunately there's a common pattern you can use without understanding all the details. Your helper function should always have an `env` argument that defaults to `parent.frame()`, which you pass to the `.local_envir` argument of `local_()`:
+If you write a lot of code like this in your tests, you might decide you want a helper function or **test fixture** that reduces the duplication. Fortunately withr's local functions allow us to solve this problem by providing an `.local_envir` or `envir` argument that controls when cleanup occurs. The exact details of how this works are rather complicated, but fortunately there's a common pattern you can use without understanding all the details. Your helper function should always have an `env` argument that defaults to `parent.frame()`, which you pass to the `.local_envir` argument of `local_()`:
 
 ```{r}
 local_digits <- function(sig_digits, env = parent.frame()) {
@@ -113,7 +113,7 @@ Notice how `pi` prints differently before and after the call to `sloppy()`. Call
 
 ### `on.exit()`
 
-The first function you need to know about is base R's `on.exit()`. `on.exit()` calls the code to supplied to its first argument when the current function exits, regardless of whether it returns a value or errors. You can use `on.exit()` to clean up after yourself by ensuring that every mess-making function call is paired with an `on.exit()` call that cleans up.
+The first function you need to know about is base R's `on.exit()`. `on.exit()` calls the code supplied to its first argument when the current function exits, regardless of whether it returns a value or errors. You can use `on.exit()` to clean up after yourself by ensuring that every mess-making function call is paired with an `on.exit()` call that cleans up.
 
 We can use this idea to turn `sloppy()` into `neat()`:
 
@@ -145,7 +145,7 @@ pi
 
 There are three main drawbacks to `on.exit()`:
 
-- You should always call it with `add = TRUE` and `after = FALSE`. These ensure that the call is **added** to the list of deferred tasks (instead of replaces) and is added to the **front** of the stack (not the back, so that cleanup occurs in reverse order to setup). These arguments only matter if you're using multiple `on.exit()` calls, but it's a good habit to always use them to avoid potential problems down the road.
+- You should always call it with `add = TRUE` and `after = FALSE`. These ensure that the call is **added** to the list of deferred tasks (instead of replacing them) and is added to the **front** of the stack (not the back, so that cleanup occurs in reverse order to setup). These arguments only matter if you're using multiple `on.exit()` calls, but it's a good habit to always use them to avoid potential problems down the road.
 
 - It doesn't work outside a function or test. If you run the following code in the global environment, you won't get an error, but the cleanup code will never be run:
 
@@ -293,7 +293,7 @@ test_that("message2() output depends on verbose option", {
 
 One place that we use test fixtures extensively is in the usethis package ([usethis.r-lib.org](https://usethis.r-lib.org)), which provides functions for looking after the files and folders in R projects, especially packages. Many of these functions only make sense in the context of a package, which means to test them, we also have to be working inside an R package. We need a way to quickly spin up a minimal package in a temporary directory, then test some functions against it, then destroy it.
 
-To solve this problem we create a test fixture, which we place in `R/test-helpers.R` so that's it's available for both testing and interactive experimentation:
+To solve this problem we create a test fixture, which we place in `R/test-helpers.R` so that it's available for both testing and interactive experimentation:
 
 ```{r, eval = FALSE}
 local_create_package <- function(dir = file_temp(), env = parent.frame()) {
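
The `on.exit()` rules in the hunks above (the code supplied to its first argument runs when the function exits; always pass `add = TRUE, after = FALSE`) come together in a sketch along the lines of the vignette's `neat()`:

```r
neat <- function(x, sig_digits) {
  old <- options(digits = sig_digits)               # make a mess...
  on.exit(options(old), add = TRUE, after = FALSE)  # ...and schedule cleanup
  print(x)
}
```

`options()` returns the previous values of the options it sets, so `options(old)` restores the caller's state whether `print()` succeeds or errors.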
