- [Debugging tests](#debugging-tests)
- [Performance testing](#performance-testing)
- [Benchmarking](#benchmarking)
- [Profiling](#profiling)
- [Generating TOCs with `doctoc`](#generating-tocs-with-doctoc)
- [Testing CI workflows with `act`](#testing-ci-workflows-with-act)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## Tests
To run the tests you will need `pytest` and a few plugins, including [`pytest-xdist`](https://pytest-xdist.readthedocs.io/en/latest/), [`pytest-dotenv`](https://github.com/quiqua/pytest-dotenv), and [`pytest-benchmark`](https://pytest-benchmark.readthedocs.io/en/latest/index.html). Test dependencies are specified in the `test` extras group in `setup.cfg` (with pip, use `pip install ".[test]"`). Test dependencies are included in the Conda environment `etc/environment`.
**Note:** to prepare your code for a pull request, you will need a few more packages specified in the `lint` extras group in `setup.cfg` (also included by default for Conda). See the docs on [submitting a pull request](CONTRIBUTING.md) for more info.
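For example, from the project root of a local clone, both extras groups can be installed in a single step (a sketch; the groups can of course also be installed separately):

```shell
pip install ".[test,lint]"
```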
### Configuring tests
Some tests require environment variables. Currently the following variables are required:
- `GITHUB_TOKEN`
The `GITHUB_TOKEN` variable is needed because the [`get-modflow`](docs/get_modflow.md) utility invokes the GitHub API. To avoid rate-limiting, requests to the GitHub API should bear an [authentication token](https://github.com/settings/tokens). A token is provided automatically to GitHub Actions CI jobs via the [`github` context's](https://docs.github.com/en/actions/learn-github-actions/contexts#github-context) `token` attribute; however, a personal access token is needed to run the tests locally. To create one, go to [GitHub -> Settings -> Developer settings -> Personal access tokens -> Tokens (classic)](https://github.com/settings/tokens). The `get-modflow` utility automatically detects and uses the `GITHUB_TOKEN` environment variable if available.
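For example, to make a token available for the current shell session only (the value shown is a placeholder, not a real token):

```shell
export GITHUB_TOKEN="<your token>"
```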
Environment variables can be set as usual, but a more convenient way to store variables for all future sessions is to create a text file called `.env` in the `autotest` directory, containing variables in `NAME=VALUE` format, one on each line. [`pytest-dotenv`](https://github.com/quiqua/pytest-dotenv) will detect and add these to the environment provided to the test process. All `.env` files in the project are ignored in `.gitignore` so there is no danger of checking in secrets unless the file is misnamed.
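For instance, a hypothetical `autotest/.env` might contain a single line (again, the value is a placeholder):

```
GITHUB_TOKEN=<your token>
```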
### Running tests
Tests must be run from the `autotest` directory. To run a single test script in verbose mode:
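```shell
# test_export.py is just an example; substitute any test script in autotest
pytest -v test_export.py
```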
**Note:** most of the `regression` and `example` tests are `slow`, but there are some other slow tests, e.g. in `test_export.py`, and some regression tests and examples are fast.
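For instance, a quick way to skip tests bearing the `slow` marker (a sketch using the marker named in the note above):

```shell
pytest -v -m "not slow"
```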
### Writing tests
Test functions and files should be named informatively, with related tests grouped in the same file. The test suite runs on GitHub Actions in parallel, so tests should not access the working space of other tests, example scripts, tutorials or notebooks. A number of shared test fixtures are [imported](conftest.py) from [`modflow-devtools`](https://github.com/MODFLOW-USGS/modflow-devtools). These include keepable temporary directory fixtures and miscellaneous utilities (see the `modflow-devtools` repository README for more information on fixture usage). New tests should use these facilities where possible. See also the [contribution guidelines](CONTRIBUTING.md) before submitting a pull request.
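As a minimal sketch (not an actual FloPy test), a new test using the keepable, function-scoped temporary directory fixture might look like this:

```python
# function_tmpdir is provided by modflow-devtools and imported via autotest/conftest.py;
# it yields an isolated, per-test pathlib.Path that honors the --keep option
def test_writes_a_file(function_tmpdir):
    path = function_tmpdir / "hello.txt"
    path.write_text("hello")
    assert path.read_text() == "hello"
```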
### Debugging tests
To debug a failed test, it can be helpful to inspect its output, which is cleaned up automatically by default. `modflow-devtools` provides temporary directory fixtures that can optionally keep test outputs in a specified location. To run a test and keep its output, use the `--keep` option to provide a save location:
```shell
pytest test_export.py --keep exports_scratch
```
This will retain any files created by the test in `exports_scratch` in the current working directory. Any tests using the function-scoped `function_tmpdir` and related fixtures (e.g. `class_tmpdir`, `module_tmpdir`) defined in `modflow_devtools/fixtures` are compatible with this mechanism.
There is also a `--keep-failed <dir>` option which preserves the outputs of failed tests in the given location, however this option is only compatible with function-scoped temporary directories (the `function_tmpdir` fixture).
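For example, to stash the outputs of any failing tests in a directory named `failed` (the name is arbitrary):

```shell
pytest test_export.py --keep-failed failed
```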
### Performance testing
Profiling is [distinct](https://stackoverflow.com/a/39381805/6514033) from benchmarking: it evaluates a program's call stack in detail, while benchmarking just invokes a function repeatedly and computes summary statistics. Profiling is also accomplished with `pytest-benchmark`: use the `--benchmark-cprofile` option when running tests which use the `benchmark` fixture described above. The option's value is the column to sort results by. For instance, to sort by total time, use `--benchmark-cprofile="tottime"`. See the `pytest-benchmark` [docs](https://pytest-benchmark.readthedocs.io/en/stable/usage.html#commandline-options) for more information.
By default, `pytest-benchmark` will only print profiling results to `stdout`. If the `--benchmark-autosave` flag is provided, performance profile data will be included in the JSON files written to the `.benchmarks` save directory as described in the benchmarking section above.
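Combining the two options, a hypothetical invocation (the script name is illustrative) that saves benchmark JSON with profile data included and sorts the printed profile by total time might look like:

```shell
pytest test_export.py --benchmark-autosave --benchmark-cprofile="tottime"
```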
### Miscellaneous
A few other useful tools for FloPy development include:
- [`doctoc`](https://www.npmjs.com/package/doctoc): automatically generate table of contents sections for markdown files
- [`act`](https://github.com/nektos/act): test GitHub Actions workflows locally (requires Docker)
#### Generating TOCs with `doctoc`
The [`doctoc`](https://www.npmjs.com/package/doctoc) tool can be used to automatically generate table of contents sections for markdown files. `doctoc` is distributed with the [Node Package Manager](https://docs.npmjs.com/cli/v7/configuring-npm/install). With Node installed, use `npm install -g doctoc` to install `doctoc` globally. Then just run `doctoc <file>`, e.g.:
410
-
411
-
```shell
doctoc DEVELOPER.md
```
This will insert HTML comments surrounding an automatically edited region, scanning for headers and creating an appropriately indented TOC tree. Subsequent runs are idempotent, updating if the file has changed or leaving it untouched if not.
To run `doctoc` for all markdown files in a particular directory (recursive), use `doctoc some/path`.
#### Testing CI workflows with `act`
The [`act`](https://github.com/nektos/act) tool uses Docker to run containerized CI workflows in a simulated GitHub Actions environment. [Docker Desktop](https://www.docker.com/products/docker-desktop/) is required for Mac or Windows and [Docker Engine](https://docs.docker.com/engine/) on Linux.
With Docker installed and running, run `act -l` from the project root to see available CI workflows. To run all workflows and jobs, just run `act`. To run a particular workflow, use `-W`:
424
-
425
-
```shell
act -W .github/workflows/commit.yml
```
To run a particular job within a workflow, add the `-j` option:
```shell
act -W .github/workflows/commit.yml -j build
```
**Note:** GitHub API rate limits are easy to exceed, especially with job matrices. Authenticated GitHub users have a much higher rate limit: use `-s GITHUB_TOKEN=<your token>` when invoking `act` to provide a personal access token. Note that this will log your token in shell history — leave the value blank for a prompt to enter it more securely.
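For example, one way to supply the token without recording it in shell history is to name the secret and let `act` prompt for its value:

```shell
act -W .github/workflows/commit.yml -s GITHUB_TOKEN
```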
The `-n` flag can be used to execute a dry run, which doesn't run anything, just evaluates workflow, job and step definitions. See the [docs](https://github.com/nektos/act#example-commands) for more.
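For instance, a dry run of the same workflow used in the examples above:

```shell
act -n -W .github/workflows/commit.yml
```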
**Note:** `act` can only run Linux-based container definitions, so Mac or Windows workflows or matrix OS entries will be skipped.