Commit 9bfe665

Merge pull request #318 from mabruzzo/pytest-argparse

Improve ergonomics of pytest machinery

2 parents ba4753c + 65a4468

File tree

10 files changed: +625 −117 lines


doc/source/tests/pytest_framework.rst

Lines changed: 79 additions & 27 deletions
@@ -9,6 +9,9 @@ simulation is run with two versions of ``enzo-e`` and their results are compared
 This is useful for testing problems with no analytical solution or generally
 verifying that results from commonly run simulations don't drift.
 
+It is also useful for testing problems that do have analytic solutions (the answer test might quantify how close a simulation result is to the analytic expected solution).
+While such tests do exist in the ctest framework, they often involve more boilerplate code.
+
 `pytest <https://docs.pytest.org/>`__ is a Python-based framework for detecting
 and running a series of tests within a source code repository. When running
 ``pytest``, the user can provide a directory in which ``pytest`` will look for
@@ -41,6 +44,8 @@ other useful answer testing functionality are located in the source in
 `test/answer_tests/answer_testing.py`. All answer tests are located in the
 other files within the `test/answer_tests` directory.
 
+Some other functionality, which may be reused by other unrelated scripts provided in the Enzo-E repository, is provided in the ``test_utils`` subdirectory.
+
 Running the Answer Test Suite
 -----------------------------
 
@@ -60,16 +65,49 @@ To generate test answers, use the highest numbered gold standard tag.
 Configuring the Answer Test Suite
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Before the answer tests can be run, a few environment variables must be set to
-configure behavior.
+The behavior of the tests can be configured by passing command line arguments to ``pytest``, by setting environment variables, or by mixing both.
+
+When invoking ``pytest``, the command line flags discussed here should be passed **after** the path to the `test/answer_tests` directory has been provided.
+For the sake of example (the meaning of the flags is explained below), one might invoke:
+
+.. code-block:: bash
 
-* ``TEST_RESULTS_DIR``: points to a directory in which answers will be stored
-* ``CHARM_PATH``: points to the directory in which ``charmrun`` is located
-* ``ENZO_PATH``: points to the ``enzo-e`` binary to use.
-  If this is not specified, this defaults to the ``<PATH/TO/ENZO-E/REPO>/build/bin/enzo-e``
-* ``GENERATE_TEST_RESULTS``: "true" to generate test results, "false" to compare with existing results.
-* ``GRACKLE_INPUT_DATA_DIR``: points to the directory where ``Grackle`` input files are installed.
-  If not specified, then all tests involving ``Grackle`` will be skipped.
+   $ pytest test/answer_tests \
+       --build-dir ./build \
+       --answer-store
+
+The following table lists the command line flags and, where applicable, the environment variables with which they are interchangeable.
+In cases where both are set, the command line argument takes precedence.
+
+.. list-table:: Configuring pytest behavior
+   :widths: 10 10 30
+   :header-rows: 1
+
+   * - flag
+     - env var
+     - description
+   * - ``--build-dir``
+     - N/A
+     - points to the build directory where the target enzo-e binary was built (that binary has the path BUILD_DIR/bin/enzo-e).
+       The path to the charmrun launcher is inferred from the `BUILD_DIR/CMakeCache.txt` file, but can be overridden by the ``--charm`` flag or the ``CHARM_PATH`` environment variable.
+       This precedence was chosen in case a user changes relevant cached build variables but has not rebuilt Enzo-E (i.e. `CMakeCache.txt` may not be valid for the binary).
+       When this flag isn't specified, the test infrastructure searches for the enzo-e binary at ENZOE_ROOT/build/bin/enzo-e, but doesn't try to infer charmrun's location from `CMakeCache.txt`.
+   * - ``--local-dir``
+     - ``TEST_RESULTS_DIR``
+     - points to a directory in which answers will be stored/loaded
+   * - ``--charm``
+     - ``CHARM_PATH``
+     - points to the directory in which ``charmrun`` is located
+   * - ``--answer-store``
+     - ``GENERATE_TEST_RESULTS``
+     - When the command line flag is specified, test results are generated. Otherwise, results are compared against existing results (unless the environment variable is specified).
+       The environment variable can be set to ``"true"`` to generate test results or ``"false"`` to compare with existing results.
+   * - ``--grackle-input-data-dir``
+     - ``GRACKLE_INPUT_DATA_DIR``
+     - points to the directory where ``Grackle`` input files are installed.
+       If not specified, then all tests involving ``Grackle`` will be skipped.
+
+Earlier versions of the tests also required the ``"USE_DOUBLE"`` environment variable to be set to ``"true"`` or ``"false"`` to indicate whether the code had been compiled in double or single precision.
 
 .. code-block:: bash
 
@@ -83,21 +121,16 @@ First, check out the highest numbered gold standard tag and compile ``enzo-e``.
 
 .. code-block:: bash
 
-   $ git checkout gold-standard-1
+   # in the future, you will need to substitute 004 for a higher number
+   $ git checkout gold-standard-004
    $ ...compile enzo-e
 
-Then, configure the test suite to generate answers by setting
-GENERATE_TEST_RESULTS to true.
-
-.. code-block:: bash
-
-   $ export GENERATE_TEST_RESULTS=true
-
-Finally, run the test suite by calling ``pytest`` with the answer test directory.
+Then, run the test suite by calling ``pytest`` with the answer test directory (make sure to configure behavior correctly with command-line arguments or environment variables).
+In the following snippet, we assume you are currently at the root of the Enzo-E repository and that you will replace ``<build-dir>`` with the directory where you built enzo-e (this is commonly ``./build``).
 
 .. code-block:: bash
 
-   $ pytest test/answer_tests
+   $ pytest test/answer_tests --local-dir=~/enzoe_tests --build-dir=<build-dir> --answer-store
    ========================== test session starts ===========================
    platform linux -- Python 3.9.13, pytest-7.1.2, pluggy-1.0.0
    rootdir: /home/circleci/enzo-e
@@ -116,19 +149,16 @@ Comparing Test Answers
 
 Once test answers have been generated, the above steps need not be repeated until
 the gold standard tag has been updated. Now, any later version of the code can be
-run with the test suite to check for problems. Set the GENERATE_TEST_RESULTS
-environment variable to false to configure the test suite to compare with existing
-answers.
+run with the test suite to check for problems. To configure the test suite to compare with existing answers, omit the ``--answer-store`` flag and ensure that the ``GENERATE_TEST_RESULTS`` variable is either unset or set to ``"false"``.
 
 .. code-block:: bash
 
    $ git checkout main
    $ ...compile enzo-e
-   $ export GENERATE_TEST_RESULTS=false
-   $ pytest test/answer_tests
+   $ pytest test/answer_tests --local-dir=~/enzoe_tests --build-dir=<build-dir>
 
-Getting More Output from Pytest
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Helpful Tips
+^^^^^^^^^^^^
 
 By default, most output printed by ``enzo-e`` or the test scripts will be swallowed
 by ``pytest``. When tests fail, the Python traceback may be shown, but not much
@@ -139,7 +169,18 @@ variables when this flag is given.
 
 .. code-block:: bash
 
-   $ pytest -s test/answer_tests
+   $ pytest -s test/answer_tests # other args...
+
+When debugging an issue it's sometimes helpful to force pytest to run a subset of tests.
+This can be accomplished with the ``-k`` flag.
+For example, to only run the subset of tests with ``"grackle"`` in the test name, one might execute
+
+.. code-block:: bash
+
+   $ pytest test/answer_tests -k "grackle" # other args...
+
+When investigating a failing test or prototyping a brand-new test, it can sometimes be helpful to run the tests against multiple versions of enzo-e.
+Rather than rebuilding Enzo-E each time you want to do that, you can instead build the different versions of Enzo-E in separate build directories and direct ``pytest`` to use the different builds with the ``--build-dir`` flag.
 
 Creating New Answer Tests
 -------------------------
@@ -291,6 +332,17 @@ in single or double precision, and adjust the tolerance on the tests accordingly
        def test_hllc_cloud(self):
            ...
 
+.. note::
+
+   The above code is primarily for the sake of example.
+   In practice, we now automatically detect the code's precision from the enzo-e executable.
+
+Alternatively, additional configuration options can be introduced through new command-line flags, which are introduced and parsed by the `conftest.py` file in the `answer_test` directory.
+This is generally more robust than adding environment variables (since the flags are more easily discovered and are more explicit).
+In practice, though, this is made slightly more complicated by the fact that the flags are parsed with pytest hooks.
+Flags added in this way work best with ``pytest`` fixtures, while our tests mostly leverage features from `Python's unittest module <https://docs.python.org/3/library/unittest.html>`_.
+
 Caveats
 ^^^^^^^

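The documentation above states that a command line flag takes precedence over its paired environment variable when both are set. A minimal sketch of that precedence rule is shown below; the helper name ``resolve_config_value`` is hypothetical and is not part of the Enzo-E test machinery.

```python
import os

def resolve_config_value(cli_value, env_var, default=None):
    """Resolve one configuration value: CLI flag wins, then the
    environment variable, then the default.

    Illustrative only -- mirrors the documented precedence, not the
    actual Enzo-E implementation.
    """
    if cli_value is not None:
        return cli_value
    env_value = os.environ.get(env_var)
    if env_value is not None:
        return env_value
    return default

# example: the CLI value shadows the environment variable
os.environ["TEST_RESULTS_DIR"] = "/tmp/from_env"
print(resolve_config_value("/tmp/from_cli", "TEST_RESULTS_DIR"))  # /tmp/from_cli
print(resolve_config_value(None, "TEST_RESULTS_DIR"))             # /tmp/from_env
```

In the real test suite this resolution happens in `conftest.py`, where the flags are registered with pytest's option-parsing hooks.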
test/answer_tests/answer_testing.py

Lines changed: 59 additions & 73 deletions
@@ -1,69 +1,66 @@
 import copy
+from dataclasses import dataclass
 import numpy as np
 import os
 import pytest
 import shutil
-import signal
-import subprocess
-import sys
 import tempfile
-import time
+from typing import Optional
 import yt
 
 from numpy.testing import assert_array_equal
 from unittest import TestCase
 from yt.funcs import ensure_dir
 from yt.testing import assert_rel_equal
 
+from test_utils.enzoe_driver import EnzoEDriver
+
 _base_file = os.path.basename(__file__)
 
-# If GENERATE_TEST_RESULTS="true", just generate test results.
-generate_results = os.environ.get("GENERATE_TEST_RESULTS", "false").lower() == "true"
-yt.mylog.info(f"{_base_file}: generate_results = {generate_results}")
-
-_results_dir = os.environ.get("TEST_RESULTS_DIR", "~/enzoe_test_results")
-test_results_dir = os.path.abspath(os.path.expanduser(_results_dir))
-yt.mylog.info(f"{_base_file}: test_results_dir = {test_results_dir}")
-if generate_results:
-    ensure_dir(test_results_dir)
-else:
-    if not os.path.exists(test_results_dir):
+@dataclass(frozen = True)
+class TestOptions:
+    enzoe_driver: EnzoEDriver
+    uses_double_prec: bool
+    generate_results: bool
+    test_results_dir: str
+    grackle_input_data_dir : Optional[str]
+
+_CACHED_OPTS = None
+
+def set_cached_opts(**kwargs):
+    global _CACHED_OPTS
+    if _CACHED_OPTS is not None:
+        raise RuntimeError("Can't call set_cached_opts more than once")
+
+    _CACHED_OPTS = TestOptions(**kwargs)
+    yt.mylog.info(
+        f"{_base_file}: generate_results = {_CACHED_OPTS.generate_results}")
+
+    yt.mylog.info(
+        f"{_base_file}: test_results_dir = {_CACHED_OPTS.test_results_dir}")
+    if _CACHED_OPTS.generate_results:
+        ensure_dir(_CACHED_OPTS.test_results_dir)
+    elif not os.path.exists(_CACHED_OPTS.test_results_dir):
         raise RuntimeError(
-            f"Test results directory not found: {test_results_dir}.")
-
-# Set the path to charmrun
-_charm_path = os.environ.get("CHARM_PATH", "")
-if not _charm_path:
-    raise RuntimeError(
-        f"Specify path to charm with CHARM_PATH environment variable.")
-charmrun_path = os.path.join(_charm_path, "charmrun")
-yt.mylog.info(f"{_base_file}: charmrun_path = {charmrun_path}")
-if not os.path.exists(charmrun_path):
-    raise RuntimeError(
-        f"No charmrun executable found in {_charm_path}.")
+            f"Test results dir not found: {_CACHED_OPTS.test_results_dir}.")
 
-src_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../.."))
+    if ((_CACHED_OPTS.grackle_input_data_dir is not None) and
+            (not os.path.exists(_CACHED_OPTS.grackle_input_data_dir))):
+        raise RuntimeError(
+            "grackle input data dir not found: "
+            f"{_CACHED_OPTS.grackle_input_data_dir}")
 
-# Set the path to the enzo-e binary
-_enzo_path = os.environ.get("ENZO_PATH", "")
-if _enzo_path:
-    enzo_path = os.path.abspath(_enzo_path)
-else:
-    enzo_path = os.path.join(src_path, "build/bin/enzo-e")
-yt.mylog.info(f"{_base_file}: enzo_path = {enzo_path}")
-if not os.path.exists(enzo_path):
-    raise RuntimeError(
-        f"No enzo-e executable found in {enzo_path}.")
+    yt.mylog.info(
+        f"{_base_file}: use_double = {_CACHED_OPTS.uses_double_prec}")
 
-input_dir = "input"
+def cached_opts():
+    if _CACHED_OPTS is None:
+        raise RuntimeError("set_cached_opts was never called")
    return _CACHED_OPTS
 
-# check for data path to grackle
-_grackle_input_data_dir = os.environ.get("GRACKLE_INPUT_DATA_DIR", None)
-if ((_grackle_input_data_dir is not None) and
-    (not os.path.exists(_grackle_input_data_dir))):
-    raise RuntimeError("GRACKLE_INPUT_DATA_DIR points to a non-existent "
-                       f"directory: {_grackle_input_data_dir}")
+src_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../.."))
 
+input_dir = "input"
 
 _grackle_tagged_tests = set()

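The hunk above replaces module-import-time environment parsing with a write-once cache of validated options. A trimmed-down, self-contained sketch of that pattern follows; the real ``TestOptions`` also holds an ``EnzoEDriver`` and performs directory validation, which is omitted here for brevity.

```python
from dataclasses import dataclass
from typing import Optional

# simplified stand-in for the TestOptions dataclass from the diff above
@dataclass(frozen=True)
class _Options:
    generate_results: bool
    test_results_dir: str
    grackle_input_data_dir: Optional[str] = None

_CACHED = None

def set_cached(**kwargs):
    # may only be called once (in the real suite, from a pytest hook)
    global _CACHED
    if _CACHED is not None:
        raise RuntimeError("set_cached may only be called once")
    _CACHED = _Options(**kwargs)

def cached():
    # raises if configuration never happened, instead of failing at import
    if _CACHED is None:
        raise RuntimeError("set_cached was never called")
    return _CACHED

set_cached(generate_results=True, test_results_dir="/tmp/answers")
print(cached().generate_results)  # True
```

Deferring validation to an explicit setter (rather than running it at import time, as the removed code did) means simply importing the module can never fail, and the options can be populated from parsed command line flags.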
@@ -76,9 +73,15 @@ def uses_grackle(cls):
     """
     _grackle_tagged_tests.add(cls.__name__)
 
+    has_grackle = cached_opts().enzoe_driver.query_has_grackle()
+    has_grackle_inputs = cached_opts().grackle_input_data_dir is not None
+
+    skip_reason = "Enzo-E is not built with Grackle"
+    if has_grackle and (not has_grackle_inputs):
+        skip_reason = "the grackle input data dir was not specified"
+
     wrapper_factory = pytest.mark.skipif(
-        _grackle_input_data_dir is None,
-        reason = "GRACKLE_INPUT_DATA_DIR is not defined"
+        (not has_grackle) or (not has_grackle_inputs), reason = skip_reason
     )
     return wrapper_factory(cls)
 
@@ -94,40 +97,22 @@ def setup_symlinks(self):
 
         if self.__class__.__name__ in _grackle_tagged_tests:
             # make symlinks to each grackle input file
-            with os.scandir(_grackle_input_data_dir) as it:
+            with os.scandir(cached_opts().grackle_input_data_dir) as it:
                 for entry in it:
                     if not entry.is_file():
                         continue
                     os.symlink(entry.path,os.path.join(self.tmpdir, entry.name))
 
-    def run_simulation(self):
-        pfile = os.path.join(input_dir, self.parameter_file)
-        command = f"{charmrun_path} ++local +p{self.ncpus} {enzo_path} {pfile}"
-        proc = subprocess.Popen(
-            command, shell=True, close_fds=True,
-            preexec_fn=os.setsid)
-
-        stime = time.time()
-        while proc.poll() is None:
-            if time.time() - stime > self.max_runtime:
-                os.killpg(proc.pid, signal.SIGUSR1)
-                raise RuntimeError(
-                    f"Simulation {self.__class__.__name__} exceeded max runtime of "
-                    f"{self.max_runtime} seconds.")
-            time.sleep(1)
-
-        if proc.returncode != 0:
-            raise RuntimeError(
-                f"Simulation {self.__class__.__name__} exited with nonzero return "
-                f"code {proc.returncode}.")
-
     def setUp(self):
         self.curdir = os.getcwd()
         self.tmpdir = tempfile.mkdtemp()
 
         self.setup_symlinks()
         os.chdir(self.tmpdir)
-        self.run_simulation()
+        cached_opts().enzoe_driver.run(
+            parameter_fname = os.path.join(input_dir, self.parameter_file),
+            max_runtime = self.max_runtime, ncpus = self.ncpus,
+            sim_name = f"Simulation {self.__class__.__name__}")
 
     def tearDown(self):
         os.chdir(self.curdir)
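The removed ``run_simulation`` method (its logic now lives behind ``EnzoEDriver.run``) enforced a wall-clock limit by polling the child process and killing its process group on timeout. A standalone sketch of that pattern, based on the removed code, is shown below; ``run_with_timeout`` is a hypothetical name, ``EnzoEDriver``'s actual internals may differ, and SIGTERM is used here where the original sent SIGUSR1. POSIX only (``os.setsid``/``os.killpg``).

```python
import os
import signal
import subprocess
import time

def run_with_timeout(command, max_runtime, poll_interval=1.0):
    """Run a shell command, killing its whole process group if it
    exceeds max_runtime seconds (sketch of the removed run_simulation)."""
    # start the child in its own session so killpg reaches any helpers
    proc = subprocess.Popen(
        command, shell=True, close_fds=True, preexec_fn=os.setsid)
    start = time.time()
    while proc.poll() is None:
        if time.time() - start > max_runtime:
            os.killpg(proc.pid, signal.SIGTERM)
            raise RuntimeError(f"command exceeded {max_runtime} seconds")
        time.sleep(poll_interval)
    if proc.returncode != 0:
        raise RuntimeError(f"command exited with code {proc.returncode}")
    return proc.returncode

run_with_timeout("echo hello", max_runtime=10, poll_interval=0.1)
```

Encapsulating this in a driver object lets the same timeout/error handling be reused by other scripts in the repository, which is one stated motivation for the ``test_utils`` subdirectory.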
@@ -145,18 +130,19 @@ def real_answer_test(func):
     def wrapper(*args):
         # name the file after the function
         filename = "%s.h5" % func.__name__
-        result_filename = os.path.join(test_results_dir, filename)
+        result_filename = os.path.join(cached_opts().test_results_dir,
+                                       filename)
 
         # check that answers exist
-        if not generate_results:
+        if not cached_opts().generate_results:
             assert os.path.exists(result_filename), \
                 "Result file, %s, not found!" % result_filename
 
         data = func(*args)
         fn = yt.save_as_dataset({}, filename=filename, data=data)
 
         # if generating, move files to results dir
-        if generate_results:
+        if cached_opts().generate_results:
             shutil.move(filename, result_filename)
         # if comparing, run the comparison
         else:
