Use pytest for more TestSuite tests #40971
base: develop
Conversation
Force-pushed from 7e6b8e0 to 49afbb0.
Documentation preview for this PR (built with commit a424b8e; changes) is ready! 🎉
R3 = QpLC(2)
R4 = QpLF(2)

for R in (R1, R2, R3, R4):
Use parametrized tests instead of the internal for loop?
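A minimal sketch of what that suggestion looks like: each parameter becomes its own test case, so a failure reports exactly which one broke instead of aborting the loop. The trivial integer parameters here are stand-ins for the real rings (QpLC(2), QpLF(2), etc.) from the diff, and the assertion is a placeholder for the actual TestSuite checks.

```python
import pytest

# Stand-ins for the rings built in the diff above (R1..R4).
rings = [2, 3, 5, 7]


@pytest.mark.parametrize("p", rings)
def test_ring_property(p):
    # Placeholder assertion; the real test would exercise the ring.
    assert p > 1
```

Under pytest this collects as four independent tests (`test_ring_property[2]`, `[3]`, ...), which is the reporting improvement over a single test with an internal loop.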
Force-pushed from 1848691 to 4b603c1.
The last run had a strange failure in dev_tools.py; I just re-pushed the same branch to see if it's a fluke before I waste any time.
No such luck:
Naturally it works fine when I re-run it on my machine.
LGTM. I still have two questions, but they are not major and don't need to be 'resolved' before merging.
PP = PositiveIntegerSemigroup()

# fewer max_runs since these are kind of slow
TestSuite(PP).run(verbose=True,
Is the verbose=True option needed for the functionality? If not, I think I would prefer the quiet version, or maybe couple it to pytest's verbosity level via pytestconfig.get_verbosity() > 0 (or > 1?).
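A sketch of that coupling, using a small helper so it can be reused across the migrated test files. This uses `config.getoption("verbose")` (the `-v` count), which exists on all recent pytest versions, rather than `Config.get_verbosity()`, which requires pytest >= 8.0; the helper name and the usage shown in the comment are hypothetical.

```python
def want_verbose(config) -> bool:
    """Map pytest's -v count to the boolean that TestSuite(...).run(verbose=...)
    expects.  `config` is the object provided by the built-in `pytestconfig`
    fixture.
    """
    return config.getoption("verbose") > 0


# Hypothetical usage inside a test:
#
#   def test_positive_integer_semigroup(pytestconfig):
#       TestSuite(PP).run(verbose=want_verbose(pytestconfig), ...)
```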
Pytest already swallows stdout unless you pass -s, so I was thinking the added verbosity would only matter if the tests fail. I guess we could go back and make them all depend on the pytest verbosity though. I don't feel strongly about it either way.
I noticed it mostly in CI, where it bloats the log without adding too much value (at least when everything passes). For the moment, that's okay but might get more annoying once more tests are migrated.
So maybe I'll do this later if I get annoyed enough, or I'll actually extract all of these TestSuite _test methods (in the actual classes) to pytest.
We could probably get away without -s in the CI. I think pytest will still dump the output that it captured in the event of a failure, while leaving everything nice and quiet if it passes.
@pytest.fixture
def R1():
    from warnings import catch_warnings, filterwarnings
What's the reason for the local imports?
Though it's ugly, my reasoning was that the fixtures are loaded on-demand by the tests, and in the future we may run only a subset of tests (requiring a subset of fixtures). In that case the local imports won't slow things down.
Are these imports really so expensive that it's worth optimizing this (at the cost of readability)?
🤷 I moved them to the top level.
Okay, let's get it in.
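For the record, the "move it to the top level" shape looks roughly like this: the FutureWarning filter is applied once at module import time instead of being repeated inside each of the four fixtures. The `make_ring` constructor here is a hypothetical stand-in for something like ZpLC(2), which warns on creation.

```python
import warnings


def make_ring():
    # Hypothetical stand-in for a constructor like ZpLC(2) that emits a
    # FutureWarning when called.
    warnings.warn("lattice precision is experimental", FutureWarning)
    return 42


# Suppress the warning once, at module level, rather than wrapping every
# fixture body in catch_warnings/filterwarnings.
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", category=FutureWarning)
    R1 = make_ring()
```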
elements = ("R1", "R2", "R3", "R4")


@pytest.mark.parametrize("e", elements)
The amount of boilerplate is scary. Why can't you parametrize e over [ZpLC(2), ZpLF(2), etc.] directly?
There are a couple of related problems. (I'm not happy about the boilerplate either, but it should decrease in relative volume as more tests are moved to pytest.)
The big one is that we want to cache the fixtures and create them on-demand, so they aren't recreated multiple times, or created when they won't be used (if only a subset of tests is being run). For that magic to work, we need to mark the fixtures with @pytest.fixture... but then you can't call them directly:
_____ ERROR collecting src/sage/rings/padics/padic_lattice_element_test.py _____
Fixture "R1" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
If we forego the caching / on-demand creation, we can create the objects ourselves, but then we lose the nice fixture naming. If I pass in ZpLC(2), for example, we get

    src/sage/rings/padics/padic_lattice_element_test.py::test_padic_lattice_element[e0]...

instead of

    src/sage/rings/padics/padic_lattice_element_test.py::test_padic_lattice_element[R1]...

which uses the fixture name. Now obviously R1 is a lame name for a fixture and doesn't improve anything, but I'm trying to structure these in a decent way for future copy/pasting, where the fixtures may actually be meaningful and take a while to construct.
extensions = ("w0", "w1")


@pytest.mark.parametrize("e", extensions)
This part can do parametrize("index", [0, 1]) I guess.
Ok, this one was easy to improve without hurting anything. I rebased onto the latest develop, but only the final commit is new.
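The simplification looks something like this: a single tuple-valued fixture, with the test parametrized over an integer index into it. This drops the per-extension fixtures and the name-lookup boilerplate. The fixture contents are hypothetical stand-ins for the w0/w1 objects.

```python
import pytest


@pytest.fixture
def extensions():
    # Hypothetical pair of extension objects (w0 and w1 in the PR).
    return ("w0-object", "w1-object")


@pytest.mark.parametrize("index", [0, 1])
def test_extension(extensions, index):
    # One fixture, indexed by the parameter: no per-object fixture needed.
    assert extensions[index] is not None
```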
def test_finite_extension_from_limit_valuation(e, request):
    r"""
    Run the ``TestSuite()`` for two examples given in the
    ``FiniteExtensionFromLimitValuation`` documentation.
Speaking of more downsides of pytest migration, a cost of having the test far from the code is that now if the example given in the FiniteExtensionFromLimitValuation documentation is changed, this docstring will silently become false.
You can change the docstring to "run the test suite for two examples that were given in the FiniteExtensionFromLimitValuation documentation", but then what's the value of the docstring?
This is just laziness on my part and not inherent to pytest. I'd love to describe the objects being tested more intelligently, or explain why they're being tested... but the original doctests didn't come with an explanation, and I don't know anything about mapped valuations, so this was all I could think of.
Copy two TestSuite() runs from a TESTS:: block to pytest. These are on the verge of hitting the "slow doctest" limit for long tests, and sometimes cause warnings in the CI. But more importantly, they are simply better suited to pytest: the user would never want to run them, and the output is not meaningful as documentation.
These have been moved to pytest and no longer need to be run as doctests.
Move some of the TestSuite() tests from a TESTS block to pytest. These are slow, and not really meant for end-user consumption, so are more appropriate here.
Delete a few doctests that have been moved to pytest.
Move some TestSuite tests from src/sage/combinat/backtrack.py (for the PositiveIntegerSemigroup class) to a new pytest file. These tests aren't meant for end-user consumption, and they are slow, so they are more appropriate here.
Remove a TestSuite() doctest that has been moved to pytest.
Shuffling around the doctests has made some FutureWarnings happen in different places.
Run tools/update-meson.py to install the new pytest source file.
Use parameterized tests and fixtures instead of an internal "for" loop to improve the reporting a bit.
Use parameterized tests and fixtures to improve the reporting from these tests.
In an attempt to avoid the following error on the CI:

    dt.find_objects_from_name('FareySymbol')
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/sage/src/sage/misc/dev_tools.py", line 279, in find_objects_from_name
        for smodule_name, smodule in sys.modules.items():
    RuntimeError: dictionary changed size during iteration

we make a copy of the sys.modules dict before iterating over its items.
Move the padic imports and FutureWarning filter to the top level, so we don't have to repeat them four times.
The one test in this file is easy to parameterize based on the index (an int) of a fixture supplied to it. This allows us to eliminate a lot of the boilerplate associated with fixture parameterizations.
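The sys.modules fix mentioned above can be sketched like this: iterating over a snapshot of `sys.modules` means imports triggered inside the loop cannot resize the dict mid-iteration, which is what raised the RuntimeError on the CI. This is a simplified illustration, not the real `dev_tools.find_objects_from_name`.

```python
import sys


def modules_defining_name(name):
    # Take a snapshot with list(...) so that any import triggered while we
    # inspect modules cannot mutate sys.modules under the iterator.
    found = []
    for module_name, module in list(sys.modules.items()):
        if module is not None and hasattr(module, name):
            found.append(module_name)
    return found
```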
Force-pushed from 3cd9ba7 to a424b8e.
Fix some "slow doctest" warnings by moving these out of the doctests and into pytest, where (IMO) they belong. They aren't for end-user consumption, and we aren't checking the output from them (which is the point of a doctest).