# 140 testing mechanism #188

**Open**: pseewald wants to merge 28 commits into `fortran-lang:master` from `pseewald:140-testing-mechanism` (base: `master`).

## Commits (28)

All commits are by pseewald:

- e99c881 Tests: change directory structure of automatic test detection
- fae7d60 new testing mechanism working now
- b9c5e7d Unit tests: use fprettify as a module instead of relying on subprocesses
- ce9d619 finish core work on test reorganization
- e1d7aa7 suppress exceptions from test output (closes #134)
- 1576631 polishing test suites
- 1303fe9 Splitting test class into unit and integration tests
- 02728b2 minor cleanups and test fixes
- bb668c5 More test fixes
- 74cb5ad More test fixes
- 72bc6ce Updating test results
- 24e71d9 Setting limit to number of lines per file (to speedup cron tests)
- 7bfac51 Allow to set options as annotations within Fortran file
- 2856e8e cosmetic
- 3063fdd Fix bug related to first line
- eec327e Fix regex
- 39a7f00 Update cron test results
- 15b6ec2 Clean up file
- 65237e8 Explaining test mechanism in README.md
- 2e97c9c Further working on README.md for tests
- c173257 update README.md
- f23c7e6 minor change
- c1cbf90 removed todo
- 7f487e6 Merge branch 'master' into 140-testing-mechanism
- bb37a80 Adapt test workflows to new test mechanism
- c7ddfff fix issues with gh actions test workflow
- a26d994 simplify cli as suggested by @max-models
- 6120962 fix typos (as suggested by @dbroemmel)

Changes to README.md (hunk `@@ -153,4 +153,120 @@ A = [-1, 10, 0, &`):

## Contributing / Testing

The testing mechanism allows you to easily test fprettify with any Fortran project of your choice. Simply clone or copy your entire project into `fortran_tests/before` and run `python setup.py test`. The directory `fortran_tests/after` contains the test output (reformatted Fortran files). If testing fails, please submit an issue!

When contributing new features by opening a pull request, testing is essential
to verify that the new features behave as intended and that there are no
unwanted side effects. Before a pull request is merged, it is expected that:
1. one or more unit tests are added which test the formatting of small Fortran
   code snippets, covering all relevant aspects of the added features.
2. if the changes lead to failures of existing tests, these test failures are
   carefully examined. The expected test results may be updated only if the
   failures are due to intended changes of `fprettify` defaults or to bug
   fixes.

### How to add a unit test

Can the new feature be reasonably covered by small code snippets (< 10 lines)?

- **Yes**: add a test to the file `fprettify/tests/unittests.py`, starting from the following skeleton:

```python
def test_something(self):
    """short description"""

    instr = "Some Fortran code"
    outstr = "Same Fortran code after fprettify formatting"

    # selected fprettify command line arguments, as documented in "fprettify.py -h":
    opt = ["arg 1", "value for arg 1", "arg2", ...]

    # helper function checking that fprettify output is equal to `outstr`:
    self.assert_fprettify_result(opt, instr, outstr)
```
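
  For illustration, a completed test might look as follows. This is only a
  sketch: the test name, the code snippet and the expected output are made up,
  and they assume fprettify's operator-replacement flags
  (`--enable-replacements`, `--c-relations`) behave as documented in
  `fprettify.py -h`:

```python
def test_c_relations(self):
    """hypothetical example: rewrite .lt. as the C-style relation <"""

    instr = "if (a .lt. b) b = 0"
    outstr = "if (a < b) b = 0"

    # enable operator replacements and choose C-style relational operators:
    opt = ["--enable-replacements", "--c-relations"]

    self.assert_fprettify_result(opt, instr, outstr)
```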

  Then run `./run_tests.py -s unittests` and check in the output that the newly
  added unit test passes.

- **No**: add a test in the form of an example Fortran source file: add the
  Fortran file to `examples/in`, and the reformatted `fprettify` output to
  `examples/out`. If the test requires non-default `fprettify` options, specify
  these options as an annotation `! fprettify:` followed by the command-line
  arguments at the beginning of the Fortran file. Then manually remove
  `fortran_tests/test_code/examples` to make sure that the test configuration
  is updated with the changes from `examples`.
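
  For instance, the renamed file `examples/in/example_swapcase.f90` from this
  PR (shown further below) starts with the annotation line
  `! fprettify: --case 1 1 1 1`.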

  Then run `./run_tests.py -s builtin`, and check that the output mentions the
  newly added example with `checksum new ok`. Check that a new line containing
  the checksum for this example has been added to the file
  `fortran_tests/test_results/expected_results`, and commit this change along
  with your example. Rerun `./run_tests.py -s builtin` and check that the
  output now mentions the newly added example with `checksum ok`.

### How to add integration tests

This is a mechanism to add external code bases (such as entire git repositories
containing Fortran code) as test cases. In order to add a new code base as an
integration test suite, add a new section to
[testsuites.config](fortran_tests/testsuites.config), adhering to the following
format:

```INI
[...] # arbitrary unique section name identifying test code
obtain: ... # Python command to obtain test code base
path: ... # relative path pointing to test code location
suite: ... # which suite this test code should belong to
```

For `suite`, you should pick one of the following test suites:
- `regular`: for small code bases (executed for every pull request)
- `cron`: for larger code bases (executed nightly)
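
As a sketch, an entry for a hypothetical project `my_project` might look as
follows. The section name, the repository URL and the exact `obtain` command
are made up for illustration; `obtain` is whatever Python command fetches the
code into the location given by `path`:

```INI
[my_project] # hypothetical test code
# Python command that clones the repository next to the other test code:
obtain: subprocess.run(["git", "clone", "https://github.com/user/my_project"])
path: my_project
suite: regular
```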

### How to locally run all unit and integration tests

- unit tests: `./run_tests.py -s unittests`
- builtin examples integration tests: `./run_tests.py -s builtin`
- `regular` integration test suite: `./run_tests.py -s regular`
- `cron` integration test suite (optional, takes a long time to execute): `./run_tests.py -s cron`
- `custom` integration test suite (a dedicated suite for quick local testing; its contents shouldn't be committed): `./run_tests.py -s custom`

### How to locally run selected unit or integration tests

- unit tests: run
  `python -m unittest -v fprettify.tests.unittests.FprettifyUnitTestCase.test_xxx`
  (replacing `test_xxx` with the actual name of the test method)
- integration tests: run
  - a specific suite (`unittests`, `builtin`, `regular`, `cron` or `custom`):
    `./run_tests.py -s ...`
  - the tests belonging to a config section (see [testsuites.config](fortran_tests/testsuites.config)):
    `./run_tests.py -n ...`
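
For example, with the hypothetical `[my_project]` config section sketched
above, `./run_tests.py -n my_project` would run only the tests for that code
base.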

### How to deal with test failures

Test failures are always due to fprettify-formatted code differing from the
expected result. To examine what has changed, proceed as follows:
- Unit tests: failures should be rather easy to understand because the test
  output shows the diff of the actual vs. expected result.
- Integration tests: we don't store the expected version of the Fortran code;
  instead we compare SHA256 checksums of the actual vs. expected result. The
  test output shows the diff of the actual result vs. the *previous* version of
  the code (that is, the version before `fprettify` was applied). Thus, in
  order to obtain the diff of the actual vs. the *expected* result, the
  following steps need to be executed:

1. Run `./run_tests.py -s` followed by the name of the failed test suite. Check
   the test output for lines mentioning test failures, such as:
   `Test top-level-dir/subdir/file.f (fprettify.tests.fortrantests.FprettifyIntegrationTestCase) ... checksum FAIL`.
2. Check out the reference version of `fprettify` for which the test passes
   (normally, the `develop` branch).
3. Run the integration test(s) via `./run_tests.py -n top-level-dir` (replacing
   `top-level-dir` with the actual directory mentioned in the test output).
4. Check out the version of `fprettify` for which the test failed and run the
   integration tests again.
5. The `diff` shown in the test output now shows the exact changes which caused
   the test to fail.
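
Putting these steps together, a diagnosis session for a hypothetical failing
code base `my_project` from the `regular` suite, with the offending changes on
a branch `my-feature`, might look like this:

```
./run_tests.py -s regular     # output contains: Test my_project/... checksum FAIL
git checkout develop          # reference version for which the test passes
./run_tests.py -n my_project  # re-run only the failing tests on that version
git checkout my-feature       # back to the version with the failure
./run_tests.py -n my_project  # the diff now shows actual vs. expected changes
```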

If you decide to accept the changes as new test references, proceed as follows:
- Unit tests: update the expected test result within the respective test method
  (the third argument to `self.assert_fprettify_result`).
- Integration tests: run `./run_tests.py ... -r` and commit the updated
  `fortran_tests/test_results/expected_results`. Then run `./run_tests.py ...`
  and check that the tests pass now.

The PR also deletes two files and renames several files without changes. One
renamed file, `fortran_tests/before/example_swapcase.f90` →
`examples/in/example_swapcase.f90`, has 2 changes (1 addition and 1 deletion):

`@@ -1,4 +1,4 @@`

```fortran
! fprettify: --case 1 1 1 1
MODULE exAmple
IMPLICIT NONE
PRIVATE
```

Review comment on the README changes: 'examinated' -> 'examined'