
Commit 6e3fb38

Fix a few typos in test doc [ci skip]
Several typos found by PyCharm spellchecking
Changed style of document heading
Changed a couple of words elsewhere. No substantive changes this time.

Signed-off-by: Mats Wichmann <[email protected]>
1 parent: 671353f


testing/framework/test-framework.rst

Lines changed: 15 additions & 14 deletions
@@ -1,6 +1,7 @@
-***********************
+#######################
 SCons Testing Framework
-***********************
+#######################
+
 .. contents::
    :local:

@@ -67,7 +68,7 @@ End-to-end tests are by their nature harder to debug. For the unit
 tests, you're running a test program directly, so you can drop straight
 into the Python debugger by calling ``runtest.py`` with the ``-d / --debug``
 option and setting breakpoints to help examine the internal state as
-the test is running. The e2e tests are each mini SCons projects execected
+the test is running. The e2e tests are each mini SCons projects executed
 by an instance of scons in a subprocess, and the Python debugger isn't
 particularly useful in this context.
 There's a separate section of this document on that topic: see `Debugging
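
For example, a single unit-test script can be run under the debugger like this (the test path here is only an illustration, not taken from this commit)::

    $ python runtest.py -d SCons/EnvironmentTests.py
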
@@ -190,7 +191,7 @@ that path-component in the testing directory.
 The use of an ephemeral test directory means that you can't simply change
 into a directory to debug after a test has gone wrong.
 For a way around this, check out the ``PRESERVE`` environment variable.
-It can be seen in action in `How to convert old tests to use fixures`_ below.
+It can be seen in action in `How to convert old tests to use fixtures`_ below.

 Not running tests
 =================
@@ -427,8 +428,8 @@ and you can also visit the SCons Tools
 Index at https://github.com/SCons/scons/wiki/ToolsIndex for a complete
 list of available Tools, though not all may have tests yet.

-How to convert old tests to use fixures
----------------------------------------
+How to convert old tests to use fixtures
+----------------------------------------

 Tests using the inline ``TestSCons.write()`` method can fairly easily be
 converted to the fixture based approach. For this, we need to get at the
@@ -463,7 +464,7 @@ the optional second argument (or the keyword arg ``dstfile``) to assign
 a name to the file being copied. For example, some tests need to
 write multiple ``SConstruct`` files across the full run.
 These files can be given different names in the source (perhaps using a
-sufffix to distinguish them), and then be sucessively copied to the
+suffix to distinguish them), and then be successively copied to the
 final name as needed::

     test.file_fixture('fixture/SConstruct.part1', 'SConstruct')
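
A rough sketch of that successive-copy pattern, assuming hypothetical ``fixture/SConstruct.part1`` and ``fixture/SConstruct.part2`` files and two build phases::

    import TestSCons

    test = TestSCons.TestSCons()

    # Phase 1: install the first variant as SConstruct and build.
    test.file_fixture('fixture/SConstruct.part1', 'SConstruct')
    test.run()

    # Phase 2: overwrite SConstruct with the second variant and build again.
    test.file_fixture('fixture/SConstruct.part2', 'SConstruct')
    test.run()

    test.pass_test()
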
@@ -499,8 +500,8 @@ kind of usage that does not lend itself easily to a fixture::
 Here the value of ``_python_`` from the test program is
 pasted in via f-string formatting. A fixture would be hard to use
 here because we don't know the value of ``_python_`` until runtime
-(also note that as it will be a full pathname, it's entered as a
-Python rawstring to avoid interpretation problems on Windows,
+(also note that as it will be an absolute pathname, it's entered using
+Python raw string notation to avoid interpretation problems on Windows,
 where the path separator is a backslash).

 The other files created in this test may still be candidates for
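
A minimal sketch of that kind of inline write, with an invented command line just to show where ``_python_`` and the raw string come in::

    import TestSCons

    _python_ = TestSCons._python_
    test = TestSCons.TestSCons()

    # _python_ is an absolute pathname; inside the generated SConstruct it is
    # wrapped in a raw string so Windows backslashes aren't treated as escapes.
    # (A real test would also create build.py and in.txt.)
    test.write('SConstruct', f"""\
    env = Environment()
    env.Command('out.txt', 'in.txt', r'{_python_} build.py $TARGET $SOURCES')
    """)
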
@@ -526,7 +527,7 @@ result doesn't match).

 Even more irritatingly, added text can cause other tests to fail and
 obscure the error you're looking for. Say you have three different
-tests in a script excercising different code paths for the same feature,
+tests in a script exercising different code paths for the same feature,
 and the third one is unexpectedly failing. You add some debug prints to
 the affected part of scons, and now the first test of the three starts
 failing, aborting the test run before it even gets to the third test -
@@ -554,7 +555,7 @@ to a file instead, so they don't interrupt the test expectations.
 Or write directly to a trace file of your choosing.

 Part of the technique discussed in the section
-`How to Convert Old Tests to Use Fixures`_ can also be helpful
+`How to convert old tests to use fixtures`_ can also be helpful
 for debugging purposes. If you have a failing test, try::

     $ PRESERVE=1 python runtest.py test/failing-test.py
@@ -570,7 +571,7 @@ There are related variables ``PRESERVE_PASS``, ``PRESERVE_FAIL`` and
 was the indicated one, which is helpful if you're trying to work with
 multiple tests showing an unusual result.

-From a Windows ``cmd`` shell, you will have to set the envronment
+From a Windows ``cmd`` shell, you will have to set the environment
 variable first, it doesn't work on a single line like the example above for
 POSIX-style shells.
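
In a ``cmd`` window that would look something like::

    C:\> set PRESERVE=1
    C:\> python runtest.py test/failing-test.py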

@@ -625,7 +626,7 @@ Avoiding tests based on tool existence

 For many tests, if the tool being tested is backed by an external program
 which is not installed on the machine under test, it may not be worth
-proceeding with the test. For example, it's hard to test complilng code with
+proceeding with the test. For example, it's hard to test compiling code with
 a C compiler if no C compiler exists. In this case, the test should be
 skipped.
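
One way to implement such a skip, sketched here with ``swig`` standing in for whichever external program the test needs, uses the framework's ``where_is`` and ``skip_test`` helpers::

    import TestSCons

    test = TestSCons.TestSCons()

    # Skip (rather than fail) when the external tool isn't installed.
    swig = test.where_is('swig')
    if not swig:
        test.skip_test("Could not find 'swig'; skipping test.\n")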

@@ -715,7 +716,7 @@ E2E-specific Suggestions:
   ahead and calling the external tool.
 * If using an external tool, be prepared to skip the test if it is unavailable.
 * Do not combine tests that need an external tool with ones that
-  do not - divide these into separate test files. There is no concept
+  do not - split these into separate test files. There is no concept
   of partial skip for e2e tests, so if you successfully complete seven
   of eight tests, and then come to a conditional "skip if tool missing"
   or "skip if on Windows", and that branch is taken, then the
