---
layout: page
title: Testing
subtitle: Running Tests with Nose
minutes: 10
---
> ## Learning Objectives {.objectives}
>
> - Understand how to run a test suite using the nose framework
> - Understand how to read the output of a nose test suite


We created a suite of tests for our `mean` function, but it was annoying to run
them one at a time. It would be a lot better if there were some way to run them
all at once, reporting which tests fail and which succeed.

Thankfully, such a tool exists. Recall our tests:

~~~ {.python}
from mean import *

def test_ints():
    num_list = [1, 2, 3, 4, 5]
    obs = mean(num_list)
    exp = 3
    assert obs == exp

def test_zero():
    num_list = [0, 2, 4, 6]
    obs = mean(num_list)
    exp = 3
    assert obs == exp

def test_double():
    # this one will fail in Python 2
    num_list = [1, 2, 3, 4]
    obs = mean(num_list)
    exp = 2.5
    assert obs == exp

def test_long():
    big = 100000000
    obs = mean(range(1, big))
    exp = big / 2.0
    assert obs == exp

def test_complex():
    # given that complex numbers are an unordered field,
    # the arithmetic mean of complex numbers is meaningless
    num_list = [2 + 3j, 3 + 4j, -32 - 2j]
    obs = mean(num_list)
    exp = NotImplemented
    assert obs == exp
~~~

Once these tests are written in a file called `test_mean.py`, the command
`nosetests` can be run from the directory containing the tests:

~~~ {.bash}
$ nosetests
~~~
~~~ {.output}
....F
======================================================================
FAIL: test_mean.test_complex
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/khuff/anaconda/envs/py3k/lib/python3.3/site-packages/nose/case.py", line 198, in runTest
    self.test(*self.arg)
  File "/Users/khuff/repos/2015-06-04-berkeley/testing/test_mean.py", line 34, in test_complex
    assert obs == exp
AssertionError

----------------------------------------------------------------------
Ran 5 tests in 3.746s

FAILED (failures=1)
~~~

In the above case, the Python nose package "sniffed out" the tests in the
directory and ran them together, producing a single report for all of the files
and functions whose names match the pattern `[Tt]est[-_]*`.


The major boon a testing framework provides is exactly that: a utility to find
and run the tests automatically. With `nose`, this is the command-line tool
called _nosetests_. When _nosetests_ is run, it searches all the directories
whose names start or end with the word _test_, finds all of the Python modules
in those directories whose names start or end with _test_, imports them, and
runs all of the functions and classes whose names start or end with _test_. In
fact, `nose` looks for any name that matches the regular expression
`'(?:^|[\\b_\\.-])[Tt]est'`. This automatic registration of test code saves a
great deal of human time and allows us to focus on what is important: writing
more tests.

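As a quick illustration of that discovery rule, we can try nose's default test-name pattern (written here as a Python raw string) against a few candidate names; the names themselves are just made up for this demonstration:

```python
import re

# nose's default test-discovery pattern
pattern = re.compile(r'(?:^|[\b_\.-])[Tt]est')

for name in ['test_mean', 'mean_test', 'TestMean', 'protest', 'attestation']:
    # "test" must appear at the start of the name, or right after
    # one of the separator characters in the character class
    print(name, '->', 'collected' if pattern.search(name) else 'ignored')
```

Names like `test_mean`, `mean_test`, and `TestMean` are collected, while `protest` and `attestation` are ignored because their embedded "test" does not follow a separator.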
When you run _nosetests_, it prints a dot (`.`) on the screen for every test
that passes, an `F` for every test that fails, and an `E` for every test where
there was an unexpected error. In rarer situations you may also see an `S`,
indicating a skipped test (because the test is not applicable on your system),
or a `K` for a known failure (because the developers could not fix it
promptly). After the dots, _nosetests_ prints summary information.


> ## Fix The Failing Code {.challenge}
>
> Without changing the tests, alter the `mean.py` file from the previous
> section until it passes. When it passes, _nosetests_ will produce results
> like the following:
>
> ~~~ {.bash}
> $ nosetests
> ~~~
> ~~~ {.output}
> .....
>
> Ran 5 tests in 3.746s
>
> OK
> ~~~

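For reference, one possible `mean.py` that would satisfy all five tests is sketched below. This is just one solution under the assumptions the tests encode (true division, and `NotImplemented` for complex input), not the only correct answer:

```python
def mean(num_list):
    # test_complex expects NotImplemented for complex input
    if any(isinstance(n, complex) for n in num_list):
        return NotImplemented
    # float() forces true division, so test_double passes
    # even under Python 2's integer division
    return sum(num_list) / float(len(num_list))
```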
As we write more code, we will write more tests, and _nosetests_ will produce
more dots. Each passing test is a small, satisfying reward for having written
quality scientific software. Now that you know how to write tests, let's go
into what can go wrong.