
Commit b54e3a8

cosmetic fixes in README.md
1 parent e299bf4 commit b54e3a8


1 file changed (+21 lines, -13 lines)


README.md

Lines changed: 21 additions & 13 deletions
@@ -28,21 +28,23 @@ In this demo project there is a C library (could also be C++ etc). The library

### Quick Overview of Testing

-Installing google test suite (a unit test framework) -- could have used other test frameworks such as CppUnit or etc.
+There are many different phases of testing. Here are a few of the areas, phrased as questions.

Common Testing "Questions" about a project:
-* Does it run as intended? (is it funcitonally correct)
-* Does it have side effects when running? (are resources tied up such as ports blocked, thread contention?)
-* Are all the possible permutations of execution tested? (code coverage)
-* How much memory or resources are used? (is memmory efficiently used? Is memory freed correctly after use?)
+* Does it run as intended? (Is it functionally correct? Does it do what it is supposed to do?)
+* Does it have side effects when running? (Are resources tied up, such as blocked ports or thread contention? Are other programs or services affected unintentionally?)
+* Are all the possible permutations of execution tested? (Code coverage: is every piece of code - every if-then branch etc. - tested?)
+* How much memory or resources are used? (Is memory used efficiently? Is it freed correctly after use? When the program is complete, does it leave things intact?)
* Does it exit gracefully? (are any resources that were requested released before the code exits?)


-
### Unit Testing
+
Unit Testing is the practice of writing small tests to check that a piece of code, typically a full module or library, passes a set of tests showing it runs as intended. Simple unit tests are written right after writing a function. We then make a small program (the unit test program) which calls our new function with as many different example parameters as we think are appropriate to make sure it works correctly. If the results returned match the expected results, we can say the function passes the test. If the results for a given set of parameters don't agree, an assertion fires (usually via a special ASSERT-type macro) which records the failure and lets the test program attempt to keep running the remaining tests. The goal is to craft a set of these tests which exercises all the possible paths of execution in our code and passes every test.

-Note that its not the goal to create a test that passes every possible permutation of the input parameters - as this could be an impossibly large number or variations even for just a few parameters. This idea of testing all the possible paths of exeuction is called code coverage. Testing code coverage is done with tools which see if the test program has successfully "challenged" the target library code by examing whether each execution path (or line of code) has been run. For example if there is a function like this:
+Note that it is not the goal to create a test that covers every possible permutation of the input parameters - that could be an impossibly large number of variations even for just a few parameters. The idea of testing all the possible paths of execution is called code coverage. Code coverage is measured with tools which check whether the test program has successfully "challenged" the target library code by examining whether each execution path (or line of code) has been run.
+
+For example, if there is a function like this:

```C
int add5ifGreaterThan2 (int a) {
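// Editor's sketch: the diff truncates the function body at this point. Judging
// only from the function name and the assertions shown in the next hunk
// (add5ifGreaterThan2(1) == 1 and add5ifGreaterThan2(3) == 8), a plausible body
// might look like the following; the actual code in the repository may differ.
    if (a > 2) {
        return a + 5;   // values greater than 2 get 5 added
    }
    return a;           // all other values are returned unchanged
}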
@@ -68,12 +70,12 @@ We do this with test code such as this:

```C
//code in test program ...
-ASSERT (add5ifGreaterThan2(1) == 1) // supplies value of 'a' that tests the if (a<2) case
-ASSERT (add5ifGreaterThan2(3) == 8) // supplies value of 'a' that tests the if (a>2) case
+ASSERT (add5ifGreaterThan2(1) == 1); // supplies a value of 'a' that tests the if (a<2) case and checks the result
+ASSERT (add5ifGreaterThan2(3) == 8); // supplies a value of 'a' that tests the if (a>2) case and checks the result

```

-Of course this example is very simple but it gives a general idea of how the parameters need to be chosen to make sure both sides of the if clause in the example are run by the test program.
+Of course this example is very simple, but it gives a general idea of how the parameters need to be chosen to make sure both sides of the if clause in the example are run by the test program. The ASSERT macro checks whether the result is logically true. If it is not, it triggers an error process. Depending on the testing framework used, different types of logging and output can be examined when a statement fails.


#### More info
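The ASSERT behaviour described in the hunk above - check a condition, record the failure, and keep going so later tests still run - can be sketched in plain C. This is not Google Test's macro and not the one used in this repository's example code; it is only a hypothetical minimal version to show the mechanism.

```C
#include <stdio.h>

/* Hypothetical minimal ASSERT-style macro (not gtest's): if the condition is
   false, log the failing expression and its location, count the failure, and
   keep running so the remaining tests still execute. */
static int failures = 0;

#define ASSERT(cond)                                                      \
    do {                                                                  \
        if (!(cond)) {                                                    \
            printf("FAILED: %s (%s:%d)\n", #cond, __FILE__, __LINE__);    \
            failures++;                                                   \
        }                                                                 \
    } while (0)

int main(void) {
    ASSERT(2 + 2 == 4);   /* passes: nothing is printed */
    ASSERT(2 + 2 == 5);   /* fails: logged, but execution continues */
    printf("%d assertion(s) failed\n", failures);
    return failures;      /* non-zero exit code signals failure to a caller */
}
```

Returning the failure count from main() also ties into the CI discussion later in this diff: a zero exit code is what a build service interprets as "tests passed".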
@@ -100,6 +102,8 @@ We'll be using Google Test (called gtest) here so first we need to install it.
Here is the link to the project source:
[Google Test](https://github.com/google/googletest)

+Examples here are built using Ubuntu Linux, but should apply to most other operating systems.
+
On Ubuntu Linux you can install gtest using this command. If you are developing on another system, refer to the documentation link for install procedures. Other than installing, all of the commands and test procedures we'll be using later will be the same (whether Windows / MacOS / POSIX / Linux).


@@ -124,8 +128,7 @@ sudo ln -s /usr/lib/libgtest_main.a /usr/local/lib/gtest/libgtest_main.a

```

-You can read more about the Google Test project here:
-[Test Primer.md](https://github.com/google/googletest/blob/master/googletest/docs/Primer.md)
+You can read more about the Google Test project here: [Testing Primer](https://github.com/google/googletest/blob/master/googletest/docs/Primer.md)


===========================
@@ -136,10 +139,15 @@ You can read more about the Google Test project here:
The lib.h / lib.c files are broken out as examples of testing an embedded library. Most of the projects I work on are for embedded systems, so I wanted a way to get a build badge for these embedded projects. Since many of those compilers and environments are not on Linux, I wanted just a simple abstraction of how the Travis build project works without all the complexities of a "real" project.


-## How it works
+## Testing vs Continuous Integration

In this demo project there is a C library (could also be C++ etc). The library code is just a few demo functions which are in the lib.h and lib.c files. They don't really do anything but allow for a simple abstraction of what is necessary to build a larger project.

+
+Once you've written unit tests and gotten your code to pass the local test suite, the next step starts. How does an *outsider* know your code passes tests? This is where continuous integration (CI) comes in. CI uses services (such as Travis-CI, Circle-CI, Jenkins and many others) to automatically run your test suites and then report the result. When a CI service runs your test suite it can be configured to accept or reject your code based on the tests passing. This in turn can be used to automatically deploy your code, which is called Continuous Deployment (CD) or pipelines. CD and pipelines are beyond the scope of this repo and example.
+
+## Using Travis-CI as an example of a build badge and CI
+
Travis-CI looks in the .travis.yml file (note: that is dot travis dot yml) to see how to run the code. In this case it first calls make, which compiles lib.c and example.c into lib.o and example.o and then links them to produce the final executable called example.out. If you look inside the file example.c you will see there are a few hand-written test cases. They are not meant to be a complete example of how to write test cases, just a simple view of how the tests will be run in general. The main() function calls the local function run_tests(), which in turn calls each individual test case. Rather than link in a larger test framework such as CppUnit, there is a trivial set of test functions, one for each function in the lib.c library. If run_tests() is able to run all the tests successfully it returns to main() with a value of S_OK; otherwise it returns some failure code. This value is then returned from the main() program in example.out on exit.

Travis-CI then runs example.out and looks for the exit code from the main() function. Being a POSIX-style system, an exit code of zero from example.out is considered passing, and hence Travis-CI will declare the build passing. If a non-zero value is returned, Travis will declare the build failing. So to sum up, the primary means by which Travis knows whether the test suite passes is the exit code from the test suite executable, which in our case is example.out.
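To make the exit-code convention above concrete, here is a hedged sketch of that pattern. It is not the repository's actual example.c (the real lib functions and status codes are not shown in this diff); the test functions and the S_OK value of 0 are assumptions for illustration only.

```C
#include <stdio.h>

/* Hypothetical status codes; the repository's S_OK and failure values may differ. */
#define S_OK   0
#define E_FAIL 1

/* Hypothetical stand-ins for the per-function tests in example.c. */
static int test_feature_one(void) { return (1 + 1 == 2) ? S_OK : E_FAIL; }
static int test_feature_two(void) { return (2 * 2 == 4) ? S_OK : E_FAIL; }

/* run_tests() calls each test case and reports failure if any test fails. */
static int run_tests(void) {
    if (test_feature_one() != S_OK) return E_FAIL;
    if (test_feature_two() != S_OK) return E_FAIL;
    return S_OK;
}

/* main() passes run_tests()'s result out as the process exit code:
   zero means passing, non-zero means failing -- which is exactly what
   Travis-CI (or any other CI service) checks after running the executable. */
int main(void) {
    int result = run_tests();
    printf(result == S_OK ? "all tests passed\n" : "tests failed\n");
    return result;
}
```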
