This repo covers setting up a basic testing suite with GitHub badges for a C/C++ library. It's not meant to be a deep tutorial on testing but just covers the basics of setting up unit tests, coverage tests, and continuous integration (in this case using Travis-CI). The repo doesn't have a lot of code - there is a simple library which is tested for coverage and integration.
### Motivation
I just wanted to make a small standalone test project to see tools and workflow for C (or C++) language testing.
copyright (C) <2016 and onward> <M. A. Chatterjee> <deftio [at] deftio [dot] com>
version 1.0.1 (updated for travis-ci.com transition) M. A. Chatterjee
### Quick Overview of Testing
There are many different phases of testing. Here are a few areas, phrased as questions.
* Does it run as intended? (is it functionally correct; does it do what it's supposed to do?)
* Does it have side effects when running? (are resources tied up, such as blocked ports or thread contention? Are other programs or services affected unintentionally?)
* Are all the possible permutations of execution tested? (code coverage: is every piece of code - every if-then statement, etc. - tested?)
* How much memory or resources are used? (is memory used efficiently? Is memory freed correctly after use? When the program is complete, does it leave things intact?)
* Does it exit gracefully? (are any requested resources released before the code exits?)
### Unit Testing
Unit Testing is the practice of writing small tests to check that a piece of code, typically a full module or library, passes a set of tests to make sure it runs as intended. Simple unit tests are written right after writing a function. We then make a small program (the unit test program) which calls our new function with as many different example parameters as we think are appropriate to make sure it works correctly. If the results returned match the expected results, we can say the function passes the test. If the results for a given set of parameters don't agree, we call an assertion (usually via a special ASSERT-type macro) which records the failure and attempts to keep running the remaining tests in our test program. The goal is to craft a set of these tests which exercises all the possible paths of execution in our code and passes all the tests.
Note that it's not the goal to create a test that passes every possible permutation of the input parameters - this could be an impossibly large number of variations even for just a few parameters. Instead, the idea of testing all the possible paths of execution is called code coverage. Code coverage is measured with tools which check whether the test program has successfully "challenged" the target library code by examining whether each execution path (or line of code) has been run.
For example if there is a function like this:
```C
int add5ifGreaterThan2 (int a) {
    int r;
    if (a<2)
        r = a;
    else
        r = a+5;
    return r;
}
```
Our test program for add5ifGreaterThan2() needs to supply values of a that are both less than and greater than 2 so that both paths of the if statement
```C
if (a<2)
```
are tested.
We do this with test code such as this:
```C
//code in test program ...
ASSERT (add5ifGreaterThan2(1) == 1); // supplies a value of 'a' that exercises the if (a<2) case and checks the result
ASSERT (add5ifGreaterThan2(3) == 8); // supplies a value of 'a' that exercises the else (a>=2) case and checks the result
```
Of course this example is very simple, but it gives a general idea of how the parameters need to be chosen to make sure both sides of the if clause are run by the test program. The ASSERT macro checks whether its argument is logically true; if it is not, it triggers an error process. Depending on the testing framework used, different types of logging and output can be examined when an assertion fails.
#### More info
Here is a link to the Wikipedia article for more depth on unit testing practice and history: [Unit testing](https://en.wikipedia.org/wiki/Unit_testing).
### Frameworks
To make Unit Testing easier to automate, unit testing frameworks have been written to help test results from function calls, gather statistics about passing/failing test cases, and report the results. Unit testing frameworks are available in most languages, and many have names like xUnit (for example JUnit for Java or CppUnit for C++).
We'll be using Google Test (called gtest) here, so first we need to install it.
Examples here are built using Ubuntu Linux, but should apply to most other operating systems.
On Ubuntu Linux you can install gtest using this command. If you are developing on another system, refer to the documentation link for install procedures. Other than installing, all of the commands and test procedures we'll be using later will be the same (whether Windows / MacOS / POSIX / Linux).
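On Ubuntu/Debian the package is typically installed like this (the package name `libgtest-dev` is an assumption based on the standard Ubuntu repositories; check your distribution's package index if it differs):

```shell
# install the Google Test development package (Ubuntu/Debian)
sudo apt-get install libgtest-dev
```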
You can read more about the Google Test project here: [Testing Primer](https://github.com/google/googletest/blob/master/googletest/docs/Primer.md)
===========================
The lib.h / lib.c files are broken out as an example of testing an embedded library. Most of the projects I work on are for embedded systems, so I wanted a way to get a build badge for these embedded projects. Since many of those compilers and environments are not on Linux, I wanted just a simple abstraction of how the Travis build project works without all the complexities of a "real" project.
## Testing vs Continuous Integration
In this demo project there is a C library (it could also be C++, etc.). The library code is just a few demo functions in the lib.h and lib.c files. They don't really do anything but allow for a simple abstraction of what is necessary to build a larger project.
Once you've written unit tests and gotten your code to pass the local test suite, the next step begins: how does an *outsider* know your code passes tests? This is where continuous integration (CI) comes in. CI uses services (such as Travis-CI, Circle-CI, Jenkins and many others) to automatically run your test suites and then report the result. When a CI service runs your test suite it can be configured to accept or reject your code based on the tests passing. This in turn can be used to automatically deploy your code, which is called Continuous Deployment (CD) or pipelines. CD and pipelines are beyond the scope of this repo and example.
## Using Travis-CI as an example of build-badge and CI
Travis-CI looks in the .travis.yml file (note that is dot travis dot yml) to see how to run the code. In this case it first calls make, which compiles lib.c and example.c into lib.o and example.o and then links them to produce the final executable called example.out. If you look inside the file example.c you will see there are a few hand-written test cases. They are not meant to be a complete example of how to write test cases, just a simple view of how the tests will be run in general. The main() function calls the local function run_tests(), which in turn calls each individual test case. Rather than link in a larger test framework such as CppUnit, there is a trivial set of test functions, one for each function in the lib.c library. If run_tests() is able to run all the tests successfully it returns to main() with a value of S_OK; otherwise it returns a failure code. This value is then returned from the main() program in example.out on exit.
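As an illustration, a minimal .travis.yml for a flow like this might look like the sketch below. This is an assumption of the general shape, not the repo's actual file; `make` and `example.out` are the names used in the text:

```yaml
language: c
script:
  - make          # builds lib.o and example.o, links example.out
  - ./example.out # exit code 0 => build passes, non-zero => build fails
```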
Travis-CI then runs example.out and looks at the exit code from the main() function. Being a POSIX-style system, an exit code of zero from example.out is considered passing, and hence Travis-CI will declare the build passing. If a non-zero value is returned, Travis will declare the build failing. To sum up, the primary means by which Travis knows whether the test suite passes is the exit code from the test suite executable, which in our case is example.out.
## Code Coverage
Code coverage is achieved using gcov from the gcc tool suite. The example.out test program is compiled with the flags -ftest-coverage -fprofile-arcs. To see the code coverage, run gcov:
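For example, the steps might look like the sketch below, assuming the build described above (the exact compile line may differ from the repo's Makefile):

```shell
# compile with coverage instrumentation (flags from the text above)
gcc -ftest-coverage -fprofile-arcs -o example.out lib.c example.c
# run the test program; this writes the .gcda coverage data files
./example.out
# generate the per-line coverage report (produces lib.c.gcov)
gcov lib.c
```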