CPUnit project page

What is CPUnit?

CPUnit is a unit test framework for C++ applications and programs. It is very much inspired by the elegance of JUnit, although, since C++ is quite a different language from Java, both the implementation and the way tests are written differ. (For instance, tests in CPUnit are never encapsulated in a class, but rather in a namespace.) The similarity lies in the ease of use, the minimal amount of coding required to write good tests, and the user-friendly command line options for test execution.

Contents:

  1. Download CPUnit (redirects to the CPUnit project site)
  2. What is CPUnit?
  3. Motivation and main features
  4. A minimal example
  5. Introducing suites
  6. Using fixtures
  7. Testing for expected exceptions
  8. Modes of execution
  9. Linking sub-projects together
  10. Using with Doxygen
  11. Available asserts
  12. Handling custom exceptions
  13. Tricks for more compact code
  14. Doxygen documentation
  15. User documentation for CPUnit 0.7 (beta) and older

CPUnit project site

Motivation and main features

I was looking for a good xUnit-based test framework for my C++ code, and after ten years of Java programming, I had been quite spoiled by JUnit. After having browsed the web for some time, I realized two things:

  1. Due to the lack of flexibility in C++ compared to Java, you cannot "have it all" (as you can in JUnit).
  2. None of the frameworks I looked at (and I shall not claim to have looked at all of them) satisfied my primary list of requirements.

So I wrote my own. And afterwards, since I am a humble man, I found it to be so good that I wanted to share it with someone else.

Main features

The main requirements I had when I set out are listed below in prioritized order.
  1. Minimal amount of code required to write a test
  2. Automatic test registration
  3. Support for suites/grouping of tests
  4. Support for fixtures
  5. Scalable to large projects with many modules
  6. Possibility to run subsets of the tests using wildcards

Limitations

The test framework is a C++ unit test framework, and is thus made to work with C++ compilers. It makes use of a few "exotic" mechanisms.

Also, since there are no "good" time functions in ANSI C++, the default timing of the tests has second resolution. However, if your platform complies with the POSIX standard, the code should pick this up, and times will be presented with up to millisecond resolution. (If it does not, and you have a POSIX compliant platform, pass -D _POSIX_TIMERS to the build script.) Currently (version 0.6), there is no special usage of the Windows time APIs when on Windows, but this will follow soon.

If you want to make an adaptation to the platform of your choice, the key is to modify the StopWatch class. Have a look in "src/cpunit_StopWatch.cpp", and I am pretty sure you will have it your way in a matter of minutes.


A minimal example

The below code snippet demonstrates the least amount of code required to write an executable test in CPUnit:

        #include <cpunit>
        
        using namespace cpunit;

        CPUNIT_GTEST(example_test) {
           const int expected = 33;
           const int actual   = 33;
           assert_equals("The two integers should be equal.", expected, actual);
        }
      
(The using namespace cpunit; directive avoids having to prefix all the assert_* functions with their namespace, cpunit::.)
Let the snippet be located in a file named "TestDemo.cpp", and for the sake of this example, assume that the CPUnit headers are installed under /include and the library under /lib. In a prompt, you now have to type
        >g++ -I/include -L/lib -o testExecutable TestDemo.cpp -lCPUnit
        >./testExecutable
      
and the output will show up as:
        >g++ -I/include -L/lib -o testExecutable TestDemo.cpp -lCPUnit
        >./testExecutable
        .
        
	Time: 0.010

        OK (1 tests)
        >
      
You can download the code for this example here:
TestDemo.cpp


Introducing suites

If you want to write more than just a few tests, it is handy to group the tests into suites. In CPUnit, suites are equivalent to namespaces of tests, and modifying the above example into using suites turns out as follows:

        #include <cpunit>
        
        namespace DemoTest {

           using namespace cpunit;

           CPUNIT_TEST(DemoTest, example_test) {
              const int i1 = 33;
              const int i2 = i1;
              assert_equals("The two integers should be equal.", i1, i2);
           }

           CPUNIT_TEST(DemoTest, example_test_two) {
              const int i1 = 33;
              const int i2 = 34;
              assert_true("The two integers should be different.", i1 != i2);
           }
        }
      
Notice that, when inside a suite, we use a slightly different macro for the tests. In particular, the suite (namespace) name is a parameter to the macro.
Although you can name your suites almost anything, it is advisable to use names reflecting what the suite aims to test. Compiling and running works as before:
        >g++ -I/include -L/lib -o testExecutable TestDemo.cpp -lCPUnit
        >./testExecutable
        ..
        
	Time: 0.018

        OK (2 tests)
        > 
      
You can download the code for this example here:
DemoTest.cpp


Using fixtures

It is not uncommon that several tests in one suite operate on the same type of test data. To reduce the amount of code in the tests, this common code can be placed directly in the suite (namespace). In the following example, we shall write a test to verify that std::sort works for integers:

        #include <cpunit>
        #include <algorithm>
        #include <vector>
        
        namespace SortTest {
        
           using namespace cpunit;

           const int length = 3;
           const int data[length] = {3, 1, 2};
           std::vector<int> input(data, data + length);

           CPUNIT_TEST(SortTest, test_sort) {
              // Construct the expected result
              const int expected_data[length] = {1, 2, 3};
              std::vector<int> expected(expected_data, expected_data + length);

              // Sort and then test the actual result
              std::sort(input.begin(), input.end());
              assert_equals("std::sort failed.", expected, input);
           }
	   
           // Construct a reverse comparator for the next test.
           struct RevCmp {
              bool operator () (int i1, int i2) {
                 return i2 < i1;
              }
           };

           CPUNIT_TEST(SortTest, test_reverse_sort) {
              // Construct the expected result
              const int expected_data[length] = {2, 3, 1};
              std::vector<int> expected(expected_data, expected_data + length);

              // Sort and then test the actual result
              std::sort(input.begin(), input.end(), RevCmp());
              assert_equals("std::sort reversely failed.", expected, input);
           }
        }
   
If we are lucky, this will work out all right. There is a problem, however: the tests modify common input data, and in a worse setting, this might cause different results depending on the order in which the tests are executed. (In CPUnit, there is no guarantee with respect to the order of execution of tests.)
The solution is to use fixtures. Fixtures consist of a set of data together with two functions, a set-up function and a tear-down function, which are run before and after each test, respectively. In CPUnit, there can be one fixture for each suite:
        #include <cpunit>
        #include <algorithm>
        #include <vector>
        
        namespace SortTest {
        
           using namespace cpunit;

           const int length = 3;
           const int data[length] = {3, 1, 2};
           std::vector<int> input;
          
           CPUNIT_SET_UP(SortTest) {
              input = std::vector<int>(data, data + length);
           }

           CPUNIT_TEAR_DOWN(SortTest) {
              input.clear();
           }

           CPUNIT_TEST(SortTest, test_sort) {
              // Construct the expected result
              const int expected_data[length] = {1, 2, 3};
              std::vector<int> expected(expected_data, expected_data + length);

              // Sort and then test the actual result
              std::sort(input.begin(), input.end());
              assert_equals("std::sort failed.", expected, input);
           }
	   
           // Construct a reverse comparator for the next test.
           struct RevCmp {
              bool operator () (int i1, int i2) {
                 return i2 < i1;
              }
           };

           CPUNIT_TEST(SortTest, test_reverse_sort) {
              // Construct the expected result
              const int expected_data[length] = {2, 3, 1};
              std::vector<int> expected(expected_data, expected_data + length);

              // Sort and then test the actual result
              std::sort(input.begin(), input.end(), RevCmp());
              assert_equals("std::sort reversely failed.", expected, input);
           }
        }
     
Now, each test is guaranteed to execute on the same data every time, regardless of which tests have executed before. I admit that the tear-down function is not strictly required in this concrete case, but I included it for the sake of the example. You will usually only need a tear-down function if you have allocated data on the heap, or if your tests modify global data (possibly used by tests in other suites) that has to be reset.

You can download the code for this example here: SortTest.cpp

Now, compile and run it all together:

        >g++ -I/include -L/lib -o testExecutable *.cpp -lCPUnit
        >./testExecutable
	


Testing for expected exceptions

Sometimes, you want to check that your code really does throw an exception. You can register this type of test using the CPUNIT_TEST_EX and CPUNIT_TEST_EX_ANY macros:

      CPUNIT_TEST_EX_ANY(MyStuffTest, test_that_that_exception_really_comes) {
        throw "The '*_ANY' macro registers a test that succeeds whatever is thrown";
      }

      CPUNIT_TEST_EX(MyStuffTest, test_for_particular_exception, MyException) {
        // This macro registers a test that succeeds only if an exception
        // of the type 'MyException' is thrown.
      }
      
Beware, however, that cpunit::AssertionException is given special treatment: if the test code throws a cpunit::AssertionException, this exception is re-thrown, and the test still fails.


Modes of execution

Running a subset of the tests

If you have a lot of tests and you only want to run a few, you can use wildcards as follows:
      >./testExecutable "SortTest*"
    
This will cause all tests whose fully qualified name (namespace and all) starts with "SortTest" to be executed. You can use multiple wildcards if you like:
      >./testExecutable "SortTest*test*"
    
Or just list the tests you want to run:
      >./testExecutable SortTest::test_reverse_sort SortTest::test_sort
    
Listing tests has the advantage of enforcing the order of execution. When specifying e.g. "SortTest*", all tests matching the pattern will be executed, but there is no guarantee as to the order of execution within that set. However, giving several patterns or test names will cause each resulting set of tests to be executed in the specified order, so in the last example above, "SortTest::test_reverse_sort" will be executed first, followed by "SortTest::test_sort".

If you want to know which tests exist, run

	>./testExecutable -L
      
This will list all the registered tests in alphabetical order. Passing one or more glob-patterns in addition to -L will list all tests matching the glob-patterns.

Verbose and robust mode

By specifying the flag -v (or --verbose) to the test executable, you will have one line printed for each test being executed.
Specifying the flag -a (or --all) will force all tests to be executed, even if there are errors. (Default behaviour is to stop at the first error).

Error message formatting

If you are unhappy with the way errors are reported, you have some flexibility in choosing the formatting.
The format is specified as -f=<format> in a printf-like manner. The default setup is "%p::%n - %m (%t)%N(Registered at %f:%l)", and an example error message then looks like this:
      SortTest::test_reverse_sort - ASSERT EQUALS FAILED - std::sort reversely failed. Expected <[2,3,1]>, was <[3,2,1]>. (0.020s)
      (Registered at SortTest.cpp:31)
    
(The alert reader undoubtedly noticed the erroneous test data in the above example).
Notice that specifying a new error report format with -f only takes effect in robust mode (--all).

More execution options

Specifying "-h" or "--help" on the command line displays all command line options.


Linking sub-projects together

This can be achieved by several means. The procedure below is my way of accomplishing it, and makes use of static linking:

  1. Compile each subproject into an archive as follows:
    	    > g++ -c -I/include *.cpp
    	    > ar -rcs libTests.a *.o
    	    >
    	  
  2. Link everything together into one large executable:
    	    > g++ -o allTests -Wl,-all_load ./sub_proj_1/libTests.a ... ./sub_proj_n/libTests.a -L/lib -lCPUnit
    	    >
    	  
    The option -Wl,-all_load tells the linker to load all object files from all static libraries. (The default behaviour is to load only object files that are explicitly referenced, which fails with CPUnit, since the object files contain self-registration mechanisms with no external references.)


Using with Doxygen

To make sure Doxygen expands the CPUNIT_XXX macros in order to produce proper documentation of your tests, add the following setup in the Doxygen configuration (under preprocessor configuration):

	ENABLE_PREPROCESSING = YES
	MACRO_EXPANSION      = YES
	PREDEFINED           = "CPUNIT_TEST(n,f)='void f()" \
	                       "CPUNIT_GTEST(f)='void f()" \
	                       "CPUNIT_SET_UP(f)='void set_up()" \
	                       "CPUNIT_TEAR_DOWN(n)='void tear_down()" \
	                       "CPUNIT_TEST_EX(n,f,E)='void f()" \
	                       "CPUNIT_TEST_EX_ANY(n,f)='void f()" \
	                       "CPUNIT_GTEST_EX(f,E)='void f()" \
	                       "CPUNIT_GTEST_EX_ANY(f)='void f()"
      
(I admit it looks strange with the unmatched ', but this is what produces correct output on Doxygen 1.7.4).


Available asserts

All assert functions are declared in "cpunit_Assert.hpp":

        
       template<class T>
       void assert_equals(const std::string msg, const T &expected, const T &actual);
       
       template<class T>
       void assert_equals(const T &expected, const T &actual);

       template<class T, class Eq>
       void assert_equals(const std::string msg, const T &expected, const T &actual, const Eq &eq);

       template<class T, class Eq>
       void assert_equals(const T &expected, const T &actual, const Eq &eq);
  
       void assert_equals(const std::string msg, const double expected, const double actual, const double error);
       void assert_equals(const double expected, const double actual, const double error);
  
       void assert_true(const std::string msg, const bool statement);
       void assert_true(const bool statement);

       void assert_false(const std::string msg, const bool statement);
       void assert_false(const bool statement);

       void assert_not_null(const std::string msg, const void *data);
       void assert_not_null(const void *data);

       void assert_null(const std::string msg, const void *data);
       void assert_null(const void *data);
       
       void fail(const std::string msg);
       void fail();
      
Notice in particular the third and fourth assert functions, which take a separate template-based argument, Eq. This must be a function object of the form
	struct MyEq {
	   bool operator()(const T &t1, const T &t2) const {
	      // do your comparison here...
	   }
	};
      
and is the way to perform equality checking on complex objects where == is not sufficient.

Assert macros

In addition to the above assert functions, there are also some simple assert macros available:

	CPUNIT_FAIL     ()
	CPUNIT_FAIL1    ( message )
	CPUNIT_ASSERT   ( predicate )
	CPUNIT_ASSERT1  ( message, predicate )
	CPUNIT_DISPROVE ( predicate )
	CPUNIT_DISPROVE1( message, predicate )
      
The *_ASSERT macros check that the predicate is true, and the *_DISPROVE macros check that the predicate is false. The predicate code (when there is one) and the line number of the statement will be part of the final error message, so there is less need for a message when using these macros. However, all macros have a sibling supporting a message, which can be stream-formatted, such as
	for (int i=0; i<100; ++i) {
          CPUNIT_ASSERT1("Failed for i=" << i, i < 100);
        }
      

Handling custom exceptions

Out of the box, CPUnit only handles its own exceptions and subclasses of std::exception when running in robust mode (in addition to a catch(...) clause). To add explicit handling for other types of thrown objects, do as follows:

  1. Implement a subclass of cpunit::TestRunnerDecorator for handling your own exceptions, using cpunit::RunAllTestRunner as an example.
  2. Register the class in the get_test_runner method in the cpunit::TestExecutionFacade, as indicated by the comments. (You can browse to the code from the doxygen documentation).

Tricks for more compact code

This section lists some more or less hidden features that will help you type fewer characters when defining your tests.

String concatenation

To steer clear of the macro pitfall (and since I do not like to program everything in upper case), I decided to define all asserts as functions. The downside is that producing assert messages (the first argument of the assert functions taking strings) containing variable information is cumbersome. To alleviate this, I added a macro, CPUNIT_STR, which lets you write assert messages like this:

       CPUNIT_GTEST(test_stuff) {
	  for (int i=0; i<30; ++i) {
             assert_true(CPUNIT_STR("Statement " << i << " is not valid."), is_valid(i));
          }
       }
	
Of course, you may redefine this to something shorter in your test if you like.

Macros for scope names

If you have deep name space structures, writing headers like
     CPUNIT_TEST(Level1::Level2::Level3::Level4::Level5::GadgetTest, test_stuff) {
        ...
     }
      
many times in a file is annoying, time-consuming and hard to read, all at the same time. You may, however, replace the scope name (and the test name, for that matter) with a macro:
#define GADGET_TEST Level1::Level2::Level3::Level4::Level5::GadgetTest

     CPUNIT_TEST(GADGET_TEST, test_stuff) {
        ...
     }
      

Good luck!
-db-
