Sunday, December 11, 2016

Building your software and unit testing...

When unit testing, you start by adding a new unit-test executable target to your project. Then you run it whenever you find the time. When it's green, your code is fine and you commit or refactor; when it's red, you fix the problems until it's green again. It's a very simple idea, yet it's wrong.

It's the same problem you get when your project has warnings. Warnings don't work. They fail because with an incremental compile - which every nontrivial project uses - you do not get warnings from the files that are not recompiled. The build output is not stable: a warning only shows up when the build happens to recompile one of those files, or when you do a full build. Because of this you rarely see your warnings even when you have them, so people either ignore warnings altogether or turn them into errors.

The same thing can be done with your unit tests! There's no reason your build output should not depend on your unit tests. Strictly speaking, you don't want it to depend on the unit *test* itself, but on its *result*. That can be arranged: running a unit test (with its output redirected into a file) and coalescing the outputs are just two more steps in the build sequence. Make your actual output folder depend on the unit test results existing - ie, on the build step that creates them succeeding - and you will always run your unit tests as part of the build.
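As a minimal sketch of those extra build steps - all paths and names here (out/, unit_tests, the .passed marker) are made up for illustration, and the test binary is a trivial stand-in so the script runs on its own:

```shell
#!/bin/sh
set -e

mkdir -p out
# Stand-in for the compile step: in a real build this would be your
# actual unit-test executable.
printf '#!/bin/sh\nexit 0\n' > out/unit_tests
chmod +x out/unit_tests

# Run the unit tests with their output redirected into a file. The
# marker file is only created when the run succeeds, so its existence
# *is* the test result the rest of the build can depend on.
if out/unit_tests > out/unit_tests.log 2>&1; then
    touch out/unit_tests.passed
else
    cat out/unit_tests.log
    exit 1
fi

# The final output is only produced when the marker exists - the
# packaging step depends on the test result, not on the test binary.
if [ -f out/unit_tests.passed ]; then
    touch out/app.final
fi
echo "build finished"
```

In a real build system these would be two rules (run-tests producing the marker, package depending on the marker) rather than one script, but the dependency structure is the same.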

This is awesome in ways you won't expect. A red test now implies a red build. It interacts with build dashboards for free: a red test makes a red build, which makes a red icon on the dashboard. With the appropriate pre-commit hooks, you cannot commit broken code. People who only try to "compile" the software are also testing it, so "it compiles for me" actually *is* good enough to commit with.
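A pre-commit hook then needs no test-specific logic at all, because the build already runs the tests. A sketch of such a hook (it would live in .git/hooks/pre-commit; BUILD_CMD is a hypothetical stand-in for your project's real build command):

```shell
#!/bin/sh
# Hypothetical pre-commit hook: with tests wired into the build,
# "does it build" is the entire check.
BUILD_CMD="${BUILD_CMD:-true}"   # 'true' is a stand-in so the sketch runs

if ! $BUILD_CMD; then
    echo "build (and therefore the unit tests) failed; commit aborted" >&2
    exit 1
fi
echo "commit allowed"
```

Git aborts the commit whenever the hook exits nonzero, so a red build blocks the commit automatically.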


And it allows you to only re-run the tests that have changed. After your incremental build, you now have incremental unit testing. And incremental system testing. The better your tests are split up (ie, the fewer dependencies they have), the fewer tests will run for a given change. In many cases you will be testing nearly nothing: the compile step already showed that the test executable hasn't changed, and if the executable doesn't change, why would its result? No reason to run it again.
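The incremental part falls out of the same stamp-file trick: a test only needs to re-run when its executable is newer than the stamp recording its last pass. A sketch, with hypothetical paths and a trivial stand-in binary so it runs standalone:

```shell
#!/bin/sh
set -e
mkdir -p out
printf '#!/bin/sh\nexit 0\n' > out/unit_tests   # stand-in test binary
chmod +x out/unit_tests

run_if_stale() {
    exe="$1"
    stamp="$exe.passed"
    # Make-style dependency check: the stamp is only trusted while it
    # is newer than the executable it certifies.
    if [ "$stamp" -nt "$exe" ]; then
        echo "up to date, skipping: $exe"
    else
        "$exe" > "$exe.log" 2>&1   # a failure aborts here, before the stamp
        touch "$stamp"
        echo "ran: $exe"
    fi
}

run_if_stale out/unit_tests
```

This is exactly what make (or any dependency-driven build tool) already does for object files; the only new idea is pointing it at test results too.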

So next time you set up unit tests, make your build outputs depend on them passing.
