Why slaying mutants is good for measuring the quality of your tests.
Test automation is a well-developed practice these days, but assessing the quality of those tests is not easy. One really big mistake is to rely on coverage alone: you can have 100% (or close to it) coverage and still have really low test quality.
Say you have a beautiful function foo defined as:
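For the sake of the example, a minimal version (the name and body are illustrative; any function with a checkable result would do):

```javascript
// A minimal, illustrative foo -- what matters is that it
// returns a result our test could (but won't) assert on.
function foo(a, b) {
  return a + b;
}
```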
And your test looks like:
If you run the test with your favourite test runner, it will pass! (Obviously, because we passed true to the assert call.) You get the green light. Even sadder, you also get 100% coverage for this function, because we called it in the test, even though we never use the result of that call to determine whether the test passes.
Of course this is a contrived example, to put emphasis on my point above that coverage is not a reliable metric for test quality. (At the bottom of the article I link to a kata revolving around exactly this idea of false coverage, and around fixing bad tests.)
One way to get more confidence in your test suite is to use mutation testing. The way it works is by changing a tiny bit of your code, say replacing a comparison operator with != in one place, then running your test suite against the modified version of your code.
This new version of your code is called a mutant. If all your tests pass on the mutant, your test quality is not good enough, because the suite did not catch the change. In that case the mutant is sometimes called a zombie, because it lives on (it did not get 'killed' by the test suite).
Then you repeat the process, introducing another mutation of the original code to create another mutant (say this time you replace let c = new Foo() by let c = null), run the test suite again, and determine whether this mutant has been killed.
Rinse and repeat a lot, then count how many mutants you killed relative to how many were produced. This ratio is called the mutation score, and it should be as close to 1 as possible. Mutation testing lets you be more confident in the tests you write.
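As a quick sketch, the mutation score is simply killed mutants divided by generated mutants:

```javascript
// Mutation score = killed mutants / generated mutants.
// A score of 1 means the suite caught every mutant.
function mutationScore(killed, generated) {
  return killed / generated;
}

console.log(mutationScore(90, 100)); // 0.9
```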
There are two main assumptions behind mutation testing; let's see what they are.
The first one is the competent programmer hypothesis. It states that most bugs introduced by an experienced programmer into a codebase are small syntactic errors. The second one is the coupling effect hypothesis. It asserts that simple faults can give rise to other faults/bugs in an emergent, cascading fashion.
The changes in the code (like the != replacement in the first example) are called mutations, and they are defined by mutation operators. There are many different families of mutation operators. You have operators on (non-exhaustive list): comparison operators, null values, changing method and field scope.
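For instance, a comparison-operator mutation might look like this (illustrative names, not any framework's actual output):

```javascript
// Original function.
function isAdult(age) {
  return age >= 18;
}

// Mutant: the comparison operator >= was replaced by <.
function isAdultMutant(age) {
  return age < 18;
}

// A test asserting isAdult(30) === true kills this mutant,
// since isAdultMutant(30) === false. A suite that never
// asserts on the result lets it survive as a zombie.
```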
Of course it's not all good; if it were, we'd have been using it in every project since it was invented (in the 70's!).
Because we mutate pieces of code, it can happen that a mutant causes a crash or an infinite loop (say, when you change the condition of a loop), and that hinders the tests.
Some mutation testing frameworks deal with this by letting you disable certain classes of mutation operators if they don't play well with your codebase.
In most cases you get a timeout for such a mutant, so you don't get a clear result, and it can slow down your mutation testing run considerably.
Another negative point (and, from what I understand, until recently the major drawback that prevented mutation testing from spreading) is that even a small codebase can generate hundreds or thousands of mutants, and you need to test every one of them. That's a huge resource sink, which could only be absorbed by small projects, or by medium-to-big projects with huge resources. (Remember, the idea was first devised in the 70's.)
Now we have far more computing resources at hand, so it is less of a problem, but it can still add considerable time to your test pipeline (especially if your test suite is slow). To help on the performance front, numerous optimizations have been devised: generating mutations only for lines of code that are covered by at least one test, testing a mutant by running only the specific tests that cover the mutated line instead of the whole suite, etc. These make using mutation testing a real possibility.
Another option is to use extreme mutation (the paper 'Will My Tests Tell Me If I Break This Code?'3 is a good read). Basically, extreme mutation removes the whole body of a tested function and replaces it with a value of the method's return type.
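A sketch of what an extreme mutation looks like (this mirrors the idea, not Descartes' actual output; names are illustrative):

```javascript
// Original: computes a total from the items.
function totalPrice(items) {
  return items.reduce((sum, item) => sum + item.price, 0);
}

// Extreme mutant: the whole body is replaced by a constant
// of the method's return type (here, a number).
function totalPriceMutant(items) {
  return 0;
}

// Any test asserting on a non-empty cart kills this mutant;
// a test that only checks the empty cart would not.
```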
This generates fewer mutants, so it is quicker to test. You can find here a comparison of regular versus extreme mutation testing of various Java libraries with the framework Pit, using the standard generator (called Gregor) and an extreme mutation generator (called Descartes).
Below you can find the result of that comparison:
You can see that going from extreme mutation to standard mutation testing costs roughly an order of magnitude more, both in the number of mutants generated and in the time taken to test them all.
Try it sometime; for me it was really fun and eye-opening to catch bad tests I had written!
Software testing proves the existence of bugs, not their absence.
Dadeau, F., Héam, P-C., Kheddam, R.: Mutation-Based Test Generation from Security Protocols in HLPSL(2011) in: 2011 Fourth IEEE International Conference on Software Testing, Verification and Validation ↩︎