Sola Virtus Invicta.
This is a software testing blog I wrote sporadically for a few years between September 2013 and January 2017. The formatting here is a bit funky with the images missing. Hit the button over there to view it in situ on my old free WordPress site.
Testing Hippopotami
A post that will tell you nothing about hippos, but much about the fallacies of test cases, requirements and coverage metrics.
This blog post is tagged with the word "hippopotamus". You can go and look for yourself if you like. Down the bottom there, you'll see it listed along with all of the other actually relevant tags. You're probably wondering why it's tagged with "hippopotamus" - in fact, if you're a tester and are generally inquisitive of mind, I'd be worried if you aren't - but we'll come to the why shortly.

I'm now going to make a number of claims about the relationship between this post and hippopotami:
- This post is about hippopotami
- This post is sufficient to help you learn about hippopotami
- This post is as informative or useful for learning about hippopotami as any other post tagged with the word "hippopotamus"
- Any blog post which is not tagged with the word "hippopotamus" cannot tell you anything about hippopotami
I think you will all agree, especially upon reading the remainder of this post, that each of those claims is absolute nonsense.

This post isn't about hippopotami; it will tell you nothing informative or useful about that mostly herbivorous, sub-Saharan mammal which is also the third-largest land mammal in the world (well, almost nothing), nor is it anywhere near as useful as the doubtless many other posts out there which are tagged with the word "hippopotamus" and actually are about hippopotami. Equally, I'm sure there are many posts online which aren't tagged with "hippopotamus" (perhaps because their platform doesn't support tagging, or simply because "hippopotamus" can be a hard word to spell correctly) that actually are informative on the subject of hippopotami.

So, why am I talking about hippopotami - aside, of course, from it being a simply wonderful word, in spite of the slightly grating plural form (though I much prefer 'hippopotami' to the equally correct 'hippopotamuses')?

Well, the reason is that some testers make claims just as ridiculous as these all the time, and yet they don't recognise the immediate fallacy of such claims.

Any time that you create a test case and "link" it to a requirement in any manner - be it via a traceability matrix, or through the requirements mechanisms of overly rigid, restrictive, inefficient and expensive test management tools, or by any other means - you're essentially doing the same thing as tagging a blog post. You're claiming that there exists a relationship between two things. And that's all you're doing.

And yet, that's not what a lot of testers say or believe they're doing. Many testers will make the following claims:
- This test case covers that requirement
- This test case is sufficient to help you learn about that requirement
- This test case is as informative or useful as any other test case linked to a requirement
- Any test case which is not linked to a requirement cannot tell you anything about the program
While these may not be claims that testers make explicitly (though many do), they are implicit in the way that testers use test cases as conditions that "cover" a requirement. They are also implicit in the way that managers measure the effectiveness, completeness, progress or quality of testing by checking that the documented requirements are covered by one or more test cases.

Unfortunately, these claims are usually just as ridiculous as the above claims about this post's relationship to hippopotami.

A test case that is linked to a requirement may not actually "test" that requirement at all. The tester may have misunderstood the requirement, or the program, or the relationship between them, or many other things, and thus developed a test which is irrelevant or ineffective when it comes to learning about that requirement. It may validly test something else, but that's not the claim that is being made.

Even if a test case does actually test the requirement it is linked to, that doesn't mean that the requirement has been sufficiently tested. It might, if the requirement is alarmingly simple. But more than likely there are many things to learn about a requirement, and one test case is probably not sufficient to facilitate that sort of learning. Even if you have many test cases linked to a requirement, there is almost always something more we can learn. Complete coverage is impossible.

When testers report on test cases, they usually count them as if they are equal. They might report that they have 200 test cases for a set of requirements. They might say that 98% of their test cases have passed. They might suggest that because each requirement is covered by two test cases (one positive and one negative!), they have achieved a good measure of coverage. But test cases are not created equal. One test case might elicit lots of useful information, whereas another might be a very simple and obvious verification. One test case might take a week to run, whereas another takes 30 seconds. When we reduce each test to the equivalent value of "1" in such claims, the inevitable variability that exists between different test ideas - and the useful information each may generate - is lost (the sketch below makes this concrete).

Then there is the fallacy that a test that is not linked to a requirement is not testing anything. This is nonsense. If a test case helps you learn something about the program then it is useful. The fact that it may not have been "linked" to something or other in a spreadsheet or a tool does not diminish its usefulness. Similarly, even if a test cannot be linked to a requirement because there are no documented requirements to which it is relevant, that does not mean the test is meaningless or useless. Not all requirements are documented, and so not all requirements can be linked explicitly to a tangible representation of a test idea (though there are ways of representing how your testing helps you learn about such implied or tacit requirements).

Unfortunately, testers do make these claims every day. I have seen them do it, even in the projects on which I have worked.
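To make that counting fallacy concrete, here's a minimal sketch in Python. Everything in it - the test IDs, the timings, the notes - is hypothetical, invented purely to illustrate the point:

```python
# Hypothetical test results: each test is reduced to a pass/fail flag,
# though the tests differ wildly in the information they produce and
# the effort they cost.
tests = [
    {"id": "TC-01", "passed": True,  "minutes": 1,    "notes": "trivial field-length check"},
    {"id": "TC-02", "passed": True,  "minutes": 2400, "notes": "week-long end-to-end scenario"},
    {"id": "TC-03", "passed": False, "minutes": 15,   "notes": "exposed a data-loss bug"},
]

# The headline metric treats every test as an identical "1"...
pass_rate = sum(t["passed"] for t in tests) / len(tests)
print(f"{pass_rate:.0%} of test cases passed")  # -> "67% of test cases passed"

# ...and says nothing about what was actually learned or what it cost:
for t in tests:
    status = "PASS" if t["passed"] else "FAIL"
    print(f"{t['id']} {status} ({t['minutes']} min) - {t['notes']}")
```

Both reports come from the same data, but only the first one ever seems to make it into a status meeting.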
These claims are rooted in the test case culture in which most of our profession is adrift (James Bach and Aaron Hodder provide a compelling deconstruction of this culture in the Feb 2014 issue of Testing Trapeze), but they are also a product of the ignorance and laziness of testers and managers who do not stop and think critically about the way they represent the work that they do.

It is obvious to anyone who reads this post that the claims I made about its relationship to hippopotami at the beginning are nonsensical. And yet, testers and management sit back and make the same claims day in and day out without stopping to question what they are doing. Testers - whose profession it is to challenge, to learn, to question and to analyse - are guilty of not applying those same qualities to their own work.

It doesn't need to be this way. There are responsible, efficient and effective ways to manage and report on your testing. Test cases and meaningless metrics will only leave this profession wallowing in the mud. But unlike the happy hippopotami, that's not where we belong.