Sola Virtus Invicta.
This is a software testing blog I wrote sporadically for a few years between September 2013 and January 2017. The formatting here is a bit funky, with the images missing.
Traceability Matrix: Myth & Tricks
A critique of the traceability matrix, that most deceitful of testing deliverables.
Ah, the traceability matrix. That paramour of the project manager. A cheap raincoat on a wet and windy day, that inevitably sends you scurrying back to the store all a-bluster: "But you said it would keep me dry!"
Traceability is an interesting notion. The concept, at first glance, appears reasonable. It is the assumption of cause and effect in the world of testing. Something ought to be this way, and so you test for it. Anything you are testing for is being tested to meet a certain criterion.
Traceability, then, is the straight line that people like to draw between a requirement and a test case. A traceability matrix? Lots of those lines. But usually, people prefer these lines to be represented in the cosy adjunct of reassuring little cells in a spreadsheet. Because something with such a firm sense of structure must mean something.
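To make the shape of the thing concrete, here is a minimal sketch of what such a spreadsheet boils down to once you strip the formatting away. The requirement IDs, test case names and the simple dictionary representation are all hypothetical, chosen only to illustrate the structure:

```python
# A traceability matrix, reduced to its essence: each documented
# requirement mapped to the test cases that claim to cover it.
# Requirement IDs and test case names here are hypothetical.
matrix = {
    "REQ-001 User can log in":       ["TC-01", "TC-02"],
    "REQ-002 Password is validated": ["TC-03"],
    "REQ-003 Session times out":     ["TC-04", "TC-05"],
}

for requirement, tests in matrix.items():
    # One tick in a cell for every requirement/test pairing.
    print(f"{requirement}: covered by {', '.join(tests)}")
```

That is all the matrix is: a list of pairings. Everything the rest of this article is about is what those pairings leave out.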
But of course the notion of traceability is just that: a notion. It is not a real, tangible thing. It is a way of thinking about the relationship between something you do and the reason you had for doing it. It can be a useful notion, but it does not guarantee anything.
This is because the line that you draw from what you have done doesn't really have anywhere concrete to attach to. The idea is that you trace your testing back to a "requirement", but that is an intangible thing in its own right.
What you link to is really a given representation of a requirement. Often, it's abstracted even further, because you're linking to your interpretation of a representation of a requirement. There is a lot of room for misinterpretation there. It's like trying to tie a balloon to a cloud.
Ultimately what a traceability matrix represents is an attempt to justify the testing you have done against the model you have used to inform it. Why did you do what you did? Why were those tests important? What has informed your actions?
But a matrix is far too reductive a way to convey that information. Because, even allowing for the innate vagaries which exist around requirements, even if you had a matrix which showed you had tested every documented requirement, that doesn't mean you have tested everything.
Your documented requirements (by which I mean, for the purposes of this article, any requirement communicated to you in any form, or elicited by you in the course of your work) are all that you can realistically hope to include in a matrix, because if you don't know about them you can't include them. And they only ever represent a partial model of a solution.
Then there's the fact that whatever test you may have conducted for each of those requirements may have been completely worthless. You may have misunderstood the requirement and thought the product worked where it didn't. You could have been having an off day and missed an obvious bug. Maybe you just didn't think to test for certain potential issues.
However, if you then produce a traceability matrix which draws lines between your tests (however misinformed, shoddy or incomplete they are) and the requirements that you know about (regardless of all the ones you don't), a project manager will look at that and think "great, testing is done".
And anything, anything, which can ever give anyone cause to think "testing is done" is a certifiable Bad Thing. Testing is never done. Any tester worth their salt knows that.
And this is where we get to the crux of the problem with traceability matrices. They are too simplistic a representation of an impossibly complex thing. They reduce testing to a series of one-to-one relationships between intangible ideas. They allow you to place a number against testing. A percentage complete figure.
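To see how seductive, and how hollow, that number is, consider a sketch of how it gets calculated (again with hypothetical data). The arithmetic only counts whether a pairing exists; it says nothing about whether any individual test was worth running:

```python
# A hypothetical "percentage complete" figure, as it might be derived
# from a matrix. Note what it actually measures: the mere existence
# of a requirement/test pairing, not the quality of any test.
matrix = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],  # TC-03 might misread the requirement entirely
    "REQ-003": ["TC-04"],  # TC-04 might have missed an obvious bug
}

covered = sum(1 for tests in matrix.values() if tests)
percent_complete = 100 * covered / len(matrix)
print(f"Testing {percent_complete:.0f}% complete")  # prints 100%, regardless
```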
What they do not do is convey the story of the testing: the really valuable information that testing has driven out. They don't contextualise how you built the model of the product, how that informed your testing and, crucially, how confident you feel in the work you have been able to do in testing the product.
This comes back to what James Bach and Michael Bolton teach about telling the "testing story". They assert that to report on testing accurately we must "learn to describe and report on the product, our testing, and the quality of our testing". It is only by doing this that we can give a true picture of what testing we have done and why we have done it.
And that, ultimately, is what a traceability matrix is supposed to do. It is supposed to display the testing you have done (usually as a list of test cases) and why you have done it (usually as a list of requirements). But these lists, presented without context, are worthless. A whole lot of information informed the population of those lists, and that information is essential in evaluating the validity of the pairings that the matrix creates.
The true danger of a traceability matrix is to the credibility of the tester. Allowing a traceability matrix to tell your testing story means that if you've "covered everything" and problems are found, the only implication is that you did bad testing. You weren't up to scratch.
And even if you did the best testing in the world - you were aware of the limitations of your model, you worked hard to identify and test against appropriate oracles, you provided clear and concise updates and reporting - if you did all that and still reduced it down into a traceability matrix, then you have failed in your mission as a tester.
Because your mission as a tester is to provide information to allow people who matter within the project to make informed decisions about the quality or value of the product. And a traceability matrix contains no useful information at all, but is perceived to be representative of everything.