Sola Virtus Invicta (virtue alone is invincible).

This is a software testing blog I wrote sporadically for a few years between September 2013 and January 2017. The formatting here is a bit funky, with the images missing. Hit the button over there to view it in situ on my old free WordPress site.

Testing Testing

Artful Persuasion

Just look at the pictures; you won't be disappointed.

I found myself in a conversation with some testers who had only ever been exposed to a test case driven approach to structuring and performing testing, and they felt that the task of creating a visual test model was simply duplicating their effort. One of them neatly described the test process they used to me thusly: "I read the requirements, analyse them a bit in my head, and then write out my test cases".

Perfect.

I spend a good deal of my time these days talking about using visual models and other testing techniques that deviate from the "traditional" process, and I have developed a reasonable array of tactics for justifying why these are not only useful, but important ideas if we're serious about doing good testing. However, I spied a whiteboard on the wall and thought of a beautifully simple means of delivering that message in this context.

First, I drew up the approach to testing that had been described. It looked a little something like this:

[Image: Traditional Test Process]

They agreed that this pretty much represented their process (I'm better at drawing on a real whiteboard than when using the trackpad on my computer, I promise).

However, I then pointed out that this diagram lacks something quite important, and added to the picture as follows:

[Image: Where the magic happens...]

"This," I said, gesturing at the green squiggle I had artfully created, "is where the magic happens". Typically, the opportunity was taken to rib each other that there's nothing - and certainly no magic - going on in each other's heads, but they soon conceded that yes, this actually was a more accurate picture of their testing process. Still, though, they weren't entirely sure where a visual model came into it.

As such, I continued my delicate scribbling:

[Image: Bypass]

"The problem," I suggested, "is that lots of the magic that happens in the green squiggle doesn't make it into your test cases". I pointed out that when we read a specification document to drive our testing we tend to use the singular requirements, represented by the red lines in the 'Spec', to marshal our thinking: to take singular elements of the green squiggle and direct them into specific test cases. "This can be good," I went on, "because it means we're checking that those specific requirements are met".

"However, it's also potentially a problem because there are ideas within that green squiggle that don't directly align with the documented requirements. And if all that we are using to structure our testing is the requirements document, there is a very real risk that those ideas will get lost". This brought murmurings of assent, so I took up the pen one final time to press home my point.

[Image: Happy Frog]

I suggested that this was where the visual model came in useful and explained that the purpose of creating a visual model (which is that thing that looks a bit like a happy frog) is to attempt to* capture all of the magic that happens in the green squiggle we mentioned earlier. It's a simple and flexible approach, which means that those ideas that don't "fit in" with the documented requirements aren't filtered out.
The model can thus help to represent more completely the way that you understand the solution, and the ideas you might have about testing it. I explained how we can then take advantage of these ideas (because we often won't have a specific "expected result" against which to "check" them) by conducting exploratory testing sessions (the dotted green shapes), which can help us to learn more about these ideas or even fill in the gaps that we have noticed exist in our model of the solution.

"We're already doing that though!" protested one of the testers present. "We find stuff which isn't in the spec all the time! We then just write another test case to cover it".

"Absolutely you do," I agreed, "but that's the problem!" I then explained how these fortuitous discoveries occur because all testing that is performed by a human, even scripted testing, is exploratory to some degree. This is because we - as a species - are pretty hopeless at precisely following instructions and, if we are at all paying attention, will often spot potential problems that might not be relevant to the specific test case we are performing.

The value of the visual model, I suggested, is that it allows us to structure and focus the way in which we go about these discoveries, rather than relying on them happening by accident. Creating a more holistic visual representation of our understanding of the software means we can target exploratory testing - a style of testing that emphasises investigation and discovery - in the places where we feel there might be these problems.

*Thanks to Petteri Lyytinen for pointing out my previously hyperbolic phrasing. A visual (or any) model will likely never capture all of the good things that go whizz-bang in your head, but it can certainly facilitate such an attempt.
