Sola Virtus Invicta.

This is a software testing blog I wrote sporadically for a few years between September 2013 and January 2017. The formatting here is a bit funky and the images are missing. Hit the button over there to view it in situ on my old free WordPress site.

Testing Testing

On Time & Requirements: A Discussion

A discussion of how traditional and context-driven approaches to testing might differ in the time they require and in how they treat requirements.

A short while ago, I was the Test Lead on a project where I had helped to change the way we were doing our testing, moving the team from a pretty rigid, test-case-driven approach to one more influenced by the context-driven school of software testing. Specifically, this included the use of visual test models and visual test coverage models for planning and reporting our testing, as well as the adoption of an overall more exploratory mindset in our test activities, often structured under the influence of Session Based Test Management.

Towards the end of the project, a new Test Director joined the team. He had not encountered such approaches before and I initially worried that the progress we had made might now be threatened - especially as I was due to leave the project shortly and so wouldn't necessarily be around to defend and explain the methods we had introduced. Thankfully, he was open-minded and was both willing and eager to learn about what we had done and the reasoning behind our approach.

We conversed regularly and, while his understanding remained incomplete, he soon became supportive of the methods I had implemented and began to consider how he might go on to implement such an approach in future environments. To find that an experienced Test Director was willing to learn, and then potentially adopt, a more context-driven approach - with which he was previously entirely unfamiliar - gave me a huge amount of reassurance that our approach stood up to scrutiny, and also encouraged me that real change could be achieved in our community with hard work and perseverance.

Mostly, though, it challenged me to answer his questions and to consider hypothetical ones about the benefits of implementing such an approach on future projects. In one such enquiry, he asked how a context-driven approach like ours would compare to a traditional approach in terms of the time required for testing, and how such an approach takes account of the way the requirements are documented.

Here I have decided to share these questions and my responses, as they may prove useful to other testers who are considering the value of adopting a different approach to testing, or to those who are in the midst of changing and perhaps require some further ammunition to convince their managers of the benefits of such a move.

These portions have been anonymised as I am not at liberty to disclose details of the project specifics, but I hope that they prove useful or informative.

"You mentioned (Adam) that if you had followed the waterfall approach for <Project X>, we wouldn't have had a hope of finishing on time.....if we assume that initial analysis was complete and we were ready to start mind mapping or standard test design/scripting, how could we begin to compare how long each of these methods would have taken to complete? This also has to assume that resource with the same level of expertise was doing the work (important because for example, a more senior TA could produce a sufficiently complete set of test scripts with less duplication in less time than an intermediate). I am deliberately leaving 'quality' out of this particular comparison, focusing on time."

Where the question refers to a "waterfall approach", this was referring not to the lifecycle model, but to the traditional approach to software testing under the factory school method - where each phase of testing (analysis, design, execution) occurs sequentially. My response to this question was as follows:

In terms of a comparison of the time required for each method, probably the main thing to consider is the difference in emphasis between the two approaches.

The traditional scripted approach is necessarily confirmatory in nature. All a scripted test can ever tell you is whether or not something you already knew about the system is or is not true at a given point in time.

To do this, it places a great deal of emphasis on test preparation - the analysis and writing of test scripts are seen as a prerequisite to the test activity. As such, you don't truly start testing until the execution phase.
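To make that concrete, here is a minimal sketch of what such a confirmatory check looks like in code. The calculate_premium function and its expected value are hypothetical, not taken from the project described here; the point is simply that the script can only confirm or refute a fact the tester already knew when it was written.

```python
# A minimal sketch of a confirmatory (scripted) check. calculate_premium() is
# a hypothetical stand-in for a system under test; names and values are invented.
def calculate_premium(policy_type: str, term_years: int) -> float:
    """Hypothetical stand-in for the real system under test."""
    base = 100.0
    if policy_type == "standard":
        return base + 5.0 * term_years
    return base

def test_premium_for_standard_five_year_policy():
    # The expected value comes straight from the documented requirement,
    # so a pass only tells us that this one known fact still held today.
    assert calculate_premium("standard", term_years=5) == 125.0

if __name__ == "__main__":
    test_premium_for_standard_five_year_policy()
    print("Scripted check passed: the documented expectation held this time.")
```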

In contrast, the exploratory approach favoured under a context-driven testing mindset is investigative: it seeks to discover new information about the product, and insight that is not restricted to what has been elicited in a documented requirement (which is, of course, only a subset of the complete "requirements" for any system).

The methods we utilised under this approach - to model the system and use this to structure and guide our testing - actually enable testing to start immediately. As soon as you begin creating your model you are engaging with, and therefore testing, the software - because you are recording and analysing your understanding of the product.

As such, to compare the two approaches is difficult - they provide different sets of information and so a direct comparison is not really feasible. However, the context-driven approach to testing allows a greater period of time for the actual testing activity because test preparation, design and execution occur in parallel.

In the specific context of <Project X>, I am confident that the CDT-influenced methods we utilised saved a great deal of time. The testers generally had a good understanding of the processes involved in the testing activity, so instead of spending a significant amount of time stepping those processes out across a hundred test cases, they were able to spend their time directly engaging with the meat and drink of the system - i.e. the calculations involved in <function A> - and actually using the known processes to explore the outcomes of different scenarios, guided by their models, their understanding and their instincts.

An example of this is <Team 1>. This was the most complex functional area in <Project X>: it included a fundamental redesign of the calculations involved and the delivery of the solution, and the development was massively delayed. However, the testing was completed in a very short timespan and gave sufficient information about the functionality to suggest that the issues had been resolved.

In contrast, <Team 2> (who had the same number of testers) simply had to test <a simpler piece of functionality>, and chose to adopt a traditional approach (due to tester preference). Their code was essentially ready when they started, but they spent two weeks scripting before even engaging with it. As a result, they had to work all the hours under the sun to get it done, and still required a second release to pick up the pieces.

The second enquiry related to how our approach would depend on how requirements were documented:

"Secondly, when you decided to go with mind mapping here, were there any concerns about the requirements? As you know effort spent on clarifying requirements can save a lot of pain further down the line. So if we assume we have exhausted all avenues on making the requirements better (and they can always be improved!) to what extent, if any, does the selection of using mind mapping as an approach, as opposed to another approach, give consideration to how the requirements are documented.?"

This time, the use of "mind mapping" refers to the creation of visual test models and visual test coverage models. My response follows:

As for the consideration of the documentation of requirements, a context-driven approach to testing acknowledges the fact that the documented requirements - in whatever form they exist - are but a subset of the complete set of requirements for a system.

It recognises that all testing is done against a model of the system. That model is always simply one representation of someone's understanding of the system and what it should or should not do.

By representing this through visual product models, our approach highlights and makes visible the tester's interpretation of the product and the requirements. By seeking contribution to or feedback on these models, we attempt to reduce the risk that our understanding is misguided.
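As an illustration only - our actual models were drawn as mind maps rather than written as code, and the area names below are invented - a coverage model is essentially a nested outline of the tester's understanding of the product, annotated with what has and has not yet been explored. Sketching one as a simple data structure shows how that interpretation becomes something others can review and challenge:

```python
# Hypothetical sketch of a visual test coverage model expressed as a nested
# outline. Area names and statuses are invented for illustration.
coverage_model = {
    "Policy calculations": {
        "Standard premium": "explored - no issues found",
        "Rounding rules": "explored - two bugs raised",
        "Multi-year discounts": "not yet explored",
    },
    "Reporting": {
        "Monthly statement": "partially explored",
        "Audit extract": "not yet explored",
    },
}

def print_model(model: dict, indent: int = 0) -> None:
    """Walk the outline so reviewers can see (and challenge) the tester's
    interpretation of the product and where testing effort has gone."""
    for area, detail in model.items():
        if isinstance(detail, dict):
            print("  " * indent + area)
            print_model(detail, indent + 1)
        else:
            print("  " * indent + f"{area}: {detail}")

print_model(coverage_model)
```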

Compare this to a traditional approach, which would see a test case (or test cases) written to cover each documented requirement. This highlights the confirmatory nature of traditional approaches to testing - you are only ever checking whether or not a requirement has been met.

That activity provides value to the client, but it is not the only set of information required. Testing under a context-driven approach would still provide these checks, but places a much greater emphasis on the cognitive activity of exploring and investigating the full context of a product and the true mission or purpose of a system - whether documented or not.

So the use of this approach is agnostic to the style of requirements documentation. While the documented requirements would certainly inform the model of the system developed to guide the test activity and mission, they would not be the only source.

This conversation (which occurred via email, thankfully, as my memory is not sufficient to recall these exchanges verbatim) was obviously rooted in the context of the project on which we had collaborated, and so my examples and framing were based on that project to make them as accessible as possible. However, looking back at them, I think they stand as a reasonable examination of the contrast between traditional and context-driven approaches to testing as they relate to 'time to test' and the nature of requirements.

However, I'm keen to know whether anyone has any additional points they would add to these responses, or whether you disagree with my evaluations. These were written off the cuff in about ten minutes, and I have reproduced them verbatim here (except for the anonymising), so I am sure they can be improved and critiqued. I'm very eager to hear your comments and feedback if you have any.
