Understanding Test Case Management

Getting Your Arms Around Your Testing Strategy

If you have significant software operations, then you need significant testing efforts. At least, you do if you want to deliver software that pleases your users. And if you have significant testing operations, then you need test case management.

But what does that mean, exactly? And how do you go about it? To understand that, let's start with some simple principles and then dive deeper into the subject of test case management. You can probably infer a definition based on the name and your personal experience. But let's look explicitly, from first principles.

From the moment you start writing software, you also start testing it. You might not initially conceive of it this way, but it's true. When you write the iconic "hello world" program, the next thing you do is run it to verify that it does, in fact, say hello to the world.

As you go from that initial seed of a program to more and more complex functionality, the problem of testing moves from the realm of common sense to that of serious logistics. Before long, you can no longer keep track of everything the software should do in your head. To capture that, you rely on artifacts such as requirements documents or collections of user stories.

But those documents don't tell a live story. Rather, they just articulate what the software should do when complete. They don't, in and of themselves, tell you whether the software is complete and whether it behaves in satisfactory fashion. That job falls to your software testing efforts. Testing requires creating a direct relationship between each requirement and a demonstration of its successful completion. And the basic unit for that is the test case.

At this point, let's introduce three concepts: the test scenario, the test case, and the test script. These concepts answer slightly different questions. The test scenario answers the question, "what goals do users have with the application?" The test case then answers the question, "what discrete activities comprise the users' goals?" And finally, the test script answers the question, "exactly how do I execute this discrete activity?" There is some debate in the industry over the finer points of these delineations, but this framing will serve for our purposes.

With this in mind, you can trace a neat relationship sequence. Requirements tell you what the software needs to do. Then test scenarios define a means for confirming the requirement. Test cases describe the component pieces of confirming that requirement. And, finally, test scripts (whether automated or executed manually) tell you exactly how to execute that component piece. Typically, you'll have a number of test cases per test scenario, covering various permutations of inputs and behaviors. You then have one or possibly more scripts per test case.
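The hierarchy above can be sketched in code. This is a minimal illustration, not any tool's actual data model; the class and field names are assumptions chosen to mirror the terms in this post.

```python
from dataclasses import dataclass, field

# Sketch of the requirement -> scenario -> case -> script hierarchy.
# All names here are illustrative, not from any real tool's schema.

@dataclass
class TestScript:
    steps: list  # exact steps, manual instructions or automation commands

@dataclass
class TestCase:
    case_id: str
    title: str
    scripts: list  # one or possibly more scripts per test case

@dataclass
class TestScenario:
    goal: str         # the user goal this scenario confirms
    requirement: str  # the requirement it traces back to
    cases: list = field(default_factory=list)  # many cases per scenario

# A scenario covering a (hypothetical) login requirement,
# with two test cases covering different input permutations.
scenario = TestScenario(
    goal="User can log in",
    requirement="REQ-42: The system shall authenticate registered users",
)
scenario.cases.append(TestCase("TC-1", "Valid credentials",
    [TestScript(["enter valid user", "enter valid password", "expect dashboard"])]))
scenario.cases.append(TestCase("TC-2", "Invalid password",
    [TestScript(["enter valid user", "enter wrong password", "expect error message"])]))

print(len(scenario.cases))  # 2
```

Note how the traceability runs in one direction: each scenario knows its requirement, and each case rolls up to a scenario.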

Having located a test case on the map of concepts, let's look at the components of a test case. Bear in mind that I'm talking in broad strokes here. This won't necessarily be an exhaustive list, nor is there anything necessarily wrong with your test case if it lacks something here.

  • An ID: A unique way of identifying the test case.
  • Title or brief description: A quick means of understanding the software activity in question.
  • Related requirement and/or test scenario: What broader scenario and requirement does this test case roll up to?
  • Remarks/Notes: Free form comments about the test case.
  • Script: Exact steps for executing the test.
  • Pass/Fail Status: Is the test case currently passing or failing?
  • History and Audit Trail: You should be able to see its pass/fail history as well as general changes and who has run/modified the test case.
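To make the list above concrete, here's a hedged sketch of what such a record might look like, including the audit trail. The field names and the `record_run` helper are assumptions for illustration, not any tool's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative test case record with the fields listed above.
# Nothing here reflects a real tool's data model.

@dataclass
class TestCaseRecord:
    case_id: str                                  # unique ID
    title: str                                    # brief description
    related_requirement: str                      # what it rolls up to
    notes: str = ""                               # free-form remarks
    script: list = field(default_factory=list)    # exact execution steps
    status: str = "not run"                       # current pass/fail status
    history: list = field(default_factory=list)   # who ran it, when, result

    def record_run(self, runner: str, passed: bool):
        """Update the current status and append to the audit trail."""
        self.status = "pass" if passed else "fail"
        self.history.append({
            "runner": runner,
            "result": self.status,
            "at": datetime.now(timezone.utc).isoformat(),
        })

tc = TestCaseRecord("TC-7", "Checkout with expired card", "REQ-13",
                    script=["add item to cart", "pay with expired card"])
tc.record_run("alice", passed=False)
tc.record_run("bob", passed=True)
print(tc.status, len(tc.history))  # pass 2
```

Notice that the status field answers "is it passing now?" while the history answers "how did we get here?", which are the two temporal views the list calls out.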

As you can see, you have a lot of information that varies both materially and temporally. And that's just what I've listed. Many organizations capture more information than what's here.

Think back to the "hello world" program from earlier in the post. If that's all the code you have, test cases seem silly. So too does the whole concept of test case management. And that remains true in the early phases of many projects as well. It was also true for a lot of shops in the early days of software development. As they added features incrementally, it always would have seemed overly formal or wasteful to plan a lot of this out. But this resulted in the proverbial "boiling frog phenomenon." The software grew gradually more and more complex. Eventually, organizations realized their testing efforts were such a mess as to be almost futile.

You see, test case management doesn't naturally scale well. Especially not when you try to start from a history of just launching the application and making sure that things look right. The components of it would occur to you in fits and starts. "Hey, we should probably catalog the things we test for." And then, perhaps weeks or months later, "oh, wow, it would have been handy if we'd kept track of whether these tests passed or failed last time."

Each individual component of a test case would eventually have made sense. Uniquely identify them so they don't get mixed up. Add notes to provide additional context. Trace them back to the relevant requirement to avoid testing for things that aren't relevant. Keeping track of all of that stuff is hard. In fact, just knowing to keep track of all of that stuff is hard. A lot of organizations learned this lesson painfully.

It is for this reason that, in 2017, test case management tooling is indispensable. Test case management tools today come with all sorts of features, integrations, and interesting capabilities. But perhaps the most critical is also the simplest. Just with the data fields they give you out of the box, they let you know what information to capture and track. Just launch such a tool and create your first test case. Right there, you'll have a whole list of required information.

You'll also have a means for easily capturing, storing, tracking, and searching for this information. When, in the past, large software organizations relied on things like spreadsheets or even collaborative tools like wikis or SharePoint, things would quickly get out of hand. All of those are useful tools, but they're not designed for this specific use case. This not only created confusion, but it also forced the people responsible for QA to spend inordinate amounts of time banging square pegs into round holes to make the tooling work. They couldn't focus as much on the testing itself.

Certain things have become near-universal in software development. You don't do without source control tools or bug tracking software, nor do you roll your own. Test case management falls into that category. Testing significant pieces of software is, of course, critical. But so too is using fit-for-purpose software to manage that effort.

Of course, you'll realize benefits beyond just time savings and sanity preservation. In the years since the earliest test case management tools, they've added a lot of functionality to make your life easier.

First of all, I mentioned integrations as a feature of such tools. I also mentioned the idea of automated test scripts earlier. Well, a test case management tool lets you integrate with automated test scripting utilities like Selenium or SoapUI. This mixing of automated management and execution lets you reach a whole new gear in testing efficiency.
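As one hedged sketch of what such an integration involves, an automated script can be tagged with the ID of the test case it executes, so results can be matched back to the management tool. The `TEST_CASE_ID` convention below is purely an assumption for illustration; real integrations with tools like Selenium or SoapUI have their own mechanisms.

```python
import unittest

# A minimal "automated test script" tagged with a test case ID so an
# integration could report its result against the right record.
# The TEST_CASE_ID attribute is a made-up convention for this sketch.

class HelloWorldScript(unittest.TestCase):
    TEST_CASE_ID = "TC-1"

    def test_says_hello(self):
        # The automated check standing in for "verify it says hello"
        self.assertEqual("hello world", "hello " + "world")

# Run the script and capture the outcome, as an integration layer might.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(HelloWorldScript))
print(HelloWorldScript.TEST_CASE_ID, result.wasSuccessful())  # TC-1 True
```

The point is the linkage: once each automated script knows which test case it belongs to, pass/fail status can flow into the management tool without anyone transcribing results by hand.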

Speaking of efficiency, these tools keep busy-work to an absolute minimum for folks using them. The combination of automating test scripts and automating results tracking means that QA personnel spend their time doing real knowledge work. They're not filling out spreadsheets or mindlessly executing test scripts like human automatons. Instead, they're leveraging their unique combination of domain and testing knowledge to ensure a complete set of test scenarios and test cases.

Finally, think of one last, overlooked stakeholder. I'm talking about management. With haphazard arrays of spreadsheets (or even more manual approaches), QA folks could barely keep track of test cases themselves, let alone give meaningful reports to external stakeholders. Test case management tools have great data entry, tracking, and automation capabilities, but they also give you really powerful reporting. You can easily see all sorts of data, trends, and graphs related to the ongoing testing status of the software.
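The kind of rollup behind those reports is simple in principle. Here's a hypothetical illustration: computing a pass rate per day from raw run records, the sort of trend a reporting view might chart. The data and function are invented for this sketch.

```python
from collections import Counter

# Invented run records of the kind a tool accumulates automatically.
runs = [
    {"day": "2017-03-01", "result": "pass"},
    {"day": "2017-03-01", "result": "fail"},
    {"day": "2017-03-02", "result": "pass"},
    {"day": "2017-03-02", "result": "pass"},
]

def pass_rate_by_day(runs):
    """Roll raw run records up into a per-day pass rate."""
    totals, passes = Counter(), Counter()
    for r in runs:
        totals[r["day"]] += 1
        if r["result"] == "pass":
            passes[r["day"]] += 1
    return {day: passes[day] / totals[day] for day in totals}

print(pass_rate_by_day(runs))  # {'2017-03-01': 0.5, '2017-03-02': 1.0}
```

A trend like this (pass rate improving from 50% to 100%) is exactly the sort of at-a-glance answer management wants and spreadsheets struggle to provide.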

The world pays a lot of attention to the particulars of organizing software. From its core components of classes, methods, and functions, people organize software using architectural patterns and package it into microservices or distributed systems. Countless talks, seminars, and courses devote themselves to this.

On the quality assurance side of the world, you can look at the test case (and test script) as this same basic component. The test case proclaims what the software should do in discrete situations, and it constitutes the final word on the matter. Testers then assemble these basic units into coherent scenarios about the software which, in turn, speak to delivering the software's requirements.

Just as software developers tout the importance of all the various tools they use in their discipline, test case management approaches play a critical role in testing. The more you can automate and make easy, the better the quality becomes. So right from that very first test case stating that the software should say "hello world," make sure you already have a plan for test case management.