The Software Testing Lifecycle - An Introduction

Introduction

The discipline of software testing has come a long way since the days of having QA staff sit at a keyboard to enter input according to a test script and then manually record the results. Today we test fast and we test often. Current market conditions demand it.

Companies realized long ago that it is better to adopt procedural patterns and techniques that have proven useful than to waste time reinventing the wheel. One procedural pattern that has proven to have significant benefit is the Software Testing Lifecycle (STLC).

The purpose of this article is to describe the Software Testing Lifecycle in a general manner. Its scope is the sequence of steps that make up the STLC. In addition, the article relates portions of the STLC to common testing strategies such as Test Driven Development (TDD), Behavior Driven Development (BDD), and Context Driven Testing (CDT).

The Phases of the STLC

The Software Testing Lifecycle is a process segmented into a set of sequential steps that are commonplace in most testing scenarios. There is a lot of discussion on social and professional media about how many steps make up the STLC; some documentation describes the pattern as having 8 steps.

While the STLC can be divided up in several ways, this article takes a six-step approach. The 6 steps are: Test Planning, Test Setup, Test Execution, Test Assertion, Test Teardown, and Reporting.

Let’s take a look at the particulars of each step, starting with Test Planning.

Test Planning

Test planning is the step in which you determine what you are going to test, how you are going to implement your test, and where you will record the results. The test could be a single test or a number of tests organized within a test suite. The concrete outcome of the Test Planning phase is a formal document: the test plan. The test plan can take many forms. It may be a Word document or a spreadsheet file that describes each activity a manual tester performs upon an application's GUI. Or, the test plan can be a Selenium script that gets run by a human or within an automated testing environment.

The test plan might be in a format particular to a testing tool you are using, such as QAComplete or TestComplete. If the scope of your test is server-side code, the test document might be a collection of commented text files, relevant to the testing framework, that contain testing code to be executed against source code. Some examples include: .java files for execution under JUnit, .cs files for .NET testing, .py files for Python testing, or .js files for Node.js testing.

Determining what you are going to test involves setting the scope and boundaries of the test. Is the scope of your test a single function or an AWS Lambda? Maybe the scope of your test is at the other end of the testing spectrum, a full-scale system integration test, such as a website. Each test scenario will have items that are in scope and out of scope.

The scope of testing will vary according to the testing strategy you are implementing. For example, in a Test Driven Development scenario the scope is typically the unit test. Unit tests exercise functions or, in the case of an API, endpoint URLs. In an Integration Test or Context Driven Test scenario, the scope of testing will be broader, encompassing the entire solution the system or software is designed to support.

Determining scope in the Test Planning phase matters because it ensures the team does exactly the work required to meet the need at hand. Extra work is costly and in some cases can be useless.

Once scope is determined, you can decide the tools that you want to use to execute your tests. Should you have a very granular test scope, such as testing object source code, you are going to use a tool that supports the particular source code language. For example, in .NET you can use SmartBear's TestLeft or TestComplete. If your source is server-side Node.js JavaScript, you can use Mocha, often paired with the Chai assertion library. Typically for Java you use JUnit or TestNG at the unit testing level. Python unit testing can be done with unittest.
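
To make the unit-level case concrete, here is a minimal TestNG sketch; the StringUtils class and its reverse method are hypothetical stand-ins for whatever Java code falls within your test scope:

import org.testng.Assert;
import org.testng.annotations.Test;

// Hypothetical class under test
class StringUtils {
    static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }
}

public class StringUtilsTest {

    @Test
    public void reverseReturnsCharactersInOppositeOrder() {
        Assert.assertEquals(StringUtils.reverse("abc"), "cba");
    }
}

The same shape carries over to JUnit with different annotations and assertion classes.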

As you go higher up the software stack to the API level, you will use tools that allow you to execute tests via HTTP and monitor the results. Understanding the scope of your testing is still important when determining what tools to use at this higher level. Are you testing for data integrity, meaning that the data coming out of calls to endpoint URLs matches the expectation defined by the data going in? Or are you testing for performance at the API level? In the latter case, you may find tools such as SoapUI and LoadUI useful.
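
As a hedged sketch of a data integrity check at the API level, the following test uses Java's built-in HttpClient (Java 11 or later). The /customers endpoint on localhost, the 201 status code, and the echoed JSON are all assumptions invented for illustration:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.testng.Assert;
import org.testng.annotations.Test;

public class CustomerApiTest {

    @Test
    public void customerDataSurvivesTheRoundTrip() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String payload = "{\"name\":\"Ada Lovelace\"}"; // the data going in

        // POST the data to the (hypothetical) endpoint
        HttpRequest post = HttpRequest.newBuilder(URI.create("http://localhost:7071/customers"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        HttpResponse<String> response = client.send(post, HttpResponse.BodyHandlers.ofString());

        // Data integrity: what comes out matches what went in
        Assert.assertEquals(response.statusCode(), 201);
        Assert.assertTrue(response.body().contains("Ada Lovelace"));
    }
}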

Not only will you need to plan the tools that accommodate the scope of testing you determine, but, if you are taking an automated approach, you will also have to make sure that you can integrate the selected tool into the Continuous Integration/Continuous Deployment (CI/CD) process. Running tests in an automated manner under a CI/CD tool such as Jenkins or Travis CI is becoming more common each day.

Once the test plan is in place, the next step in the STLC is Test Setup, the place where you prepare tests for execution.

Test Setup

Test Setup is the step in which the physical preparation necessary for testing happens. Test Setup is where the services or mocks related to the object or system being tested are configured and activated. Examples of setup activities are connecting to a data store, starting up an HTTP server for API access, and instantiating mock objects. At a system level, Test Setup is where the test data that is to be injected into the system is created.

Most testing frameworks have features that allow you to declare setup code in a consistent, predictable manner. For example, when setting up a test in Java's TestNG, you apply the @BeforeTest annotation to declare your test setup code, like so:

import java.net.MalformedURLException;

import org.testng.annotations.BeforeTest;

@BeforeTest
public void setUpTheTest() throws MalformedURLException {
    // Put your setup code here . . .
}

When testing at the system level, the Test Setup step can be done in the CI/CD tool. For example, when integrating a tool such as TestComplete with Jenkins, system-level Test Setup is facilitated through the Jenkins control panel.

Test Execution

Test Execute is the phase in which your tests run. Test execution can take the form of invoking an algorithm or modifying the state of the system. For example, if you have an application that turns a color JPEG into one that is black and white, you will execute that behavior in Test Execute. Or, say you are testing to see that new customer data can be added to the system, thus changing system state; this too is done in Test Execute.

The way you operationally invoke execution will depend on the type of testing you are doing. At the unit test level, the execute phase might do nothing more than call a function. At the system integration level, you might have to inject data among a variety of mobile devices, in real time or by way of device emulation. When it comes to running Test Execute in a performance testing scenario, you might need to make hundreds, maybe thousands, of HTTP calls to an API endpoint URL to see the load that the servers can handle.
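
To make the phase concrete at the unit level, here is a sketch of the grayscale example above. ImageConverter and toGrayscale are hypothetical names standing in for the application code under test, and the assertion is deliberately thin because verification belongs to the Test Assert phase:

import org.testng.Assert;
import org.testng.annotations.Test;

// Hypothetical stand-in for the application code under test
class ImageConverter {
    byte[] toGrayscale(byte[] colorJpeg) {
        return colorJpeg; // real conversion logic would live here
    }
}

public class ImageConverterTest {

    @Test
    public void executesTheGrayscaleConversion() {
        byte[] colorJpeg = {(byte) 0xFF, (byte) 0xD8}; // placeholder JPEG bytes

        // Test Execute: invoke the behavior under test
        byte[] result = new ImageConverter().toGrayscale(colorJpeg);

        Assert.assertNotNull(result); // full verification happens in Test Assert
    }
}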

Test Assertion

Test Assert is the phase that confirms that the results expected from executing a given test are met. How you assert is usually determined by your testing strategy. If you are taking a pure Behavior Driven Development (BDD) approach, your assertions will be limited to behavior. For example, imagine a method, Add(x,y), that returns the sum of two numbers. A BDD assertion against the Add(x,y) method is that the result of the test is the sum of the two numbers submitted.

All you care about is that the method behaves according to expectation: you submit the numbers 1 and 2, and the result is 3. The scope of the test is not to confirm the mechanics of the function, only that the result meets expectation.
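
Rendered in Java, the behavior-only assertion looks something like this minimal sketch; the add method is a hypothetical stand-in for Add(x,y):

import org.testng.Assert;
import org.testng.annotations.Test;

public class AddBehaviorTest {

    // Hypothetical stand-in for the Add(x,y) method under test
    private int add(int x, int y) {
        return x + y;
    }

    @Test
    public void addingOneAndTwoYieldsThree() {
        // Given 1 and 2, when they are added, then the result is 3.
        // Nothing about how add() works internally is asserted.
        Assert.assertEquals(add(1, 2), 3);
    }
}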

If you take a more granular testing approach, as advocated by the London School of Test Driven Development, you are going to want to know not only that the method Add(x,y) provided the expected result, but also that the function executed in the manner expected. Asserting the mechanics of Add(x,y) might require a review of the code coverage metrics associated with test execution. Code coverage output will tell you exactly which lines of code were executed.
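
In practice, the London School usually pins down mechanics by asserting on interactions with collaborators, typically through mocks. Here is a hedged sketch using the Mockito library; the AuditLog collaborator and the AuditedCalculator class are invented for illustration:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import org.testng.Assert;
import org.testng.annotations.Test;

// Hypothetical collaborator the calculator is expected to call
interface AuditLog {
    void record(String operation);
}

class AuditedCalculator {
    private final AuditLog log;
    AuditedCalculator(AuditLog log) { this.log = log; }
    int add(int x, int y) {
        log.record("add"); // the "mechanics" a London School test pins down
        return x + y;
    }
}

public class AuditedCalculatorTest {

    @Test
    public void addComputesTheSumAndRecordsTheOperation() {
        AuditLog log = mock(AuditLog.class);
        AuditedCalculator calculator = new AuditedCalculator(log);

        Assert.assertEquals(calculator.add(1, 2), 3); // the behavior
        verify(log).record("add");                    // the mechanics
    }
}

Note that this swaps coverage review for interaction verification; both are ways of asserting mechanics rather than behavior alone.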

When taking the London School approach to testing an API, not only will you need to assert that the API returns the expected result, for instance that a unique identifier is returned when new customer profile data is submitted, but the test assertions also need to confirm that the internals of the API call executed to expectation. And, if you have a layer of web or mobile UI that calls the API, and you are testing against the UI, you will need to assert that all parts within the scope of testing meet expectation: UI, API, and subsystems included. As you can see, making test assertions according to the London School approach is a thorough undertaking.

Context Driven Testing (CDT) takes a different approach to test assertion. In the CDT sensibility, the most important thing to be asserted is that the software under consideration provides the solution for the problem it's intended to solve. Thus, assertions will be made in a holistic manner. Tests might be performed within a CI/CD process, or they might be performed manually. Test results can also be asserted by code. Results might be reported verbally by 10 people with cell phones. The most important thing to understand is that assertion will take place within the context of the software, and that context includes not only the code, but the people making the software and the people using the software. If the software is made by a company that has a QA budget of millions of dollars, there is probably going to be a lot more automation, orchestration, and tooling.

If assertions are made by a startup composed of 3 people, with limited resources for QA, testing might be conducted manually by friends and family according to an improvised test script. Over time, as the enterprise enjoys more success and budgets increase, test execution and test assertion might become more formal. But the crucial consideration in terms of CDT is that there is always testing in force and that testing is focused on the problem the software is solving, within the current context of the software.

Test Teardown

Test Teardown is the cleanup phase. Teardown is the place where an issue such as data pollution is addressed. Data pollution occurs when a test puts some data in a data store and thus alters the state of the data store away from the initial state. For example, you could have a test that checks that an item added to a sale is reflected in an invoice. The sale is executed in Test Execute by injecting data and the expected invoice behavior is verified in the Test Assert step.

However, after that work is done, the sale item data still exists in the data store. The data injected by the test might have unanticipated side effects on subsequent tests. Thus, the data store must be set back to its known good state. This work happens in teardown.

Also, teardown is where you sever server connections that were established in setup. For example, imagine a situation in which Test Setup started a localhost web server configured to run on port 7071. Test Setup invokes the web server using a startup script. The test executes and assertions are made. However, the web server is still running despite the fact that the testing is over. Unless the localhost web server is shut down in Test Teardown, a Port in Use error will be thrown when the next test tries to start the server up again.
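
A minimal teardown sketch covering both examples might look like the following, using TestNG's @AfterTest annotation; the TestDataStore and LocalWebServer types, and the fields they populate, are hypothetical handles created during Test Setup:

import org.testng.annotations.AfterTest;

// Hypothetical types representing resources created in Test Setup
interface TestDataStore { void deleteSaleItem(long id); }
interface LocalWebServer { void stop(); }

public class SaleInvoiceTest {

    // Handles assigned during Test Setup; the names are hypothetical
    private TestDataStore testDataStore;
    private LocalWebServer localWebServer;
    private long injectedSaleItemId;

    @AfterTest
    public void tearDownTheTest() {
        // Return the data store to its initial state by removing
        // the sale item injected during Test Execute
        testDataStore.deleteSaleItem(injectedSaleItemId);

        // Stop the localhost web server started in Test Setup so that
        // port 7071 is free when the next test starts it again
        localWebServer.stop();
    }
}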

Reporting

Test reporting is an important aspect of any testing regimen and a critical part of the Software Testing Lifecycle. Without reporting, there is no idea of code quality. To use a phrase bandied about among QA engineers, "A test without reporting ain't." We need reporting to know that all the tests passed, or which ones failed, as well as the degree of failure. Also, reporting provides the historical data needed to evaluate the software's behavior over time.

Test reporting is a broad landscape, with many tools and techniques available to meet the reporting need at hand. Typically you decide on the reporting methods and frameworks to be used during the Test Planning phase of the STLC. The important thing to remember when planning your test reporting is to make sure the reports are accessible for automated analysis. Having the results of a test suite stored as a solitary file on a network drive offers limited opportunity in terms of modern testing practices. A single, isolated test report provides no historical perspective on how your software performed over time. Did failure rates go up? Does an analysis of performance indicate that new code is making the application slow down?

In order to answer these questions you need to analyze many test reports. Thus, the emerging trend in test reporting is to conceptualize test results as events to be logged for historical analysis. You can use the report integration features of a product such as TestComplete to send test results to an information repository such as Jira or Bugzilla for subsequent review and analysis. Some companies have devised ways to send test results to a robust log aggregation service such as Splunk or Sumo Logic as log output. Once tests are integrated into the CI/CD process, and the results are stored in a common data store, you can extend the CI/CD process to implement report aggregation and support the ongoing analysis of test results in a historical context.
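
As a sketch of the results-as-events idea, a TestNG listener can emit one structured log line per test for a shipper to forward to the aggregation service. This assumes TestNG 7 or later, where the listener methods have default implementations, and the log-line format is an invented example:

import org.testng.ITestListener;
import org.testng.ITestResult;

public class ResultEventListener implements ITestListener {

    @Override
    public void onTestSuccess(ITestResult result) {
        logEvent(result, "PASS");
    }

    @Override
    public void onTestFailure(ITestResult result) {
        logEvent(result, "FAIL");
    }

    // Emit one structured line per test; a log shipper tailing standard
    // output forwards the events to the aggregation service
    private void logEvent(ITestResult result, String status) {
        System.out.printf("test=%s status=%s durationMs=%d%n",
                result.getName(), status,
                result.getEndMillis() - result.getStartMillis());
    }
}

You register the listener in testng.xml or with the @Listeners annotation; from there, any log shipper that tails standard output can forward the events.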

Putting It All Together

Using the Software Testing Lifecycle pattern is becoming standard operating procedure for incorporating predictable quality assurance practices into the Software Development Lifecycle. The STLC is similar to the Software Development Lifecycle in that it takes a phased approach to implementation, one in which the process moves through a sequence of steps, where each step has a particular concern. Although there are different interpretations of the number of steps that compose the Software Testing Lifecycle, the semantics of the pattern overall are consistent: plan, test, analyze, report. This article described a six-step approach: plan, set up, execute, assert, tear down, and report.

Regardless of the degree of granularity you think best for your organization, the critical concept to understand is that the steps in the STLC pattern are cyclical. You never reach the end; rather, you reach the last step and then start over again.

The days of conceptualizing testing as the final step in the software development process have come to a close. The demands of the modern marketplace require that testing be a continuous part of the Software Development Lifecycle. In fact, there is a good case to be made that the STLC should be applied to each phase of the Software Development Lifecycle.

Experienced software developers and quality assurance personnel understand the benefit of testing fast and testing often. The beauty of the Software Testing Lifecycle is that it offers both the structure and flexibility needed to implement testing procedures that are useful to any enterprise, regardless of size, from the 5-employee startup to the established business that employs hundreds of developers. The STLC pattern is simple, flexible, and reliable. It's easy to adopt, thus making it a valuable tool in the Quality Assurance toolbox.