What is Functional Testing

Functional Testing, Demystified

I'd wager that asking 20 people to define "functional testing" would yield 20 different answers. Of course, that's not unique in the software industry. You'd get equally varied results for definitions of the terms "agile" or "RESTful". Ours is an industry constantly breaking new conceptual ground, so terms emerge and subsequently confuse. People use the same words to mean different things.

You can probably understand why this happens with a term like functional testing. After all, the adjective functional is both vague and rather subjective. Say I announce that my car is functional. To some, that represents an announcement that it's operating properly or even well. But others might interpret that as a sarcastic indictment of the car, indicating that it barely works.

Now, take that vagueness and subjectivity of the term in general, and add a software-specific connotation for the word. A function represents a granular unit of software source code, so functional can now also function as an adjective in that sense. Confused? You should be. But not to worry. Functional testing is an established, fairly tangible concept. To understand it, you just need some proper definitions that clarify and remove the inherent English baggage of the word.

Arriving at the actual working definition of functional testing requires you to understand the specific connotation of the word functional. In this context, functional refers to the functionality of the application.

When people build software, they build it to perform a whole array of tasks. Functional testing is the process by which you evaluate whether the software actually performs the tasks for which it was designed. More colloquially, it asks whether the software meets the requirements. Functional testing is thus what we might rationally think of as the most fundamental kind of software testing. Say I build a small utility that takes a list of words and sorts them into alphabetical order. Functional testing of this utility would involve feeding it various word lists and then seeing whether it yielded properly sorted lists.

Of course, a thorough functional testing strategy wouldn't stop there. It would also address how the program behaved when given an empty list and whether it properly communicated its limitations. If you give the tiny utility a list of quintillions of words, does it freeze your computer indefinitely? Or does it quickly tell you that you've submitted too large a list?
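To make this concrete, here's a minimal sketch in Python of what such functional tests might look like. The sort_words function and its MAX_WORDS limit are hypothetical stand-ins for our tiny utility, invented purely for illustration.

    MAX_WORDS = 1_000_000  # assumed documented limit for the utility

    def sort_words(words):
        # Communicate limitations rather than grinding away forever.
        if len(words) > MAX_WORDS:
            raise ValueError("word list too large")
        return sorted(words)

    # Happy path: does the utility meet its core requirement?
    assert sort_words(["pear", "apple", "fig"]) == ["apple", "fig", "pear"]

    # Edge cases: empty input and a clearly communicated limit.
    assert sort_words([]) == []
    try:
        sort_words(["word"] * (MAX_WORDS + 1))
        assert False, "expected the oversized list to be rejected"
    except ValueError as error:
        assert "too large" in str(error)

Notice that these tests know nothing about how the sorting happens. They only check that given inputs produce the promised outputs, which is exactly the functional testing mindset.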

You might think that the importance of this activity goes without saying. But I want to say it anyway, because the true answer might just surprise you. And the true answer is also fundamental to building a good functional testing strategy. On the surface, the importance of functional testing seems to be about making sure the software does what you meant for it to do. You write a utility to sort word lists, so you want to make sure that what you built actually sorts word lists. And, while ensuring the software does what you meant it to do is certainly important, it's only a secondary priority.

The true priority -- the true purpose -- of functional testing involves confirming to your users or stakeholders that you're delivering what you promised them. Functional testing ensures that the software is fit for their purposes. This can have formal implications, such as a company paying you to build software conforming to a spec. Or the implications can be merely implicit, such as building a "contact us" page into your company's website. In either case, unexpected or problematic behavior of the software has real business consequences. Functional testing aims to prevent that outcome.

If you go back to our running example of the word list sorter, thinking about functional testing in any detail seems like overkill. You feed the utility a few lists and see if it sorts them. And, yes, for such a tiny program, your functional testing strategy would be pretty simple. But none of us earn a living writing or testing programs this simple.

Once you get into the realm of non-trivial software, you need to adopt a meaningful methodology. The first step here is to have some means of capturing end user requirements. In more traditional shops, this involves enormous requirements specification documents talking about what the software shall and shan't do. With the rise of agile methodologies, you see requirements expressed as use cases or user stories.

Functional testing involves turning the requirements into some kind of testing plan. Historically, in the world of waterfall development, this would happen after the requirements, design, and implementation phases, in what was called the "testing phase." With the more agile concepts of use cases and user stories, the design of functional testing plans has moved closer and closer to the conception of the requirements themselves. In some of the most mature, high-functioning shops you'll see, business analysts now work with developers and QA during the conception of requirements to design acceptance tests. This makes sense when you think about it: as you define a requirement, you should also define how you'll know when you've successfully met it.
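To illustrate, here's a rough sketch of a requirement captured as executable acceptance tests, written in Python with pytest-style test functions. The user story and the submit_contact_form function are hypothetical, invented just for this example.

    # Story: "As a visitor, I can submit the contact form and
    # receive a confirmation." Acceptance criteria follow as tests.

    def submit_contact_form(name, email, message):
        # Stand-in for the real application code under test.
        if not email or "@" not in email:
            return {"ok": False, "error": "invalid email"}
        return {"ok": True, "confirmation": f"Thanks, {name}!"}

    def test_valid_submission_is_confirmed():
        result = submit_contact_form("Ada", "ada@example.com", "Hi")
        assert result["ok"] and "Thanks" in result["confirmation"]

    def test_invalid_email_is_rejected():
        result = submit_contact_form("Ada", "not-an-email", "Hi")
        assert not result["ok"]

Written this way, the acceptance criteria and the tests are the same artifact, so nothing drifts between what was promised and what gets verified.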

But in all cases, functional testing involves turning requirements into an executable test plan, ready for those responsible for the functional testing.

What I just described is the most ubiquitous form of functional testing. Typically executed by a QA group, the comprehensive plan comes together to answer the question, "is the software really doing what we promised?" But this isn't the only way you can achieve functional testing. You have other, less obvious ways to get an answer to this important question.

One involves the idea of "eating your own dog food" or, simply, dogfooding. Dogfooding is a practice whereby a software vendor internally uses the software it sells to others. Think of a company that makes bug tracking software using its own product to track its own bugs. This doesn't involve any specific plan, but it provides a lot of value by letting you put the software through its paces in actual use.

Speaking of putting software through its paces, you can also let your users do this. Generally speaking, this is the entire idea of a beta test. You enlist friendly customers as part of your functional testing effort. They use the software in actual production scenarios and give you feedback on its fitness for purpose.

One last subtle strategy that I'll mention is the feature toggle. If your deployment pipeline is sophisticated enough, this lets you roll out new features with no fanfare and then roll them back if anything goes wrong or looks fishy. Think of this as running micro-beta tests on unwitting users. The optics are a little different, but this is just another creative form of functional testing.
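Here's a bare-bones sketch of the idea in Python. A real pipeline would read flags from a configuration service rather than a hard-coded dictionary, and all of the names here are assumptions for illustration.

    def legacy_checkout_flow(cart):
        return {"total": sum(cart), "flow": "legacy"}

    def new_checkout_flow(cart):
        return {"total": sum(cart), "flow": "new"}

    FLAGS = {"new_checkout": True}  # flip to False to roll back instantly

    def checkout(cart, user_id):
        # Route a small slice of users through the new code path.
        if FLAGS["new_checkout"] and user_id % 100 < 5:  # ~5% of users
            return new_checkout_flow(cart)
        return legacy_checkout_flow(cart)

    assert checkout([10, 20], user_id=3)["flow"] == "new"
    assert checkout([10, 20], user_id=42)["flow"] == "legacy"

The rollback story is the whole appeal: if the new flow misbehaves in production, you flip one flag instead of redeploying.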

What I've talked about so far should give you a fairly concrete idea of what functional testing involves. But let's round that out a bit with a study in contrasts. If functional testing does involve aligning software behavior with user expectations, what does it not involve?

First, functional testing is not white box testing. (Indeed, people often refer to functional testing as black box testing.) White box testing involves understanding and examination of the system's internals. For a straightforward example, think of the developers' automated unit test suite. These tests understand how the classes and methods within the software work together -- a level of knowledge that end users would neither care about nor understand.
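For contrast, here's a quick hypothetical sketch of a white box unit test in Python. It exercises an internal helper -- an implementation detail no end user would ever see or care about.

    def _normalize(word):
        # Internal detail: comparisons ignore case and surrounding spaces.
        return word.strip().lower()

    def test_normalize_strips_and_lowercases():
        assert _normalize("  Apple ") == "apple"

A functional test would never reference _normalize directly; it would only observe the sorted output from the outside.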

Secondly, functional testing is a different concern than the helpfully named non-functional testing. Functional testing, in all its flavors, evaluates the system's expected behavior. This includes, of course, straightforward verification, but also concerns like checking for behavior regressions and so-called "sanity testing." Non-functional testing, on the other hand, evaluates things that would matter to your users but that wouldn't show up in a spec. These include testing for adequate security, behavior under heavy loads, fail-over strategies and, generally, things you know your users need and that you have to provide from an operational perspective.

With both definition and differentiation in mind, I'll close by talking about how you can take all of this and build a solid functional testing strategy. In a complex world, you'll never be able to test every scenario that your users could conceivably care about. But how can you achieve as much coverage as possible?

First of all, leverage automation everywhere you can. If you find yourself writing out easily repeatable procedures (test cases) for humans to execute by rote, understand that you can almost certainly automate this. And hopefully, you're writing acceptance tests with QA as you're defining requirements. If so, work with the developers as well, because they can leverage automated acceptance test frameworks. Only through automation can you efficiently achieve broad checks against new functionality as well as a robust regression suite.
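As a small illustration, here's how a rote manual procedure might become an automated, table-driven check using pytest's parametrize feature. The cases are invented, reusing our word-sorting example.

    import pytest

    CASES = [
        (["b", "a"], ["a", "b"]),              # basic ordering
        ([], []),                              # empty list
        (["same", "same"], ["same", "same"]),  # duplicates preserved
    ]

    @pytest.mark.parametrize("given,expected", CASES)
    def test_sorting(given, expected):
        assert sorted(given) == expected

Adding a new regression check becomes a one-line change to the table, which is exactly the kind of leverage a rote manual procedure can never give you.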

Leveraging automation then frees up your testing personnel to put their smarts and creativity to use. They can combine their knowledge of testing strategy and their knowledge of the domain to do exploratory testing -- creatively exercising usage scenarios that others hadn't thought of while defining the requirements.

Functional testing is an activity that requires both organized planning and creativity. And, done right, it should involve everyone in the development organization. Once the entire team participates and marches to a clear, coherent strategy, you will truly have demystified functional testing.