Quality Engineering Basics: Types of Testing
Part 4 of Quality Engineering Basics. In this series, I’ll cover some basics of Quality Engineering and Software Testing. Today's topic is types of testing.
In our exploration of Quality Engineering basics, we’ve discussed bug reporting, bug retrospectives, and debugging. While each plays an important role, the heart of quality engineering and software testing is, fundamentally, testing itself.
I originally intended to write about exploratory testing today. However, I realized it might be best to start with an overview of various types and categories of testing. Each of these has its place within your software testing strategy, and getting to know them will help build a solid testing foundation. Once we get that out of the way, we can take a closer look at exploratory testing next week, in part 5 of the series.
Understanding Testing Types
Having a shared understanding of testing types can be invaluable when crafting a comprehensive testing strategy, and is essential for avoiding confusion.
Imagine this scenario:
You manage a team of QA engineers, and you and your team refer to the primary work they do as functional testing.
Your boss calls this same work acceptance testing, or sometimes system testing.
Another person uses the term system testing when talking about automated tests at the system level, but her definition doesn’t include manual testing.
If someone asks whether the system tests passed, what would your answer be? How would the other people in this example answer?
If instead everyone used the term functional testing, with an agreed-upon meaning, there would be far less chance of confusion or misinterpretation.
With this in mind, let's explore key categories that highlight the wide range of testing practices.
Functional vs. Non-Functional Testing
One of the biggest distinctions in types of testing is functional vs. non-functional. A good testing strategy should include both functional and non-functional testing. However, I have often observed testers who fall into the trap of looking only at the functional aspects, diligently covering each of the software’s functional requirements, while omitting the non-functional.
Functional Testing focuses on verifying that the software meets the specified requirements. It can include types like unit, integration, system, and acceptance testing.
Non-functional Testing looks at aspects of the software not directly covered by functional requirements, such as performance, security, usability, and compatibility testing.
Without non-functional testing, the probability of bugs escaping is high. For instance, compatibility testing is essential for mobile apps. Just look at the variety of Android phones on the market: they vary widely in screen size, OS version, and hardware performance. Without compatibility testing, an app might work well on one device but fail on another, potentially affecting many customers. If you test primarily on the latest high-end devices, but most of your customers are using older, low-end devices, there's a testing gap, and a high likelihood that those customers will face performance and other compatibility issues.
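To make the distinction concrete, here is a minimal sketch in Python (the apply_discount function and the timing budget are hypothetical examples, not from any particular codebase). The first test checks what the code does; the second is a deliberately crude non-functional check of how fast it does it:

```python
import time

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# Functional test: does the code meet the specified requirement?
def test_discount_is_applied():
    assert apply_discount(100.0, 20) == 80.0

# Non-functional test: is it fast enough? (a crude performance check)
def test_discount_is_fast_enough():
    start = time.perf_counter()
    for _ in range(10_000):
        apply_discount(100.0, 20)
    assert time.perf_counter() - start < 1.0  # generous time budget

test_discount_is_applied()
test_discount_is_fast_enough()
```

In a real project these would live in a test runner, and performance would usually be measured with a dedicated load-testing tool rather than a stopwatch inside a unit test, but the split is the same: one test verifies behavior, the other verifies a quality attribute.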
Level of Testing
Tests can also be described by the level of testing: how much of the software is being tested, and at what granularity. This is what you’d typically see in a testing pyramid diagram, which I’ll write more about in the future.
Unit Testing is the smallest building block of testing, used to test individual units of code in isolation. Unit tests are typically written by developers as they write the code, and frequently before the code itself, particularly when following a Test-Driven Development (TDD)1 approach.
Integration Testing is the next step up from unit testing. Integration testing checks how multiple units or components of code work together.
System Testing, also known as end-to-end (e2e) testing, covers the entire, working system.
Acceptance Testing is similar to system testing, with system-level coverage. However, acceptance testing usually focuses on specific end-user requirements. In some situations, these tests are performed by business analysts, or with actual customers in their environment.
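The first two levels are easy to sketch with a toy example (both functions here are hypothetical). The unit test exercises parse_price in isolation; the integration test checks that parse_price and total work correctly together:

```python
def parse_price(text: str) -> float:
    """Hypothetical unit: parse a price string like '$19.99'."""
    return float(text.lstrip("$"))

def total(prices: list[str]) -> float:
    """Hypothetical component built on top of parse_price."""
    return sum(parse_price(p) for p in prices)

# Unit test: one function, in isolation.
assert parse_price("$19.99") == 19.99

# Integration test: two units working together.
assert total(["$1.50", "$2.50"]) == 4.0
```

System and acceptance tests follow the same idea at a larger scale, typically driving the deployed application through its UI or API rather than calling functions directly.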
Manual vs. Automated Testing
This category differentiates tests based on their execution method.
There has been a lot of emphasis on test automation in recent years. Automation offers efficiency and consistency when done right, and reduces the overhead of repetitive manual testing. Manual testing, on the other hand, allows for exploration and variation, which is often the key to discovering certain bugs.
In my experience, both are necessary to deliver a high-quality product.
Manual Testing involves human effort to execute tests. It allows for real-user interaction with the software, making it ideal for exploratory, usability, or ad-hoc testing scenarios.
Automated Testing uses software tools to run tests automatically, executing predefined actions and comparing the outcomes to expected results. This approach is efficient for regression, performance, and load testing, ensuring consistency and saving time over multiple test cycles.
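As a sketch of what automated testing can look like (the slugify function is an invented example), Python’s built-in unittest framework executes predefined actions and compares the outcomes to expected results, giving the same answer on every test cycle:

```python
import unittest

def slugify(title: str) -> str:
    """Hypothetical function under test: build a URL slug from a title."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    """Predefined actions and expected results, re-runnable on every cycle."""

    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Quality Engineering Basics"),
                         "quality-engineering-basics")

    def test_lowercase_word_is_unchanged(self):
        self.assertEqual(slugify("testing"), "testing")

if __name__ == "__main__":
    unittest.main(argv=["slugify_tests"], exit=False)
```

Once written, this suite can run unattended in a CI pipeline on every change, which is exactly where automation pays off for regression testing.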
Static vs. Dynamic Testing
This often-overlooked category distinguishes tests by whether the software is executed during testing or not.
Frequently, static testing is seen as the domain of developers only. However, more technical testers can contribute to code and documentation reviews. Also, I am a firm believer in testers being involved early and often, and that includes reviewing requirements and design docs, which are often overlooked aspects of testing.
Static Testing involves reviewing code, documentation, and other project artifacts without executing the code. Techniques include walkthroughs, reviews, and static analysis tools. It aims to identify issues early in the development lifecycle.
Dynamic Testing requires running the software and validating its behavior against expected results. It covers a wide range of testing types, from unit to acceptance testing, focusing on functional and non-functional requirements.
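Static testing is easy to demonstrate precisely because the code under review never runs. Here is a toy static check in Python (the snippet and the rule are invented for illustration): it parses source text with the standard-library ast module and flags overly broad except clauses, even though process() is never defined or executed:

```python
import ast

# Source code to review statically; note that it is never executed.
SOURCE = """
def handler(data):
    try:
        process(data)
    except Exception:
        pass
"""

# A tiny static-analysis rule: flag except clauses that swallow all errors.
findings = []
for node in ast.walk(ast.parse(SOURCE)):
    if isinstance(node, ast.ExceptHandler):
        broad = node.type is None or (
            isinstance(node.type, ast.Name) and node.type.id == "Exception"
        )
        if broad:
            findings.append(f"line {node.lineno}: overly broad except clause")

print(findings)  # → ['line 5: overly broad except clause']
```

Real static analysis tools (linters, type checkers, security scanners) apply hundreds of rules like this one across a whole codebase, and code or document reviews are the human-powered version of the same idea.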
Regression vs. New Feature vs. Bug Fix Testing
We can also talk about testing in terms of whether we’re testing existing functionality or new code.
Regression Testing is the process of checking existing functionality to ensure it still works as expected.
New Feature Testing focuses on new or changed code.
Bug Fix Testing is just what it sounds like — checking to make sure a bug fix fixes the bug in question.
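All three kinds can coexist in the same test file. A tiny hypothetical sketch (word_count and its bug history are invented for illustration):

```python
def word_count(text: str) -> int:
    """Hypothetical function; an earlier version returned 1 for blank input."""
    if not text.strip():
        return 0  # the bug fix: blank input used to be miscounted
    return len(text.split())

# Bug fix test: pins down the exact bug that was reported.
assert word_count("") == 0
assert word_count("   ") == 0

# Regression tests: existing behavior still works after the fix.
assert word_count("one") == 1
assert word_count("hello world") == 2
```

A good habit is to keep every bug fix test as a permanent regression test, so the same bug can’t quietly return later.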
Additional Testing Types
In addition to the categories and types defined above, there are many more such as smoke testing, build acceptance testing, exploratory testing, accessibility testing, sanity testing, beta testing, internationalization testing, and localization testing, just to name a few.
Wrapping Up
As you can see, there are many ways to refer to and categorize types of testing.
When building out a testing strategy, I recommend the following:
Establish a shared testing language within your team or organization, so that everyone knows exactly what each term means.
Check how many of these types/categories are covered.
Determine who’s responsible for each type of testing.
Get testers involved early and often!
Automate the repetitive, boring stuff.
Next Up: A Deep Dive into Exploratory Testing
Next week, in part 5 of Quality Engineering Basics, I’ll do a deep dive into Exploratory Testing, and share why it’s one of my favorite testing types.
Conversation Starters:
Did any of the testing types/categories or their definitions surprise you?
Does everyone you work with have a shared understanding of testing language/terminology?
What type of testing do you enjoy the most?
What type of testing would you like to learn more about?
Paving the way for quality, one test at a time,
Brie
To learn more about TDD, I recommend Kent Beck’s book Test-Driven Development or his Canon TDD post here on Substack.