When you start to research the basics of software testing, you often end up puzzled by the diversity of testing types. Manual and automated. Functional and non-functional. Load, stress, security, regression… It seems the list will never end.
All types of testing have different purposes and tasks. But does that mean you have to apply all of them on every project? Or is there a useful cheat sheet that helps you skip some of them without giving up quality?
The good news is that the types of testing you need always depend on the particular project. A good QA team will choose them for your individual case. As a software testing company, we can say that for sure.
So here’s the list of methods and types of software testing we typically use for quality assessment and improvement. Each comes with a brief explanation – as brief as their diversity allows.
1. Automated testing
Software code always needs a thorough check, even if it seems perfectly written. You can test code by checking software features when they are ready for use – by installing an app, trying to log in, clicking on the links and buttons, and so on.
This process can be manual, when a person clicks on buttons to check everything, or automated, when a QA engineer uses a special program to run a check. Automated testing requires writing code for the check-up. Automation is good for complex applications with large functionality.
Why do you need automated testing?
Automated testing covers frequently repeated tasks and the largest part of software functionality. A manual check in these cases is too time-consuming and thus inefficient.
The advantages of automated testing include:
Repeatability. All tests always run the same way, and you can run them anytime.
Fast execution. An automated script doesn’t need to consult instructions and documentation, which saves a lot of time.
Lower support costs. As testers need less time to support the scripts and analyze tests, there are fewer working hours to pay for.
Independence. A QA engineer can spend time on other useful tasks while a test is in progress. Engineers often leave tests to run after hours, when the load on local networks is low.
Automation can optimize resources. It helps save time, avoid human error, and maximize the performance of a testing team. When the specialists don’t have to deal with a large number of routine tasks, they can focus on more uncommon tasks that do require human input.
Who needs automated testing?
If you have a long-term project with opportunities for development and scaling, a QA team will need to repeat some tests frequently. The slightest code changes can cause new bugs, and a team has to check everything all over again.
After you write automated tests, you can quickly start them at any time and get highly accurate results. Automation simplifies project maintenance and reduces the costs of testing.
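To illustrate the shape of an automated check, here is a minimal sketch in Python. The login function and its validation rules are hypothetical stand-ins for a real application, not an actual API; in practice a runner such as pytest would collect the test functions automatically.

```python
# Minimal automated check for a hypothetical login routine.
# The function and its rules below are illustrative assumptions.

def login(username: str, password: str) -> bool:
    """Toy stand-in for an application's login endpoint."""
    return username == "alice" and password == "s3cret"

def test_login_succeeds_with_valid_credentials():
    assert login("alice", "s3cret") is True

def test_login_rejects_wrong_password():
    assert login("alice", "wrong") is False

def test_login_rejects_unknown_user():
    assert login("bob", "s3cret") is False

if __name__ == "__main__":
    # A runner like pytest would discover these automatically;
    # calling them directly keeps the sketch self-contained.
    test_login_succeeds_with_valid_credentials()
    test_login_rejects_wrong_password()
    test_login_rejects_unknown_user()
    print("all checks passed")
```

Once written, this suite can be re-run after every code change at essentially no cost, which is exactly the repeatability advantage described above.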
When is it better than manual testing?
Remember that automated testing is not an alternative to manual testing. It is an efficiency booster, not a replacement. A QA team always applies both for higher efficiency. The coverage of automation depends on the project type and the peculiarities of a company's business processes.
Manual testing can be repetitive and boring. Automation can help to avoid these downsides, as a computer will do everything for a tester. Also, automation saves time, effort, and money. One can run an automated test over and over with minimal effort.
Where appropriate, we would recommend automating the types of testing that repeat most often – for example, regression, smoke, and performance checks.
To decide whether automation is relevant, you need to answer the question “do the benefits outweigh the costs in this case?” If you cannot find a single part where automation is feasible and reasonable, it is better to stick to manual testing. Don’t go with automation for the sake of automation only.
2. Manual testing
During manual testing, QA engineers don’t use any other tools but hands and documentation. Even before automating the testing of any application, you have to perform a series of tests manually. Without manual testing, we can’t be sure if automation is feasible at all.
So manual testing is a direct interaction of a QA engineer with an application. A person can get immediate feedback about the product, and that’s impossible if you use automated testing.
With manual testing, we can get information about the state of the product much faster. It takes time to write autotests and even more time to change and update them.
Why is manual testing important?
A QA team can get feedback from a product owner while development is in progress. Say the owner notices a feature in an almost-ready product that would be better changed before the release. If you have already prepared automated tests for this feature, you will need to rewrite them. Updating autotests and re-checking them takes up valuable time that could be spent checking the feature itself.
When is manual testing enough?
Manual testing fits any software, while automation works only for stable systems. Some types of testing, such as ad-hoc or exploratory testing, can only be performed manually.
Manual testing may be a time-consuming and lengthy process, but there is no need for automation on small short-term projects. Besides, automation can be quite expensive at the early stages of development as keeping the tests up to date requires resources.
But what to do if you want to regularly add new functionality and keep up with the competitors? Before creating auto tests, always check the product capabilities manually. In this case, manual testing speeds up the process, especially for mobile development.
There is one more highlight of manual testing. When it comes to user interface design, automated tools are helpless. No software is smart enough to estimate the nuances of color tones and their effect on a user.
3. Functional testing
The task of functional testing is to verify that the software features meet the functional requirements. Simply put, expectations and reality should match: what the product developers deliver should be what was expected initially.
So when to perform functional testing?
When you need to verify the requirements specified in the documentation.
When you need to verify that the app provides expected business processes.
QA engineers conduct functional testing based on their personal experience, project documentation, communication with a client, and expectation of app behavior. The advantages of functional testing are the imitation of real-user behavior and wide coverage with a variety of tests.
To cover these two aspects, we perform several types of software testing – feature, full-system, and regression testing.
Feature testing
At this stage, a tester interacts with an app like a real user. Sometimes software features meet the requirements but are difficult for an end-user to understand. To avoid this, a testing team follows the user path by clicking all buttons and links, filling in forms, entering promo codes, interacting with popups, etc. The main task is to mimic user behavior as closely as possible.
Full-system testing
It shows whether the system as a whole meets the requirements. In other words, after separate modules of the system are integrated, it is necessary to make sure they still work as expected. Sometimes bug-free units turn out to be incompatible when brought together and start behaving unpredictably, and we should make sure that is not the case.
Regression testing
This is one of the key regular check-ups. Regression testing takes place after every code modification. Sometimes updates affect seemingly unrelated features. For example, adding a new barcode to the database of a calorie counter may break the layout of the weekly report page. The purpose of regression testing is to verify that the latest updates haven’t affected the unchanged part of the functionality.
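The barcode example above can be sketched as a regression test in Python. The food database, report format, and baseline data are all illustrative assumptions; the point is that the expected output is captured once and re-checked after every change.

```python
# Regression sketch: after adding new barcodes to the food database,
# verify the (unrelated) weekly report still renders as before.
# All names and data here are illustrative assumptions.

FOOD_DB = {"123456": "Apple", "789012": "Oat bar"}

def weekly_report(entries):
    """Render a fixed-width weekly summary from (day, kcal) pairs."""
    lines = [f"{day:<9} {kcal:>5} kcal" for day, kcal in entries]
    return "\n".join(lines)

# Baseline captured before the change; reused after every update.
EXPECTED = "Monday     1800 kcal\nTuesday    2100 kcal"

def test_report_layout_unchanged():
    got = weekly_report([("Monday", 1800), ("Tuesday", 2100)])
    assert got == EXPECTED

if __name__ == "__main__":
    FOOD_DB["555555"] = "New snack"   # the unrelated change
    test_report_layout_unchanged()    # the layout must survive it
    print("regression suite passed")
```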
4. Performance testing
Performance testing is a way to learn how quickly a system or a part of it works under a certain load. It also provides an insight into different attributes of the system quality – scalability, reliability, resource consumption, etc.
All projects need performance testing. It is equally important for e-commerce and booking platforms, for fitness and gaming apps. It is essential for mobile, desktop, and web applications.
Users always want stable and flawlessly functioning apps. Performance testing is the way to make sure the software meets these expectations. There are several types of performance testing, each focusing on a different aspect of an app’s performance.
Load testing
It is the basic form of performance testing. It estimates app behavior under an expected load – an average number of users, a number of simultaneous transactions, etc.
All types of projects need load testing. Before a release, a team should be certain an app will withstand the load. Monitoring databases, servers, and networks helps detect weak spots.
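A minimal load-test sketch might look like this in Python. The simulated endpoint, user count, and latency budget are all illustrative assumptions; a real check would target an actual service with a tool built for the job.

```python
# Load-test sketch: fire an "expected" number of concurrent requests
# at a simulated endpoint and check latency stays within budget.
# The endpoint, user count, and budget are illustrative assumptions.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request() -> float:
    """Simulated endpoint; returns its own response time in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)                  # stand-in for real work
    return time.perf_counter() - start

def run_load(users: int = 50) -> list:
    """Send `users` concurrent requests and collect latencies."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        return list(pool.map(lambda _: handle_request(), range(users)))

if __name__ == "__main__":
    latencies = run_load(50)
    worst = max(latencies)
    assert worst < 1.0, f"latency budget exceeded: {worst:.3f}s"
    print(f"50 users served, worst latency {worst:.3f}s")
```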
Stress testing
It estimates software reliability under extreme or disproportionate loads and helps you understand the system’s capacity. For example, e-commerce platforms need to run stress tests before big sales. Streaming platforms needed to run stress tests when users went into lockdown. Game apps need stress tests before the release of long-awaited updates. Stress testing shows whether the system performance is sufficient when the load greatly exceeds the expected standard.
Volume (flood) testing
It allows a QA team to find out how many users a system can handle, or what amount of data it can process, before productivity drops and stability becomes unacceptable. We are mainly interested in how an increase in the number of users or the amount of stored data affects the execution time of operations.
It is essential to check the functionalities that process large volumes of data or use complex queries to retrieve it. For example, some learning apps keep all finished lessons available and store them as user data. The app becomes heavy, but it should still be fast.
Stability (reliability) testing
The purpose of this type of software testing is to ensure that an app can withstand an expected load for an extended time. A QA team models an increased load to check memory consumption and identify potential leaks. This is how you discover performance degradation that leads to slower information processing and/or longer response time.
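One way to model this check is to run the same operation many times and compare memory snapshots to spot the growth that hints at a leak. The sketch below uses Python's tracemalloc; the "leaky" cache is a deliberately planted illustration, not real application code.

```python
# Stability sketch: repeat an operation and compare memory snapshots.
# The leaky cache below is an illustrative assumption planted on purpose.
import tracemalloc

_cache = []

def process(record: str, leak: bool = False) -> str:
    result = record.upper()
    if leak:
        _cache.append(result)   # simulated leak: cache is never evicted
    return result

def memory_growth(iterations: int, leak: bool) -> int:
    """Bytes gained between snapshots taken before and after the run."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    for i in range(iterations):
        process(f"record-{i}", leak=leak)
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before

if __name__ == "__main__":
    steady = memory_growth(10_000, leak=False)
    leaky = memory_growth(10_000, leak=True)
    print(f"steady run grew {steady} B, leaky run grew {leaky} B")
```

A healthy operation frees its temporary allocations, so its growth stays near zero across iterations, while a leak accumulates linearly with the number of runs.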
Failover testing
It validates the ability of the system to allocate extra resources under pressure. If the load rises, the software should be able to engage backup systems to keep functioning. The effect of server failures, reaching a performance threshold, and other events beyond our control on user experience should be minimal. The system has to keep things going for a while without manual intervention until the affected functionality is fixed.
5. Security testing
There is a difference between security testing in general and two of its types – penetration and compliance testing. It may seem puzzling for some, so let’s figure out what makes them different.
A QA team always conducts an iterative check of software functionality and infrastructure to detect the weak spots. Security testing starts at the initial stages of designing a product. It may include risk assessment, vulnerability scanning, code control, etc.
Penetration testing looks for the ways to outsmart an information system and circumvent its protection protocols. A team gets a report with a list of detected vulnerabilities, attack vectors, achieved results, and recommendations for improvement.
Penetration testing considers not only software and hardware, but also stored data, organizational activities, company documentation, and various business processes. The results of a pen test depend on hardware metrics, staff actions, and the consistency of operational processes as much as on code quality. By the way, we perform only penetration testing.
Compliance testing
It is used to verify that software meets general standards. These standards are developed by large organizations like the W3C (World Wide Web Consortium) or niche enterprises (for example, in healthcare software). The compliance check is not a must for all types of software. For high-stakes products, this stage can involve a board of regulators and compliance experts working on the product analysis.
6. Usability testing
Usability testing is conducted to assess the convenience, intuitiveness, usefulness, and satisfaction of using the product for the end-user. There are three types of tests we recommend and provide.
User testing
It is a close look at the personal experience of using an app or any other type of software. At this stage, a number of people model the behavior of real users to find the drawbacks QA professionals might have missed.
“Missed” here is not about being incompetent or inattentive, but about viewing an app from the development point of view. Developers and users may have different understandings of how a “good” app should work. In some ways, user testing resembles a focus group, but it is performed on an individual basis.
Accessibility testing
As a rule, developers adapt an app for a wide range of users, including people with disabilities. Accessibility testing checks software for compliance with the recommendations of the Web Content Accessibility Guidelines (WCAG) 2.1 to make sure it is suitable for users with disabilities.
Most often, applications are adapted for people with hearing or visual impairments, and sometimes for users who cannot use a keyboard or other manual input device. Accessibility assessment, however, is important not only for people with disabilities.
It is helpful for all users in certain circumstances. For example, in a noisy environment you can watch videos without sound but with subtitles. Or, on the contrary, when reading is inconvenient, you can use a listening mode.
People often confuse usability and UX, which stands for “user experience”. What’s the difference? UX is the complete experience that comes with a product. It can go beyond features and move into a broader concept of customer experience, supported by all the marketing materials and presentations.
Usability is about the product being simple and easy to use. It is usually measured by the number of issues that cause frustration while using a product (or their absence). There are five main criteria of usability:
Usefulness shows how well a product solves users’ tasks.
Efficiency determines how quickly and easily a user can perform the necessary actions.
Learnability illustrates how easy it is for the user to understand the game or application.
Satisfaction reveals positive emotions a person gets from the use of a game or an app and the desire to recommend it to friends.
Accessibility identifies the ability to use the application for people with disabilities.
Levels of testing
Every type of testing includes a set of activities, each with its own objective, strategy, and deliverables. The goal of each testing type is to verify its particular objective or send the code back for rework. In simple terms, you run stress tests to make sure the system can withstand a sudden increase in users. If it can’t, it is necessary to make some changes.
Levels of testing are a different concept. A level of testing determines the area covered – an individual unit, a group of units, or the system as a whole. Testing on different levels takes place throughout the software development life cycle. It is key to successful product release and maintenance.
There are four levels of testing:
Unit testing studies each separate element of the system. Usually, developers cover these tests, so let’s focus on the remaining three levels that fall within our area of responsibility.
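A minimal unit-test sketch might look like this in Python; promo_discount is a hypothetical unit invented for illustration, and each test exercises it in complete isolation from the rest of the system.

```python
# Unit-test sketch: one function is checked in isolation.
# promo_discount and its codes are illustrative assumptions.

def promo_discount(price: float, code: str) -> float:
    """Apply a promo code; unknown codes leave the price unchanged."""
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(price * (1 - rates.get(code, 0.0)), 2)

def test_known_code_applies_discount():
    assert promo_discount(100.0, "SAVE10") == 90.0

def test_unknown_code_is_ignored():
    assert promo_discount(100.0, "NOPE") == 100.0

if __name__ == "__main__":
    test_known_code_applies_discount()
    test_unknown_code_is_ignored()
    print("unit tests passed")
```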
Integration testing
This level comes after unit testing. It helps figure out whether units function properly together. In other words, we test two or more related units to verify that the integration was successful and no critical bugs popped up.
There are two approaches to integration testing that influence the way we perform it:
Bottom-up testing. We put together all the low-level modules and test them. After that, the next level of units is assembled for testing. This continues until the functionality is fully covered. This approach works well when all (or almost all) units of a level are ready.
Top-down testing. The team starts by checking high-level units and gradually adds low-level ones. QA engineers use module stubs to simulate units that are not ready yet. Later, the finished units replace these stubs.
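The top-down approach with stubs can be sketched as follows; the checkout flow and payment gateway are hypothetical names invented for illustration, and the stub is built with Python's unittest.mock.

```python
# Top-down integration sketch: the high-level checkout flow is tested
# while the not-yet-ready payment unit is replaced by a stub.
# All names here are illustrative assumptions.
from unittest.mock import Mock

def checkout(cart: list, payment_gateway) -> str:
    """High-level unit under test; delegates to a lower-level unit."""
    total = sum(cart)
    receipt = payment_gateway.charge(total)
    return f"paid {total:.2f}, receipt {receipt}"

if __name__ == "__main__":
    # Stub standing in for the unfinished low-level payment module.
    gateway_stub = Mock()
    gateway_stub.charge.return_value = "R-001"

    result = checkout([10.0, 5.0], gateway_stub)
    assert result == "paid 15.00, receipt R-001"
    gateway_stub.charge.assert_called_once_with(15.0)
    print("integration check passed with a stubbed gateway")
```

Once the real payment module is ready, it replaces the stub and the same test verifies the actual integration.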
System testing
The name speaks for itself. On this level, we check the whole system for bugs. A tester investigates the relations between all hardware and software components, then checks the way the system functions.
By this time, the software is ready for a potential release. A QA team conducts tests on all browsers or operating systems and runs different types of testing. We can divide all tests into groups:
Functional testing is meant to detect the mismatches between actual and expected behavior of different features. These tests cover all implemented functions and take into account the most probable types of errors.
Non-functional testing focuses on software characteristics that can be measured by different values – speed, capacity, etc. It is a set of tests that reveal how the system performs.
Last but not least, there is acceptance testing. This level comes directly before handing software to a customer. It is a final check verifying that the product meets the customer’s requirements. Either a QA team or a client’s team can run this check.
There are three main types of acceptance testing, each with its specific goal:
Sanity testing looks into software behavior in detail every time we get a relatively stable build. It validates that the important parts of the functionality work as intended. For example, during sanity testing you check whether a user can install the app and log in after an update.
Smoke testing is executed every time we get a new project build that is yet unstable. We need to make sure that all the critical features of the software under test work as expected. The idea is to identify serious problems as early as possible and reject a current build at an early stage of testing without wasting time on obviously defective software.
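A smoke suite can be sketched as a short list of critical checks run against every new build; the checks below are trivial placeholders standing in for real launch and login probes.

```python
# Smoke-test sketch: a handful of critical checks run against every
# new build; any failure rejects the build immediately.
# The check functions are illustrative placeholders.

def app_starts() -> bool:
    return True   # stand-in for "the process launches"

def login_page_loads() -> bool:
    return True   # stand-in for "the entry screen renders"

SMOKE_CHECKS = [app_starts, login_page_loads]

def smoke_test() -> bool:
    """Return True only if every critical check passes."""
    for check in SMOKE_CHECKS:
        if not check():
            print(f"build rejected: {check.__name__} failed")
            return False
    return True

if __name__ == "__main__":
    print("build accepted" if smoke_test() else "build rejected")
```

The suite stays deliberately shallow and fast: its job is to reject an obviously broken build early, not to test features in depth.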
Ad-hoc/monkey testing is an ‘informal’ check that doesn’t require any documentation or planning. It is an improvisation, so a QA team doesn’t need to prepare test cases and scenarios. Ad-hoc is helpful when the deadlines are tight and there is little time for consistent testing. A QA engineer relies on their general understanding of the application and common sense.
The goal of software testing is to cover the maximum number of testing scenarios. The more bugs you detect, the higher the chances of releasing flawlessly functioning software. There are some types of software testing, like user testing, that any company can handle without professional QA expertise. The rest, however, require a more serious approach. We hope our article helps you better understand what exactly a QA company checks and keep up with the improvements during the testing stage of the SDLC.