If there’s one thing you need to know about good software development practice, it’s that good developers test their work. As a business owner, you should be testing everything that goes in front of a user – if you want the best results.
But it’s challenging to act on that advice when there are no examples to make testing clearer, give you more information, and reduce the chances that an idea will be misapplied to real-life situations.
So, in this post, let's review seven practical examples of how the UTOR QA team uses regression testing.
To refresh your memory — or introduce you to the concept — we’re going to explain regression testing and its importance. Then, we’ll dive into a few regression testing examples.
While the same tests may not get you the same results, they can inspire you to run tests of your own.
Code changes are common at every stage of software development. While or after those changes are implemented, it's hard to predict whether they will adversely affect other dependent features.
Therefore, to deal with this unpredictability and ensure that the delivered product is top-notch, regression testing must be included in the software build's life cycle.
For example, suppose Product X has a programmed function that triggers a series of events when the assigned buttons are clicked: verification, acceptance, and sending automated emails.
Say a minor problem is detected in the triggered email process, and to deal with it, the development team has to make slight changes to the code structure.
The modifications themselves affect only the automated email triggers, but regression testing won't stop at the email triggers.
The verification and acceptance processes will also be cross-checked to make sure their functions are still intact and that the code change didn't cause further issues, no matter how small.
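To make this concrete, here's a minimal Python sketch of what such a regression check could look like. The function names (`verify_order`, `accept_order`, `send_confirmation_email`) are hypothetical stand-ins for Product X's real code, not an actual implementation:

```python
# Hypothetical sketch: Product X's button flow re-tested after the email fix.

def verify_order(order):
    """Verification step: the order must name a buyer and a product."""
    return bool(order.get("buyer")) and bool(order.get("product"))

def accept_order(order):
    """Acceptance step: only verified, in-stock orders are accepted."""
    return verify_order(order) and order.get("in_stock", False)

def send_confirmation_email(order):
    """Email trigger: the step whose bug was just fixed."""
    if not accept_order(order):
        return None
    return f"Order confirmed for {order['buyer']}: {order['product']}"

def run_regression_suite():
    """Re-check the fixed email path AND the untouched dependent steps."""
    order = {"buyer": "Ada", "product": "camera", "in_stock": True}
    return {
        "verification": verify_order(order),
        "acceptance": accept_order(order),
        "email": send_confirmation_email(order) is not None,
    }
```

After the fix, all three checks must pass, not just the email one — that's the essence of regression testing.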
No particular programming language, advanced or basic (Java, Python, etc.), is required to carry out regression tests.
It's simply a method of testing a software build to verify implemented modifications and ensure that existing components in related areas aren't adversely affected by them.
The process is considered successful as soon as it's confirmed that the reported bugs have been fixed and every other aspect of the software build remains intact.
When a fresh build comes up for verification, the testing team performs a functional test to confirm that the new functions work as specified and interact well with the preexisting ones.
Then, the testers check whether the previous functions still work as they did before. This verifies that the new modifications didn't introduce issues into otherwise perfectly functional modules.
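As a rough illustration, that two-phase check might be sketched like this in Python, with hypothetical pass/fail callables standing in for real test cases:

```python
# Illustrative two-phase check: functional tests on the new changes first,
# then a regression re-run of the old suite. Case names are made up.

def run_suite(cases):
    """Pretend runner: each case maps a name to a pass/fail callable."""
    return {name: check() for name, check in cases.items()}

def verify_build(new_cases, old_cases):
    """Functional test of the new changes, then regression on old functions."""
    functional = run_suite(new_cases)   # phase 1: the fresh functionality
    regression = run_suite(old_cases)   # phase 2: everything that worked before
    return all(functional.values()) and all(regression.values())
```

A build is only considered green when both phases pass together.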
|Recommended: Learn everything about regression testing.|
Regression testing is usually needed whenever there is a change in specifications and the system's code structure, requiring a series of tests to ensure that the modifications don't affect other related and non-related components of the software build.
This testing is also required whenever new features are integrated into the software build, and whenever bugs, defects, and other issues are fixed before deployment.
First of all, for a regression test to take place, certain conditions must warrant that the test be performed.
To start with, there must be reports of a malfunction in the code.
Once the reports are confirmed and the offending code is identified, it is broken down further to understand how and why the problems are present.
Naturally, the next step would be to take the steps needed to change and fix the affected areas.
Immediately after all bug issues are dealt with, the next step is to perform the regression test by choosing and running useful tests. In this case, the selected tests are put into two categories.
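One common way to choose those useful tests is to map changed modules to the test cases that cover them. The sketch below assumes a hand-maintained mapping; the module and test names are invented for illustration:

```python
# Illustrative selective-regression sketch: pick only the test cases that
# cover the modules touched by the fix. The mapping is hypothetical.

TEST_MAP = {
    "email": ["test_email_trigger", "test_email_format"],
    "checkout": ["test_checkout_total", "test_checkout_stock"],
    "gallery": ["test_gallery_load"],
}

def select_regression_tests(changed_modules, test_map=TEST_MAP):
    """Return every test case covering a changed module, deduplicated."""
    selected = []
    for module in changed_modules:
        selected.extend(test_map.get(module, []))
    return sorted(set(selected))
```

If only the email code changed, only the email-related cases are queued, which keeps the regression run focused and fast.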
We'll illustrate how regression tests are run with a project involving an image processing software build.
The explanation is based on a real-life scenario and covers both manual and automated regression tests.
But first, let's look at common variations of this test and what each one focuses on:
-- Bug Regression: Here, issues that are said to have been fixed are retested specifically.
-- Old fix method: Here, issues and bugs previously dealt with are all retested to be sure those areas remain intact. This was the original idea behind regression.
-- Conversion/Port method: In this scenario, the program is transferred to a different platform, and this type of regression test is performed to verify that the transferred program integrates successfully. The modifications are mostly those within the newer environment rather than the older one.
-- Configuration method: Here, a later model of the application or device in use is introduced, and the program is run on or together with it. This example is quite similar to conversion testing, except that the original code and platform don't change; only the environment and a few units integrated with the software do.
-- General Functional method: A retest is done on a larger scale. This covers every area, including those that were formerly functioning correctly, to check whether any of the newer modifications affected them adversely. This is the original idea behind automated regression testing.
-- Build Verification: This example is as much an integral part of regression testing as it is of any software's testing life cycle. Passing a new build by this method doesn't require many test cases or a bulky suite. Usually, the build is inspected for any broken or faulty areas to determine whether it's worth testing at all, or whether any of the modified parts of the new build fail to integrate as intended. When a build doesn't pass this smoke test, it is usually rejected in its entirety rather than sent back with error reports for piecemeal fixing.
-- Localization method: This example entails adapting the program to a foreign language and an unfamiliar set of cultural conventions. It will most likely involve many tests, old and new, although most of the old tests will have been modified to fit the new language environment.
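As a small illustration of the Build Verification idea above, here's a sketch of a smoke-test gate in Python; the specific checks (`compiles`, `app_starts`, `login_works`) are invented for the example:

```python
# Hypothetical smoke-test gate: a handful of cheap checks decide whether a
# build is worth putting through the full regression suite at all.

def smoke_test(build):
    """Run quick sanity checks; return (passed, list_of_failure_reasons)."""
    failures = []
    if not build.get("compiles", False):
        failures.append("build does not compile")
    if not build.get("app_starts", False):
        failures.append("application fails to start")
    if not build.get("login_works", False):
        failures.append("login screen is broken")
    return (len(failures) == 0, failures)

def gate_build(build):
    """Reject the build outright on smoke failure; otherwise queue regression."""
    passed, reasons = smoke_test(build)
    if passed:
        return "run full regression"
    return "reject build: " + "; ".join(reasons)
```

A build that fails any of these cheap checks is rejected before a single regression case runs, which is exactly the gatekeeping role described above.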
Regression testing doesn't only expose underlying problems. It can also integrate with any other testing technique, mainly because any test can be used more than once; in that sense, almost any test can serve as a regression test.
Let's go on to the project scene.
After every iteration, a regression test was performed manually, and the procedure depended on the test cases realized as the project went on.
The team also ran a test suite similar to the sanity procedure, containing over 150 test cases that were cross-checked from time to time to weed out obsolete cases.
This practice kept the suite evenly balanced within the agreed range (150-200 cases) and avoided disorderly, unnecessary overflow.
The periodic regression tests performed on the build revealed that some options in the image gallery were modified often, and each modification affected dependent functionalities.
The project in question involves an image capturing and processing application running on the latest Android operating system (Android 11), built for a tech company focused on digital image processes such as saving, displaying, and printing.
The app was already integrated with a trademarked camera so that owners of compatible Android devices could take high-definition photographs.
The team handling the project held regular stand-up meetings. The time frame was about 3 years, and a good number of professionals were involved at various stages, about 5-15 in all, including up to 5 test engineers.
Automated testing is unarguably the easiest way to sift through a large number of malfunctions: it performs a broad range of tests within a shorter time frame, reaching deployment and release even faster.
During the highlighted project, automating the test suites saved a lot of time on test repetitions and surfaced issues that weren't identified the first time around during manual testing.
These issues were caught in the automated test phase mainly because of the allowed intervals, or timed waits, for which this type of test is known: if the allowed time passes without success, the run is reported as failed and repeated until it passes.
Tedious as it may sound, this is the only effective way to find occasional issues that aren't always obvious. By contrast, a problem that isn't detected immediately will most likely stay undetected in manual testing, since virtually no test engineer would go through the stress of iterating a manual test indefinitely.
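The timed-wait-and-retry pattern described here can be sketched roughly as follows; the timeouts and function names are illustrative, not taken from the project:

```python
# Sketch of the timed-wait pattern: an automated check polls for a condition
# within an allowed interval; if time runs out, the run is recorded as failed
# and retried. Timings here are illustrative.
import time

def wait_for(condition, timeout=2.0, poll_interval=0.05):
    """Poll `condition()` until it's true or the allowed time passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    return False

def run_with_retries(condition, max_runs=3, timeout=2.0):
    """Repeat a timed check; record each run so flaky failures stay visible."""
    results = []
    for _ in range(max_runs):
        passed = wait_for(condition, timeout=timeout)
        results.append("passed" if passed else "failed")
        if passed:
            break
    return results
```

Because every failed run is recorded before the retry, intermittent failures leave a trace instead of silently disappearing, which is what makes this pattern good at catching occasional issues.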
|Recommended: Learn about automated regression testing of software.|
As stated before, regression tests can also be run between builds and across release batches. In this example scenario, we'll look at how test cases are carried out on three builds of the same software (Build #1, #2, and #3), each in a different environment.
First, the build's requirements will be given, and then the project team starts to work on the layout and structure.
After the structure is set up, the QA team begins creating the tests the product will have to undergo, say about 1,000, enough to guarantee a thorough and successful test phase for the software build.
Once the tests are written, it's time to run them on the software. After the test cases pass, the product is released to the client for the last phase, an acceptance test; if that succeeds, the product is deployed to production.
Thanks to the success of the 1st build, the customer then requests a few additional functions and provides the specifications for them.
The project team gets to work, and right after, the QA team is on deck to start creating new tests for the added functions. They create only 200 new tests this time around, bringing the total number to 1200 for both batches released.
The QA team gets to work running the tests on the new build, and immediately afterward they retest the older components using the former 1,000 test cases. This verifies whether the additions messed up the already working functions (regression testing).
As soon as both test phases are complete, the product is given to the client, who still performs the mandatory Acceptance testing. If the Acceptance test is successful, the build is deployed for production.
Though satisfied with the 2nd build, the client decides to do away with a function, say "Ads." Once the decision is made and the function is deleted, the next step is to remove every test case that has to do with Ads.
These number about 100. The team then runs the remaining test cases to ensure that every other function is hale and hearty even after scrapping the Ads function. This procedure is classified under regression testing as well.
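The suite bookkeeping across the three builds can be sketched as follows, with invented test-case names standing in for the real 1,000-, 1,200-, and 1,100-case suites:

```python
# Sketch of the suite's evolution across the three builds. All case names
# are made up; only the counts mirror the example in the text.

def build_suite():
    """Build #2's suite: Build #1's 1,000 cases plus 200 new-feature cases."""
    base = [f"test_core_{i}" for i in range(900)] + \
           [f"test_ads_{i}" for i in range(100)]          # Build #1: 1,000 cases
    new_features = [f"test_feature_{i}" for i in range(200)]  # Build #2: +200
    return base + new_features

def drop_feature(suite, prefix):
    """Build #3: remove every case tied to the scrapped feature."""
    return [case for case in suite if not case.startswith(prefix)]

suite_v2 = build_suite()                       # 1,200 cases in total
suite_v3 = drop_feature(suite_v2, "test_ads_") # 1,100 cases remain
```

Running `suite_v3` after the Ads cases are pruned is exactly the final regression pass the example describes.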
There you have it: 7 regression testing examples to learn from. While you may be inspired to try one of these, keep in mind that there's no one-size-fits-all in software testing; any approach is valid as long as it's effective at securing a healthy build. Many QA companies like us combine automation and manual testing to achieve full test coverage and meet the intended requirements.
Ultimately, you'll want to choose what's best for your users, niche, budget, and resources. It's up to you to be on top of current testing trends and leverage that knowledge as you create your testing strategy.