
When to Go for Automation Testing and How to Avoid Common Mistakes

  1. When to Use Automation in Software Testing?
  2. Why We Need Automated Tests
  3. Why Automation Falls Short of Expectations
  4. Automating UI Tests Only Is Wrong
  5. The Way Out
  6. Testing Pyramid
  7. Complex Approach
  8. Bottom Line

This is an adaptation of the original article by Yaroslav Pernerovsky published on DOU.ua, a Ukrainian IT community and blog.

Many of you have heard that using automation is a good idea, but not everyone clearly understands how it benefits a project. Some expect to achieve 100% bug-free software. Others believe that QA automation engineers will cover all tasks more effectively than manual testers. Automation testing is full of myths and misconceptions. If you decide to go for automation testing, keep in mind that it is a rather expensive investment whose advantages pay off in the long term. So let's try to figure out when to go for automation and how to avoid pitfalls along the way.

When to Use Automation in Software Testing?

You certainly need test automation in the following cases:

  • You’ve got a project that lasts for a year or longer. The number of regression tests to run is rapidly increasing. Testers risk getting stuck in a routine, repeating a small number of the same test cases instead of focusing on overall quality.
  • You have a distributed development team or more than two people writing code. The developers should be certain that the changes they make won’t break someone else’s code. Without autotests, they find out about the issues in a day or two in the best-case scenario. In the worst case, users will report critical bugs when the software is already in production. 
  • You support several product versions, constantly releasing patches and service packs for each. Testing different configurations is routine work, and routine should be minimized.
  • You are developing a service that processes different kinds of data. Entering large volumes of data manually to run tests and analyze the results is too time-consuming and resource-intensive for human beings.
  • You work according to agile principles, with short iterations and frequent releases. There is no time for manual regression runs within the sprint, yet the team still needs to be certain that recent code changes haven't broken existing functionality or introduced critical bugs.

If your project doesn't look like any of these, you probably don't need to bother with automation.

Why We Need Automated Tests

Speeding up the testing process isn't the only benefit that comes with automated tests. Several other factors determine how efficient automation turns out to be:

  • test coverage integrity;
  • clear and reliable results;
  • development and support costs;
  • ease of launching tests and analyzing results, etc.

The main performance indicators of automation are speed, wide test coverage, and cost-efficiency. This is what you need to take into consideration.

Any kind of automation reduces the amount of routine work, and test automation is no exception. However, there is a common misconception that autotests should completely replace the work of manual testers, and that scripts are enough to check a product. This is nonsense. No script can replace a living person (yet). A script can replay actions programmed by a person and signal that something has gone wrong. A script can do simple checks quickly and without human intervention, but it can't TEST.


Why Automation Falls Short of Expectations

There are quite a few reasons why automation might not meet your expectations. All are somehow related to incorrect development or management decisions, and sometimes to both.

Management decisions deserve an article of their own, but for now we'll just highlight the most damaging errors without going into detail.

  • An attempt to save on hiring automation QA specialists. If a manager believes the company can simply pay for Selenium courses for its employees and they will become automation pros, that belief rests on a few myths that need debunking.
  • An attempt to introduce automation without a well-thought-out strategy and planning, a "let's implement it and then we'll see" approach. The only thing worse is automation for the sake of automation: "That other company has it, so I need it, too."
  • Starting too late: automation begins only when manual QA engineers are exhausted and can no longer handle the scope of work.
  • A belief that hiring students to run regression tests manually is cheaper. It means no one is going to implement automation on the project, even though it is essential.

Development decisions are the ones software engineers make while designing automation strategies and implementing them. This includes the choice of tools, types of testing, frameworks, etc.

Let’s take a closer look at some development mistakes. 

Automating UI Tests Only Is Wrong

The most common mistake is the decision to automate only tests for the graphical user interface. The idea doesn't seem bad at first, and it may even work well for quite a long time. Automating UI tests alone can even be sufficient if the product is already at its final stage and no longer evolving. As a rule, however, this isn't a good long-term solution for a project that is still being developed.

UI tests simulate how users interact with an application. It may seem that UI testing is the most logical starting point for automation, but there are a couple of nuances:

— UI tests are unstable.

— UI tests are slow.

They are unstable because the tests depend on the layout of the app's interface. If you change the order of buttons on the screen or add or remove an element, the tests may break. The automation tool won't be able to find an item, or it will click the wrong button and derail the test logic.

The more tests you have, the more time you need to spend on fixing and supporting them. As a result, you cannot rely on these tests due to frequent false-positive results. At some point, a QA engineer spends all their time correcting errors in scripts that failed instead of writing new ones.

These tests are slow because the application interface is slow, too: it needs to redraw, load resources, wait for data to appear, etc. The test script just waits for the UI to catch up, and that's wasted time. Moreover, a test may fail because it tries to use an element that a slow UI hasn't managed to render yet.

The Way Out

Stabilization. We deliberately dramatized the instability. The issue isn't difficult to solve, but automation QA engineers often prefer not to even try.

The first thing you need to do is make sure developers don't forget to add unique attributes to the elements so that an automation tool can identify each of them. Give up multi-level XPath expressions and CSS selectors and use unique IDs, names, etc. wherever possible (see the sketch below). This requirement should be explicitly stated in the development guidelines and included in the developers' definition of done.

Be ready for an excuse like "that's overhead for the developers." Perhaps it is, but the team needs to add unique IDs only once and can then forget about them forever. This simple practice saves an automation QA engineer hundreds of hours.
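
For illustration, here is a minimal Selenium WebDriver sketch in Java contrasting a layout-dependent XPath locator with a locator based on a unique ID. The URL, the markup in the comment, and the "submit-order" ID are invented for the example, not taken from a real project.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LocatorExample {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            driver.get("https://example.com/order");  // placeholder URL

            // Fragile: tied to the current layout; reordering or wrapping elements breaks it.
            // driver.findElement(By.xpath("//div[@class='form']/div[2]/span/button[1]")).click();

            // Stable: relies on a unique attribute the developers add on purpose.
            driver.findElement(By.id("submit-order")).click();  // hypothetical ID

            driver.quit();
        }
    }

The second locator survives layout changes as long as the ID itself stays in place.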

An application under test should be testable. If that's impossible, you either modify the code or give up on automating this app altogether.

Besides, it is good practice to configure the automation tool to wait patiently for the moment an element becomes available for interaction, as in the sketch below.
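
A minimal sketch of such a wait, assuming Selenium 4's WebDriverWait and ExpectedConditions; the URL and the "generate-report" ID are hypothetical.

    import java.time.Duration;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class ExplicitWaitExample {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            driver.get("https://example.com/dashboard");  // placeholder URL

            // Wait up to 10 seconds for the element to become clickable
            // instead of failing because the UI hasn't rendered it yet.
            WebElement report = new WebDriverWait(driver, Duration.ofSeconds(10))
                    .until(ExpectedConditions.elementToBeClickable(By.id("generate-report")));
            report.click();

            driver.quit();
        }
    }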

Speeding up. While instability is simple enough to handle, slow tests have to be addressed comprehensively, since the problem affects development as a whole.

The simple steps that speed things up are to deploy the application and run the tests on faster hardware, and to avoid cases where network delays interfere with the interaction between the tests and the application. In other words, solve the hardware and test-architecture problems first. This alone can make test runs at least twice as fast.

Also, consider running test cases independently and in parallel, despite the limitations: the logic of the app under test can't always be exercised from multiple threads. Such situations are quite specific and rare, but they do happen. Hardware capacity matters here, too. A possible setup is sketched below.
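
One possible setup, assuming the suite runs on JUnit 5: independent test classes are marked for concurrent execution, and the platform property junit.jupiter.execution.parallel.enabled=true is set (for example in junit-platform.properties). The test bodies below are placeholders.

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.parallel.Execution;
    import org.junit.jupiter.api.parallel.ExecutionMode;

    // Runs this class's tests concurrently once parallel execution
    // is enabled via junit.jupiter.execution.parallel.enabled=true.
    @Execution(ExecutionMode.CONCURRENT)
    class IndependentRegressionTests {

        @Test
        void createsOrder() {
            assertTrue(true);  // placeholder for an independent test case
        }

        @Test
        void cancelsOrder() {
            assertTrue(true);  // placeholder for another independent test case
        }
    }

The constraint from the paragraph above still applies: only tests that don't share mutable state in the application can safely run in parallel.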

The most radical step is to create as few UI tests as possible. Fewer tests mean earlier results.

Testing Pyramid

Do you remember the famous testing pyramid?

Testing pyramid
If you put a blunt razor under the small pyramid in the evening, it will become sharp again by morning ©.

The pyramid is a very convenient metaphor. It clearly illustrates the desired number of automated tests at each level of the system architecture: there should be a lot of low-level unit tests and very few high-level UI tests. The question is, why is that, and why does the pyramid matter?

Everything is pretty simple. Do you recall how finding and fixing an issue proceeds during manual testing? First, a developer makes changes to the code. A QA engineer waits until a new build is assembled and deployed to the test environment, then runs a check, finds an issue, and creates a ticket in the bug tracking system. The developer immediately responds and fixes the problem.

New changes go into the code, then come a new build, a new deployment, and a retest. If everything is fine, the QA engineer closes the ticket. All in all, it takes from a couple of hours to several weeks to fix an issue after it is reported.

What happens when this test is automated? You still have to wait for the new build and for the tests to complete. Then you analyze the results of the test run. If there are issues, you have to determine their origin: is it the app or the test code? Then you rerun the failed test manually to confirm the problem was identified correctly.

You open a ticket and wait until the bug is fixed, then rerun the test, make sure everything is fine, and close the ticket. Again, it takes from a couple of hours to several weeks. But while the tests run, the QA engineer keeps working on other tasks.

Automated tests that use an API to communicate with the back end are a different story. There are some attractive options to consider:

— The team runs tests on a deployed application with all external systems functioning. The time for execution and analysis shrinks since there are fewer false positives. The rest is pretty much the same as with UI tests.

— Tests run on a finished build without deploying it to the test environment, with external systems replaced by stubs. The tests check the build itself, and issues don't require tickets since a developer fixes them immediately. The gain in bug detection and fixing speed is huge.

— And there are unit and component autotests. They don't require a finished build and are launched immediately after a module is compiled. Feedback is instant, and the time between finding and fixing an issue is minimal: only a few minutes.
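
For a sense of scale, a test at the bottom of the pyramid is just a plain check compiled alongside the module; the PriceCalculator class below is invented for illustration, and JUnit 5 is assumed.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // A hypothetical class under test, compiled together with its module.
    class PriceCalculator {
        int totalWithTax(int netCents, int taxPercent) {
            return netCents + netCents * taxPercent / 100;
        }
    }

    class PriceCalculatorTest {

        @Test
        void addsTaxToNetPrice() {
            // No build, no deployment, no UI: feedback arrives in seconds.
            assertEquals(1200, new PriceCalculator().totalWithTax(1000, 20));
        }
    }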

The lower you go down the pyramid, the less time these autotests take and the faster the feedback. Testing becomes very time-efficient, and coverage becomes more effective as well.

Complex Approach

It is important to understand that unit tests check the code: they verify that a piece of code works as intended and doesn't break the overall logic. UI tests are critical too, as they check the whole system, which is what the user will actually interact with.

To get effective results, you need to find the right combination of all kinds of autotests at each level.

Test-Driven Development is no longer a recommendation; it should be the default. TDD helps you avoid problems during refactoring and the development issues typical of large teams.

At the API level, you need to replace functional tests with fast and stable regression tests that can run within the sprint.
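
A rough sketch of such an API-level regression check, using only the JDK's built-in HTTP client and JUnit 5; the endpoint URL and the expected response are made up.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import org.junit.jupiter.api.Test;

    class OrderApiRegressionTest {

        private final HttpClient client = HttpClient.newHttpClient();

        @Test
        void healthEndpointStaysGreen() throws Exception {
            // Hypothetical endpoint; no browser and no rendering, so the check is fast and stable.
            HttpRequest request = HttpRequest.newBuilder(URI.create("https://api.example.com/health"))
                    .GET()
                    .build();

            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

            assertEquals(200, response.statusCode());
        }
    }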

Only demo acceptance tests, the so-called happy-path or end-to-end scenarios, remain at the UI level. This applies to both web and mobile applications.
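
A happy-path UI scenario then stays deliberately short, for example a single login flow. In this sketch the URL, element IDs, and expected page title are all invented.

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    class LoginHappyPathTest {

        private final WebDriver driver = new ChromeDriver();

        @Test
        void userCanLogInAndSeeDashboard() {
            driver.get("https://example.com/login");                 // placeholder URL
            driver.findElement(By.id("username")).sendKeys("demo");  // hypothetical IDs and credentials
            driver.findElement(By.id("password")).sendKeys("demo-pass");
            driver.findElement(By.id("log-in")).click();

            // One coarse end-to-end assertion is enough at the UI level.
            assertTrue(driver.getTitle().contains("Dashboard"));
        }

        @AfterEach
        void tearDown() {
            driver.quit();
        }
    }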

Thus, if you simply follow the pyramid's recommendations, you get very fast tests with excellent coverage while keeping the cost of development and support sane.

Bottom Line

Some projects don't require full automation; auxiliary scripts may be enough to make a tester's life easier. But if we're talking about a long-term project with a huge team working on it and an equally ambitious development plan, automation is essential. Excellent test automation is achievable if you develop autotests at each level of the system architecture. That decision alone is already the key to success.
