Dmytro Vavilkin, Technical Writer

Performance Testing Best Practices: What Teams Ought to Know About Proper QA

The bottom line

Performance testing turns into a wild goose chase when the results aren't adequately measured. You can keep improving performance on and on – only the sky is the limit (and your budget). But at some point, it's important to determine when enough is enough.

What is performance testing all about? Performance testing has to be project-specific, with defined criteria for responsiveness and robustness. Because such criteria vary from project to project, QA engineers rely on performance testing best practices.

UTOR provides both performance and load testing as part of the standard Agile development routine, so here is our take on performance testing best practices.

  1. Use Different Performance Testing Types

It might be tempting to leave certain types of performance testing out of scope because of budget constraints. But that's exactly what you shouldn't do: only full performance test coverage gives you 360-degree visibility into your system's state. Here are some of the most common types of performance testing.

Load Testing

Load testing gives you insight into how the system withstands a flow of users. Allow a certain degree of randomness by throwing various devices into the mix, and then see how the system responds under different conditions. For more details, please check our article on how we perform load testing services.
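
To make this concrete, here's a minimal load test sketch using the open-source Locust framework (one popular option; the host and endpoints below are placeholders, not an actual UTOR setup):

```python
# Minimal Locust load test: simulated users browse the site with
# randomized think time, approximating a realistic flow of traffic.
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # Each simulated user pauses 1-5 seconds between requests,
    # adding a degree of randomness to the load pattern.
    wait_time = between(1, 5)

    @task(3)  # weighted: browsing the home page is 3x more frequent
    def browse_home(self):
        self.client.get("/")

    @task(1)
    def view_product(self):
        self.client.get("/products/1")  # placeholder endpoint
```

Run it headless with, for example, `locust -f load_test.py --host https://example.com --users 500 --spawn-rate 10 --headless`, then mix in different device profiles (user agents, network conditions) to add the randomness described above.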

Stress Testing

Any system fails when conditions get critical. You'd better be aware of those conditions in advance and avoid unpleasant surprises down the road. The idea of stress testing is relatively simple: apply extreme load and identify the breaking points.
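
Here's a rough sketch of that idea in Python (assuming the `requests` library and a placeholder target URL): ramp up concurrency step by step and stop once the error rate crosses a threshold, which marks the breaking point.

```python
# Stress-test sketch: increase concurrent requests in steps until the
# error rate exceeds a threshold, i.e. the system's breaking point.
import concurrent.futures
import requests

TARGET = "https://example.com/"   # placeholder target
ERROR_THRESHOLD = 0.05            # stop once >5% of requests fail

def hit(_):
    """Return True if the request succeeded, False otherwise."""
    try:
        return requests.get(TARGET, timeout=5).status_code < 500
    except requests.RequestException:
        return False

for workers in range(10, 510, 50):
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(hit, range(workers * 10)))
    error_rate = 1 - sum(results) / len(results)
    print(f"{workers} workers -> {error_rate:.1%} errors")
    if error_rate > ERROR_THRESHOLD:
        print(f"Breaking point reached at ~{workers} concurrent workers")
        break
```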

Volume Testing

This type of testing tells you how well a system handles large volumes of data. The distinction can be confusing, since the three concepts mentioned so far are closely related. The key difference is that volume testing is concerned with the system's data integrity. So if a system has a database, QA engineers expand it by adding more data inputs and check whether any data loss occurs.
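
A minimal sketch of that check, using an in-memory SQLite database purely for illustration: bulk-load a large number of rows, then verify that nothing was silently dropped or corrupted.

```python
# Volume-testing sketch: insert a large volume of rows, then verify
# data integrity (no rows lost, sampled values intact).
import sqlite3

ROWS = 1_000_000
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

conn.executemany(
    "INSERT INTO users (name) VALUES (?)",
    ((f"user_{i}",) for i in range(ROWS)),
)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
assert count == ROWS, f"data loss: expected {ROWS}, found {count}"

sample = conn.execute("SELECT name FROM users WHERE id = 1").fetchone()[0]
assert sample == "user_0", "data corruption detected"
print(f"All {count:,} rows intact")
```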

Still weighing load vs stress testing? Here are some key differentiating points between load testing, stress testing, and volume testing.

| | Load testing | Stress testing | Volume testing |
|---|---|---|---|
| What is measured? | The system's performance | The system's stability | The system's bandwidth |
| Does it concern data maintenance? | No | No | Yes |
| Testing conditions | Typical real-life conditions | Critical conditions | Typical real-life conditions |

Scalability Testing

Scalability is another crucial aspect to factor in. This concept goes hand in hand with both QA testing and DevOps practices. The two disciplines tackle the issue from different angles, but the central theme is the same: a system is only as resilient as its ability to scale under test.

Recovery Testing

There are always plenty of things that can go wrong, namely software, hardware, or network failures. Most of the time, it's one of these three components that gives you trouble. When that happens, you'll probably be asking: "Can the system continue functioning? If so, at what capacity?"

Your system ought to be built in a way that ensures stability (or quick recovery, if stability isn't possible) after any of the abovementioned elements goes down. There's only one way to find out whether your system is built this way: thorough recovery testing.
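
One way to quantify recovery, sketched below under the assumption that the system exposes a health endpoint (the URL and timeout are placeholders): trigger the failure out of band, then poll until the system answers again and record how long that took.

```python
# Recovery-testing sketch: after a simulated failure, poll a health
# endpoint and measure how long the system takes to come back up.
import time
import requests

HEALTH_URL = "https://example.com/health"  # placeholder endpoint
MAX_WAIT = 120                             # give up after two minutes

def measure_recovery_time():
    """Return seconds until the health check passes, or None."""
    start = time.monotonic()
    while time.monotonic() - start < MAX_WAIT:
        try:
            if requests.get(HEALTH_URL, timeout=3).status_code == 200:
                return time.monotonic() - start
        except requests.RequestException:
            pass  # still down; keep polling
        time.sleep(1)
    return None  # never recovered within the window

# Kill a process or drop the network first, then:
#   recovery = measure_recovery_time()
# ...and compare the result against your recovery-time objective.
```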

Capacity Testing

Capacity testing draws on aspects of volume, scalability, and recovery testing. However, it's a more strategic approach that aims to estimate when the system and its infrastructure will need an upgrade to meet growing user demand.

Configuration Testing

As we mentioned, there are several components to your system (software, hardware, and network). All three ensure the system's stability and robustness. They may function perfectly on their own, but how do they work together?

Configuration testing uncovers and eliminates potential compatibility issues that might lead to performance degradation. Its ultimate goal is to work out the best possible configuration of all the key system components to meet the system's functional requirements.
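
In practice, this often boils down to running the same checks across a matrix of component configurations. A hedged sketch with pytest follows; the configuration values and the check_system_health helper are hypothetical stand-ins for a real deployment check:

```python
# Configuration-testing sketch: exercise every combination of key
# components to flush out compatibility issues early.
import itertools
import pytest

SERVERS = ["nginx", "apache"]      # hypothetical options
DATABASES = ["postgres", "mysql"]
NETWORKS = ["ipv4", "ipv6"]

def check_system_health(server, database, network):
    # Placeholder: a real suite would deploy the stack with this
    # configuration and run smoke/performance checks against it.
    return True

@pytest.mark.parametrize(
    "server,database,network",
    list(itertools.product(SERVERS, DATABASES, NETWORKS)),
)
def test_configuration(server, database, network):
    assert check_system_health(server, database, network)
```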

  2. Set Up Performance Testing Goals and Metrics

Performance testing goals and metrics must be set up prior to the actual testing. Effective planning includes a set of criteria that closely embody the business objectives and define the testing priorities. Such criteria are based on the following performance testing metrics.

Key Performance Indicators (KPIs) 

A KPI is a project-specific value that defines the degree of testing success. In performance testing, KPIs can be tied to:

  • Response time. That's the time spent fulfilling a server request and generating the corresponding response (which can include DNS lookup, connection time, and redirect time).
  • Requests per second (or hits per second). This metric shows how many user requests the server can process per second and is usually examined as part of frontend performance testing.
  • Wait time (or average latency). This metric is similar to response time, but it measures only the time until the first byte of information arrives from the server, whereas response time covers the fully generated response. A minimal measurement sketch follows this list.
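
The sketch below illustrates the difference between those last two metrics using the `requests` library against a placeholder URL: with streaming enabled, the call returns once headers arrive (roughly the wait time), while reading the body completes the full response time.

```python
# Measuring wait time (time to first byte) vs. full response time.
import time
import requests

URL = "https://example.com/"  # placeholder target

start = time.perf_counter()
resp = requests.get(URL, stream=True)     # returns once headers arrive
wait_time = time.perf_counter() - start   # ~ time to first byte

_ = resp.content                          # now read the full body
response_time = time.perf_counter() - start

print(f"Wait time (first byte): {wait_time * 1000:.0f} ms")
print(f"Response time (full):   {response_time * 1000:.0f} ms")
```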

Business Process Completion Rate

This testing goal defines how system performance affects the ability to reach the desired business objectives through the system. The metrics below feed into the business process completion rate and reflect user expectations.

  • Transactions per second. This one is exactly what it sounds like – the average number of user transactions per second. It's an aggregated metric: you first count all user transactions over the test duration, then break that total down per second (see the sketch after this list). This lets you see your system's performance capabilities in a slightly different light.
  • Average load time. That's the amount of time the browser needs to render a complete webpage (including scripts, CSS, images, and so on). Load time can vary depending on user location, device, and internet coverage.
  • Peak response time. Like response time, it measures the input/output cycle. However, peak response time captures the longest time required to complete a cycle.
  • Spike testing. Strictly speaking, this is a testing type rather than a metric: it checks how the system handles sudden spikes in user activity. The goal is to catch any deviations in system performance under an unexpected rise and fall in user load.
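
Here's a small sketch of the transactions-per-second aggregation described above, with made-up timestamps standing in for what a load tool would log:

```python
# TPS sketch: count all transactions recorded during the test window,
# then break the total down per second.
from collections import Counter

# Completion times (seconds since test start) of each transaction.
transaction_timestamps = [0.2, 0.7, 1.1, 1.3, 1.9, 2.4, 2.5, 2.8, 3.6]
test_duration = 4.0  # seconds

per_second = Counter(int(t) for t in transaction_timestamps)
average_tps = len(transaction_timestamps) / test_duration

for second in range(int(test_duration)):
    print(f"second {second}: {per_second.get(second, 0)} transactions")
print(f"average TPS: {average_tps:.2f}")
```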

Hardware Metrics

The QA team is generally concerned with your software quality. However, your hardware matters just as much. At UTOR, we check basic hardware metrics without digging too deep, unless the project requires it. Here's what we look at:

  • CPU utilization. This basic metric shows the percentage of CPU resources in use. As a rule, you want to keep a portion of your CPU resources in reserve.
  • Memory utilization. The idea is to spot potential memory leaks on each particular server. If performance testing finds that memory usage stays above 60%, chances are you'll run out of resources shortly.
  • Throughput. This metric tells you how fast data travels across the network from one point to another. It's basically your connection speed, as internet providers label it. A monitoring sketch follows this list.
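
A minimal monitoring sketch using the third-party psutil library (assuming it's installed; the 60% threshold mirrors the guideline above):

```python
# Hardware-metrics sketch: sample CPU and memory utilization while a
# load test is running.
import psutil

cpu_percent = psutil.cpu_percent(interval=1)  # sampled over 1 second
mem = psutil.virtual_memory()

print(f"CPU utilization:    {cpu_percent:.1f}%")
print(f"Memory utilization: {mem.percent:.1f}%")

if mem.percent > 60:
    print("Warning: memory usage above 60%, risk of resource exhaustion")
```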

For a more comprehensive list, please check our article on performance testing metrics.

  3. Make Sure You Plan Your Workload Accordingly

Estimate the system's projected workload before you start the actual testing phase. Precise estimates matter for budgeting: otherwise, you can end up overpaying for resources your system doesn't actually need. A rough sizing sketch follows the checklist below.

Keep in mind these questions while planning for your workload:

  • What is it that I’m trying to achieve with this test?
  • How many user interactions per hour should I factor in? What’s the absolute maximum?
  • What are my server and database requirements?
  • Am I getting my priorities straight when planning for testing?
  • What are my key metrics?
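
To turn those answers into numbers, here's a rough sizing sketch based on Little's law (concurrency = arrival rate x average session duration); every figure below is an illustrative assumption:

```python
# Workload-sizing sketch: translate interactions per hour into the
# number of concurrent virtual users to simulate.
interactions_per_hour = 36_000   # expected typical load (assumed)
peak_multiplier = 3              # absolute maximum vs. typical
avg_session_seconds = 90         # how long a user stays active

arrival_rate = interactions_per_hour / 3600        # users per second
concurrent_users = arrival_rate * avg_session_seconds
peak_concurrent_users = concurrent_users * peak_multiplier

print(f"Typical concurrency: ~{concurrent_users:.0f} virtual users")
print(f"Peak concurrency:    ~{peak_concurrent_users:.0f} virtual users")
```
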
  4. Make Sure Your Tests Reflect Real Human Behavior

It's easy to go down the wrong path with automated testing: as much as it eliminates manual work, it can leave you out of touch with real users. As part of performance test environment best practices, consider conducting interviews with a focus group to capture the actual user perspective and avoid this trap.

If you're testing a dating app like Tinder, for instance, you'd need to focus on mobile devices due to the nature of the app (most people use Tinder on the go).

Or, if your objective is to test a streaming service like Netflix, smart TVs and desktop devices would be your priority. That would certainly affect the scope of testing, since you'd suddenly need to cover smart TVs, which fall outside the typical range of test devices.

  5. Test All the Components of the Tech Stack

Some systems are designed in more comprehensive ways, while others are fairly straightforward. Systems with a more sophisticated design typically have more components.

The way each component functions (for example, CPU, network, server, user interface) defines the system's performance as a whole. Each component receives input, processes data, and generates output, which is then verified against the requirements.

  6. Be Ready to Identify Performance Issues

Treat your testing procedure like a doctor examining a patient: be prepared to spot possible symptoms and recognize serious performance issues. Here are some of the most common ones:

Long Page Load Time

A one-second page-load delay reportedly costs Amazon $1.6 billion in annual sales. Pretty impressive, huh? Industries like retail rely heavily on customer experience to win over customers: a frustrated visitor will go looking for a faster website in no time.

Poor Responsiveness

It's 2021, and website responsiveness is paramount. Mobile app performance testing makes your website easier for users by making it accessible to a wider range of devices. Mobile phones rely on cellular networks for wireless internet access, which brings additional optimization challenges, but it's totally worth it.

System Crashing

How bad is system crashing? You'll start receiving negative reviews, and eventually people will stop using your app. Users will tolerate an average daily crash rate of only about 0.25%.

There's a number of reasons why your app might crash, the main ones being poor memory management, excessive code, and device incompatibility.

Data Losses 

Remember the Facebook and Cambridge Analytica story that caused so much media resonance? The worst thing that can happen to your company is being labeled as a business that neglects customer privacy. It simply doesn't look good in the eyes of the public.

Surprisingly enough, human error is among the top causes of data loss, followed by viruses, malware, and hard drive damage.

UTOR thoroughly analyzes these and other possible causes as part of its extensive performance testing services.

  7. Analyze Results and Report Continually

Your QA reports are part of your knowledge-sharing process: they're an effective way to distribute information between the company's departments. A performance testing report can serve as a reference point and help the team avoid repeating the same issues.

The bottom line

Performance testing is arguably the most important part of quality assurance with regard to business performance. Although there's no "one size fits all" approach here, following these best practices will significantly increase your chances of ticking all the boxes.

To sum up, there are a few important aspects to consider for successful performance testing. First of all, perform different types of performance testing to attack potential issues from various angles. Secondly, set up KPIs and metrics to measure the results (be sure to know what to measure specifically). And finally, be ready to spot possible issues and report continuously. 

Trust your performance testing to professionals! Get UTOR to do the heavy lifting for you. 
