Software Performance Testing Metrics: What Are Metrics and How to Use Them?

  1. What Are Performance Metrics?
  2. Why Are Performance Metrics Important? 
  3. Key Performance Metrics to Start Tracking
  4. More Performance Metrics to Consider
  5. How to Track Metrics Correctly? 
  6. Bottom Line

To start performance testing, you need to identify the success criteria to evaluate the testing process. When you plan and design performance test cases, target metrics become one of the key focus points. 

So metrics are a baseline for performance tests. Monitoring the correct parameters will help you detect areas that require increased attention and find ways to improve them. Now, let’s find out more about what exactly to track and monitor. 

What Are Performance Metrics?

Metrics are parameters and measurements gathered during the quality assurance process. They can refer to different types of testing. As you can guess, performance metrics allow you to understand how well the software performs. In other words, these metrics show how well software responds to user scenarios and handles user flow in real time.

There are two types of data that belong here: 

  • Measurements are data recorded during testing – for example, how many seconds it takes to respond to the request.
  • Metrics are calculations made with the help of specific formulas applied to measurements, such as different kinds of percentages, average indicators, etc. 

In practice, QA engineers refer to both as metrics since data of each type is used for the same purpose.

Why Are Performance Metrics Important? 

We conduct performance testing to ensure that an application will run smoothly. Metrics are those indicators that help to identify what exactly “smoothly” means. 

To estimate whether performance is satisfactory, you need to define milestones first. Then, you have to measure the parameters that fall under these milestones and evaluate the result, comparing actual figures with expected ones. Therefore: 

  • Metrics are a baseline for the tests.
  • They help to track the progress of a project. 
  • Using metrics, a QA team can define an issue and measure it to find a solution. 
  • Tracking metrics over time allows you to compare the results of tests and estimate the impact of code changes.

In general, it is essential to track performance metrics to define what areas and features require increased attention and quality enhancement. 

Key Performance Metrics to Start Tracking

Now, what metrics does a QA team need to track? It depends mostly on the type of software under test, its core features, and the business goals of a product owner. So we’ll start the list of performance metrics with rather universal parameters you can and should track for every product. 

Response time

The time that passes from the moment a request goes to the server until the last byte of the response is received is called response time. This metric is measured in units of time, typically milliseconds or seconds.
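
To make the definition concrete, here is a minimal sketch in Python, using the third-party requests library against a hypothetical URL, that records the response time of a single request:

```python
import time
import requests

URL = "https://example.com/api/health"  # hypothetical endpoint

start = time.perf_counter()
response = requests.get(URL, timeout=10)  # the full response body is downloaded here
elapsed = time.perf_counter() - start

print(f"Response time: {elapsed * 1000:.0f} ms (status {response.status_code})")
```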

Requests per second

A client application forms an HTTP request and sends it to a server. The server software processes this request, generates a response, and sends it back to the client. The total number of requests the server handles per second is the metric we are interested in – requests per second (RPS). These can be requests for any resource – HTML pages, multimedia files, JavaScript libraries, XML documents, etc.
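
As a rough illustration, the sketch below (Python, hypothetical URL) fires a fixed number of sequential requests and derives RPS from the elapsed time; real load tools issue requests concurrently, so treat this as a simplification:

```python
import time
import requests

URL = "https://example.com/"  # hypothetical target
TOTAL_REQUESTS = 100

start = time.perf_counter()
for _ in range(TOTAL_REQUESTS):
    requests.get(URL, timeout=10)
duration = time.perf_counter() - start

print(f"Requests per second: {TOTAL_REQUESTS / duration:.1f} RPS")
```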

User transactions

A user transaction is a sequence of user actions performed via the software interface. By comparing the actual transaction time with the expected time (or the actual number of transactions per second with the expected one), you can conclude how successfully the system has passed the load test.
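
For example, a login-and-checkout transaction could be timed end to end with a sketch like this (the endpoints and the two-second target are assumptions for illustration):

```python
import time
import requests

BASE = "https://example.com"  # hypothetical application
EXPECTED_SECONDS = 2.0        # assumed requirement for this transaction

session = requests.Session()

start = time.perf_counter()
session.post(f"{BASE}/login", data={"user": "demo", "password": "demo"}, timeout=10)
session.get(f"{BASE}/cart", timeout=10)
session.post(f"{BASE}/checkout", timeout=10)
actual = time.perf_counter() - start

verdict = "PASS" if actual <= EXPECTED_SECONDS else "FAIL"
print(f"Transaction took {actual:.2f}s ({verdict} against a {EXPECTED_SECONDS}s target)")
```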

Virtual users per unit of time

This metric also helps to find out whether the performance of a software product meets the stated requirements, allowing a QA team to estimate the average load as well as software behavior under different load conditions.

Error rate

This metric is calculated as the ratio of invalid responses to the total number of requests over a period of time and is expressed as a percentage. Errors usually occur when the load exceeds the software's capacity. 
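
A minimal way to compute it, assuming HTTP status codes of 400 and above (plus timeouts and connection errors) count as failures:

```python
import requests

URL = "https://example.com/api/items"  # hypothetical endpoint
TOTAL = 200

failures = 0
for _ in range(TOTAL):
    try:
        r = requests.get(URL, timeout=5)
        if r.status_code >= 400:
            failures += 1
    except requests.RequestException:
        failures += 1  # timeouts and connection errors count as failures too

print(f"Error rate: {failures / TOTAL * 100:.1f}%")
```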

More Performance Metrics to Consider

Performance metrics, however, are much more numerous and diverse than those mentioned above. Here are some other parameters you may need to track. 

Wait time 

Also known as average latency, wait time tells how much time passes from the moment a request is sent to the server until the first byte of the response is received. Like response time, it is measured in units of time, such as milliseconds. Don't confuse it with response time – the two cover different time frames.
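
One way to capture both numbers in a single request is to stream the response: the arrival of the first chunk approximates wait time, and draining the body gives the full response time. A sketch, with a hypothetical URL:

```python
import time
import requests

URL = "https://example.com/large-page"  # hypothetical endpoint

start = time.perf_counter()
with requests.get(URL, stream=True, timeout=10) as response:
    chunks = response.iter_content(chunk_size=8192)
    next(chunks)                                 # blocks until the first bytes arrive
    wait_time = time.perf_counter() - start
    for _ in chunks:                             # drain the rest of the body
        pass
    response_time = time.perf_counter() - start

print(f"Wait time: {wait_time * 1000:.0f} ms, response time: {response_time * 1000:.0f} ms")
```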

Average load time

It is the average time it takes to deliver a response to a request. Average load time is one of the major parameters end users of a software product rely on to judge its quality. 

For instance, if a web page takes more than three seconds to load, a person will most likely abandon it. To make sure this doesn't happen, a QA team has to measure the average load time and suggest areas for optimization in case pages load too slowly. 

Peak response time

This metric is a bit similar to the average load time. The difference is that peak response time shows the maximum time it may take for an application to fulfill a request. 

If this parameter is much higher than the average load time, it indicates that at least one software component is problematic. The cause can be, for example, a large image or a heavy data library.
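
With a series of measurements in hand, both metrics fall out of simple aggregation. A sketch over sample data (the values and the threshold are made up for illustration):

```python
import statistics

# Response times in seconds collected over a test run (sample data)
samples = [0.42, 0.38, 0.45, 0.40, 3.10, 0.39, 0.44]

average = statistics.mean(samples)
peak = max(samples)

print(f"Average load time: {average:.2f}s, peak response time: {peak:.2f}s")
if peak > 3 * average:  # assumed threshold; tune it to your own requirements
    print("Peak is far above the average: investigate heavy components")
```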

Concurrent users

Also known as load size, this metric shows the number of active users at any given point in time. It is one of the most widely used metrics for studying software behavior under the load of a number of virtual users. 

It resembles requests per second, but in the case of concurrent users, a QA team doesn't generate a constant stream of requests. Due to “think time,” requests don't all hit the server simultaneously but come in sequences with short pauses in between. 
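
A simplified model of this behavior in Python: each thread below acts as one virtual user, pausing between requests to simulate think time (the URL and the pause range are assumptions):

```python
import random
import threading
import time
import requests

URL = "https://example.com/"  # hypothetical target
CONCURRENT_USERS = 10
REQUESTS_PER_USER = 5

def virtual_user() -> None:
    for _ in range(REQUESTS_PER_USER):
        requests.get(URL, timeout=10)
        time.sleep(random.uniform(1.0, 3.0))  # "think time" between actions

threads = [threading.Thread(target=virtual_user) for _ in range(CONCURRENT_USERS)]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()
```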

Transactions passed/failed

It is a rather simple metric, expressed as the percentage of passed (or failed) tests against the total number of tests. Similarly to load time, it is critical for users, being one of the most visible metrics for end clients.

Throughput

Throughput shows the bandwidth used during the test. In other words, it indicates the maximum amount of data that flows through a particular network connection within a given amount of time. 

We measure throughput in KB/sec. This metric often depends on the number of concurrent users. 
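
In the simplest case, throughput can be estimated by dividing the bytes received by the elapsed time, as in this sketch (sequential requests to a hypothetical asset, so it understates what concurrent users would generate):

```python
import time
import requests

URL = "https://example.com/assets/app.js"  # hypothetical static asset
TOTAL_REQUESTS = 20

bytes_received = 0
start = time.perf_counter()
for _ in range(TOTAL_REQUESTS):
    bytes_received += len(requests.get(URL, timeout=10).content)
duration = time.perf_counter() - start

print(f"Throughput: {bytes_received / 1024 / duration:.1f} KB/sec")
```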

CPU utilization

As you can easily guess from its name, this metric shows how much processor time the central processing unit needs to handle a certain request. 

Memory utilization 

Similarly, memory utilization shows how many resources it takes to process a request, but in terms of the physical memory on the device a QA engineer uses for tests (or on which a user has the software installed). 
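
Both resource metrics can be sampled while the load test runs, for example with the third-party psutil package (assumed installed via pip install psutil):

```python
import psutil

cpu_percent = psutil.cpu_percent(interval=1)  # CPU usage averaged over a 1-second window
memory = psutil.virtual_memory()

print(f"CPU utilization: {cpu_percent:.1f}%")
print(f"Memory utilization: {memory.percent:.1f}% ({memory.used / 2**20:.0f} MiB used)")
```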

Total user sessions 

This metric shows traffic intensity over a particular period of time. For example, it can be the number of user sessions per week or per month, depending on what time frame a product owner focuses on. Total user sessions data can also include the number of page views and bytes transferred.

How to Track Metrics Correctly? 

Tracking metrics for the sake of tracking is not the best idea – it’s more like a waste of time. Metrics aren’t nice numbers to write down in reports. 

Like everything in the quality assurance process, they should answer specific questions and test hypotheses based on business goals. Only in this case do metrics drive positive changes. 

There are several principles to keep in mind if you want to benefit from metrics. 

  • Specify a client’s business objectives to come up with an ultimate list of performance requirements. 
  • Every feature should have a specific success metric assigned to it – a single parameter or a narrow range of parameters. 
  • Metrics should correlate with the value delivered to a software user – high speed, software stability, all the features working, etc.
  • Run multiple performance tests to track metrics over time. Only by analyzing the dynamics, or the lack of them, can you determine average indicators and get consistent findings. 
  • Test individual software units separately. Run checks on databases, services, etc. before you join them into a single application. Don’t wait until the final stage before the release. 

And once again: remember that metrics provide valuable insights and ideas for optimizing weak spots when you focus on a particular feature or area. Don't measure everything you can just because you have the time or desire to do so. 

Bottom Line

During performance testing, a QA team checks various non-functional aspects of a software product to find out how comfortable end-users will be with using this product. Tracking various metrics helps to evaluate its stability and speed. If QA specialists choose the right metrics to track, they will quickly determine what areas require improvement. 

UTOR can do that for your product. If you want to make sure your software performs as expected in production, let us know. 
