Hiring new team members is never easy. You want to be sure that a person can handle all the assigned tasks and challenges. We’ve come up with a list of questions that might be helpful for interviewing new members of a QA team. And if you are a QA engineer preparing for a job interview, you may find some useful information here as well.
We’ve divided the questions into several categories for convenience. These categories cover general terminology, particularities of different testing types, and soft skills.
There are several reasons that make software testing a must for every project. Here are some of the key points:
Software testing evaluates product quality and compliance with business requirements. This compliance is crucial for product success.
Testing is a way to ensure that software works as expected and that users aren’t going to encounter any serious defects.
Testing involves evaluating a product from the point of view of an end customer, so it helps deliver a positive user experience.
Testing allows finding bugs early to reduce the cost of a possible mistake in development.
It is also the way to ensure that the software meets international standards set for a certain niche.
It is the data used in the software testing process: the requirements, the inputs derived from them, the expected outputs, and the actual results.
The purpose of quality assurance is to ensure that a company provides the best product/service to customers. This process is based on multiple reviewing activities.
Quality control is about examining a product in development to ensure it meets the original specifications. It is a broad term that includes software testing as one of the ways to validate conformance with business requirements.
100% test coverage is impossible. You can come close to absolute coverage if you create at least one test case for every feature. Some user scenarios, however, are unpredictable. Testers can miss them, and that’s okay – they will draw conclusions and use this experience in the future.
100% bug-free software is a myth. The goal of a QA team is to make sure software is free of critical bugs.
It is a technique that identifies what can go wrong during software development. During risk analysis, a QA team identifies hidden issues and prioritizes the sequence of resolving those issues. There are issues of high, medium, and low importance.
The best practices are different for every project. They greatly depend on the company’s business processes. Some of the universal positive practices are:
a smart combination of manual and automated testing on large projects;
clear metrics and documentation;
responsibility of team members.
A testing technique is a process we use to ensure that certain parts of a system function properly. Testing tools are software solutions we apply for testing when necessary. There are only a handful of techniques but a vast variety of tools.
There should be at least one test case for every requirement. Otherwise, test coverage is insufficient.
Test coverage estimates the amount of testing done on a project. To measure it, we can list the features and check whether each has at least one test to run.
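The feature-level measurement described above can be sketched in a few lines of Python. The feature names and test IDs below are invented for illustration.

```python
# Hypothetical sketch: estimating feature coverage from a feature-to-tests
# mapping. A feature counts as covered if it has at least one test.
feature_tests = {
    "login": ["TC-001", "TC-002"],
    "search": ["TC-010"],
    "checkout": [],          # no tests yet -> a coverage gap
}

covered = [feature for feature, tests in feature_tests.items() if tests]
coverage = len(covered) / len(feature_tests)

print(f"Coverage: {coverage:.0%}")                        # 2 of 3 features
print("Uncovered:", [f for f, t in feature_tests.items() if not t])
```

This is only a rough proxy: it tells you which features have no tests at all, not how thoroughly the covered features are exercised.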
White-box testing is a technique that relies on the QA team knowing how the back-end works. QA engineers design test cases based on the internal structure of the code to find bugs. During black-box testing, the team evaluates functionality from the business requirements perspective. Testers check the software the way a user would experience it.
For static testing, a QA engineer prepares a checklist to find errors. It is cost-efficient, covers more areas in a shorter time, and can start before a program is finalized.
Dynamic testing starts after code deployment. There is an actual application to check valid inputs and expected outputs.
We can group tests by the part of functionality they check or the stage of the software development process they are applied to.
The latter division defines four levels of testing:
Unit testing checks separate code components. It is conducted at the earliest stages, usually by developers.
Integration testing evaluates how different units are functioning together.
System testing examines the entire software system against the requirements.
Acceptance testing evaluates compliance with end-user requirements.
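A unit test from the first level above can be illustrated with Python’s built-in unittest module. The function under test, apply_discount, is invented for the example.

```python
# A minimal unit test checking one code component in isolation.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically instead of via unittest.main().
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())   # True
```

Note that the test exercises both the happy path and the error path of a single unit, without involving any other components.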
Functional testing is meant to verify that actual product features match those described in the documentation. A testing team checks the behavior of software against the expected specifications and requirements. Functional tests cover all features and take into account the most probable types of mistakes.
As you can guess from the name, functional testing covers the functionality – in other words, software features. Non-functional testing is about performance, usability, reliability, and other system parameters that indicate whether it is optimized for usage.
Performance testing focuses on software speed, stability, and scalability. Some of the performance testing types are:
Load testing – checks how the system behaves under an expected load.
Stress testing – checks how the software handles traffic beyond its expected capacity.
Spike testing – models sudden traffic rises and examines how software reacts to them.
Volume testing – reviews system behavior when the volume of in-app data stored increases.
Scalability testing – estimates the ability of the software to scale up and down.
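The idea behind load testing can be sketched as firing concurrent calls at a system and measuring latency. In practice you would point a dedicated tool such as JMeter or Locust at an HTTP endpoint; the handle_request stub below is invented for illustration.

```python
# Toy load-test sketch: run N "users" concurrently against a stubbed
# request handler and report throughput and worst-case latency.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request() -> float:
    start = time.perf_counter()
    time.sleep(0.01)                 # stand-in for real request latency
    return time.perf_counter() - start

def run_load(users: int) -> dict:
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(lambda _: handle_request(), range(users)))
    return {"requests": len(latencies), "max_latency": max(latencies)}

print(run_load(users=20))
```

Varying the `users` parameter over time is what distinguishes the types above: a steady expected value for load testing, a value past capacity for stress testing, and a sudden jump for spike testing.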
Exploratory testing is the simultaneous development and execution of test cases. During exploratory testing, a QA team doesn’t follow a plan and doesn’t have any predetermined testing procedures. It is effective when it is not evident what test should come next.
Regression testing verifies that the latest code changes haven’t affected the existing functionality. Retesting verifies that bugs detected earlier have been fixed.
There are statement coverage, decision coverage, and path coverage.
Statement coverage verifies that each and every line of source code has been tested.
Decision coverage verifies that every decision in source code has been tested.
Path coverage verifies that every possible route has been tested.
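The difference between the three criteria shows up even on a tiny function. The classify function below is invented for the example; the comments note which inputs each criterion requires.

```python
# Two independent ifs: 4 possible paths, 2 decisions, a handful of statements.
def classify(a: bool, b: bool) -> str:
    result = ""
    if a:
        result += "A"
    if b:
        result += "B"
    return result

# Statement coverage: every line runs at least once -> (True, True) suffices.
assert classify(True, True) == "AB"

# Decision coverage: every branch takes both outcomes -> add (False, False).
assert classify(False, False) == ""

# Path coverage: every route through both ifs -> all four combinations.
assert classify(True, False) == "A"
assert classify(False, True) == "B"
```

Path coverage is the strictest criterion: the number of paths grows exponentially with the number of decisions, which is one reason 100% path coverage is rarely practical.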
Risk-based testing is an approach based on prioritizing test cases and software features taking into account their importance, the probability of breaking, and how critical probable mistakes are.
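One common way to operationalize this prioritization is a risk score of probability times impact; the riskiest features are tested first. The features and numbers below are invented for illustration.

```python
# Hypothetical risk-based prioritization: risk = probability x impact.
features = [
    {"name": "payment",  "probability": 0.3, "impact": 9},
    {"name": "search",   "probability": 0.5, "impact": 4},
    {"name": "settings", "probability": 0.2, "impact": 2},
]

for feature in features:
    feature["risk"] = feature["probability"] * feature["impact"]

test_order = sorted(features, key=lambda f: f["risk"], reverse=True)
print([f["name"] for f in test_order])   # payment first: 2.7 > 2.0 > 0.4
```

Note that a low-probability feature can still rank first if the impact of its failure is severe, which matches the intuition behind risk-based testing.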
To start with, there is always the human factor. It is not possible to write perfect, flawless code, because units can behave unpredictably together. Other frequent reasons for bugs are programming errors, miscommunication, changing requirements, tight deadlines, and software complexity.
Testing is the process of detecting bugs. Debugging means looking into the code and fixing what causes the bugs. The QA team is responsible for testing; debugging is a task for developers.
QA engineers only find bugs. Fixing bugs is a task for developers. So, after finding a software defect, a tester reports it to a developer. It is important to document the bug well, mentioning the conditions, the number of times it occurs, and the expected results. A complete bug report makes it easier to fix the issue.
A defect is a software issue you find and resolve before the software gets to production. A failure is a bug that reaches an end customer.
A latent defect does not cause failure, because it is difficult or impossible to recreate the set of conditions that causes it. A masked defect does not cause a failure either, but only because there is another defect that prevents the execution of this part of the code.
The essential components of a defect report are:
Date of detection
A tester who detected it
ID and number of the defect
Severity status and priority
Date when it was resolved
A developer who resolved it
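The fields above can be modeled as a simple record. The field names below are illustrative, not the schema of any particular bug tracker.

```python
# One possible shape for a defect report record.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DefectReport:
    defect_id: str
    detected_on: date
    detected_by: str
    severity: str                      # e.g. "critical", "major", "minor"
    priority: str                      # e.g. "high", "medium", "low"
    resolved_on: Optional[date] = None
    resolved_by: Optional[str] = None

    @property
    def is_resolved(self) -> bool:
        return self.resolved_on is not None

bug = DefectReport("BUG-42", date(2023, 5, 1), "alice", "major", "high")
print(bug.is_resolved)   # False until resolved_on is set
```

Keeping resolution fields optional mirrors the report’s life cycle: they stay empty until a developer actually fixes the defect.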
Verification means evaluating software to make sure it complies with general or niche standards. Validation evaluates software from the perspective of business requirements.
A bug is an issue in software detected during testing. This issue makes software function in a manner you don’t anticipate. An error is an issue that arises during testing because of a missing scenario in the requirements, mistakes in design or implementation.
There are several phases a bug goes through during software development. A bug can be:
new, when it’s just been detected;
assigned, when it becomes the responsibility of a particular developer;
open, when the work on fixing is in progress;
rejected, if an issue is marked as a bug by mistake;
deferred, if it is not critical and there are urgent issues to resolve;
fixed, when a bug is finally resolved;
reopened, in case a tester is not satisfied with the solution;
closed, after the bug is finally verified and everything works well.
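The life cycle above can be sketched as a set of allowed state transitions. The transition table below is a simplification invented for illustration; real trackers allow more routes.

```python
# Bug life cycle as a simple state machine.
TRANSITIONS = {
    "new":      {"assigned", "rejected"},
    "assigned": {"open", "rejected", "deferred"},
    "open":     {"fixed", "deferred"},
    "deferred": {"assigned"},
    "fixed":    {"closed", "reopened"},
    "reopened": {"assigned"},
    "rejected": set(),
    "closed":   set(),
}

def move(state: str, target: str) -> str:
    if target not in TRANSITIONS[state]:
        raise ValueError(f"cannot go from {state!r} to {target!r}")
    return target

state = "new"
for step in ("assigned", "open", "fixed", "closed"):
    state = move(state, step)
print(state)   # closed
```

Modeling the transitions explicitly makes invalid moves (say, closing a bug nobody has fixed) fail loudly instead of silently corrupting the report’s status.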
A blocker is a bug of the highest severity and thus of the highest priority. It interferes with testing by blocking a large share of software features. It is different from a critical bug since the latter deteriorates user experience but doesn’t block testing.
Similarly to SDLC, which is a collective term for all the activities that occur throughout the development process, STLC refers to all the activities performed during testing.
It depends on the type of SDLC the company sticks to. In a traditional Waterfall approach, testing follows the requirements gathering and analysis, design, and coding stages. It comes right before deployment.
In Agile methods, however, the place of testing isn’t as strictly defined. It can start, for example, at the requirements stage. Documentation testing helps avoid gaps in product logic and contributes to project feasibility.
Testing can end after the product deployment if no further changes are planned or last through the product life cycle, which is a more frequent case.
Testing can start at different stages:
after the code is ready;
when code is still in development, but there is functionality that testers can work with;
at the beginning of the project when documentation is created.
It depends on the project complexity, a company’s business processes, and approach to testing. The time to stop testing also varies. There are some milestones a QA team keeps in mind to decide upon this. So, it can happen:
after test case execution;
after all the high-priority issues have been fixed;
if the interval between inherent failures is large;
when a large functionality is covered by automated tests and no further critical bugs appear.
Some decide to stop testing when deadlines are tight and there is no other option but to release software. This scenario is undesirable.
A test strategy describes the approach to STLC. The goal of a test strategy is to provide rational planning for the entire testing process – from management goals to real test cases. Altogether, it focuses on building an efficient quality assurance process.
Test strategy considers and explains team roles and responsibilities, test scope, environment, tools, schedule, and associated risks.
A test plan is a document that describes testing goals, resources, and processes for a particular project in detail. It provides a comprehensive overview of the working process. Creating a test plan is a task for a Test Lead or a Test Manager. The document features test scope, objectives, environment, entrance and exit criteria, risk factors, and deliverables.
A test case is a sequence of actions used to check a certain functionality. It helps detect and identify problems. Test cases verify specific flows with predetermined inputs, preconditions, and outputs.
A test scenario defines what exactly to test. You may need multiple tests to cover a single scenario.
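The scenario-to-cases relationship can be made concrete with a short sketch. The "user logs in" scenario, the login stub, and the credentials are all invented for the example.

```python
# One scenario, several test cases: each case fixes concrete inputs
# and an expected outcome for the same high-level scenario.
def login(user: str, password: str) -> str:
    # Stub system under test.
    return "ok" if (user, password) == ("alice", "s3cret") else "denied"

scenario = "user logs in"
test_cases = [
    {"name": "valid credentials", "input": ("alice", "s3cret"), "expect": "ok"},
    {"name": "wrong password",    "input": ("alice", "oops"),   "expect": "denied"},
    {"name": "unknown user",      "input": ("bob", "s3cret"),   "expect": "denied"},
]

for tc in test_cases:
    assert login(*tc["input"]) == tc["expect"], tc["name"]
print(f"{len(test_cases)} test cases passed for scenario: {scenario}")
```

This is why scenarios and cases are documented separately: the scenario states *what* to verify, while each case pins down *how*, with exact inputs and outputs.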
Splitting testing into stages makes it easier to manage tests and the process in general. Every test stage has a different purpose. Sometimes there is a need to run tests in different environments, and that is another reason to distinguish between several different stages.
Throughout different stages, testers focus on different aspects of functionality and performance. It positively affects overall product quality.
The division may depend on the business processes inside the company. Usually, it consists of the following phases:
requirement analysis and validation;
test planning;
test case development;
test environment setup;
test execution;
test closure & reporting.
Documentation testing is useful but companies often skip it. Project documentation varies across the SDLC.
There is a project test plan that outlines the complete strategy. A team uses it from the beginning to the end.
An acceptance test plan comes in during the requirements phase. A team gradually completes it during the entire process, too.
A system test plan starts at the designing stage.
An integration test plan starts at the execution phase.
Unlike testing in traditional SDLC models, it goes alongside coding. A QA team can find bugs earlier and reduce the cost of mistakes. Agile also implies continuous communication between teams. It allows testers to better understand the customer-ready product and test it efficiently from a user’s perspective.
Some are certain that a QA Lead should only review test cases, but that’s not true. The QA Lead should contribute to the test creation just like any other member of the team.
Test automation is meant to facilitate the work of a QA engineer. Thus, it makes sense to automate tests for large systems, but only when those tests are stable.
Automation is useful if you deal with complex calculations, regression testing, and non-functional testing. It may be reasonable to automate smoke & sanity testing, GUI & API testing.
You shouldn’t automate tests that may lead to unpredictable results. You cannot cover unstable parts of software with automation. You shouldn’t use automation when its benefits don’t apply – for example, when writing scripts takes longer than a manual check. Some types of testing, like usability and exploratory, cannot be automated per se.
Scrum is a popular way to adopt an agile methodology. SDLC is based on short iterations (one-, two-, or three-week sprints) that end with a release of an updated version of a product.
All members of the agile team have daily meetings to discuss the process and share updates and insights. The Scrum Master usually facilitates the meeting, but every team member reports on the work done, the tasks planned, and the results. This continuous communication makes development more efficient and helps deliver a product to end users faster.
The roles and responsibilities are quite flexible nowadays, just like job titles. A person should mention their key tasks and duties, both hard and soft skills. For more on this, check out one of our previous posts on QA team roles and responsibilities.
A person should list both testing and management tools, including bug-tracking systems, messengers, management apps, and other software resources a team uses. The list is not fixed and may vary depending on the client.
So the criteria for selecting management tools may depend on a client’s preferences. When it comes to technical tools, it is important to:
identify the features required in an automation tool for a project;
evaluate open-source and commercial options;
estimate cost (both license & training) and benefits;
consult with the team before making a final decision.
It is essential to find a balance between praise and criticism. It may sound a bit vague, but that’s how you provide effective feedback.
If a person is performing well, make sure they understand their achievements and encourage them to continue the same way.
If a person doesn’t live up to your expectations, don’t be too harsh. Start with things they do well, admit the contribution they make, and then explain what things need to change. Set deadlines for improvement. Provide assistance if needed and try to encourage the team member so they don’t give up but become a better specialist.
An interviewee should talk about challenging tasks they’ve managed to handle and share their ideas for improvements. A person can also mention skills that would be useful for the entire team. An interview isn’t the best time to be humble. Good candidates are honest but not arrogant.
This question encourages a candidate to talk about the different practices they would like to implement – from specific documentation and metrics to teamwork, educational initiatives, and team bonding. A good candidate will demonstrate the ability to take the initiative and inspire positive change.
Technical and soft skills matter equally. Even if a candidate has to deal mainly with management tasks, it is essential to know that they understand the QA process.
Here’s a piece of advice for recruiters:
Make sure a person has passed all the previous career stages before becoming a QA Lead.
Expect to hear about the hard skills an interviewee obtained through the years.
Ask about the tools they can work with.
Clear communication, emotional intelligence, and conflict resolution should also be mentioned among the most important skills.
It always depends on the specific project, as well as the competencies and experience of every team member. The more experience a person has, the higher the manager’s expectations.
The best decision is to set objectives that are a bit challenging but realistic. You don’t want a person to become exhausted and unmotivated by requirements that are simply impossible to meet.
Encourage healthy competition within the team so everyone can do their best and learn new things meanwhile.
Conflicts are inevitable because of the diverse backgrounds, tempers, and experiences of the team members. Thus, you should prepare for conflicts beforehand. They shouldn’t catch you flat-footed.
Conflict resolution always starts with a conversation. You need to hear all parties and then bring them together for discussion. By that time, you should decide on some compromises to offer.
Ask team members to cooperate based on a shared goal. Explain that you all have to focus on what’s best for the project, and argue for the decision that serves it best.
The list of questions is far from complete. These are the basics a software testing professional should know. We are going to update this list from time to time to cover more aspects of testing and keep up with trends. Bookmark this page so you don’t lose it, and come back for more questions.