QA Engineer Hiring & Testing Framework

QA hiring in 2026 is bifurcated: clients either want a manual QA who can think about edge cases, or an SDET who can write Playwright/Cypress automation that catches regressions. Many CVs claim both, and few can demonstrate either honestly. This guide gives you the rubric for both flavors and the questions that surface real test thinking.

Key skills

Must-have

Test design thinking

Can look at a feature and immediately list 10+ test cases including edge cases, not just the happy path.

Bug reporting fluency

Writes reproducible bug reports with steps to reproduce, expected vs. actual behavior, and environment details. The unsung skill that separates strong QA from average.

Domain literacy

Understands the product domain enough to spot business-logic bugs, not just UI glitches.

Automation literacy (for SDET roles)

For SDETs: 2+ years of Selenium, Playwright, Cypress, or equivalent. Has built and maintained suites, not just touched them.

Nice-to-have

API testing depth

Postman, REST-assured, or Karate. API tests catch regressions more cheaply than UI tests.

Performance testing

JMeter, k6, basic load test design. Useful for backend-heavy products.

Security testing basics

OWASP Top 10 awareness, basic injection/XSS testing. Increasingly in demand.

CI/CD integration

Has integrated tests into pipelines, knows how to keep flaky tests out of the build.
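One common way to keep flaky tests out of the build is a retry-and-quarantine policy: rerun a failing test a bounded number of times, and treat "fails then passes" as a flakiness signal rather than a green build. A minimal sketch, assuming nothing about any particular framework (`run_with_retries` is an illustrative name, not a real API):

```python
def run_with_retries(test_fn, attempts=3):
    """Run test_fn up to `attempts` times.

    Returns ("pass", 1) on a first-try success,
    ("flaky", n) if it passed only after n-1 failures (quarantine candidate),
    ("fail", attempts) if it never passed.
    """
    for attempt in range(1, attempts + 1):
        try:
            test_fn()
            # A pass after a failure means the test is nondeterministic:
            # flag it for quarantine instead of silently going green.
            return ("flaky" if attempt > 1 else "pass", attempt)
        except AssertionError:
            continue
    return ("fail", attempts)
```

The key design choice is that a retried pass is reported as "flaky", not "pass" — otherwise retries just hide the problem the candidate is supposed to be fixing.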

Interview questions (6)

1. I have just built a login form with email + password. Give me 15 test cases.

What to listen for

Happy path, empty fields, invalid email format, wrong password, locked account, SQL injection attempt, XSS in email, very long password, copy-paste, autofill, browser back button, multiple sessions, expired session, special chars, network timeout. Should hit at least 12.
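To make the expected answer concrete, a strong candidate's cases translate directly into a data-driven test table. Everything below is a hypothetical sketch: `validate_login`, its error strings, and the "correct" password are toy stand-ins for a real system under test.

```python
import re

def validate_login(email: str, password: str) -> str:
    """Toy stand-in for a real login endpoint (illustrative only)."""
    if not email or not password:
        return "error: empty field"
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        return "error: invalid email"
    if len(password) > 128:
        return "error: password too long"
    if password != "correct-horse":
        return "error: wrong password"
    return "ok"

# Each row is one test case: (description, email, password, expected).
CASES = [
    ("happy path",         "user@example.com",  "correct-horse", "ok"),
    ("empty email",        "",                  "correct-horse", "error: empty field"),
    ("empty password",     "user@example.com",  "",              "error: empty field"),
    ("invalid email",      "not-an-email",      "correct-horse", "error: invalid email"),
    ("wrong password",     "user@example.com",  "hunter2",       "error: wrong password"),
    ("SQL injection",      "' OR 1=1 --@x.com", "correct-horse", "error: invalid email"),
    ("very long password", "user@example.com",  "x" * 500,       "error: password too long"),
]

def run_cases():
    """Return descriptions of any cases whose actual result differs from expected."""
    return [d for d, e, p, want in CASES if validate_login(e, p) != want]
```

The table format matters more than the validator: it shows whether the candidate thinks in inputs, expected outputs, and coverage gaps rather than ad-hoc clicking.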

2. Walk me through a bug you caught that everyone else missed. How did you find it?

What to listen for

Specific bug, specific test approach, specific impact. Reveals real test instinct vs. script-following.

3. How do you decide what to automate vs leave for manual testing?

What to listen for

Cost-of-flakiness, frequency-of-execution, criticality. Not "automate everything."
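Those three criteria can be turned into a rough scoring heuristic. This is a sketch with illustrative weights, not an industry standard — the point is that the candidate reasons in these terms at all:

```python
def automation_priority(runs_per_week, criticality, flakiness_risk):
    """Rough automation-candidate score (illustrative, not a standard).

    criticality and flakiness_risk are on a 0-1 scale.
    Frequently run, business-critical, stable scenarios score highest;
    rarely run or inherently flaky scenarios stay manual or exploratory.
    """
    return runs_per_week * criticality * (1 - flakiness_risk)
```

For example, a checkout smoke test run on every merge scores far above a once-a-quarter visual check of a marketing page.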

4. A test passes locally but fails in CI. How do you investigate?

What to listen for

Environment differences, timing, data state, parallel execution. Not "I just rerun it."

5. A developer says "this is not a bug, it is a feature." How do you handle it?

What to listen for

Document, escalate to PM/PO with impact framing, do not get political. Diplomatic + persistent.

6. What is the most flaky test you have ever debugged? What was the root cause?

What to listen for

Specific flakiness root cause (timing, shared state, network, external dependency). Not "I just added a wait."
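When the root cause is timing, the real fix is a bounded polling wait on the actual condition, not a longer fixed sleep. A minimal sketch of that pattern (`wait_until` is an illustrative helper; Playwright, Selenium, and Cypress each ship their own equivalents):

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Unlike time.sleep(N), this returns as soon as the condition holds
    and fails deterministically at the deadline instead of racing it.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```

A candidate who reaches for this shape — wait on the observable state, with an explicit deadline — has genuinely debugged flakiness; one who reaches for `sleep(10)` has not.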

Evaluation rubric

Score each candidate against these weighted criteria. Total: 100%.

Criterion | Weight | Signal
Test design thinking | 30% | Generates broad test cases including edge cases without prompting.
Bug reporting quality | 20% | Writes reproducible reports with full context. Ask for sample bug reports.
Automation depth (SDET) | 20% | Has built and maintained automation suites at scale.
Domain literacy | 15% | Understands the product domain enough to find business-logic bugs.
Communication | 15% | Productive disagreement with developers. Calm escalation.

Red flags

CV says "Selenium expert" but cannot describe a real test framework architecture

Generates only 3–4 test cases for a simple feature

Has never debugged a flaky test (means they have not maintained automation)

Bug reports are one-liners with no reproduction steps

Strict "developers vs QA" mindset — collaboration red flag

Apply this rubric automatically with CVPRO

Upload QA Engineer CVs and let AI score every candidate against the same 42-point evidence rubric.

Try CVPRO Free