Time-to-Hire Is a Vanity Metric. Here's What to Measure Instead.
Speed tells you how fast you're moving. It says nothing about where you're going.

Ask any recruiter what metrics they're measured on and you'll get a familiar list.
Time-to-hire. Time-to-fill. Cost-per-hire. Offer acceptance rate. Funnel conversion at each stage. These are the numbers that go in the quarterly report, that get presented to the CHRO, that determine whether the recruiting function is considered to be performing.
They are also, almost without exception, measurements of speed and volume. How fast did you hire? How cheaply? How many people moved through the funnel?
Not one of them tells you whether you made a good hire.
The metric that actually matters — and why nobody tracks it
Quality of hire is widely agreed to be the most important metric in recruitment. SHRM has called it the holy grail of talent acquisition. LinkedIn's Global Talent Trends research consistently finds it at the top of the list of what recruiting leaders say they want to improve.
Yet only 25% of organisations feel confident they can actually measure it.
That gap — between what everyone agrees matters and what almost nobody can track — is not an accident. It's a structural problem, and it has a structural cause.
ATS systems own pre-hire data. Performance management and HRIS systems own post-hire data. These systems were built by different companies, at different times, for different purposes, and they rarely connect. Which means the recruiting function operates almost entirely without feedback on its own output.
Consider what that means in practice. A recruiter spends three weeks filling a role. They use their judgment, their process, their instincts. The hire starts. Twelve months later, that hire is either succeeding or failing — developing quickly or stagnating, building high-performing teams or creating management overhead, staying or leaving. The recruiting function almost never finds out which.
Without that feedback loop, there is no learning. The process that produced a great hire and the process that produced a bad one look identical from inside the recruiting function's dashboard. Both close with an offer accepted and a time-to-hire figure that goes in the report.
What time-to-hire actually optimises for
When you measure hiring teams on speed, they optimise for speed. This is not a criticism — it's a rational response to the incentives you've created.
Speed optimisation in hiring tends to produce a specific set of behaviours. Leaning toward candidates who look immediately ready — who require less evaluation, whose backgrounds are familiar, whose resumes read cleanly. Moving quickly past ambiguous signals rather than investigating them. Preferring candidates who interview smoothly over candidates who are more hesitant but more substantive.
In other words: it selects for candidates who are easy to evaluate quickly. Not candidates who are genuinely best for the role.
The Paper Tiger phenomenon — the candidate who optimises well for the hiring process but underperforms post-hire — is partly a product of speed pressure. A thorough evaluation that takes 20 minutes per resume would catch many of the signals that distinguish genuine performance from performance on paper. But a recruiter managing 400 applications under time pressure doesn't have 20 minutes per resume. They have six seconds.
The speed metric created the six-second review. The six-second review created the demand for keyword-matching ATS. The keyword-matching ATS created the Paper Tiger problem. These are not separate failures. They're the same failure at different points in the same causal chain.
The SHRM data point that should have changed everything
In 2025, SHRM published benchmarking data that should have triggered a serious reckoning in the industry.
Both cost-per-hire and time-to-hire had increased during the period of peak AI adoption in recruiting.
That's the inverse of everything the technology was supposed to deliver. The tools sold on efficiency gains — faster screening, smarter filtering, streamlined workflows — were correlating with worse performance on the metrics the industry cares most about.
There are a few possible explanations. AI screening tools generate confidence in process while actually adding complexity. The escalating arms race between AI candidates and AI screeners creates more noise, not less, making evaluation harder. Or most simply: speed tools optimised the wrong part of the process.
You can make shortlisting faster all you like. If the shortlist is the wrong shortlist, you've just failed faster.
A framework for measuring what actually matters
Quality of hire isn't impossible to measure. It's just harder, and it requires connecting data across systems that were designed to operate separately. Here's a practical framework for doing it without rebuilding your entire HR stack, with a minimal data sketch after the list.
90-day performance signal. The clearest early indicator of hire quality is whether the new employee is ramping as expected by the end of their first quarter. A structured 90-day assessment — not a formal review, just a calibrated manager evaluation — creates an early signal. Map this back to the candidate's screening score and source. Patterns emerge quickly.
Retention at 12 and 24 months. Bad hires rarely stay. The correlation between screening rigour and 12-month retention is strong enough that retention alone, tracked at the source and channel level, tells you something meaningful about hiring quality. Most ATS systems can produce this data if you ask the right questions.
Manager satisfaction at 6 months. A simple question to the hiring manager six months post-hire — "on reflection, would you make this hiring decision again?" — is surprisingly predictive and surprisingly underused. It's qualitative, but aggregated across enough hires, it becomes a reliable signal about which parts of your process are producing good outcomes.
Performance ranking at 12 months. If your organisation runs performance calibration, the distribution of recent hires within performance bands tells you whether you're consistently hiring above the median of your current team or below it. This is the most direct measure of actual hiring quality, and it's available in most organisations — just rarely connected back to the recruiting function.
Interview score correlation. Retroactively mapping interview scores against 6- and 12-month performance data reveals which interviewers are predictive and which are noise. This requires the data to exist, which means capturing structured interview scores consistently — data most organisations already capture but rarely use analytically.
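As a concrete illustration, here's a minimal Python sketch of what closing that loop can look like, assuming you can pull a pre-hire export from your ATS and a post-hire export from your HRIS and join them on a shared employee ID. Every file and column name here (ats_export.csv, hire_date, interview_score, perf_rating_12m, and so on) is a hypothetical placeholder, not any vendor's actual schema.

```python
import pandas as pd

# Hypothetical exports: ats_export.csv holds pre-hire data (source, interviewer,
# interview_score, hire_date); hris_export.csv holds post-hire data
# (termination_date, perf_rating_12m). Names are placeholders, not a real schema.
ats = pd.read_csv("ats_export.csv", parse_dates=["hire_date"])
hris = pd.read_csv("hris_export.csv", parse_dates=["termination_date"])
df = ats.merge(hris, on="employee_id", how="inner")

# Only score hires old enough to have a 12-month outcome at all.
df = df[df["hire_date"] <= pd.Timestamp.today() - pd.Timedelta(days=365)]

# Retention at 12 months: still employed, or left after more than a year.
df["retained_12m"] = (
    df["termination_date"].isna()
    | ((df["termination_date"] - df["hire_date"]).dt.days > 365)
)
print(df.groupby("source")["retained_12m"].mean().sort_values())

# Which interviewers' scores actually track 12-month performance?
# Spearman because both scales are ordinal; tiny samples will show NaN.
predictive = (
    df.groupby("interviewer")
      .apply(lambda g: g["interview_score"].corr(g["perf_rating_12m"],
                                                 method="spearman"))
      .sort_values(ascending=False)
)
print(predictive)
```

Two grouped aggregations on one joined table is the whole trick — the hard part in practice is organisational (getting the exports), not analytical.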
The ROI argument that should be in every budget conversation
The case for prioritising quality over speed isn't philosophical. It's financial, and it's straightforward.
McKinsey research finds that top performers generate 2.5x more output than average performers in standard roles. In complex, high-judgment roles — engineering, sales, product, leadership — that multiplier is often higher.
The salary difference between a top performer and an average performer in the same role is rarely 2.5x. Which means every hire where you select an average candidate over a top performer is leaving substantial economic value on the table — value that never shows up in the hiring dashboard but absolutely shows up in business outcomes.
Put it this way: if a hiring process that takes two weeks longer and costs £3,000 more per hire reliably produces candidates who perform 40% better in their first year, the ROI comparison isn't close. It's not even a difficult calculation.
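For anyone who wants the arithmetic written out, here's that back-of-envelope calculation as runnable Python. The £3,000 and 40% figures come from the paragraph above; the £120,000 baseline value of an average hire's first-year output is an invented placeholder to make the numbers concrete.

```python
# The £3,000 extra cost and 40% uplift are from the text; the baseline output
# value is a made-up placeholder - substitute your own. This deliberately
# ignores the cost of two extra weeks of vacancy, which narrows the gap
# slightly but rarely closes it.
baseline_output_value = 120_000   # value an average hire produces in year one (assumption)
uplift = 0.40                     # better candidates perform 40% better
extra_process_cost = 3_000        # added cost of the slower, more rigorous process

extra_value = baseline_output_value * uplift   # £48,000 of additional output
net_gain = extra_value - extra_process_cost    # £45,000 net, per hire
roi = net_gain / extra_process_cost            # 15x return on the extra spend

print(f"Extra value: £{extra_value:,.0f}, net gain: £{net_gain:,.0f}, ROI: {roi:.0f}x")
```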
The industry frames quality and speed as tradeoffs. They don't have to be. But achieving both requires starting by measuring the right thing — and accepting that a 30-day time-to-fill figure tells you almost nothing about whether your hiring function is doing its job.
What to do with Monday's open req?
This isn't an argument for slowing everything down. Speed matters. Open roles create drag. Time-to-fill has real costs.
The shift isn't from fast to slow. It's from optimising for speed as the primary goal to optimising for quality while taking speed seriously as a constraint.
Practically, that means: define what a good hire looks like before you open the requisition, not after. Build evaluation criteria around evidence of performance, not presence of keywords. Create a feedback loop — however simple — that connects your hiring decisions to outcomes six months later. And use time-to-hire as a guardrail, not a KPI.
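To make "guardrail, not a KPI" concrete, here's one hypothetical way the reporting logic could look: requisitions are ranked on a quality signal, and time-to-fill appears only as a flag when it breaches an agreed ceiling. The threshold, field names, and scores are all illustrative assumptions.

```python
GUARDRAIL_DAYS = 60  # escalate only when a req stays open past this ceiling

def req_report(reqs):
    """reqs: list of dicts with 'role', 'days_open', and 'quality_score'
    (e.g. the 90-day signal from the framework above). Ranks on quality;
    time-to-fill is flagged on breach, never averaged into a headline KPI."""
    ranked = sorted(reqs, key=lambda r: r["quality_score"], reverse=True)
    for r in ranked:
        flag = " <- GUARDRAIL BREACH" if r["days_open"] > GUARDRAIL_DAYS else ""
        print(f"{r['role']:<20} quality={r['quality_score']:.2f} "
              f"days_open={r['days_open']}{flag}")

req_report([
    {"role": "Backend Engineer", "days_open": 34, "quality_score": 0.82},
    {"role": "Sales Lead",       "days_open": 71, "quality_score": 0.91},
])
```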
The recruiting function that can demonstrate it consistently makes hires that perform in the top quartile of their cohort will never have to fight for resources, justify its process, or defend its technology stack.
The recruiting function that can only show it fills roles quickly will always be one reorg away from irrelevance.
Fast hiring and good hiring are not opposites. But they're not the same thing either. The industry has spent a long time pretending they are.
