
5 min read


Opinions


The Interview Is Getting Longer. Your Hires Aren’t Getting Better.

Offer acceptance has fallen to 51%. Technical roles now average 17.6 interview rounds per hire. The data is unambiguous: companies are adding rounds to a process that was already broken, and the best candidates are walking out before it ends.



There is a belief, largely unexamined, sitting at the centre of how most companies hire.

The belief is this: more interviews mean better decisions. If one conversation is not enough to assess someone, add another. If four rounds are not conclusive, schedule a fifth. If the team is still uncertain, bring in a panel. The instinct to add evaluation is so deeply embedded in corporate hiring culture that questioning it feels almost reckless, like suggesting you should make a major financial decision with less due diligence, not more.

But the data says something different. And the gap between what companies believe about interview volume and what the research actually shows is one of the most expensive misalignments in modern hiring.


 

The numbers don’t lie


Start with the macro trend.

68.5 days: average hiring process length in 2025 (exceeds 90 days for senior roles)

 

52%: increase in interview rounds for technical roles since 2021

 

The interview load has followed a parallel trajectory. Here is where different role types currently sit:

•       Data roles: 19.5 interviews per hire

•       Product management: 18 interviews per hire

•       Engineering: 17.9 interviews per hire

 

These numbers are not the result of any single decision. They are the accumulated product of thousands of small additions: a VP who asked to be included, a peer panel that became standard practice, a culture fit screen layered on top of a technical assessment layered on top of a phone screen layered on top of an initial recruiter call. Each addition felt justified in isolation. The cumulative effect is a process that takes the better part of three months to complete.

And candidates are leaving before it does.


 

The attrition the interview generates


51%: offer acceptance rate in Q2 2025, down from 74% just two years prior

 

That means roughly one in every two candidates who received a job offer said no. Not because the compensation was wrong. Not because they found something better. In 36% of cases, the reason was a negative experience during the interview process itself.


The interview, the process designed to find the right candidate, is causing the right candidate to decline.

 

The attrition starts earlier than the offer stage:

•       42% of candidates withdrew because scheduling took too long.

•       47% cited poor communication as their reason for disengaging.

•       The interview stage now accounts for 32% of all candidate drop-off, more than application abandonment, scheduling delays, and onboarding friction combined.

•       A quarter of candidates disengage between the interview and the offer, after making it all the way through the process.

 

There is no other function in a well-run organisation that would accept a 51% conversion rate on its most important output and respond by adding complexity to the workflow. Hiring does it routinely, and calls the added complexity rigour.



The Google finding that changed nothing

In 2016, Google’s internal people analytics team published a finding that should have restructured how every organisation thinks about interview volume.

After analysing years of hiring data, the research concluded that four interviews are sufficient to predict with 86% confidence whether a candidate is a qualified hire. More than four interviews generated no statistically meaningful additional information about the candidate.

 

86%: predictive confidence achieved after just four interviews (Google People Analytics, 2016); additional rounds added no meaningful signal.

 

Google capped its standard process at four rounds. It was a research-based decision made by a company with more hiring data than almost any organisation on earth.

And then every other company in the industry kept adding rounds anyway.

The research did not travel, because the belief it challenged was too fundamental. "More evaluation is better" is not a policy position that most companies have consciously adopted. It is an ambient assumption so deeply embedded in hiring culture that a single study, even a Google study, could not dislodge it.

 

The result is an industry that knows diminishing returns exist, has evidence of where they begin, and continues to schedule around them.


 

What the interviews are actually measuring

The meta-analytic record on this is not new, but it is not widely internalised either.

 

•       Unstructured interviews (conversational, intuition-based formats that still dominate most hiring processes) carry a predictive validity of approximately 0.38 for job performance.

•       Structured interviews (consistent questions evaluated against a predetermined framework) reach approximately 0.51.

 

Both are better than chance. Neither is high. And the gap between them is not compensated for by adding more rounds of the same format.

Running six unstructured interviews does not produce the validity of six structured interviews. It produces the variance of six different conversations by six different people forming six different impressions, none of which is systematically comparable to the others.

The interrater reliability of an unstructured panel is low. Two people interviewing the same candidate on the same day can reach opposite conclusions and both feel confident in their judgment. Adding more interviewers does not resolve this problem. It multiplies it.
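The "multiplying, not resolving" point has a classical form: the Spearman-Brown prophecy formula gives the reliability of the average of k parallel raters. The single-rater reliability of 0.4 below is an illustrative assumption for an unstructured panel, not a figure from this article.

```python
# Spearman-Brown prophecy formula: reliability of the mean of k parallel
# raters, given single-rater reliability r. The r = 0.4 value used below is
# an illustrative assumption, not a measured number.

def pooled_reliability(r: float, k: int) -> float:
    return k * r / (1 + (k - 1) * r)

for k in (1, 2, 4, 6, 8):
    print(k, round(pooled_reliability(0.4, k), 3))
# Gains per added rater shrink quickly, and averaging more raters can never
# change what the underlying format is able to measure in the first place.
```

Pooling raters raises reliability with rapidly diminishing increments; it does not convert an unstructured format into a structured one.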


52% of candidates say 4 to 5 rounds is too many; roughly 1 in 3 say even 2 to 3 rounds crosses a line.

 

Candidates have an intuitive sense that something is not working, that the additional scrutiny is not producing better decisions, just more friction. The data supports their intuition.


 

The candidate the long process loses

Every week of process length creates selection pressure. Not for the most qualified candidates, but for the most available ones.


The long process doesn’t filter for quality. It filters for availability. And availability is not a proxy for capability.

 

The candidate who can wait 68.5 days for a hiring decision is, by definition, not the candidate currently receiving other offers. Top performers, those with strong track records who are being actively recruited and have multiple processes running in parallel, disengage earlier and at higher rates than average candidates. They have:


•       Less tolerance for disorganised scheduling.

•       Less patience for redundant rounds.

•       More options to exercise when a process signals that their time is not valued.

 

The passive candidate who agreed to explore a conversation, attended three rounds, and found their enthusiasm cooling by the time round four was being scheduled did not formally reject the company. They just stopped responding.


 

The accountability gap behind the escalation

If the research is clear and the candidate behaviour is documented, why does interview volume keep growing? The answer is incentive design.

•       The hiring manager who requests another panel has protected themselves. If the hire goes wrong, they were thorough.

•       The recruiter who schedules an additional loop has reduced their personal exposure.

•       The HR team that adds a cultural assessment to an already-extended process is demonstrating diligence.

 

Nobody owns the cost of the candidate who dropped out in round five. Nobody is accountable for the offer acceptance rate from a process they extended by three weeks. The harm from a long, friction-heavy interview falls on the candidate, and eventually on hire quality data, but it does not surface in a way that any individual decision-maker has to answer for.

 

This is how a problem that everyone can see at an aggregate level persists at an individual level. The incentives are not aligned with the outcome.

 


 

What the exit looks like

The solution is not a shorter bad process. Adding speed to an unstructured, inconsistently evaluated, poorly designed interview loop just loses candidates faster without improving quality.

The research has consistently pointed toward the same combination: fewer structured interviews, each designed to gather specific evidence of capability, with a clear decision framework that ensures each round adds genuinely new information rather than repeating the previous one.

 

4 rounds maximum, with structured questions and calibrated scoring, outperforms 8 intuition-led conversations in both predictive validity and offer acceptance.
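One way to make "each round adds genuinely new information" concrete is to encode the loop so that rounds own disjoint evidence dimensions and scores roll up against a bar fixed before the process starts. Everything in this sketch, the round names, dimensions, 1-4 scale, and 3.0 bar, is a hypothetical illustration, not a prescribed framework.

```python
# Hypothetical sketch of a four-round structured loop. Round names, evidence
# dimensions, the 1-4 scoring scale, and the 3.0 bar are all assumptions.
from dataclasses import dataclass, field

@dataclass
class Round:
    name: str
    dimensions: tuple                           # evidence only this round gathers
    scores: dict = field(default_factory=dict)  # dimension -> score on a 1-4 scale

loop = [
    Round("technical deep dive", ("system design", "code quality")),
    Round("past-work walkthrough", ("ownership", "trajectory")),
    Round("cross-functional panel", ("communication", "collaboration")),
    Round("hiring manager", ("role fit", "growth evidence")),
]

def decide(rounds, bar: float = 3.0) -> str:
    dims = [d for r in rounds for d in r.dimensions]
    # A duplicated dimension means one round is repeating another's evidence.
    assert len(dims) == len(set(dims)), "overlapping rounds add no new signal"
    scores = [s for r in rounds for s in r.scores.values()]
    return "offer" if sum(scores) / len(scores) >= bar else "no offer"

for r, vals in zip(loop, [(4, 3), (3, 4), (3, 3), (4, 3)]):
    r.scores = dict(zip(r.dimensions, vals))

print(decide(loop))  # average 3.375 against a 3.0 bar
```

The disjointness check is the point: a loop that fails it has a round that exists for reassurance, not for evidence.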

 

But interview structure alone does not solve the underlying measurement problem. The interview, even a well-constructed one, is still asking candidates to describe what they have done and could do in a conversation setting. It does not assess what their career actually looks like: the trajectory of decisions they made, the contexts they operated in, the progression that either shows sustained growth or reveals a pattern the CV has been written to obscure.

Career-pattern evidence is what structured interviews were built to approximate. Evaluating it directly, looking at the sequence of roles, the decisions at each transition, the coherence of what someone built over time, adds a layer of signal that no number of conversation rounds can generate.

 

The companies winning the talent competition in 2026 are not running the most thorough processes. They are running the fastest accurate ones.

 

Where every step gathers new evidence. Where the decision framework is clear before the process starts. Where strong candidates are moved to an offer before a competitor finishes scheduling their third round.

Rigour is not volume. That is the distinction the hiring industry has been slow to internalise, and the one that the data, and the candidates leaving the process early, are making unavoidable.


 

AgentR evaluates candidates on career trajectory and demonstrated capability, reducing the number of rounds needed to make a confident decision, without reducing confidence. If your offer acceptance rate is telling you something your interview process is not, visit agentr.global.


Great hiring starts with great decisions.

Let AgentR surface the patterns, risks, and opportunities, while you focus on the people.

© 2026 AgentR. All rights reserved.