The Hiring Arms Race Has No Winners
When AI screens AI-written resumes, the only thing being measured is who has better AI.

5 min read
Opinions
Here is the current state of the hiring process, stated plainly.
Candidates are using AI to write resumes tailored to every job description in under two minutes. They're using AI to prepare for interviews, generate cover letters, and optimise application language to pass keyword-matching filters. Some are using prompt injection — embedding invisible instructions in their documents designed to manipulate AI screening tools directly.
On the other side, companies are deploying AI to screen those resumes, score candidates, detect AI-generated content, flag suspicious applications, and filter the pool before a human ever looks at it.
Both sides are escalating. Both sides are spending more. And the quality of hiring decisions is not improving.
This is an arms race. And arms races, by their nature, have no winners, only an ever-increasing cost of staying in the game.
How we got to mutual escalation
The arms race didn't start with AI. It started with the keyword-matching ATS and the resume coaching industry that grew up around it.
When systems filter on explicit signals, rational actors optimise for those signals. By the mid-2010s, there was a robust industry helping candidates reverse-engineer ATS logic — which keywords to include, how to format for parser compatibility, how to frame experience in language the system would reward. The game was public. Both sides adapted.
Then generative AI changed the economics of optimisation entirely. What previously required a career coach, several hours, and some understanding of how ATS systems worked now requires a prompt and two minutes. The barrier to gaming the system dropped to near zero. Application volumes exploded. Resume quality — in the narrow sense of keyword optimisation — became essentially equal across all candidates regardless of actual ability.
And the screening tools responded with more sophistication. Better pattern recognition. AI-detection features. Semantic analysis rather than keyword matching. Each escalation on the candidate side prompted an escalation on the screening side, and each escalation on the screening side prompted a response from candidates and the tools built to serve them.
Greenhouse CEO Daniel Chait described this as the "AI Doom Loop". It's a precise description. What he didn't fully answer is what the exit looks like.
The cost that doesn't show up anywhere
The direct cost of the arms race is visible: more technology spending, more recruiter time spent filtering, longer time-to-fill despite better tools.
The indirect cost is harder to see and more significant.
Trust has collapsed on both sides of the hiring market simultaneously.
Recruiters don't trust applications. 91% say they can identify candidates who are deceiving them. 34% spend half their working week filtering out low-quality or fraudulent submissions. The application — which used to be the beginning of a genuine signal — has become a document to be treated with suspicion before evaluation begins.
Candidates don't trust the process. They know applications go into black holes. They know their resume is being read by an algorithm before any human sees it. They know the process is opaque, inconsistent, and often arbitrary. So they send more applications, to more companies, using more optimised documents, with less genuine investment in any single application. Which adds volume to the pool, which makes the recruiter's job harder, which drives more automation, which drives more candidate cynicism.
The spiral is self-reinforcing. Neither side is acting irrationally given the incentives they face. Both sides are making the overall system worse.
What escalation actually selects for
The arms race has a selection effect worth naming clearly.
In any escalating optimisation contest, the winners are the best optimisers — not the best underlying candidates. When resume optimisation is cheap and universal, the screening system stops measuring candidate quality and starts measuring candidate fluency with the screening system.
The candidates who benefit from AI-assisted application optimisation are not uniformly distributed. They tend to be younger, more technically comfortable, more aware of how hiring systems work, and more willing to invest in the application process as a game to be won rather than a signal to be given honestly. These characteristics correlate weakly at best with the qualities that predict job performance.
The candidates who don't optimise — who write plain resumes describing what they actually did, who apply to roles they genuinely fit rather than spray-optimising — are increasingly disadvantaged in a system that has been tuned to reward the optimisation behaviour.
The arms race is not just expensive. It is actively selecting for the wrong thing.
Why adding more AI doesn't close the loop
The instinctive response to an AI arms race is to deploy better AI. More sophisticated detection. Deeper semantic understanding. Multi-modal assessment that goes beyond the document.
These capabilities are real and some of them are genuinely useful. But they don't address the structural problem.
The structural problem is that both sides are optimising for the same artefact: the application document. Better AI reading a better-optimised document is still a document-reading exercise. The signal being evaluated — does this candidate's written representation of themselves match our requirements? — is still the same signal, just processed more expensively.
The exit from the arms race is not a smarter filter. It's a different signal.
Career trajectory, evaluated as a coherent sequence of events in context, is structurally harder to fake than a document. You cannot easily construct a plausible five-year career arc with verifiable progression, specific contextual detail, and coherent intent signals. The trajectory either exists or it doesn't. What someone did, where, with what resources, against what constraints, for how long — these are facts that can be evaluated, not presentations that can be optimised.
Interview intelligence — structured, consistent, evaluated against a predetermined performance framework — adds another layer of evidence that the document cannot provide. The candidate who knows the STAR framework can prepare STAR examples. The candidate who is genuinely experienced handles the follow-up questions, the challenges to their reasoning, the moments where the prepared answer runs out and judgment takes over.
Neither of these is a complete solution. But both are exits from the document optimisation loop — they move the evaluation toward evidence of actual capability, which is where hiring was always supposed to be.
The structural condition for ending the race
The arms race continues because the incentives on both sides sustain it. Candidates optimise because optimisation works. Companies deploy screening technology because volume requires it.
The only way to change the dynamic is to change what the screening rewards.
A hiring process that rewards optimised documents will attract optimised documents. A hiring process that rewards evidence of actual performance — one that can distinguish a genuine career from a well-constructed narrative — will attract candidates who have actual performance to show. The candidate pools often overlap; what differs is the signals candidates are asked to provide and the behaviours those signals reward.
This isn't idealism. It's incentive design. Build the evaluation around the thing you actually want to select for. The candidates will respond to whatever signal you create.
The hiring arms race is expensive, trust-destroying, and selecting for the wrong qualities. And it will continue for exactly as long as document optimisation determines who advances.
That's a choice. Not a law of nature.

Great hiring starts with great decisions.
Let AgentR surface the patterns, risks, and opportunities, while you focus on the people.



© 2025 AgentR. All rights reserved.