56% of Recruiters Ignore Your AI Match Score. Trust Is the Real Product.
A number without a reason isn't a decision. It's a guess with better formatting.

5 min read
Explainers
Your ATS gives a candidate a score of 84.
What does that mean?
Not rhetorically — literally. What does the 84 represent? Which signals contributed to it? How much did the job title weight versus the keyword density versus the tenure length? What would an 83 look like, and why is this candidate 84 instead?
Most recruiters using AI-powered screening tools cannot answer these questions. Not because they haven't looked — because the system isn't designed to tell them.
And according to Enhancv research, 56% of recruiters either ignore AI match scores entirely or don't have them surfaced in their workflow at all. More than half. The flagship feature of the AI screening category — the one that's supposed to replace human intuition with algorithmic precision — is being skipped by the majority of the people it was built for.
That isn't a user experience problem. It's a trust problem. And it's worth understanding why.
Why recruiters don't trust the score
Recruiter distrust of AI match scores isn't irrational. It's the correct response to a tool that provides a conclusion without showing its reasoning.
Consider how a good recruiter actually makes decisions. They read a resume and form a view — not just whether the candidate matches the JD, but why. What's the trajectory here? What does this career sequence tell me about how this person thinks? Where are the gaps, and are they disqualifying or just unusual? The reasoning is explicit, at least internally, and it can be challenged. A colleague can say "I think you're underweighting the international experience" and the recruiter can engage with that.
An AI score of 84 doesn't give you any of that. It gives you a number. The reasoning — if there is any — is locked inside a model that the recruiter can neither inspect nor argue with. They're being asked to trust a conclusion they cannot evaluate.
Most experienced recruiters, faced with a choice between their own judgment and an opaque number, sensibly use their own judgment. The score becomes noise.
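To make the opacity concrete, here is a deliberately simplified, hypothetical sketch of a weighted-sum matcher. The weights and signal names are invented for illustration; no real ATS is this simple, and none publishes its internals. The point is structural: everything that produced the number stays inside the function.

```python
# Hypothetical illustration only: a toy weighted-sum matcher, not any vendor's
# actual model. The weights and signals are hidden from the recruiter; only the
# final number ever reaches the screen.

WEIGHTS = {                 # internal to the system
    "title_similarity": 0.35,
    "keyword_density": 0.40,
    "tenure_fit": 0.25,
}

def match_score(signals: dict[str, float]) -> int:
    """Collapse several 0-1 signals into a single 0-100 score."""
    weighted = sum(WEIGHTS[name] * value for name, value in signals.items())
    return round(weighted * 100)

# Two very different candidates, identical output.
candidate_a = {"title_similarity": 0.95, "keyword_density": 0.70, "tenure_fit": 0.92}
candidate_b = {"title_similarity": 0.60, "keyword_density": 0.98, "tenure_fit": 0.95}

print(match_score(candidate_a))  # 84
print(match_score(candidate_b))  # 84
```

Two quite different candidates collapse to the same 84. Nothing in the output tells the recruiter which trade-offs produced it, so there is nothing to agree or disagree with.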
The irony of building distrust into the design
The AI screening industry set out to solve for human bias in hiring. The implicit claim was: algorithms are more consistent than humans, less susceptible to the irrelevant factors that distort human judgment, more reliable at surfacing the best candidates.
Some of that is true. Algorithms are consistent. They apply the same logic to every candidate without the fatigue, mood variation, and associative bias that affect human reviewers.
But consistency and accuracy are different things. A biased algorithm is consistently biased — it applies the same distorted logic at scale, without the variation that sometimes allows human bias to self-correct.
The University of Washington found something important here: even when people are told an AI system has shown bias, they still tend to follow its recommendations. The authority of the system overrides the awareness of its limitation. Which means the recruiter who ignores the score entirely may be making a more epistemically honest decision than the one who follows it without understanding it.
The system designed to reduce bias may be producing a different kind of bias — the bias of institutional authority, applied uniformly, without transparency or recourse.
What reasoning-based evaluation actually looks like
The alternative to an opaque score is an explained assessment. Not a number — a structured account of what the evaluation found and why it matters.
The difference looks something like this.
"Score-based output:" Candidate match: 84/100.
"Reasoning-based output:" This candidate has led sales teams in two high-growth B2B SaaS environments, with documented revenue outcomes in both. Their career progression is above the median rate for their sector. The eight-month gap in 2022 coincides with a period where they were consulting independently — three clients are listed, one with a specific revenue outcome. The vocabulary in their resume doesn't closely mirror the JD, but the underlying experience is a strong match for the scope of this role.
The second version gives the recruiter something to work with. They can agree. They can push back. They can add context the evaluation didn't have. They can use the assessment as the beginning of a conversation rather than a verdict to accept or reject.
Crucially, they can trust it — because they can see the reasoning and evaluate it themselves.
This is what explainability means in practice. Not a tooltip that describes which factors contributed to a score. An actual account of what was found, framed in language that connects to how a recruiter thinks about candidate quality.
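One way to picture the difference, purely as an illustrative sketch, is to treat the assessment as a set of discrete claims, each tied to its evidence, rather than as a single number. The structure and field names below are assumptions made for the example, not a description of any real product's schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the field names and structure are assumptions,
# not a real schema.

@dataclass
class Claim:
    statement: str            # what the evaluation asserts
    evidence: list[str]       # where in the resume that assertion comes from

@dataclass
class Assessment:
    summary: str
    claims: list[Claim] = field(default_factory=list)

assessment = Assessment(
    summary="Strong match for a B2B SaaS sales leadership role.",
    claims=[
        Claim(
            statement="Led sales teams in two high-growth B2B SaaS environments.",
            evidence=["VP Sales, 2019-2022", "Head of Sales, 2016-2019"],
        ),
        Claim(
            statement="The 2022 gap was independent consulting, not inactivity.",
            evidence=["Three clients listed, one with a stated revenue outcome."],
        ),
    ],
)

# Each claim can be read, challenged, or corrected on its own,
# which is exactly what a bare 84 cannot offer.
for claim in assessment.claims:
    print(f"- {claim.statement} (evidence: {len(claim.evidence)} item(s))")
```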
The feedback loop that explainability creates
There's a second benefit to reasoning-based evaluation that is less obvious but arguably more important: it creates a feedback loop.
When a recruiter can read an explanation and respond to it — agreeing, disagreeing, adding context — that response is itself information. It tells the system where its reasoning was right and where it missed something. Over time, that feedback improves the evaluation. The model learns what this organisation values, in this role, at this point in the company's development.
An opaque score can't be improved through recruiter feedback, because there's nothing to respond to. You can mark a candidate as "good" or "bad" after the fact, but you can't tell the system *why* its reasoning was wrong.
The transparency isn't just ethically important. It's practically necessary for the tool to get better.
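As a rough sketch of what that feedback loop could look like in practice, imagine the recruiter responding to individual claims rather than to the overall result. Everything here (the types, the verdicts, the storage) is hypothetical; it only illustrates why per-claim responses carry more information than a thumbs-up on a score.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch of capturing per-claim feedback; the types and storage
# are invented for illustration, not drawn from any real system.

class Verdict(Enum):
    AGREE = "agree"
    DISAGREE = "disagree"

@dataclass
class ClaimFeedback:
    claim_id: str       # which piece of reasoning is being answered
    verdict: Verdict
    note: str = ""      # the recruiter's added context

feedback_log: list[ClaimFeedback] = []

def record_feedback(claim_id: str, verdict: Verdict, note: str = "") -> None:
    """Store a recruiter's response to one specific claim in the reasoning."""
    feedback_log.append(ClaimFeedback(claim_id, verdict, note))

# Because the feedback targets a named claim rather than the overall score,
# it records *why* the reasoning was right or wrong.
record_feedback("gap-2022", Verdict.DISAGREE,
                "The consulting work was part-time; weigh the gap more heavily.")
```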
Trust as the actual product
The recruitment technology market has been so focused on capability — faster screening, better matching, more sophisticated pattern recognition — that it has consistently underinvested in credibility.
A recruiter who doesn't trust their tools doesn't use them. A tool that isn't used doesn't deliver its claimed benefits. The fastest, most accurate AI screening system in the world produces zero value if the person who is supposed to act on its outputs is ignoring it.
The 56% who ignore AI match scores aren't failing to adopt innovation. They're responding rationally to a product that hasn't earned their trust. The fix isn't better change management or more training. It's building tools that explain themselves — that show their reasoning, invite disagreement, and give the recruiter something to work with rather than a number to accept.
An 84 that no one acts on is worth less than a clear explanation that changes a decision.
Build for trust first. The capability follows.

Great hiring starts with great decisions.
Let AgentR surface the patterns, risks, and opportunities, while you focus on the people.