EU AI Act III(4)(a): High Risk Q3

Candidate Screening Agent

Structure the screening process - with full EU AI Act compliance built in.

Analyses applications against requirement profiles and prepares structured shortlists. EU AI Act high-risk system with enhanced governance.


Knockout criteria via rules, profile matching via AI, bias assessment and escalation

The agent checks formal completeness and knockout criteria deterministically, scores the match against the job profile via AI extraction with individually justified sub-scores, and escalates statistical bias patterns before the shortlist is created.

Outcome: At 200 to 800 applications per position, an auditable shortlist instead of a black-box score, with complete documentation per candidate under EU AI Act Articles 12 to 14 from the August 2026 deadline.

27% Rules Engine
55% AI Agent
18% Human

The architectural core is the decomposition of the screening process into individual, documented decision steps.

August 2026: deadline for every recruiting automation

From August 2026, every AI system that filters job applications will be a high-risk system under the EU AI Act. Organisations that cannot demonstrate a documented decision architecture by then will have to shut down the automation. Not at some point. On a fixed date.

That is the reality in which recruiting departments are planning right now. And it is uncomfortable, because candidate screening is simultaneously the process where automation delivers the greatest leverage - and carries the greatest risks.

The problem behind the problem


The obvious challenge is familiar: 200, 400, sometimes 800 applications per role. Recruiters who, after the fiftieth CV, no longer apply the same standards as they did for the first. Hiring managers who ask after three weeks why the shortlist is not ready yet.

But the real problem runs deeper. Most organisations that use AI in screening do not know how their algorithms evaluate. They do not know the weighting. They cannot explain why Candidate A is on the shortlist and Candidate B is not. And that is precisely where risk accumulates.

The Mobley v. Workday case makes this tangible. An applicant sues not the prospective employer but the software vendor whose AI screened him out. The US federal court allows the claim to proceed as a collective action - the suit alleges systematic discrimination by age, ethnicity, and disability. A University of Washington study shows: in AI-driven CV screenings, names associated with white ethnicity were preferred in 85% of cases. In some occupational groups, Black male applicants were disadvantaged in 100% of test cases.

These are not hypothetical scenarios. They are ongoing proceedings and published research.

Why high-risk does not have to mean high-friction

The EU deliberately classified candidate screening as high-risk - Annex III, Section 4(a). Systems that analyse and filter applications are subject to the full cascade of obligations: risk management system under Art. 9, data quality under Art. 10, technical documentation, record-keeping, transparency towards applicants, human oversight.

That sounds like bureaucracy. In fact, it describes precisely the architecture a responsible screening system needs anyway. The question is not whether but how - and here operational patchwork separates from resilient infrastructure.

The critical distinction: many organisations treat AI screening as a monolithic system. Application in, score out. But a monolithic system cannot be audited, cannot be explained, cannot be steered in a differentiated way. If the score says 72 and nobody knows whether that is because of missing language skills or a gap in the CV, the system is regulatorily worthless.

Screening as a chain of individual decisions

The Candidate Screening Agent works differently. It decomposes the screening process into individual, documented decision steps. Each step has a defined decision-maker: rules engine, AI, or human.

Application received
    |
    v
[Rules engine] Formal completeness
    |
    v
[Rules engine] Knockout criteria (qualification, experience, language)
    |
    v
[AI agent]     Semantic profile matching
    |
    v
[AI agent]     Weighted scoring + rationale per sub-score
    |
    v
[AI agent]     Bias check for statistical patterns
    |
    v
[Human]        Shortlist review and adjustment

The first two stages are rules-based. No machine learning, no black box. A candidate without the required professional experience does not fall through an algorithm - they fall through a rules engine that employee representatives have approved in advance, grounded in established selection guidelines. (UK: under the Equality Act 2010, all selection criteria must be objectively justifiable.)
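A minimal sketch of what such a pre-approved knockout stage can look like - rule IDs, versions, thresholds, and field names here are illustrative assumptions, not the production rule set. The point is that every rule is versioned and every application yields one auditable record per rule:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    rule_id: str
    version: str
    description: str
    check: Callable[[dict], bool]

# Illustrative rules only - real rule sets come from the approved
# selection guidelines, with thresholds signed off in advance.
RULES = [
    Rule("KO-001", "1.2", "At least 3 years of professional experience",
         lambda a: a["years_experience"] >= 3),
    Rule("KO-002", "1.0", "Required language level C1",
         lambda a: "C1" in a["language_levels"]),
]

def run_knockout(application: dict) -> list[dict]:
    """Apply every knockout rule and return one auditable record per rule."""
    return [
        {"rule_id": r.rule_id, "version": r.version,
         "description": r.description, "passed": r.check(application)}
        for r in RULES
    ]
```

Because each record names the rule ID and version, a rejected candidate does not get "the algorithm said no" - they get the concrete rule, in the concrete version, that their application failed.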

Only at the profile matching stage does the AI agent come into play. The semantic analysis compares the CV and qualifications against the requirement profile. But - and this is the architectural core - every sub-assessment is individually justified. Not one score but six or eight documented individual assessments that together produce the ranking.
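As a sketch, assuming three criteria for brevity (names, weights, and scores are invented for illustration - a real profile carries six or eight), the aggregation keeps every rationale attached to its sub-score rather than collapsing them into one opaque number:

```python
# Illustrative sub-assessments - in production, each rationale is
# generated per candidate and stored with the decision record.
sub_scores = [
    {"criterion": "competence fit",   "weight": 0.4, "score": 89,
     "rationale": "8 of 9 required skills evidenced in project history"},
    {"criterion": "experience depth", "weight": 0.3, "score": 75,
     "rationale": "6 years in role, requirement is 5"},
    {"criterion": "language fit",     "weight": 0.3, "score": 60,
     "rationale": "B2 certificate on file, C1 preferred"},
]

def weighted_total(assessments: list[dict]) -> float:
    """Aggregate justified sub-scores; weights are expected to sum to 1."""
    assert abs(sum(a["weight"] for a in assessments) - 1.0) < 1e-9
    return round(sum(a["weight"] * a["score"] for a in assessments), 1)
```

The ranking score is then a transparent function of named criteria - when a reviewer asks why the total is what it is, each term of the sum has its own documented answer.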

The architecture satisfies EU AI Act Articles 12, 13, and 14 by construction, not by retrofit

Art. 14 of the EU AI Act requires human oversight. No committee can manually review 400 applications and simultaneously exercise oversight. But a committee can review a shortlist with documented individual assessments. It can trace why Candidate A scored 89 on competence fit and Candidate B scored 61. It can read the bias report and identify whether age groups are being systematically scored differently.

That is the difference between formal and substantive compliance. Formal compliance ticks off requirements. Substantive compliance builds an architecture in which human oversight actually works - because the information basis for it exists.

Art. 13 requires transparency towards users and affected persons. When every assessment is justified, a rejected applicant can also understand at which criterion their application failed. Not at an aggregate score that explains nothing - but at concrete, named requirements.

And Art. 12 requires records that enable subsequent evaluation. A decision log that captures every step with timestamp, decision-maker type, and rationale fulfils this not as a by-product. It is the core function.
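A minimal sketch of such a log entry, assuming a JSON serialisation and the field names used in the decision records below (the function and storage choice are illustrative, not the product's actual API):

```python
import json
from datetime import datetime, timezone

def log_decision(step: str, decider_type: str, decider_ref: str,
                 inputs: dict, outcome: str, rationale: str) -> str:
    """Serialise one decision record: timestamp, decider type, the exact
    rule/model/reviewer reference, inputs, outcome, and rationale.
    In production this would go to an append-only store, not be returned."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "decider_type": decider_type,  # "human" | "rules_engine" | "ai_agent"
        "decider_ref": decider_ref,    # rule ID + version, model version, or reviewer ID
        "inputs": inputs,
        "outcome": outcome,
        "rationale": rationale,
    }
    return json.dumps(entry)
```

Every step in the chain above emits one such record, so "subsequent evaluation" under Art. 12 is a query over existing data, not a reconstruction exercise.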

The governance infrastructure as an investment

The Candidate Screening Agent is often the first high-risk agent an organisation puts into production. It thereby forces the build-out of infrastructure that no single agent would justify alone, but that every subsequent high-risk agent reuses.

The bias monitoring engine that detects statistical patterns in scoring here is reused by the Performance Review Documentation Agent, the Merit Cycle Governance Agent, and the Promotion Process Agent. The documented scoring method - every assessment with its rationale - becomes the standard for any agent that prepares decisions affecting individuals. The Equality Act-compliant rejection communications establish templates reused across all candidate-facing communications.

Screening is therefore not an isolated use case. It is the foundation on which all high-risk governance is built - documented, auditable, and defensible before employee representatives, before the deadline arrives.

Micro-Decision Table

Who decides in this agent?

11 decision steps, split by decider

27% (3/11) Rules Engine - deterministic
55% (6/11) AI Agent - model-based with confidence
18% (2/11) Human - explicitly assigned
Each row is a decision, with its decision record and whether it can be challenged.
Parse application documents (AI Agent) - Extract structured data from CV, cover letter, application form

Document parsing and data extraction from varied formats

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Match qualifications to requirements (AI Agent) - Compare extracted qualifications against job requirement profile

AI-assisted matching with confidence scores per requirement

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Identify qualification gaps (AI Agent) - Flag requirements not clearly met by application data

Gap analysis comparing profile to requirement list

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Generate structured candidate profile (AI Agent) - Present extracted data and match assessment to recruiter

Automated profile assembly for consistent recruiter review

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Run bias monitoring check (AI Agent) - Analyse output distribution for demographic bias indicators

Statistical fairness analysis on screening outputs

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Flag bias concern (Rules Engine) - Alert compliance team if bias indicators exceed threshold

Threshold-based alerting per defined fairness metrics

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.

Recruiter reviews profile (Human) - Evaluate candidate based on structured profile and own assessment

Human decision required - AI provides structure, not verdict

Decision Record

Decider ID and role
Decision rationale
Timestamp and context

Challengeable: Yes - via manager, works council, or formal objection process.

Record screening decision (Human) - Document recruiter's decision with reasoning

Mandatory documentation per EU AI Act audit trail requirement

Decision Record

Decider ID and role
Decision rationale
Timestamp and context

Challengeable: Yes - via manager, works council, or formal objection process.

Log decision for audit trail (Rules Engine) - Store complete decision record with all inputs and outputs

Automated logging per high-risk system documentation requirements

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.

Update candidate status (Rules Engine) - Move candidate to next stage or rejection

Status update based on recorded recruiter decision

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.

Generate rejection documentation (AI Agent) - Produce GDPR-compliant rejection notification if applicable

Automated notification generation per configured templates

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Decision Record and Right to Challenge

Every decision this agent makes or prepares is documented in a complete decision record. Affected employees can review, understand, and challenge every individual decision.

Which rule in which version was applied?
What data was the decision based on?
Who (human, rules engine, or AI) decided - and why?
How can the affected person file an objection?
How the Decision Layer enforces this architecturally →

Does this agent fit your process?

We analyse your specific HR process and show how this agent fits into your system landscape. 30 minutes, no preparation needed.

Analyse your process

Governance Notes

EU AI Act III(4)(a): High Risk
Classified as high-risk under the EU AI Act, Annex III, Section 4(a) - AI systems intended for use in recruiting or selection of candidates. Full conformity assessment mandatory before deployment. Article 26(7) requires informing worker representatives before introducing the system. Continuous bias monitoring is required, not optional.

GDPR Articles 13-14 require informing candidates about automated processing. The Article 22 right not to be subject to solely automated decisions applies - the agent must ensure a human makes every screening decision. A fundamental rights impact assessment must be completed.

The agent's governance requirements are the strictest in this catalog. The Decision Layer decomposes every process into individual decision steps and defines for each: Human, Rules Engine, or AI Agent. Every decision is documented in a complete decision record. Affected employees can understand and challenge any automated decision.

Assessment

Agent Readiness 64-71%
Governance Complexity 74-81%
Economic Impact 78-85%
Lighthouse Effect 76-83%
Implementation Complexity 51-58%
Transaction Volume Daily

Prerequisites

  • Applicant tracking system (ATS) with structured data model
  • Job requirement profiles with measurable qualification criteria
  • EU AI Act conformity assessment documentation
  • Bias monitoring framework with defined fairness metrics
  • Decision logging infrastructure with full audit trail
  • Works council agreement on AI-supported screening (Art. 26(7) EU AI Act)
  • Data Protection Impact Assessment for automated candidate processing
  • Fundamental rights impact assessment per EU AI Act requirements
  • Human-in-the-loop workflow ensuring no automated screening decisions

Infrastructure Contribution

The Candidate Screening Agent is the litmus test for high-risk governance readiness. If an organisation can deploy this agent with full EU AI Act compliance - conformity assessment, bias monitoring, decision logging, human-in-the-loop - it can deploy any high-risk agent. The governance infrastructure validated here directly transfers to the Performance Review Documentation Agent, Merit Cycle Governance Agent, and Promotion Process Agent. Builds Decision Logging and Audit Trail used by the Decision Layer for traceability and challengeability of every decision.

What this assessment contains: 9 slides for your leadership team

Personalised with your numbers. Generated in 2 minutes directly in your browser. No upload, no login.

  1. Title slide - Process name, decision points, automation potential
  2. Executive summary - FTE freed, cost per transaction before/after, break-even date, cost of waiting
  3. Current state - Transaction volume, error costs, growth scenario with FTE comparison
  4. Solution architecture - Human, rules engine, AI agent with specific decision points
  5. Governance - EU AI Act, works council, audit trail - with traffic light status
  6. Risk analysis - 5 risks with likelihood, impact and mitigation
  7. Roadmap - 3-phase plan with concrete calendar dates and Go/No-Go
  8. Business case - 3-scenario comparison (do nothing/hire/automate) plus 3×3 sensitivity matrix
  9. Discussion proposal - Concrete next steps with timeline and responsibilities

Includes: 3-scenario comparison

Do nothing vs. new hire vs. automation - with your salary level, your error rate and your growth plan. The one slide your CFO wants to see first.

Calculation methodology

Hourly rate: Annual salary (your input) × 1.3 employer burden ÷ 1,720 annual work hours

Savings: Transactions × 12 × automation rate × minutes/transaction × hourly rate × economic factor

Quality ROI: Error reduction × transactions × 12 × EUR 260/error (APQC Open Standards Benchmarking)

FTE: Saved hours ÷ 1,720 annual work hours

Break-Even: Benchmark investment ÷ monthly combined savings (efficiency + quality)

New hire: Annual salary × 1.3 + EUR 12,000 recruiting per FTE

All data stays in your browser. Nothing is transmitted to any server.

Candidate Screening Agent

Initial assessment for your leadership team

A thorough initial assessment in 2 minutes - with your numbers, your risk profile and industry benchmarks. No vendor logo, no sales pitch.


All data stays in your browser. Nothing is transmitted.

Agent Blueprint Available

A full blueprint for Candidate Screening Agent is available with micro-decision decomposition, industry variants, and implementation details.

View Blueprint

Frequently Asked Questions

Does the agent reject candidates automatically?

No. The agent structures information for human review. Every screening decision is made by a recruiter. GDPR Article 22 and the EU AI Act's human-oversight requirement (Art. 14) rule out solely automated screening decisions - this agent is designed with human-in-the-loop as a fundamental architectural requirement.

Why is this a Q3 agent and not Q1?

The governance requirements for high-risk AI in recruiting are the most demanding in the catalog. Decision logging, bias monitoring, conformity assessment, and works council agreement must be in place before deployment. These capabilities are built and proven in Q1 agents (payroll, time management) first.

How does the bias monitoring work?

The agent continuously analyses its output distribution across demographic groups (where permitted by law). If systematic disparities emerge - for example, candidates from certain backgrounds consistently receiving lower match scores - the system flags this for compliance review. Monitoring is statistical and ongoing, not a one-time certification.
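The exact fairness metric is configurable; as one illustrative sketch (not the product's implementation), a selection-rate comparison flags any group whose shortlist rate falls below a threshold fraction of the best group's rate - the common four-fifths rule of thumb:

```python
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (shortlisted, total applications)."""
    return {group: shortlisted / total
            for group, (shortlisted, total) in outcomes.items()}

def bias_flags(outcomes: dict, threshold: float = 0.8) -> dict:
    """Flag any group whose shortlist rate is below `threshold` times
    the best group's rate - an illustrative disparity check, run
    continuously over the agent's outputs rather than once."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}
```

A flagged group does not automatically block the pipeline; per the decision table above, exceeding the threshold triggers a rules-based alert to the compliance team for review.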

What Happens Next?

1. Initial call (30 minutes) - We analyse your process and identify the optimal starting point.

2. Discover (1 week) - Mapping your decision logic. Rule sets documented, Decision Layer designed.

3. Build (3-4 weeks) - Production agent in your infrastructure. Governance, audit trail, cert-ready from day 1.

4. Self-sufficient (12-18 months) - Full access to source code, prompts and rule versions. No vendor lock-in.

Implement This Agent?

We assess your process landscape and show how this agent fits into your infrastructure.