EU AI Act III(4)(a): High Risk

Pre-Hire Due Diligence Agent

Structured background verification - legally compliant, consistently documented.

Coordinates reference checks, validates credentials, and runs compliance screenings. EU AI Act high-risk system with enhanced documentation.


Check requirements via rules, certificate validation via AI, compliance routing

The agent determines check requirements deterministically by position and regulation, validates submitted certificates via AI extraction against known issuer databases, and routes compliance checks by rule - the final hiring recommendation remains human-in-the-loop in recruiting.

Outcome: The HireRight Global Benchmark Report 2024 documents discrepancies between background-check results and application claims - for leadership positions and regulated roles, compliance risk rises significantly, particularly where check steps lack traceability.

33% Rules Engine
45% AI Agent
22% Human

The architecture makes the check process auditable for auditors and compliance teams.

Degree invented, reference faked, six months too late

The hire is signed, the candidate starts - and six months later it turns out that the stated university degree never existed. The reference letter from the previous employer was manipulated. The claimed financial services suitability cannot be substantiated.

This is not an edge case. Cross-industry surveys show around one-third of all applications contain false information. A Resume Builder survey from January 2025 puts it more precisely: 44 percent of respondents admitted to lying during the hiring process - 24 percent directly in the CV. Since generative AI produces letters of reference, endorsements, and certificates in minutes, the rate keeps rising. The verification gap between what candidates claim and what employers check is widening, not narrowing.

The real cost of skipping verification

This agent follows the Decision Layer principle: each decision is either rule-based, AI-assisted, or explicitly assigned to a human.

What happens when a company of 2,000 employees fills 150 positions per year and leaves one in five statements unverified?

The legal side is clear. Forged certificates are document fraud, punishable with imprisonment of up to five years across most European jurisdictions. Employers can terminate without notice even years after the hire. But the termination is the smaller problem. Until discovery, the company has paid salary, invested in onboarding, assigned projects, and built customer relationships - and then faces an organisational rupture that reaches well beyond the personnel file.

In regulated industries, the risk multiplies. A financial services firm that puts an employee without verified fitness and properness into a licensed role faces not only employment-law consequences but regulatory ones. A pharmaceutical company that fills a GMP-relevant role with someone whose qualifications have not been verified jeopardises its manufacturing authorisation.

Why verification still fails

The paradox: most companies have verification processes. They just do not work reliably.

The typical due-diligence sequence looks like this: the recruiter receives the go-ahead from the business. Then a chain of individual steps begins - reference request by email, certificate to the business unit for review, request to the candidate for a police clearance, sanctions list check by compliance. Each step waits for the previous one. Each step sits with a different person. No step has a defined deadline.

TYPICAL SEQUENCE (sequential, 14-22 days)

Offer  -->  Reference 1  -->  Reference 2  -->  Credential check
                                                    |
                                              Compliance check
                                                    |
                                              Police clearance
                                                    |
                                              Clearance

The consequence: pre-hire checks take two to three weeks. During that time, candidates drop out - especially the good ones, who have alternatives. Recruiters cut corners, skip steps, document incompletely. And in regulated areas, the proof that verification actually happened is often missing at the end.

What needs to change architecturally

The solution is not more staff and not more discipline. The solution is a different process architecture.

The Pre-Hire Due Diligence Agent decomposes the verification process into independent decision steps. Each step has a defined decider - rules engine, AI, or human - and a documented justification.

The first step is rule-based: which checks are required for this specific position? An accounting clerk does not need a police clearance but does need a credential check. A compliance officer at a bank needs financial services suitability, sanctions list screening, and extended reference checks. This rule set is defined once, agreed with worker representatives, and then applied automatically.
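A minimal sketch of such a rule set, assuming hypothetical role categories and check names - not the agent's real schema:

```python
# Deterministic check-catalogue lookup. Role categories, jurisdictions,
# and check names below are illustrative placeholders.
CHECK_MATRIX = {
    ("accounting_clerk", "DE"): {
        "credential_check", "reference_check",
    },
    ("compliance_officer_bank", "DE"): {
        "credential_check", "reference_check",
        "financial_services_suitability", "sanctions_screening",
    },
}

def required_checks(role: str, jurisdiction: str) -> set[str]:
    """Return the check catalogue for a role/jurisdiction, or fail loudly."""
    try:
        return CHECK_MATRIX[(role, jurisdiction)]
    except KeyError:
        # No silent default: an undefined combination is a rule gap, not "no checks".
        raise ValueError(f"No rule defined for {role!r} in {jurisdiction!r}")
```

Because the matrix is plain data, it can be versioned, reviewed with worker representatives, and diffed between releases - which is what makes the rule application auditable.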

Then the checks run in parallel rather than sequentially:

ARCHITECTURE (parallel, 3-7 days)

Offer  -->  [Rules] Determine check catalogue
                |
                +--> [AI] Validate credentials against registries
                |
                +--> [Human] Reference conversation 1
                |
                +--> [Human] Reference conversation 2
                |
                +--> [Rules] Sanctions list screening
                |
                +--> [Candidate] Submit police clearance
                |
                v
             [AI] Consolidate results + risk assessment
                |
                v
             [Human] Clearance or escalation
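
The fan-out above can be sketched with asyncio - a minimal illustration of the concurrency pattern, not the product's orchestration code; check names and durations are invented:

```python
import asyncio

async def run_check(name: str, days: float) -> tuple[str, str]:
    # Stand-in for an external verification call (registry query, provider API).
    await asyncio.sleep(days * 0.001)
    return name, "passed"

async def run_parallel_checks() -> dict[str, str]:
    tasks = [
        run_check("credential_validation", 2),
        run_check("reference_1", 5),
        run_check("reference_2", 5),
        run_check("sanctions_screening", 1),
        run_check("police_clearance", 7),
    ]
    # gather() runs all checks concurrently: wall-clock time is bounded by
    # the slowest check, not the sum of all checks - the 14-22 day chain
    # collapses to the longest single step.
    results = await asyncio.gather(*tasks)
    return dict(results)
```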

Credential validation happens automatically - against university registries, chamber databases, issuer directories. No recruiter visually comparing PDFs. No email to the HR department of the issuer with a two-week waiting period. The agent checks, documents the result, and escalates only on discrepancies.

Reference conversations stay with humans. An algorithm cannot hear subtle tones, cannot evaluate hesitation, cannot ask follow-up questions that go beyond the script. But the agent builds the standardised question list, schedules the conversations, documents results in a structured format, and ensures no conversation is forgotten.

GDPR as an architectural requirement, not a hurdle

Pre-employment screening in Europe operates in one of the strictest data protection frameworks in the world. National employment data rules allow processing of personal data only to the extent necessary for establishing the employment relationship. Every check needs a legal basis or explicit, voluntary consent - and voluntariness in the hiring context is legally contested, because the candidate has no real choice.

That means: every single check must document on which legal basis it is performed, what is checked, and what is not. A monolithic background check that queries everything at once is legally vulnerable. A process that justifies and documents each check individually is not.

That is exactly what decomposition into individual decisions delivers. Credential verification has a different legal basis than sanctions list screening. Reference collection requires a different form of consent than the check of publicly available registries. When each step carries its own legitimation, data protection stops being a process blocker and becomes part of the architecture.

High-risk system - and still productive

The EU AI Act classifies systems for evaluating candidates as high-risk under Annex III, Section 4(a). The cascade of obligations is comprehensive: risk management system, technical documentation, transparency toward affected persons, human oversight.

But the Pre-Hire Due Diligence Agent does not make hiring decisions. It validates facts and documents results. Whether a negative verification result leads to rejection is decided by a human. For regulatory exclusion criteria - missing financial services suitability, entry on a sanctions list - the agent flags the result clearly, but does not reject. It provides the documented basis on which humans decide.

This distinguishes due diligence architecturally from candidate screening. Screening evaluates and weights - there, the bias question is central. Due diligence verifies - there, the completeness and documentation question is central. Both high-risk, but with different governance requirements.

What the infrastructure contributes

The validation framework built here for credentials is not a one-off product. The Certification Tracking Agent uses the same check logic for ongoing licences and certifications in the employment relationship. The Audit Compliance Agent uses the same matching mechanism for sanctions lists and registry checks.

More importantly: the consent management engine built here for GDPR-compliant candidate verification becomes the foundation for every agent processing third-party personal data. Built cleanly once, documented, and approved by worker representatives - then reused.

The decision log capturing every verification step with timestamp, legal basis, and result makes the whole due diligence process audit-proof. Not as compliance theatre, but as infrastructure that delivers reliable answers when it matters - in an employment dispute, a regulatory audit, or a works council review. (US: FCRA, ban-the-box laws, and state-specific consent requirements create a parallel compliance stack that the agent’s jurisdiction-specific permissibility matrix handles directly.)
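A sketch of what such a log entry could look like - field names are assumptions for illustration, not the product's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: entries cannot be mutated after the fact
class DecisionRecord:
    step: str          # e.g. "credential_validation"
    decider: str       # "rules" | "ai" | "human"
    legal_basis: str   # e.g. "Art. 6(1)(b) GDPR" - documented per check
    result: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class DecisionLog:
    """Append-only: records can be added and read, never changed or removed."""

    def __init__(self) -> None:
        self._entries: list[DecisionRecord] = []

    def append(self, record: DecisionRecord) -> None:
        self._entries.append(record)

    def entries(self) -> tuple[DecisionRecord, ...]:
        return tuple(self._entries)
```

The append-only property plus per-entry legal basis is what turns the log from compliance theatre into evidence: each verification step can answer "who decided, on what basis, when" independently.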

Micro-Decision Table

Who decides in this agent?

9 decision steps, split by decider:

  • Rules Engine (deterministic): 3 of 9 (33%)
  • AI Agent (model-based with confidence): 4 of 9 (45%)
  • Human (explicitly assigned): 2 of 9 (22%)

Each entry below is a decision, with its decision record and whether it can be challenged.
Determine required checks - Identify which verification activities are required and permitted (Rules Engine)

Rule matrix mapping role type and jurisdiction to permitted checks

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.

Obtain candidate consent - Collect legally required consent for each verification type (Human)

Explicit candidate consent required per GDPR and local law

Decision Record

Decider ID and role
Decision rationale
Timestamp and context

Challengeable: Yes - via manager, works council, or formal objection process.

Initiate verification requests - Send requests to credential issuers, references, screening providers (AI Agent)

Automated request generation per verification type

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Track verification progress - Monitor response status and flag delays (AI Agent)

Automated tracking with deadline escalation

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Validate verification results - Check returned results for completeness and consistency (Rules Engine)

Rule-based validation of response format and content

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.

Flag discrepancies - Alert recruiting team to mismatches between claims and verification (AI Agent)

Automated comparison of candidate-provided data with verification results

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Review flagged issues - Assess discrepancies and determine impact on hiring decision (Human)

Human review required for all verification discrepancies

Decision Record

Decider ID and role
Decision rationale
Timestamp and context

Challengeable: Yes - via manager, works council, or formal objection process.

Document results - Store verification outcomes with full audit trail (AI Agent)

Automated documentation per high-risk system requirements

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Confirm clearance status - Report overall verification status: cleared/pending/issue (Rules Engine)

Aggregated status from all individual verification results

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.
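
The aggregation rule behind the clearance status can be sketched in a few lines - a minimal illustration of the precedence logic, assuming the three status values named above:

```python
def clearance_status(results: dict[str, str]) -> str:
    """Aggregate individual check statuses: any open issue dominates,
    then any pending check; only a fully clean set yields 'cleared'."""
    statuses = set(results.values())
    if "issue" in statuses:
        return "issue"
    if "pending" in statuses:
        return "pending"
    return "cleared"
```

The precedence order (issue > pending > cleared) is what makes the rule safe: a candidate is never reported as cleared while any single check is still open or flagged.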

Decision Record and Right to Challenge

Every decision this agent makes or prepares is documented in a complete decision record. Affected employees can review, understand, and challenge every individual decision.

Which rule in which version was applied?
What data was the decision based on?
Who (human, rules engine, or AI) decided - and why?
How can the affected person file an objection?
How the Decision Layer enforces this architecturally →

Does this agent fit your process?

We analyse your specific HR process and show how this agent fits into your system landscape. 30 minutes, no preparation needed.

Analyse your process

Governance Notes

EU AI Act III(4)(a): High Risk
Classified as high-risk under the EU AI Act, Annex III, Section 4(a) - the agent participates in the candidate evaluation process. Conformity assessment is mandatory.

GDPR requirements are particularly strict: candidate consent must be specific per verification type, data minimisation applies (only checks permitted for the specific role), and retention periods for verification data must be defined. In some jurisdictions, criminal background checks are only permissible for specific role categories. The agent must enforce jurisdiction-specific permissibility rules to prevent illegal screening activities.

The Decision Layer decomposes every process into individual decision steps and defines for each: Human, Rules Engine, or AI Agent. Every decision is documented in a complete decision record. Affected employees can understand and challenge any automated decision.

Assessment

Agent Readiness 58-65%
Governance Complexity 78-85%
Economic Impact 51-58%
Lighthouse Effect 38-45%
Implementation Complexity 54-61%
Transaction Volume Weekly

Prerequisites

  • Verification type matrix per role and jurisdiction
  • Integration with credential verification services
  • Reference collection workflow and templates
  • Regulatory screening provider interfaces (where applicable)
  • Candidate consent management system
  • EU AI Act conformity assessment documentation
  • Data Protection Impact Assessment for candidate background processing
  • Legal review of permissible checks per jurisdiction

Infrastructure Contribution

The Pre-Hire Due Diligence Agent builds the external verification and consent management infrastructure that supports any agent interfacing with external data sources. The jurisdiction-specific permissibility engine - determining what is legally allowed where - is reusable across compliance and policy agents. Builds Decision Logging and Audit Trail used by the Decision Layer for traceability and challengeability of every decision.

What this assessment contains: 9 slides for your leadership team

Personalised with your numbers. Generated in 2 minutes directly in your browser. No upload, no login.

  1. Title slide - Process name, decision points, automation potential
  2. Executive summary - FTE freed, cost per transaction before/after, break-even date, cost of waiting
  3. Current state - Transaction volume, error costs, growth scenario with FTE comparison
  4. Solution architecture - Human, rules engine, and AI agent with specific decision points
  5. Governance - EU AI Act, works council, audit trail - with traffic-light status
  6. Risk analysis - 5 risks with likelihood, impact, and mitigation
  7. Roadmap - 3-phase plan with concrete calendar dates and Go/No-Go
  8. Business case - 3-scenario comparison (do nothing/hire/automate) plus 3×3 sensitivity matrix
  9. Discussion proposal - Concrete next steps with timeline and responsibilities

Includes: 3-scenario comparison

Do nothing vs. new hire vs. automation - with your salary level, your error rate and your growth plan. The one slide your CFO wants to see first.

Calculation methodology

Hourly rate: Annual salary (your input) × 1.3 employer burden ÷ 1,720 annual work hours

Savings: Transactions × 12 × automation rate × minutes/transaction × hourly rate × economic factor

Quality ROI: Error reduction × transactions × 12 × EUR 260/error (APQC Open Standards Benchmarking)

FTE: Saved hours ÷ 1,720 annual work hours

Break-Even: Benchmark investment ÷ monthly combined savings (efficiency + quality)

New hire: Annual salary × 1.3 + EUR 12,000 recruiting per FTE
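
The formulas above, transcribed as a sketch - the EUR 260/error, 1.3 burden, 1,720 hours, and EUR 12,000 recruiting constants come from the text; the minutes-to-hours conversion in the savings formula is an assumption on my part, since the prose multiplies minutes by an hourly rate:

```python
# Constants as stated in the methodology above.
ANNUAL_HOURS = 1720
EMPLOYER_BURDEN = 1.3
COST_PER_ERROR_EUR = 260      # APQC Open Standards Benchmarking
RECRUITING_COST_EUR = 12_000  # per new FTE

def hourly_rate(annual_salary: float) -> float:
    return annual_salary * EMPLOYER_BURDEN / ANNUAL_HOURS

def annual_savings(tx_per_month: float, automation_rate: float,
                   minutes_per_tx: float, annual_salary: float,
                   economic_factor: float = 1.0) -> float:
    # Transactions × 12 × automation rate × hours/transaction × hourly rate × factor
    return (tx_per_month * 12 * automation_rate
            * (minutes_per_tx / 60) * hourly_rate(annual_salary)
            * economic_factor)

def quality_roi(error_reduction: float, tx_per_month: float) -> float:
    return error_reduction * tx_per_month * 12 * COST_PER_ERROR_EUR

def fte_freed(saved_hours: float) -> float:
    return saved_hours / ANNUAL_HOURS

def break_even_months(investment: float, monthly_combined_savings: float) -> float:
    return investment / monthly_combined_savings

def new_hire_cost(annual_salary: float) -> float:
    return annual_salary * EMPLOYER_BURDEN + RECRUITING_COST_EUR
```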

All data stays in your browser. Nothing is transmitted to any server.

Pre-Hire Due Diligence Agent

Initial assessment for your leadership team

A thorough initial assessment in 2 minutes - with your numbers, your risk profile and industry benchmarks. No vendor logo, no sales pitch.


Frequently Asked Questions

Does the agent make hiring decisions based on verification results?

No. The agent collects, documents, and presents verification results. A human recruiter or hiring manager reviews the results and makes the hiring decision. Discrepancies are flagged for human assessment, not automatically resolved.

How does the agent handle jurisdictions where certain checks are prohibited?

The agent applies a jurisdiction-specific permissibility matrix: each combination of check type and jurisdiction is classified as permitted, conditional (requires specific justification), or prohibited. Prohibited checks are never initiated.

What Happens Next?

1. Initial call (30 minutes) - We analyse your process and identify the optimal starting point.

2. Discover (1 week) - Mapping your decision logic. Rule sets documented, Decision Layer designed.

3. Build (3-4 weeks) - Production agent in your infrastructure. Governance, audit trail, cert-ready from day 1.

4. Self-sufficient (12-18 months) - Full access to source code, prompts, and rule versions. No vendor lock-in.

Implement This Agent?

We assess your process landscape and show how this agent fits into your infrastructure.