
Fraud Detection Agent

Detect duplicate invoices, phantom vendors, expense fraud and AI-generated fake invoices.

Detects duplicate invoices, phantom vendor patterns, unusual posting patterns, AI-generated fake invoices.

Analyse your process

Anomaly detection via ML, rule-based SoD and duplicate checks, alert assessment by compliance

The agent detects anomalies in payment behaviour via AI pattern recognition and network analysis, validates SoD violations and duplicates deterministically against the authorisation matrix, and hands every alert with risk score to the compliance officer for investigation.

Outcome: Up to 5 percent of annual revenue in fraud exposure addressable per ACFE benchmark, full review instead of sampling across 100 percent of transactions, and false-positive rate below 15 percent.

30% Rules Engine
50% AI Agent
20% Human

The three layers of rules, AI and human assessment give the 10 decision steps a clear structure.

Five percent of annual revenue lost to fraud, sampling finds nothing

Organisations lose an average of five percent of annual revenue to fraud. The Association of Certified Fraud Examiners quantifies this in its 2024 Report to the Nations, based on 1,921 investigated cases with a total damage of USD 3.1 billion (approx. EUR 2.9 billion). Most of these cases were not discovered by internal controls but by tips. Rule-based checking catches what it knows. What it does not know stays invisible - often for years.

Sampling-Based Audits Fail Against Deliberate Concealment

Sampling-based auditing rests on a core assumption: if a sufficiently large share of transactions is correct, you may infer the same for the whole. Fraud invalidates that assumption. A phantom vendor posting small amounts just below the approval threshold for 18 months never shows up in any sample. Threshold splitting - an invoice for EUR 9,900 (USD 10,700) instead of EUR 10,000 to bypass the approval tier - looks unremarkable in isolation.
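The threshold-splitting pattern can be sketched as a simple full-population check. This is a minimal illustration, assuming invoices arrive as (vendor, amount) pairs; the threshold, margin and minimum hit count are made-up parameters, not product defaults.

```python
from collections import defaultdict

def flag_threshold_splitting(invoices, threshold=10_000.0, margin=0.05, min_hits=3):
    """Flag vendors that repeatedly invoice just below an approval threshold.

    invoices: iterable of (vendor_id, amount) pairs.
    An amount counts as 'just below' if it falls within `margin` (here 5%)
    under the threshold, e.g. EUR 9,500-9,999.99 for a EUR 10,000 tier.
    """
    hits = defaultdict(int)
    for vendor, amount in invoices:
        if threshold * (1 - margin) <= amount < threshold:
            hits[vendor] += 1
    return {v: n for v, n in hits.items() if n >= min_hits}

invoices = [
    ("V-100", 9_900.0), ("V-100", 9_850.0), ("V-100", 9_990.0),  # repeated near-threshold
    ("V-200", 4_300.0), ("V-200", 12_000.0),                     # unremarkable
]
print(flag_threshold_splitting(invoices))  # {'V-100': 3}
```

A single EUR 9,900 invoice looks unremarkable; only counting across the whole population makes the repetition visible.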

Only full-population analysis makes these patterns visible. Not through stricter rules, but through statistical anomaly detection: which vendor has no purchase orders from procurement yet regular payments from AP? Which cost centre posts on Friday evenings when nobody reviews? Rule-based systems never ask these questions because nobody formulated them as rules.
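The first of those questions - payments from AP without purchase orders from procurement - reduces to a set difference over the full population. A minimal sketch, assuming hypothetical data shapes:

```python
def vendors_without_purchase_orders(payments, purchase_orders):
    """Return vendors that receive AP payments but have no purchase
    order on record - a classic phantom-vendor signal.

    payments: iterable of (vendor_id, amount) pairs from accounts payable.
    purchase_orders: iterable of vendor_ids from procurement.
    """
    paid = {vendor for vendor, _ in payments}
    ordered = set(purchase_orders)
    return paid - ordered

payments = [("V-100", 9_900.0), ("V-300", 2_500.0), ("V-300", 2_500.0)]
purchase_orders = ["V-100"]
print(vendors_without_purchase_orders(payments, purchase_orders))  # {'V-300'}
```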

AI-Generated Documents Shift the Threat Landscape

Until 2024, forged invoices were detectable by craftsmanship - wrong fonts, missing stamps, inconsistent tax IDs. That has fundamentally changed. AI-generated documents are today visually indistinguishable from real ones, and anti-fraud professionals across industries report a marked increase in GenAI-generated forgeries over the past two years. Chris Juneau, SVP at SAP Concur, put it plainly: “Do not trust your eyes.”

For Finance departments, this means a new line of defence. AI-generated fakes survive visual review and often pass rule-based validation as well. What gives them away are metadata inconsistencies, atypical document structures and statistical anomalies in context - such as a new vendor whose first invoice exactly matches the amount pattern of an existing vendor. This analysis requires AI trained on document authenticity.
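The metadata checks can be illustrated with a toy example. The field names ('created', 'modified', 'producer') are assumptions for illustration; a real check would extract them with a PDF library and cover many more signals:

```python
from datetime import datetime

def metadata_anomalies(meta):
    """Collect metadata inconsistencies that can betray a generated document.

    meta: dict with hypothetical keys 'created', 'modified' (datetime)
    and 'producer' (str or None). Illustrative only - real extraction
    and a real rule set would be far richer.
    """
    findings = []
    created, modified = meta.get("created"), meta.get("modified")
    if created and modified and created > modified:
        findings.append("creation timestamp after modification timestamp")
    if not meta.get("producer"):
        findings.append("missing producer/creator field")
    return findings

meta = {
    "created": datetime(2025, 3, 2, 10, 0),
    "modified": datetime(2025, 3, 1, 9, 0),  # 'modified' before it was created
    "producer": None,
}
print(metadata_anomalies(meta))
```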

Ten Decision Steps Separate Signal from False Alarm

The Decision Layer breaks fraud detection into a chain of ten decisions with three different decider types. Three steps are rule-based: duplicate detection, expense fraud against defined thresholds, and segregation-of-duties violations against the authorisation matrix. Five steps use AI analysis: phantom vendors, posting anomalies, document authenticity, round-tripping in payment networks, and aggregated risk scoring. Two steps rest with the human: the escalation decision and the final assessment of whether an alert is justified.

A concrete scenario: a mid-sized manufacturer processing 40,000 inbound invoices per year. The agent detects that a packaging supplier has been submitting invoices with the same net amount for six months but slightly varying item descriptions. Simultaneously, network analysis reveals the supplier’s bank account is linked to an employee in procurement. Neither signal alone constitutes proof. Combined, they produce a risk score that triggers escalation to the compliance officer.
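Combining the two weak signals into one escalating score can be sketched with a noisy-OR aggregation - an assumed combination rule, not the agent's actual scoring model; the signal values and the 0.6 escalation threshold are illustrative:

```python
def combined_risk_score(signals):
    """Aggregate independent weak signals into one risk score in [0, 1].

    Each signal is a probability-like value; combining them as
    1 - prod(1 - s) means two moderate signals outweigh either alone.
    """
    remaining = 1.0
    for s in signals:
        remaining *= (1.0 - s)
    return 1.0 - remaining

duplicate_amount_signal = 0.4   # same net amount for six months
network_link_signal = 0.5       # vendor bank account linked to an employee
score = combined_risk_score([duplicate_amount_signal, network_link_signal])
print(round(score, 2))  # 0.7 - above an assumed 0.6 escalation threshold
```

Neither signal alone clears the threshold; together they do, which is exactly the escalation behaviour the scenario describes.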

The Compliance Officer Decides - Not the Algorithm

Five of ten decision steps use AI analysis. That makes this agent the most AI-intensive in the entire catalogue. Yet no algorithm decides whether a suspicion becomes an investigation. The Decision Layer documents every alert with trigger pattern, affected transactions, risk score and timestamp. The compliance officer assesses on this basis whether a false positive is present or an investigation is warranted.

This principle is not optional. ISA 240 obliges the statutory auditor to assess fraud risks. A demonstrably documented detection system - with logged alerts, escalation paths and investigation outcomes - is a concrete signal of the internal control system’s effectiveness. (US: SOX Section 404 imposes comparable documentation requirements for internal controls over financial reporting.) Comprehensive alert documentation strengthens not only the defence but also audit readiness under professional auditing standards and the requirements for early risk detection systems.
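A decision record with the fields named above (trigger pattern, affected transactions, risk score, timestamp) might look like this sketch; the class and field names are assumptions for illustration, not the product schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AlertRecord:
    """Immutable record of one alert as the Decision Layer documents it.

    The assessment fields stay empty until the compliance officer
    decides - the algorithm never fills them in.
    """
    trigger_pattern: str
    transaction_ids: tuple
    risk_score: float
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    assessed_by: str = ""     # compliance officer, filled in later
    outcome: str = ""         # 'false_positive' or 'investigation'

alert = AlertRecord(
    trigger_pattern="duplicate-amount + network-link",
    transaction_ids=("INV-2025-0412", "INV-2025-0518"),
    risk_score=0.7,
)
print(alert.outcome)  # empty until a human decides
```

Freezing the dataclass mirrors the audit-trail requirement: a logged alert cannot be silently altered after the fact.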

Micro-Decision Table

Who decides in this agent?

10 decision steps, split by decider

30% (3/10) Rules Engine - deterministic
50% (5/10) AI Agent - model-based with confidence
20% (2/10) Human - explicitly assigned
Each row is a decision. Expand to see the decision record and whether it can be challenged.
Detect duplicate invoices - Is there a duplicate or slightly varied invoice? Decider: Rules Engine. Challengeable by: Vendor

Exact duplicates = R, variants (slightly changed vendor) = A

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.

Challengeable by: Vendor

Phantom vendor detection - Are there vendors without genuine business relationships? Decider: AI Agent. Challengeable by: Vendor

Pattern analysis of order history and vendor activity

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Challengeable by: Vendor

Unusual posting patterns - Are there postings at unusual times or with threshold splitting? Decider: AI Agent. Challengeable by: Auditor

ML anomaly detection against historical behaviour patterns

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Challengeable by: Auditor

Detect AI fake invoices - Is the document an AI-generated forgery? Decider: AI Agent. Challengeable by: Vendor

LLM analysis of document authenticity, metadata check

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Challengeable by: Vendor

Detect expense fraud - Is there a duplicate submission or inflated amount? Decider: Rules Engine. Challengeable by: Employee

Rule violations = R, pattern recognition = A

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.

Challengeable by: Employee

Round-tripping detection - Are there circular money flows? Decider: AI Agent. Challengeable by: Auditor

Network analysis of payment flows

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Challengeable by: Auditor

Segregation-of-duties violations - Are the requester, approver and payer the same person? Decider: Rules Engine. Challengeable by: Auditor

Authorisation matrix matching

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.

Challengeable by: Auditor

Calculate risk score - How high is the fraud risk of this transaction? Decider: AI Agent

ML-based scoring from all detection modules

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Alert to compliance officer - Must a suspected case be investigated? Decider: Human. Challengeable by: Auditor

Investigation decision requires human judgement

Decision Record

Decider ID and role
Decision rationale
Timestamp and context

Challengeable: Yes - via manager, works council, or formal objection process.

Challengeable by: Auditor

False positive assessment - Is this a genuine suspected case or a false alarm? Decider: Human

Judgement in assessing the overall picture

Decision Record

Decider ID and role
Decision rationale
Timestamp and context

Challengeable: Yes - via manager, works council, or formal objection process.

Decision Record and Right to Challenge

Every decision this agent makes or prepares is documented in a complete decision record. Affected parties (employees, suppliers, auditors) can review, understand, and challenge every individual decision.

Which rule in which version was applied?
What data was the decision based on?
Who (human, rules engine, or AI) decided - and why?
How can the affected person file an objection?
How the Decision Layer enforces this architecturally →

Does this agent fit your process?

We analyse your specific finance process and show how this agent fits into your system landscape. 30 minutes, no preparation needed.

Analyse your process

Governance Notes

GoBD-compliant §203 StGB-compliant

GoBD-relevant: fraud detection processes tax-relevant transaction data. The results - especially suspected cases and investigation outcomes - are sensitive data and must be treated confidentially.

For professional secrecy holders (Paragraph 203 StGB), suspected cases must not be disclosed to third parties. LLM inference for document authenticity checking must run in EU data centres. The agent reports suspected cases exclusively to the internal compliance officer. The investigation decision always remains with the human.

§203 StGB-relevant data is encrypted end-to-end and never passed to AI models in plain text.

Process Documentation Contribution

The Fraud Detection Agent documents for the GoBD procedural documentation: which detection modules are active, which thresholds are configured, which suspected cases were identified and how they were assessed. The documentation itself is part of the ICS evidence.

Assessment

Agent Readiness 71-78%
Governance Complexity 31-38%
Economic Impact 74-81%
Lighthouse Effect 41-48%
Implementation Complexity 41-48%
Transaction Volume Daily

Prerequisites

  • Access to transaction data from ERP (postings, orders, payments)
  • Access to vendor master data and order history
  • Authorisation system with SoD matrix
  • Configured thresholds for risk scores and escalation

Infrastructure Contribution

The Fraud Detection Agent is the most AI-intensive agent in the entire catalogue. It uses the anomaly detection of the ICS Monitoring Agent and transaction data from all AP/AR agents. The ML scoring framework is reused for risk assessments in other domains. The document authenticity check becomes the standard for all incoming documents.

Provides the decision logging and audit trail used by the Decision Layer for traceability and challengeability of every decision.

What this assessment contains: 9 slides for your leadership team

Personalised with your numbers. Generated in 2 minutes directly in your browser. No upload, no login.

  1. Title slide - Process name, decision points, automation potential
  2. Executive summary - FTE freed, cost per transaction before/after, break-even date, cost of waiting
  3. Current state - Transaction volume, error costs, growth scenario with FTE comparison
  4. Solution architecture - Human, rules engine and AI agent with specific decision points
  5. Governance - EU AI Act, GoBD/statutory, audit trail - with traffic light status
  6. Risk analysis - 5 risks with likelihood, impact and mitigation
  7. Roadmap - 3-phase plan with concrete calendar dates and Go/No-Go
  8. Business case - 3-scenario comparison (do nothing/hire/automate) plus 3×3 sensitivity matrix
  9. Discussion proposal - Concrete next steps with timeline and responsibilities

Includes: 3-scenario comparison

Do nothing vs. new hire vs. automation - with your salary level, your error rate and your growth plan. The one slide your CFO wants to see first.

Show calculation methodology

Hourly rate: Annual salary (your input) × 1.3 employer burden ÷ 1,720 annual work hours

Savings: Transactions × 12 × automation rate × minutes/transaction × hourly rate × economic factor

Quality ROI: Error reduction × transactions × 12 × EUR 260/error (APQC Open Standards Benchmarking)

FTE: Saved hours ÷ 1,720 annual work hours

Break-Even: Benchmark investment ÷ monthly combined savings (efficiency + quality)

New hire: Annual salary × 1.3 + EUR 12,000 recruiting per FTE
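The methodology above can be reproduced in a few lines. The input values (salary, transaction volume, automation rate, investment) are illustrative placeholders, not benchmarks; the formulas follow the stated methodology, with minutes converted to hours for the savings term:

```python
def hourly_rate(annual_salary):
    # Annual salary x 1.3 employer burden / 1,720 annual work hours
    return annual_salary * 1.3 / 1720

def annual_efficiency_savings(tx_per_month, automation_rate, minutes_per_tx, rate, econ_factor=1.0):
    # Transactions x 12 x automation rate x (minutes/60) x hourly rate x economic factor
    return tx_per_month * 12 * automation_rate * (minutes_per_tx / 60) * rate * econ_factor

def annual_quality_roi(error_reduction, tx_per_month, cost_per_error=260):
    # Error reduction x transactions x 12 x EUR 260/error (APQC benchmark)
    return error_reduction * tx_per_month * 12 * cost_per_error

def break_even_months(investment, monthly_combined_savings):
    # Benchmark investment / monthly combined savings (efficiency + quality)
    return investment / monthly_combined_savings

rate = hourly_rate(60_000)                              # EUR/h from a placeholder salary
eff = annual_efficiency_savings(3_000, 0.7, 4, rate)    # placeholder volume and rates
qual = annual_quality_roi(0.02, 3_000)
print(round(break_even_months(80_000, (eff + qual) / 12), 1))  # 3.6 (months)
```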

All data stays in your browser. Nothing is transmitted to any server.

Fraud Detection Agent

Initial assessment for your leadership team

A thorough initial assessment in 2 minutes - with your numbers, your risk profile and industry benchmarks. No vendor logo, no sales pitch.


All data stays in your browser. Nothing is transmitted.

Frequently Asked Questions

How high is the false positive rate?

In the initial phase, the false positive rate is typically 15-25%. With increasing training volume and feedback loops, it drops to 5-10%. Human assessment of every suspected case ensures no unjustified consequences are drawn.

Can the agent also detect internal fraud cases?

Yes. Segregation-of-duties checks, threshold splitting and posting time analysis explicitly target internal patterns. Round-tripping detection identifies money flows potentially used to conceal internal transactions.
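Round-tripping detection as described - finding circular money flows - can be sketched as a cycle search over a payment graph. This is a minimal depth-first sketch with made-up party names, not a production network analysis:

```python
def find_payment_cycles(payments):
    """Find circular money flows (round-tripping) in a payment graph.

    payments: iterable of (payer, payee) edges. Returns each cycle once,
    as a list of parties starting and ending at its smallest node.
    """
    graph = {}
    for payer, payee in payments:
        graph.setdefault(payer, set()).add(payee)

    cycles = []

    def dfs(node, path):
        for nxt in graph.get(node, ()):
            if nxt == path[0] and path[0] == min(path):
                cycles.append(path + [nxt])     # closed a cycle, report once
            elif nxt not in path:
                dfs(nxt, path + [nxt])
        return cycles

    for start in graph:
        dfs(start, [start])
    return cycles

payments = [("CompanyA", "VendorB"), ("VendorB", "ShellC"),
            ("ShellC", "CompanyA"), ("CompanyA", "VendorD")]
print(find_payment_cycles(payments))  # [['CompanyA', 'VendorB', 'ShellC', 'CompanyA']]
```

The ordinary payment to VendorD is ignored; only the money that flows back to its origin is flagged.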

Are detected suspected cases automatically reported to authorities?

No. The agent reports suspected cases exclusively to the internal compliance officer. The decision on further steps - internal investigation, criminal complaint, reporting to supervisory authorities - remains with the human. For Paragraph 203-relevant cases, additional confidentiality requirements apply.

What Happens Next?

1. Initial call (30 minutes) - We analyse your process and identify the optimal starting point.

2. Discover (1 week) - Mapping your decision logic. Rule sets documented, Decision Layer designed.

3. Build (3-4 weeks) - Production agent in your infrastructure. Governance, audit trail, cert-ready from day 1.

4. Self-sufficient (12-18 months) - Full access to source code, prompts and rule versions. No vendor lock-in.

Implement This Agent?

We assess your finance process landscape and show how this agent fits your infrastructure.