
Account Coding Agent

GL account, cost centre, tax code - automatically coded, with confidence score.

Assigns incoming invoices to the correct GL account (SKR03/04 or custom), cost centre and tax code.

Analyse your process

Rule-based account validation, LLM classification only for unclear line items

The agent assigns GL account, cost centre and tax code to incoming invoices deterministically against the chart of accounts and the input-VAT matrix, and uses LLM classification only where the line item does not match a standard case.

Outcome: Up to 90 percent of invoices coded without manual intervention, throughput from 6 minutes down to under 1 minute per document, and one documented input-VAT decision per transaction.

80% rules engine - 20% AI agent - 0% human

The split between rule path and AI classification follows a clear logic: deterministic wherever a single correct answer exists, model-based only where a line item needs interpretation.
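The rule-first split can be sketched in a few lines. This is a minimal illustration, not the product's API: the two-entry rule table, the account numbers and the LLM stub are our assumptions; a real deployment maps the full SKR03/04 chart and the input-VAT matrix.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LineItem:
    description: str
    net_amount: float

def code_by_rules(item: LineItem) -> Optional[str]:
    """Deterministic path: match the line item against the chart of
    accounts. Returns a GL account, or None if no standard case matches."""
    # Illustrative two-entry rule table (example SKR03 accounts).
    rules = {
        "office supplies": "4930",
        "telephone": "4920",
    }
    for keyword, account in rules.items():
        if keyword in item.description.lower():
            return account
    return None

def classify_with_llm(item: LineItem) -> str:
    """Stub for the LLM path; the real agent classifies against
    historical codings and returns a confidence score as well."""
    return "4900"  # generic expense account, purely illustrative

def code_invoice(item: LineItem) -> tuple[str, str]:
    """Rules first; AI only for line items no rule covers."""
    account = code_by_rules(item)
    if account is not None:
        return account, "rules_engine"
    return classify_with_llm(item), "ai_agent"
```

A clear-cut line item never reaches the model; only the unmatched remainder takes the AI path.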

EUR 24,000 in back taxes per tax audit from miscoded postings

Incorrect coding costs five figures at every tax audit

Small and mid-sized businesses pay an average of EUR 24,000 (USD 26,000) in back taxes after a standard audit. For VAT-focused audits the figure rises to around EUR 25,000 per company, and more than half of all audits result in additional assessments. The most frequent cause: incorrect account coding. Wrong GL account means wrong tax treatment. Wrong tax treatment means denied input VAT recovery. Denied input VAT across several years adds up quickly into six-figure territory.

The problem is not negligence. A company processing 10,000 incoming invoices per month makes 10,000 coding decisions - every month. With manual processing, the Institute of Finance and Management (IOFM) puts the error rate at roughly 2%. That is 200 invoices every month where GL account, cost centre or tax code is wrong. Every single one is a finding a tax auditor can pick up.

Ten decisions per invoice - each with tax consequences

Account coding is often treated as a data-entry step. In reality it is a chain of ten individual decisions, each triggering different legal consequences.

A real-world example: an invoice for consulting services from an EU service provider arrives. The accounting team must decide: Which GL account? Which cost centre? Which profit centre? Reverse charge (where the customer accounts for the VAT) or standard VAT? Is input VAT deductible? Is the amount above the low-value asset threshold? Does it need period-based accrual? Only when every sub-decision is correct does the final journal entry hold.

Each of these decisions follows its own logic. GL account assignment flows from the service description and the chart of accounts. The tax code follows VAT law. Capitalisation follows commercial and tax accounting rules. Anyone looking at a single decision in isolation misses the interactions. Anyone making all ten manually needs experience, concentration and time - for every single invoice.

Rule-based coding, human review only for interpretation cases

The Decision Layer breaks every coding process into those ten decision steps and defines for each one: rule engine, AI, or human. For account coding the split is clear. Eight of the ten steps are solvable by rules. Tax code determination under VAT law, input VAT deduction checks, low-value asset thresholds, period-end accrual rules - these are deterministic decisions with a single correct answer.

The steps that need AI support are the interpretive ones. When a service description on an invoice reads “Q3 project support” and the chart of accounts offers fifteen possible GL accounts, a rule engine is not enough. Here the language model evaluates, based on historical codings, which account fits - and returns a confidence score. If the score falls below the defined threshold, the agent escalates to a clerk. Above the threshold, it posts automatically.

The result: the accounts team no longer processes 10,000 invoices. It processes the 300 where the agent is not confident enough. The rest flows through - checked, coded, documented.
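The escalation logic itself is a one-line rule. A minimal sketch - the 0.85 threshold is our illustrative value, not a product default; in practice it is configured per client:

```python
CONFIDENCE_THRESHOLD = 0.85  # illustrative; configured per client in practice

def route(confidence: float) -> str:
    """Above the threshold the coding is posted automatically;
    below it, the invoice goes to a clerk for review."""
    return "auto_post" if confidence >= CONFIDENCE_THRESHOLD else "clerk_review"

# Of thousands of codings, only the low-confidence tail reaches a human.
scores = [0.99, 0.97, 0.91, 0.62, 0.88, 0.41]
escalated = [s for s in scores if route(s) == "clerk_review"]
```

Everything above the threshold flows through; the clerk queue is exactly the `escalated` remainder.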

Every coding becomes audit evidence

A tax auditor does not ask whether a coding is correct. The auditor asks why it was made this way rather than another. That is where manual accounting is weakest: the justification exists only in the head of the clerk who processed the invoice eight months ago.

The agent documents the full decision path for every coding: GL account applied with rationale, tax code with statutory reference, cost centre, confidence score and whether the decision was automatic or manual. That satisfies the procedural-documentation requirements of the GoBD, the German principles for proper digital record-keeping. The auditor does not see a result - the auditor sees the path to that result, for every one of the 120,000 invoices in a year.
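A decision record of this kind can be sketched as a plain data structure. Field names and values here are ours, not the product schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    invoice_id: str
    gl_account: str
    gl_rationale: str         # rule ID or model explanation for the account choice
    tax_code: str
    statutory_reference: str  # e.g. "Paragraph 15 UStG"
    cost_centre: str
    confidence: float
    decided_by: str           # "rules_engine", "ai_agent" or "human"
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical record for a rule-coded invoice.
record = DecisionRecord(
    invoice_id="INV-2024-0815",
    gl_account="4930",
    gl_rationale="rule R-042 v3: keyword 'office supplies'",
    tax_code="VST19",
    statutory_reference="Paragraph 15 UStG",
    cost_centre="CC-1100",
    confidence=1.0,
    decided_by="rules_engine",
)
```

The record is frozen (immutable) by design: an audit trail entry is written once and never altered.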

The chart of accounts engine as the foundation for all posting agents

Account coding is the first step in accounts payable, but not the only one. Travel expenses, entertainment costs, asset additions, provisions, accruals - every one of these processes needs the same core logic: service to GL account to tax code. Anyone who builds this mapping cleanly for coding is already building the infrastructure for every subsequent posting agent.

The mapping framework the Account Coding Agent uses becomes the standard building block. The confidence scoring and the escalation pattern - post automatically or hand over to a human - become the blueprint. Not every Finance agent has to answer the question of how to handle uncertainty again. The Account Coding Agent answers it once, and all the others build on it.

Micro-Decision Table

Who decides in this agent?

10 decision steps, split by decider

80% (8/10) Rules Engine - deterministic
20% (2/10) AI Agent - model-based with confidence
0% (0/10) Human - explicitly assigned
Each row is a decision. Expand to see the decision record and whether it can be challenged.
GL account assignment (standard) - Which GL account matches the service? (Rules Engine; challengeable by Auditor)

Rule-based assignment with clear chart of accounts (SKR03/04)

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.

Challengeable by: Auditor

GL account assignment (interpretation) - How should the service description be interpreted? (AI Agent; challengeable by Auditor)

LLM classification for ambiguous service descriptions

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Challengeable by: Auditor

Cost centre assignment - Which cost centre bears the cost? (Rules Engine)

Derived from purchase order or contract; AI suggestion when unassigned

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.

Profit centre assignment - Which profit centre is responsible? (Rules Engine)

Derived from cost centre

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.

Determine tax code - 19%, 7%, intra-community, reverse charge or exempt? (Rules Engine; challengeable by Auditor)

Rules of the German VAT Act (UStG) applied deterministically

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.

Challengeable by: Auditor

Check input tax deduction - Is input tax deductible per Paragraph 15 UStG? (Rules Engine; challengeable by Auditor)

Deterministic check of deduction prerequisites

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.

Challengeable by: Auditor

Check capitalisation requirement - Must it be capitalised? Low-value asset or fixed asset? (Rules Engine; challengeable by Auditor)

Thresholds per HGB and EStG (above EUR 800 net)

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.

Challengeable by: Auditor

Check accrual boundary - Is there a prepaid expense item? (Rules Engine; challengeable by Auditor)

HGB Paragraph 250 - service period vs. invoice date

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.

Challengeable by: Auditor

Confidence assessment - How confident is the overall coding? (AI Agent)

LLM rates its own confidence across all decision steps

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Routing decision - Auto-post or clerk review? (Rules Engine)

Confidence threshold determines the routing path

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.
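To show how deterministic most of these rows are, the capitalisation check from the table can be sketched in a few lines. This is simplified: the real EStG rules also include the EUR 250 immediate write-off and the pooled-depreciation option, which this sketch omits.

```python
LOW_VALUE_ASSET_LIMIT = 800.0  # EUR net, per EStG Paragraph 6(2)

def capitalisation_treatment(net_amount: float, is_asset: bool) -> str:
    """Simplified threshold check: expense, low-value asset, or fixed asset."""
    if not is_asset:
        return "expense"           # no capitalisation question at all
    if net_amount <= LOW_VALUE_ASSET_LIMIT:
        return "low_value_asset"   # may be written off immediately
    return "fixed_asset"           # capitalise and depreciate
```

A single correct answer per input - exactly the kind of decision that belongs in the rule engine, not in a model.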

Decision Record and Right to Challenge

Every decision this agent makes or prepares is documented in a complete decision record. Affected parties (employees, suppliers, auditors) can review, understand, and challenge every individual decision.

Which rule in which version was applied?
What data was the decision based on?
Who (human, rules engine, or AI) decided - and why?
How can the affected person file an objection?
How the Decision Layer enforces this architecturally →

Does this agent fit your process?

We analyse your specific finance process and show how this agent fits into your system landscape. 30 minutes, no preparation needed.

Analyse your process

Governance Notes

GoBD-compliant §203 StGB-compliant

GoBD relevance: high - incorrect coding leads directly to incorrect tax reporting and is a frequent point of objection in tax audits. UStG rules for tax codes are deterministic and stored versioned in the rule engine. When legislation changes (e.g. a tax rate change), a new rule version is deployed without altering existing postings. Paragraph 203 StGB (professional secrecy) is relevant wherever client data is processed.

§203 StGB-relevant data is encrypted end-to-end and never passed to AI models in plain text.

Process Documentation Contribution

The Account Coding Agent documents for every posting: chart of accounts applied (version), chosen GL account with rationale, tax code with legal basis, confidence score and whether posting was automatic or manual. During a tax audit, it is traceable why every individual coding was made.

Assessment

Agent Readiness: 82-89%
Governance Complexity: 26-33%
Economic Impact: 76-83%
Lighthouse Effect: 31-38%
Implementation Complexity: 31-38%
Transaction Volume: daily

Prerequisites

  • ERP system with chart of accounts (SKR03/04 or custom)
  • Cost centre structure and profit centre mapping
  • Historical coding data for AI training (min. 12 months)
  • Defined confidence thresholds for auto-posting

Infrastructure Contribution

The Account Coding Agent builds the central posting logic. The chart of accounts engine (rule versioning, tax code assignment, thresholds) is reused by the Credit Note Agent, Entertainment Expense Agent and all other agents that create journal entries. The confidence assessment becomes the standard pattern for all AI-assisted decisions in the Finance catalog.

What this assessment contains: 9 slides for your leadership team

Personalised with your numbers. Generated in 2 minutes directly in your browser. No upload, no login.

  1. Title slide - Process name, decision points, automation potential
  2. Executive summary - FTE freed, cost per transaction before/after, break-even date, cost of waiting
  3. Current state - Transaction volume, error costs, growth scenario with FTE comparison
  4. Solution architecture - Human, rules engine and AI agent with specific decision points
  5. Governance - EU AI Act, GoBD/statutory, audit trail - with traffic-light status
  6. Risk analysis - 5 risks with likelihood, impact and mitigation
  7. Roadmap - 3-phase plan with concrete calendar dates and Go/No-Go
  8. Business case - 3-scenario comparison (do nothing/hire/automate) plus 3×3 sensitivity matrix
  9. Discussion proposal - Concrete next steps with timeline and responsibilities

Includes: 3-scenario comparison

Do nothing vs. new hire vs. automation - with your salary level, your error rate and your growth plan. The one slide your CFO wants to see first.

Show calculation methodology

Hourly rate: Annual salary (your input) × 1.3 employer burden ÷ 1,720 annual work hours

Savings: Transactions × 12 × automation rate × minutes/transaction × hourly rate × economic factor

Quality ROI: Error reduction × transactions × 12 × EUR 260/error (APQC Open Standards Benchmarking)

FTE: Saved hours ÷ 1,720 annual work hours

Break-Even: Benchmark investment ÷ monthly combined savings (efficiency + quality)

New hire: Annual salary × 1.3 + EUR 12,000 recruiting per FTE
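The methodology above, expressed in code. Parameter names are ours; the economic factor defaults to 1.0 here, and minutes are converted to hours before multiplying by the hourly rate:

```python
ANNUAL_WORK_HOURS = 1720
EMPLOYER_BURDEN = 1.3

def hourly_rate(annual_salary: float) -> float:
    """Annual salary x 1.3 employer burden / 1,720 annual work hours."""
    return annual_salary * EMPLOYER_BURDEN / ANNUAL_WORK_HOURS

def annual_efficiency_savings(tx_per_month: float, automation_rate: float,
                              minutes_per_tx: float, rate: float,
                              economic_factor: float = 1.0) -> float:
    """Transactions x 12 x automation rate x hours/transaction x hourly rate."""
    return (tx_per_month * 12 * automation_rate
            * (minutes_per_tx / 60) * rate * economic_factor)

def quality_roi(error_reduction: float, tx_per_month: float,
                cost_per_error: float = 260.0) -> float:
    """Error reduction x transactions x 12 x EUR 260/error (APQC benchmark)."""
    return error_reduction * tx_per_month * 12 * cost_per_error

def break_even_months(investment: float, monthly_savings: float) -> float:
    """Benchmark investment / monthly combined savings."""
    return investment / monthly_savings
```

For example, a EUR 60,000 salary yields an hourly rate of about EUR 45.35, and a EUR 50,000 investment against EUR 10,000 combined monthly savings breaks even after five months.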

All data stays in your browser. Nothing is transmitted to any server.

Account Coding Agent

Initial assessment for your leadership team

A thorough initial assessment in 2 minutes - with your numbers, your risk profile and industry benchmarks. No vendor logo, no sales pitch.


All data stays in your browser. Nothing is transmitted.

Frequently Asked Questions

Does the agent work with our custom chart of accounts?

Yes. The agent works with SKR03, SKR04 or custom charts of accounts. The mapping is configured in the rule engine. Standard charts are ready immediately; custom charts are set up during the implementation project.

What happens with incorrect coding?

Every coding is tagged with a confidence score. Below the defined threshold, it is not auto-posted but routed to a clerk. Subsequent corrections are made GoBD-compliantly via reversal and re-posting - never by overwriting.
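The reversal-and-re-post pattern can be sketched as follows; the list-of-dicts ledger and field names are illustrative, not the product's data model:

```python
def correct_posting(ledger: list, original: dict, corrected: dict) -> None:
    """GoBD-compliant correction: the original entry is never overwritten.
    A reversal with the negated amount is appended, then the new posting."""
    reversal = {**original, "amount": -original["amount"], "entry_type": "reversal"}
    ledger.append(reversal)
    ledger.append({**corrected, "entry_type": "re-posting"})

# Hypothetical example: the invoice was coded to the wrong GL account.
ledger = [{"doc": "INV-0815", "gl_account": "4930",
           "amount": 120.0, "entry_type": "posting"}]
correct_posting(
    ledger,
    ledger[0],
    {"doc": "INV-0815", "gl_account": "4920", "amount": 120.0},
)
```

After the correction, the ledger holds three entries: the untouched original, its reversal, and the re-posting - the complete history stays visible.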

How does the agent handle tax audits?

The agent generates the procedural documentation automatically. For every posting it is traceable: which chart of accounts (version), which rule, which input, which result. This is not a retrospective report but technical proof of the decision-making process.

What Happens Next?

1. Initial call (30 minutes) - We analyse your process and identify the optimal starting point.

2. Discover (1 week) - Mapping your decision logic. Rule sets documented, Decision Layer designed.

3. Build (3-4 weeks) - Production agent in your infrastructure. Governance, audit trail, cert-ready from day 1.

4. Self-sufficient (12-18 months) - Full access to source code, prompts and rule versions. No vendor lock-in.

Implement This Agent?

We assess your finance process landscape and show how this agent fits your infrastructure.