
Training Needs Analysis Agent

Identify skill gaps before they become performance gaps.

Identifies training needs from skill gaps, performance data, and business goals - prioritised by strategic relevance for L&D planning.

Analyse your process

Data source routing, skill gap detection via AI, prioritisation rules

The agent consolidates data sources via rules (skill profiles, performance data, business goals), extracts skill gaps through AI analysis against role requirement profiles, and prioritises needs by strategic relevance - the final budget and prioritisation decision remains human-in-the-loop.

Outcome: According to the PwC 27th Annual CEO Survey 2024, 52 percent of CEOs see a skills shortage as a central barrier to business reinvention; at the same time, training planning in most companies is still based on subjective assessments by individual managers.

14% Rules Engine
72% AI Agent
14% Human

The structural problem is not missing training but the missing mapping between gap and offering:

79 percent report skills gaps - only a third measure the ROI

79 percent of companies report skills gaps in their workforce. At the same time, only one third of all organisations can quantify the ROI of their training programmes. Between these two numbers sits a structural problem: companies invest five-figure amounts per head per year in training without systematically knowing which skills are actually missing, which will be missing tomorrow, and which investment has the greatest leverage.

Needs analysis is the blind spot between business strategy and L&D budget. And this blind spot is becoming more expensive, because 39 percent of all professional skills are considered obsolete by 2030.

Why ad-hoc needs assessment does not scale

In a typical mid-sized company with 1,500 employees, training needs arise through three channels: a manager reports that their team does not know a particular software. The compliance department sends the annual list of mandatory trainings. And somewhere in autumn, the executive team asks whether the organisation is ready for next year’s strategic goals. The Head of L&D then juggles between operational pressure, regulatory obligations, and strategic ambitions - with a budget that is not enough for all three.

The problem is not the budget. It is the method. Three structural errors make needs assessment unreliable in most organisations:

Missing data foundation. Skills profiles often exist only as free text in job descriptions or as outdated self-assessments from the last review conversation. A systematic comparison of actual skills against required skills does not happen because the data sits in different systems - performance reviews in the HR system, certificates in the personnel file, project experience in the manager’s head.

No connection to business strategy. When business planning foresees expansion into a new market, the L&D budget must reflect language skills, regulatory knowledge, and cultural training. When a technology migration is coming, technical reskilling in specific teams is needed - not a general digitalisation webinar for everyone. This translation from business goals to skill needs rarely happens systematically. Usually it does not happen at all.

Mandatory trainings eat the development budget. Occupational safety, data protection, anti-money-laundering, industry-specific regulation - the list of mandatory trainings grows every year. In regulated industries, they consume 40 to 60 percent of the total L&D budget. What is left is distributed by broad allocation: a catalogue of standard trainings that employees can pick from. The result is an L&D programme that meets compliance but does not close skill gaps.

Systematic gap analysis makes internal reskilling 30 to 50 percent cheaper than new hires

Internal reskilling is 30 to 50 percent cheaper than comparable new hires - when onboarding, attrition, and time-to-productivity are included. But reskilling requires the organisation to know whom to develop for what. That is exactly what a data-driven needs analysis delivers.

The difference is not incremental. It is structural.

Ad-hoc needs assessment             Systematic needs analysis
──────────────────────────────      ──────────────────────────────
Manager flags individual need       Actual-vs-target across all units
Mandatory trainings as duty         Mandatory and development prioritised separately
Budget = last year +/- 5%           Budget follows strategic weighting
Effect unclear                      ROI per intervention type measurable
Works council learns about plan     Works council sees the needs analysis

The gap analysis compares current skills with target requirements - not at the individual level, but aggregated by unit, location, and skill area. From this emerges a picture that answers three questions: where are the biggest gaps today? Where are new gaps forming through strategic changes? And which investment has the greatest leverage - measured as the ratio of skill gain to budget spent?
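A minimal sketch of this aggregation, assuming hypothetical per-employee skill records and illustrative course costs (none of these names or numbers come from the agent itself):

```python
from collections import defaultdict

# Hypothetical per-employee records: (unit, skill, current_level, target_level)
records = [
    ("Sales DE", "CRM", 2, 4),
    ("Sales DE", "CRM", 1, 4),
    ("IT Ops", "Cloud", 1, 4),
    ("IT Ops", "Negotiation", 3, 3),
]

# Illustrative course cost per participant, per skill area
course_cost = {"CRM": 800, "Cloud": 1500, "Negotiation": 600}

gap_levels = defaultdict(int)   # total level deficit per (unit, skill)
headcount = defaultdict(int)    # affected employees per (unit, skill)
for unit, skill, current, target in records:
    deficit = max(0, target - current)
    if deficit:
        gap_levels[(unit, skill)] += deficit
        headcount[(unit, skill)] += 1

# Leverage: skill levels gained per euro spent (higher = better investment)
leverage = {
    key: gap_levels[key] / (headcount[key] * course_cost[key[1]])
    for key in gap_levels
}
ranked = sorted(leverage, key=leverage.get, reverse=True)
```

Ranking by this leverage ratio directly answers the third question above: the top entries are the gaps where one budget euro buys the most skill gain.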

The architecture: agent calculates, humans decide

In needs assessment, the Decision Layer separates cleanly between what is computable and what requires business judgement. Which systems provide skill, performance, and business data is answered by a rule set. Weighting the resulting gaps by reach, urgency, and strategic relevance is AI analysis. Matching mandatory trainings against regulatory requirements is again a rule set.

But: which needs actually enter the L&D plan, how the budget is distributed between mandatory and development, which units get priority - those are decisions for the Head of L&D. With data rather than gut feel, but as a deliberate business decision.

And a decision that is not made alone. Under works constitution law in many EU member states, the works council has consultation rights in identifying training needs and co-determination rights in implementing educational activities. An agent that identifies needs transparently and traceably makes this participation easier - because the data foundation is visible and does not disappear into an Excel table only one person understands.

Anyone who does not know which skills are missing cannot invest sensibly. Anyone who does know invests less and achieves more. (US: similar skills-based approaches are emerging under state and federal workforce investment programmes that tie funding to documented skill gaps.)

Micro-Decision Table

Who decides in this agent?

7 decision steps, split by decider

14% (1/7) Rules Engine - deterministic
72% (5/7) AI Agent - model-based with confidence
14% (1/7) Human - explicitly assigned
Each row is a decision. Expand to see the decision record and whether it can be challenged.
Collect skill requirements (Rules Engine) - Assemble required competencies per role from job architecture

Requirements from standardised competency framework

Decision Record

Rule ID and version number
Input data that triggered the rule
Calculation result and applied formula

Challengeable: Yes - rule application verifiable. Objection possible for incorrect data or wrong rule version.

Assess current capability (AI Agent) - Map workforce skills from profiles, certifications, and assessments

Automated skill inventory compilation from multiple sources

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Identify gaps (AI Agent) - Calculate deficit between required and current skill levels

Quantitative gap analysis per skill, team, and organisation

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Incorporate strategic priorities (AI Agent) - Weight gaps by strategic importance and urgency

Priority scoring based on business strategy and workforce plan inputs

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Validate priorities with leadership (Human) - Confirm or adjust training priorities

Human validation of strategic relevance and business context

Decision Record

Decider ID and role
Decision rationale
Timestamp and context

Challengeable: Yes - via manager, works council, or formal objection process.

Generate needs analysis report (AI Agent) - Produce prioritised training needs by level and domain

Automated report generation from gap and priority analysis

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.

Feed into L&D planning (AI Agent) - Translate needs into training program recommendations

Recommendation generation mapped to available training options

Decision Record

Model version and confidence score
Input data and classification result
Decision rationale (explainability)
Audit trail with full traceability

Challengeable: Yes - fully documented, reviewable by humans, objection via formal process.
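The seven-step split above can be sketched as a simple routing table; the step identifiers are illustrative, and the escalate-to-human default for unknown steps is an assumption, not documented agent behaviour:

```python
# Hypothetical mapping of decision steps to deciders, mirroring the
# rules-engine / AI / human split described in the table above
ROUTING = {
    "collect_skill_requirements": "rules_engine",   # deterministic lookup
    "assess_current_capability":  "ai_agent",       # model-based classification
    "identify_gaps":              "ai_agent",
    "weight_by_strategy":         "ai_agent",
    "validate_priorities":        "human",          # Head of L&D decides
    "generate_report":            "ai_agent",
    "map_to_programmes":          "ai_agent",
}

def decider(step: str) -> str:
    """Return who decides a given step; unmapped steps escalate to a human."""
    return ROUTING.get(step, "human")
```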

Decision Record and Right to Challenge

Every decision this agent makes or prepares is documented in a complete decision record. Affected employees can review, understand, and challenge every individual decision.

Which rule in which version was applied?
What data was the decision based on?
Who (human, rules engine, or AI) decided - and why?
How can the affected person file an objection?
How the Decision Layer enforces this architecturally →
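A minimal sketch of what such a decision record could look like as a data structure, covering the fields listed above; the field names are illustrative, not the agent's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    step: str                   # which decision step produced this record
    decider: str                # "human", "rules_engine", or "ai_agent"
    version: str                # rule ID + version, or model version
    inputs: dict                # data the decision was based on
    rationale: str              # explanation / applied formula
    challengeable: bool = True  # objection possible via formal process
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```

Keeping one such record per decision step is what makes the works-council review and the individual right to challenge operational rather than aspirational.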

Does this agent fit your process?

We analyse your specific HR process and show how this agent fits into your system landscape. 30 minutes, no preparation needed.

Analyse your process

Governance Notes

EU AI Act: Not High Risk
Not classified as high-risk under the EU AI Act - the agent analyses aggregate skill data without making employment decisions. GDPR applies to individual-level skill and performance data used in the analysis. Aggregation should be applied when individual-level detail is not necessary. Works council information rights may apply to the introduction of systematic skill gap analysis if it could be perceived as employee evaluation.

Assessment

Agent Readiness 61-68%
Governance Complexity 38-45%
Economic Impact 51-58%
Lighthouse Effect 46-53%
Implementation Complexity 41-48%
Transaction Volume Quarterly

Prerequisites

  • Job architecture with competency requirements per role
  • Skills and competency profiles per employee
  • Performance assessment data
  • Workforce planning outputs (future skill requirements)
  • Training catalog (available programs and formats)
  • L&D budget framework

Infrastructure Contribution

The Training Needs Analysis Agent connects the skills infrastructure (from Skills & Career Profile Agent) with the learning infrastructure (Training Effectiveness Agent, Learning Path Recommendation Agent) to create a closed-loop L&D system where investment is driven by measured needs rather than assumptions. Builds Decision Logging and Audit Trail used by the Decision Layer for traceability and challengeability of every decision.

What this assessment contains: 9 slides for your leadership team

Personalised with your numbers. Generated in 2 minutes directly in your browser. No upload, no login.

  1. Title slide - Process name, decision points, automation potential

  2. Executive summary - FTE freed, cost per transaction before/after, break-even date, cost of waiting

  3. Current state - Transaction volume, error costs, growth scenario with FTE comparison

  4. Solution architecture - Human - rules engine - AI agent with specific decision points

  5. Governance - EU AI Act, works council, audit trail - with traffic light status

  6. Risk analysis - 5 risks with likelihood, impact and mitigation

  7. Roadmap - 3-phase plan with concrete calendar dates and Go/No-Go

  8. Business case - 3-scenario comparison (do nothing/hire/automate) plus 3×3 sensitivity matrix

  9. Discussion proposal - Concrete next steps with timeline and responsibilities

Includes: 3-scenario comparison

Do nothing vs. new hire vs. automation - with your salary level, your error rate and your growth plan. The one slide your CFO wants to see first.

Show calculation methodology

Hourly rate: Annual salary (your input) × 1.3 employer burden ÷ 1,720 annual work hours

Savings: Transactions × 12 × automation rate × minutes/transaction ÷ 60 × hourly rate × economic factor

Quality ROI: Error reduction × transactions × 12 × EUR 260/error (APQC Open Standards Benchmarking)

FTE: Saved hours ÷ 1,720 annual work hours

Break-Even: Benchmark investment ÷ monthly combined savings (efficiency + quality)

New hire: Annual salary × 1.3 + EUR 12,000 recruiting per FTE
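The methodology above can be expressed directly in code. The 1.3 employer burden, 1,720 annual work hours, EUR 260 per error, and EUR 12,000 recruiting cost are the constants stated in the text; the parameter names and the example inputs are illustrative:

```python
def business_case(annual_salary, transactions_per_month, automation_rate,
                  minutes_per_transaction, errors_avoided_per_transaction,
                  investment, economic_factor=1.0):
    """Sketch of the slide methodology above (constants from the text)."""
    hourly_rate = annual_salary * 1.3 / 1720
    # minutes per transaction are converted to hours before costing
    saved_hours = (transactions_per_month * 12 * automation_rate
                   * minutes_per_transaction / 60)
    efficiency_savings = saved_hours * hourly_rate * economic_factor
    quality_roi = errors_avoided_per_transaction * transactions_per_month * 12 * 260
    annual_savings = efficiency_savings + quality_roi
    return {
        "hourly_rate": hourly_rate,
        "fte_freed": saved_hours / 1720,
        "annual_savings": annual_savings,
        "break_even_months": investment * 12 / annual_savings,
        "new_hire_cost": annual_salary * 1.3 + 12000,
    }
```

For example, a EUR 60,000 salary, 100 transactions a month at 30 minutes each, 70 percent automation, and a EUR 50,000 investment would break even in roughly a year and a half under these assumptions.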

All data stays in your browser. Nothing is transmitted to any server.

Training Needs Analysis Agent

Initial assessment for your leadership team

A thorough initial assessment in 2 minutes - with your numbers, your risk profile and industry benchmarks. No vendor logo, no sales pitch.


All data stays in your browser. Nothing is transmitted.

Frequently Asked Questions

Does the agent assess individual employees' competence?

The agent uses existing skill and performance data to identify gaps. It does not conduct assessments itself. Individual-level analysis serves development planning, not evaluation.

How does the agent handle skills that the organisation does not have yet but will need?

Future skill requirements come from workforce planning inputs and strategic initiatives. The agent identifies emerging gaps by comparing the current skill inventory against projected future needs - not just against current role requirements.
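This forward-looking comparison reduces to a set difference between projected requirements and the current inventory; the skill names and levels below are purely illustrative:

```python
# Hypothetical current inventory and projected requirements (skill: level)
current_inventory = {"python": 3, "sql": 4}
future_requirements = {"python": 4, "ml_ops": 3, "sql": 3}

# Emerging gaps: skills the future plan needs above today's inventory level
emerging_gaps = {
    skill: need - current_inventory.get(skill, 0)
    for skill, need in future_requirements.items()
    if need > current_inventory.get(skill, 0)
}
# → {"python": 1, "ml_ops": 3}
```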

What Happens Next?

1. Initial call (30 minutes) - We analyse your process and identify the optimal starting point.

2. Discover (1 week) - Mapping your decision logic. Rule sets documented, Decision Layer designed.

3. Build (3-4 weeks) - Production agent in your infrastructure. Governance, audit trail, cert-ready from day 1.

4. Self-sufficient (12-18 months) - Full access to source code, prompts and rule versions. No vendor lock-in.

Implement This Agent?

We assess your process landscape and show how this agent fits into your infrastructure.