Trust in Intelligence

A European hub for auditing and governing AI systems in public services

Public administrations are under pressure to deploy AI and generative AI while meeting the AI Act and protecting fundamental rights. Trust in Intelligence builds a shared audit hub that gives governments and partners practical tools, patterns and pilots to test, audit and govern AI systems across Europe, and to stay ready for future waves of AI and automated decision systems.

Why AI systems in public services are hard to trust

Across Europe, public administrations are starting to use AI and generative AI for contact centres, document handling, decision support and internal knowledge work. The pressure to deliver faster and more efficient services is real, but so are the risks for people and for institutions.

The EU AI Act introduces clear obligations for high-risk AI systems, including risk management, documentation, human oversight and monitoring. Even limited-risk systems such as chatbots must meet transparency obligations so that people know when they are interacting with AI.

In practice, most administrations do not yet have shared methods, tools or skills to audit AI systems. They face complex supply chains, fast-changing models and unclear accountability. Without a practical way to test and govern these systems, trust from citizens and from oversight bodies remains fragile.

Rapid change and external vendors

Systems change quickly and are often delivered by external vendors

Methods still emerging

Obligations under the AI Act are clear, but concrete methods are still emerging

Skills gap

Skills for AI auditing are scarce inside administrations

Fragmentation risk

Each country is inventing its own approach, creating fragmentation and duplication and making it harder to share learning across Europe

Why Europe needs trustworthy AI agent auditing now

AI agents are entering public services faster than our ability to audit them. Public administrations are already experimenting with AI agents in case handling, citizen advice, benefits and permits, yet many such systems are deployed without structured audit processes or effective human oversight.

01

The AI Act demands evidence, not just principles

The AI Act requires robust risk management, documentation and human oversight for high-risk AI systems. Administrations need concrete ways to show how they assess, monitor and update AI agents over time.

02

Auditors and teams are under-equipped

Most audit and compliance professionals do not yet have practical tools or learning journeys for AI agents. They need simple checklists, shared language and repeatable patterns they can trust.

03

Each deployment becomes a separate experiment

Without clear audit methods, every deployment carries unknown risk. There is no shared learning, no standard practices and no way to build confidence across administrations.

Our response

This project responds directly to these pressures. We bring together public administrations, research partners and GovTech teams to create an audit hub that works under real constraints in real institutions and can evolve with new types of AI systems over time.

Our deliverables and infrastructure

๐ŸŒ

European audit and compliance hub

A shared digital and human space where public sector teams can access audit methods, example cases, training resources and peer support.

It becomes a long-term reference point for AI Act implementation and shared European learning, and can grow as new patterns, tools and examples emerge.

๐Ÿ”

Diagnostic tools for AI agents

A practical set of tools that help administrations ask the right questions about AI agents in their own context.

They support risk screening, scenario testing, behaviour evaluation and checks for appropriate human oversight and documentation.
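To make the idea of risk screening concrete, the sketch below models a screening step as a weighted checklist. Everything here is illustrative: the questions, weights and thresholds are assumptions for demonstration, not the project's actual diagnostic tools or criteria from the AI Act.

```python
from dataclasses import dataclass

# Hypothetical screening checklist; questions and weights are
# illustrative assumptions, not the project's real diagnostic content.
@dataclass
class ScreeningQuestion:
    text: str
    weight: int  # higher weight = greater contribution to risk

@dataclass
class ScreeningResult:
    score: int
    level: str

QUESTIONS = [
    ScreeningQuestion("Does the agent influence decisions about individual rights or benefits?", 3),
    ScreeningQuestion("Can the agent act without a human reviewing its output?", 2),
    ScreeningQuestion("Is the underlying model supplied and updated by an external vendor?", 1),
]

def screen(answers: list[bool]) -> ScreeningResult:
    """Sum the weights of all 'yes' answers and map the total to a risk level."""
    score = sum(q.weight for q, yes in zip(QUESTIONS, answers) if yes)
    level = "high" if score >= 4 else "medium" if score >= 2 else "low"
    return ScreeningResult(score, level)
```

A team could start from a structure like this and replace the questions with ones grounded in its own legal and operational context; the value is the repeatable, documented screening step, not the specific scoring rule.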

๐Ÿ”„

Real-time compliance twin

A digital twin concept that reflects how an AI agent behaves in real operations.

It provides early warning signals, supports continuous monitoring and maintains living documentation instead of static reports.
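A minimal sketch of the twin idea, under stated assumptions: the class below mirrors an agent's live behaviour by recording every interaction (living documentation) and flagging drift away from a calibrated baseline (early warning). The metric, threshold and record fields are invented for illustration and are not the project's actual design.

```python
import statistics
from datetime import datetime, timezone

# Illustrative "compliance twin" sketch: records each interaction and
# raises an early-warning flag when a monitored metric drifts from a
# calibrated baseline. All thresholds and fields are assumptions.
class ComplianceTwin:
    def __init__(self, drift_threshold: float = 0.2):
        self.records = []        # living documentation: one entry per interaction
        self.baseline = None     # mean metric from a calibration window
        self.drift_threshold = drift_threshold

    def calibrate(self, metrics: list[float]) -> None:
        """Set the baseline from an initial window of observed metric values."""
        self.baseline = statistics.mean(metrics)

    def observe(self, metric: float) -> bool:
        """Record one interaction; return True if it triggers an early warning."""
        drifted = (self.baseline is not None
                   and abs(metric - self.baseline) > self.drift_threshold)
        self.records.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "metric": metric,
            "warning": drifted,
        })
        return drifted
```

The design point this illustrates is that the documentation is a by-product of operation rather than a report written after the fact: every observation updates the record, so the evidence base stays current.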

๐Ÿš€

Public sector pilots in three to four countries

Pilots take place inside real administrations where AI agents are already emerging.

They test tools and audit workflows under real conditions, with frontline staff, real data and actual service pressures, and show how a shared approach can work in different legal and administrative contexts.

๐ŸŽ“

AI auditor skills pathway

A structured learning journey for auditors, risk managers, digital teams and operational staff.

It combines short modules, hands-on exercises and peer learning spaces.

๐Ÿ“Š

Governance and evidence framework

A practical, rigorous framework connecting AI Act requirements to everyday administrative practice.

It supports roles and responsibilities, decision logging, risk management and evidence building.
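As one possible shape for decision logging, the sketch below chains each log entry to a hash of the previous one, so the evidence trail is tamper-evident. This is a generic hash-chaining technique offered as an assumption about how such logging could work; the field names and approach are not taken from the project's framework.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative tamper-evident decision log: each entry embeds the hash
# of the previous entry, so altering history breaks the chain. Field
# names are hypothetical, not the project's actual schema.
def log_decision(log: list, actor: str, system: str,
                 decision: str, rationale: str) -> dict:
    """Append one decision record to the log and return it."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "system": system,
        "decision": decision,
        "rationale": rationale,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry
```

Chaining like this lets an oversight body verify that no entry was silently edited or removed, which is one way "evidence building" can be made concrete in everyday practice.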

All components of the hub are designed to adapt to new forms of AI and automated decision systems. The focus is on patterns, governance and documentation that can be applied to future models and agent ecosystems, so administrations do not need to start from zero each time a new system appears.

Join the initiative

Become part of the Trust in Intelligence Hub

Are you a public administration, research institute, GovTech builder or civil society organisation working at the frontier of AI, audit and public services? We invite you to contribute, whether by piloting, building or advising the audit infrastructure and learning journey.

Register your interest