As AI adoption accelerates across the public sector, robust governance is essential. Internal auditors are strategically positioned to provide insight into whether AI systems are ethical and compliant, and whether they deliver value without exposing the organisation to undue risk.
AI systems learn from data, do not guarantee consistent output and evolve over time. Unlike traditional software, they can perform well in testing yet fail unpredictably in real-world settings, raising concerns about fairness, bias, security and resilience.
For internal auditors, this creates both a new challenge and a unique opportunity to support the safe, ethical and accountable use of AI-enabled public services.
Internal auditor insights
Internal audit provides a vital line of defence in the governance of AI. Positioned within the organisation yet operating independently of delivery teams, internal auditors can help leaders see blind spots and test whether the assurances provided around AI are credible. This perspective allows internal audit to challenge assumptions, identify control weaknesses and highlight risks to citizens and service users before they escalate.
As AI systems evolve over time, internal audit’s role is not limited to one-off reviews. Ongoing insight helps organisations assess whether monitoring is robust, escalation routes are clear, and remediation plans are in place when models drift or new risks emerge. In this way, internal audit contributes not only to compliance but also to the continuous strengthening of governance and resilience in AI adoption.
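The framework and playbook do not mandate any particular drift metric, but as an illustration of the kind of monitoring evidence auditors might ask delivery teams for, the hypothetical Python sketch below computes a Population Stability Index (PSI), one common way of quantifying how far a model's live score distribution has drifted from the distribution seen at sign-off.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a model's score distribution at sign-off ('expected') with the
    distribution observed in live service ('actual'). A PSI above roughly 0.2
    is a common rule-of-thumb indicator of significant drift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))[1:-1]
    expected_pct = np.bincount(np.searchsorted(edges, expected), minlength=bins) / len(expected)
    actual_pct = np.bincount(np.searchsorted(edges, actual), minlength=bins) / len(actual)
    expected_pct = np.clip(expected_pct, 1e-6, None)   # avoid log(0)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Hypothetical example: scores at model sign-off vs. scores seen six months later
baseline = np.random.default_rng(0).normal(0.55, 0.10, 5_000)
live = np.random.default_rng(1).normal(0.62, 0.12, 5_000)
print(f"PSI: {population_stability_index(baseline, live):.3f}")
```

Auditors would not normally run such checks themselves; the point is to recognise what routine drift monitoring looks like, so that its presence, or absence, can be tested.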
Internal audit also strengthens organisational accountability. Trust in AI adoption depends not only on system performance but also on whether decision-making processes are transparent and risks have been responsibly managed. By reviewing governance arrangements, risk registers and the evidence gathered through testing, auditors help confirm that AI projects meet standards of fairness, robustness and explainability.
What standard should internal audit teams compare AI systems against?
AI testing framework
The UK government’s AI playbook and recently released AI testing framework offer non-binding common standards that can be adapted to each context. Developed by the Cross-Government Testing Community, the testing framework provides a structured approach to this challenge, setting out principles, quality attributes and modular strategies to help teams rigorously evaluate AI systems throughout their life cycle.
For internal audit teams, the framework offers the following:
- A structured way to assess AI risk. The framework’s principles and attributes provide a shared basis for internal auditors to assess whether proper controls exist across the AI life cycle and whether these controls operate effectively.
- Clarity on proportionate assurance. The modular strategy helps auditors consider whether testing effort matches the level of risk.
- Evidence to support accountability. Continuous assurance encourages teams to provide ongoing evidence, not just a one-off sign-off at go-live.
- Alignment across government. A shared approach makes audit findings more comparable and consistent across government.
Like the UK’s non-binding, principles-based AI regime, the testing framework needs to be adapted to the level of risk of the AI products or systems in question. Unfortunately, the voluntary nature of the framework can also create uncertainty, making it difficult for tech teams, assurance teams and internal auditors to translate the guidelines into robust operational requirements.
Capability building
A key challenge in building trust in the public sector’s use of AI is the skills gap. AI testing and auditing demand new technical skills and ethical awareness, from understanding bias to knowing what ‘good’ looks like in supplier contract transparency.
At the most basic level, internal auditors will need a general understanding of how different types of AI systems operate. To support this, the government launched the GetTech Certified scheme this autumn, giving civil servants free access to resources for building core digital and AI competencies. Internal auditors will also need to be familiar with key concepts around bias, fairness, explainability, transparency, accountability, privacy and the emerging regulatory requirements relevant to their organisations.
Finally, internal audit teams will need to build deep AI audit skills such as algorithmic bias detection, model interpretability and supplier contract analysis. These skills are often too specialised or resource-intensive for individual teams to cultivate in isolation, making cross-organisational collaboration essential. By forming audit networks and sharing tools and training platforms, public sector organisations can accelerate learning and ensure consistent, high-quality AI governance across government.
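As a purely illustrative example of what ‘algorithmic bias detection’ can involve in practice, the sketch below (Python, with made-up data) performs a simple demographic parity check, comparing favourable-outcome rates between two groups. Real assurance work would draw on a wider set of fairness metrics and the organisation’s own legal and policy context.

```python
import numpy as np

def demographic_parity_difference(decisions, group):
    """Difference in favourable-outcome rates between groups.
    'decisions' holds 1 (e.g. benefit approved) or 0; 'group' labels each case.
    A large gap does not prove unfairness, but it is a prompt for further scrutiny."""
    rates = {g: decisions[group == g].mean() for g in np.unique(group)}
    return rates, max(rates.values()) - min(rates.values())

# Hypothetical audit sample: automated decisions with a recorded protected characteristic
rng = np.random.default_rng(42)
group = rng.choice(["A", "B"], size=1_000)
decisions = np.where(group == "A", rng.random(1_000) < 0.72, rng.random(1_000) < 0.64).astype(int)

rates, gap = demographic_parity_difference(decisions, group)
print(rates, f"gap = {gap:.2%}")
```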
What’s next for internal auditors?
As AI continues to reshape public services, internal audit has a vital role to play in enabling responsible deployment of AI technologies. By getting involved early, adapting the principles and quality attributes in the government’s AI testing framework and building their own confidence and skills, internal auditors can help their organisations navigate the complexities of AI with clarity and assurance. In doing so, they not only strengthen governance but also contribute to building the public trust that underpins responsible and effective use of AI across government.
Author
Florence Bastos, CIPFA Public Finance Advisor
Florence is a financial advisor in CIPFA’s Policy and Technical team. She joined CIPFA in 2023, bringing a wealth of experience from her private sector, NHS, and local authority background. Florence’s expertise lies in strategic financial management, which she leverages to improve organisational performance. She also provides financial and commercial insights that facilitate sound investment and contracting decisions in the public sector. Florence qualified as a Chartered Accountant while working in the tax practice at PwC.
