Quick answer: AI governance is the set of policies, processes and controls that ensure AI is used safely, legally and accountably in your business. For UK SMEs, it doesn't mean replicating a 200-page enterprise framework. A proportionate approach covers six essentials: an approved-tools list, a one-page AI use policy, a named owner, a basic risk register, staff awareness training, and a quarterly review cycle. This guide shows you how to build each one, with a 30-day action plan to get started.
If you search "AI governance" right now, you'll find frameworks designed for FTSE 100 boards, consultancy pitches running into six figures, and academic papers that treat your 30-person business as an afterthought.
That's not what this guide is about. This is a practical walkthrough for UK business owners and managers who know they need to get a grip on how their team uses AI, but don't have a dedicated compliance department to make it happen.
We wrote this because our own research tells us the problem is urgent. Our survey of 200 UK desk workers (February 2026) found that 54.5% lack a clear, enabling AI policy, and 32% are already using AI tools without their employer's knowledge. Meanwhile, Trustmarque research found that while 93% of UK organisations now use AI, only 7% have fully embedded governance. The gap between AI adoption and AI oversight is a governance vacuum, and it's your risk to manage.
What AI governance actually means for your business
Strip away the jargon and AI governance comes down to a straightforward question: who can use which AI tools, with what data, under what rules, and who is responsible when something goes wrong?
For a 500-person enterprise, answering that question might involve a cross-functional committee, a dedicated AI ethics board, and a team of compliance specialists. For a 15-person marketing agency or a 50-person logistics firm, it means something far simpler: a clear policy document, an approved list of tools, a named person who owns the process, and a regular check to make sure things are working.
Think of AI governance as a natural extension of the controls you've already got in place for data protection, information security and acceptable use of technology. You're not building something from scratch. You're adding an AI layer to existing practices.
"There is no off-the-shelf AI governance framework that you can purchase and install overnight. But there are very clear guiding principles that should be in every organisation's approach to governing AI."
International Compliance Association, A Practical Guide to AI Governance
AI governance for an SME covers four practical areas:
- Visibility: knowing which AI tools your team is actually using (including the ones they haven't told you about)
- Policy: written rules about what is and isn't acceptable, communicated to everyone
- Accountability: a named person who owns AI decisions and can respond to regulators or clients
- Review: a regular cycle of checking whether your approach still fits the reality of how AI is being used
That's it. Four pillars, no dedicated department required. The rest of this guide shows you how to build each one in proportion to your business size.
The governance gap: what happens without it
The numbers paint a clear picture. AI is already in your business whether you've given it permission or not.
Sources: Trustmarque UK AI Index (2025); Red Eagle Tech / Pollfish UK Workplace AI Survey (February 2026, n=200)
This gap between adoption and oversight creates specific, measurable risks:
Data leakage and security exposure
When employees paste customer data, financial records or internal documents into consumer AI tools like ChatGPT or Claude using personal accounts, that data leaves your control entirely. You've got no contractual protections with the AI provider, no audit trail, and no way to recall the information. Research shows that 44% of UK businesses have experienced data leakage linked to shadow AI, and the average cost of a breach involving high levels of shadow AI is roughly half a million pounds.
Regulatory exposure under UK GDPR
If AI is making or materially influencing decisions about people, such as screening job applicants, scoring customer creditworthiness, or prioritising support requests, you already have obligations under UK GDPR Article 22 and the Data Use and Access Act 2025. Without governance, you may not even know which AI systems trigger these requirements, let alone have the documentation to demonstrate compliance.
Real-world consequences: what the ICO is doing
This isn't theoretical. In 2025, the ICO issued fines to Advanced Computer Software Group (£3.07 million, notably the first fine against a data processor rather than a controller) and LastPass UK (£1.2 million), while Capita agreed a £14 million penalty through a voluntary settlement after a breach affecting 6.6 million people. The ICO found Capita took 58 hours to respond to a security alert against a one-hour target. These aren't AI-specific fines, but they show how seriously the ICO takes data governance failures, and AI tools processing personal data fall squarely within its scope.
Meanwhile, UK tribunals in 2025 saw fabricated legal citations generated by AI (Oxford Hotel Investments Ltd v Great Yarmouth Borough Council) and inconsistent AI-generated answers causing confusion in proceedings (Lee v Blackpool B&B). Public AI chatbots gave UK consumers incorrect ISA contribution advice and wrong tax guidance in November 2025, leading 31% of UK accountants to report weekly client mistakes caused by incorrect AI-generated financial advice.
The Conscientious Worker Penalty
Our research identified a pattern we call the Conscientious Worker Penalty. When employers fail to provide clear AI guidelines, risk-tolerant employees use AI anyway and gain productivity advantages, while rule-following employees hold back to stay compliant and fall behind. The governance vacuum doesn't create a level playing field. It rewards rule-breaking and penalises the cautious majority.
The governance vacuum in numbers
Among UK workers whose employers have no clear AI policy (41% of all workers surveyed), roughly 70% are "conscientious abstainers" who avoid AI entirely to stay on the right side of unwritten rules. These employees are losing ground to risk-tolerant colleagues who use AI regardless. A clear governance framework sorts this out by giving everyone permission to use AI safely.
The upside: governance as competitive advantage
Governance pays for itself. Research by EY found that organisations with formal AI oversight report higher revenue growth, greater cost savings and stronger employee satisfaction compared with those without. Organisations with mature AI governance experience 23% fewer AI-related incidents and bring AI capabilities to market 31% faster.
There's a commercial angle too. As larger organisations and public-sector buyers tighten their supply-chain requirements, the ability to evidence AI governance (risk assessments, documented human oversight, data protection controls) makes your business more attractive in procurement. Customers are paying attention: consumer surveys show people would stop using companies that misuse AI. Proportionate governance builds trust that translates into contracts.
UK regulatory environment you need to know about
The UK doesn't yet have a standalone AI law. Instead, AI is governed through existing legislation and sector-specific regulators applying cross-cutting principles. Here's what matters for your business right now.
UK GDPR and automated decision-making
UK GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing where those decisions have legal or similarly significant effects. If your AI tools are screening CVs, approving loan applications, or determining customer pricing, you need a lawful basis, a data protection impact assessment (DPIA), and meaningful human involvement in the decision.
Data Use and Access Act 2025
The DUAA received Royal Assent on 19 June 2025, with key data protection provisions coming into force from 5 February 2026. It introduces new Articles 22A to 22D, reworking the rules around automated decision-making. The practical changes for SMEs:
- Transparency: you must inform individuals when automated decisions are being made
- Representations: individuals have the right to make representations about automated decisions
- Human intervention: meaningful human review must be available on request
- Contestability: individuals can challenge automated decisions
- Complaints procedure: from June 2026, organisations must have a formal internal data protection complaints procedure
The five AI principles
The UK government's cross-sector AI framework is built on five principles that sector regulators are expected to apply:
- Safety, security and robustness: AI systems should function securely and as intended
- Transparency and explainability: organisations should be able to explain how and why AI is being used
- Fairness: AI shouldn't produce discriminatory outcomes
- Accountability and governance: clear lines of responsibility for AI decisions
- Contestability and redress: people affected by AI should be able to challenge decisions
These principles aren't legally binding on their own, but regulators (the ICO, FCA, CMA, Ofcom and others) are using them as the basis for sector-specific guidance and enforcement action.
ICO enforcement priorities
The ICO has published AI-specific data protection guidance and provides a free AI and Data Protection Toolkit for risk assessment. They're focused on whether organisations carry out DPIAs for high-risk AI processing, whether they understand their supply-chain roles when using third-party AI services, and whether privacy-enhancing techniques are being applied. Updated ADM guidance reflecting the DUAA changes is expected by Spring 2026.
EU AI Act (relevant if you trade with the EU)
If your business offers AI-powered services to EU customers, the EU AI Act applies to you regardless of where you're based. High-risk system obligations become enforceable from 2 August 2026, with potential fines up to 35 million euros or 7% of worldwide turnover. Even if you're not a direct provider, you should assess whether any AI systems you use or resell fall into the EU's high-risk categories.
What this means in practice
You don't need to wait for a UK AI law to start governing AI. The legal obligations already exist through UK GDPR, the DUAA, the Equality Act, and sector-specific regulations. A proportionate governance framework gives you the documentation and controls to demonstrate compliance with all of them. Starting now means you're building on solid ground rather than scrambling to catch up when enforcement tightens.
The governance minimalist approach
Enterprise AI governance frameworks are built for organisations with dedicated compliance teams, legal departments and six-figure budgets. Transplanting that approach into a 20-person business is impractical and can actively harm AI adoption by creating so much friction that employees either abandon AI entirely or route around the rules.
The governance minimalist approach applies the 80/20 principle: roughly 20% of governance controls address roughly 80% of your risk. The goal isn't comprehensive coverage of every theoretical scenario. It's sensible coverage of the risks that actually apply to your business.
"AI governance for a small business is not a 200-page policy manual or a team of consultants sitting in your office for six months. It is a proportionate set of controls that match your size, your risk, and your regulatory environment."
LogiSam, AI Governance for SMEs
In practice, this means:
- One-page policy, not a manual: your AI acceptable use policy should fit on a single page and be written in language every employee can understand. We've published a free AI policy template you can adapt.
- Spreadsheet, not software: your AI tools inventory and risk register can live in a shared spreadsheet. You don't need a governance platform until your AI footprint justifies one.
- Named person, not a committee: one person owns AI governance. In a micro-business, that's the founder. In a 50-person company, it's a senior manager with a part-time brief.
- Quarterly check, not continuous monitoring: unless you're processing high-risk personal data at scale, a quarterly review cycle keeps governance current without eating up disproportionate management time.
This approach aligns with how UK regulators expect SMEs to operate. The ICO's own guidance emphasises proportionality. The UK government's AI Management Essentials (AIME) self-assessment tool is explicitly designed for organisations without specialist AI expertise. Neither expects you to build an enterprise governance function. If you want a quick gauge of where your business stands, our free AI readiness assessment takes five minutes and covers policy, data, technical readiness, people and governance.
Five core components of SME AI governance
Every SME AI governance programme needs five components. The depth and formality of each should match your business size, the sensitivity of the data you handle, and your regulatory environment.
1. AI use policy
A clear, written document that tells every employee what they can and can't do with AI. This should cover approved tools (and how to request new ones), data handling rules (what data can and can't be entered into AI systems), prohibited uses (such as processing special-category personal data on consumer AI tools), human review requirements for important outputs, and how to report concerns or incidents.
Get your AI policy sorted
Our free AI acceptable use policy template covers everything above and is ready to adapt to your business. Grab a copy, share it with your team, and you'll have your first governance component ticked off in an afternoon. A policy that sits unread in a shared drive isn't governance, so discuss it at your next all-hands meeting.
AI acceptable use policy template
A comprehensive AI policy template with 10 essential sections, data classification guidance, and a 30-day implementation rollout plan. Designed for UK businesses.
2. AI tools inventory and risk register
You can't govern what you can't see. An AI tools inventory lists every AI tool in use (or likely in use) across your business, including consumer tools employees may be using without approval. For each tool, record its purpose, the data it processes, who uses it, and whether it has enterprise data protections.
Your risk register categorises each tool by risk level:
- Low risk: AI features embedded in existing business software (e.g. spell-check, smart compose, auto-categorisation)
- Medium risk: approved standalone AI tools with enterprise contracts and data protections
- High risk: consumer AI tools, open-source models, or any tool processing personal or sensitive data
Getting started: ask each team to list the AI tools they use (including personal accounts). You'll almost certainly discover tools you didn't know about. That discovery is the entire point.
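To make the three-tier rule concrete, here's a minimal sketch of the risk register in code form. The tool names, field names and classification logic are illustrative assumptions for this example, not a prescribed schema; in practice, as noted above, this can simply live in a shared spreadsheet.

```python
# Illustrative only: a minimal risk register applying the three-tier rule.
# Field names and example tools are hypothetical.

def risk_level(tool):
    """Categorise a tool as low, medium or high risk."""
    if tool["personal_data"] or not tool["enterprise"]:
        return "high"    # consumer tools, or anything touching personal/sensitive data
    if tool["embedded"]:
        return "low"     # AI features embedded in existing business software
    return "medium"      # approved standalone tools with enterprise contracts

# Hypothetical inventory rows gathered from the team audit
inventory = [
    {"name": "Smart compose (email)", "embedded": True, "enterprise": True, "personal_data": False},
    {"name": "Approved AI assistant", "embedded": False, "enterprise": True, "personal_data": False},
    {"name": "ChatGPT (personal account)", "embedded": False, "enterprise": False, "personal_data": True},
]

for tool in inventory:
    print(f"{tool['name']}: {risk_level(tool)}")
```

The ordering of the checks matters: anything on a personal account or processing personal data is high risk regardless of where it's embedded, which mirrors the non-negotiable data rule later in this guide.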
If you need help with visibility, our IT operations services include network monitoring and endpoint protection that can identify unauthorised tool usage across your environment.
3. Data governance for AI
AI governance and data governance are closely linked. Your AI data governance rules should specify which data categories are permitted in AI systems (and which are absolutely prohibited), how personal data is handled under UK GDPR when processed by AI, when a data protection impact assessment is required, and what contractual protections must be in place with AI suppliers.
The non-negotiable rule: special-category personal data (health data, biometric data, data revealing racial or ethnic origin, political opinions, religious beliefs, trade union membership, or sexual orientation) must never be entered into consumer AI tools. Full stop.
Need a hand getting your data governance foundations in place? Our technology governance services can help you build the right framework for your business.
4. Tool approval process
When someone in your team finds a new AI tool they want to use, they need a clear, fast pathway to get it assessed and approved (or declined). Without this, employees will start using tools without asking, which is exactly how shadow AI takes hold.
Keep it fast: a tool approval process that takes three months isn't governance; it's a bureaucratic incentive for shadow AI. Aim for a lightweight assessment (data handling, supplier terms, security basics) that can be completed within a week for standard tools. Reserve deeper reviews for tools that process personal data or make consequential decisions.
5. Training and awareness
A policy is only as good as the understanding behind it. Staff training doesn't need to be a formal certification programme. A 30-minute session covering your AI use policy, the dos and don'ts of data handling, how to request new tools, and what to do if something goes wrong covers the essentials.
Current reality: Red Eagle Tech research found that only 7.5% of UK workers have had extensive AI training from their employer. That gap is a risk, but it's also a chance to get ahead. The businesses that train their teams effectively will see better AI outcomes, fewer incidents, and stronger employee confidence.
We run an agentic AI training course designed for teams who want practical, hands-on experience with AI tools in a business context.
Who does what: governance roles in a small business
Enterprise governance models assume dedicated roles: Chief AI Officer, AI Ethics Board, Data Governance Committee. That's not realistic for most SMEs. Here's how to distribute governance responsibilities based on your size.
| Business size | AI governance owner | Key responsibilities | Review cadence |
|---|---|---|---|
| Micro (1-9) | Founder or operations lead | Owns the policy, maintains the inventory in a spreadsheet, handles approval requests directly | Quarterly review (30 minutes) |
| Small (10-49) | Appointed AI lead (part-time role) | Maintains policy and inventory, runs quarterly governance reviews, coordinates staff training, manages tool approval process | Quarterly governance meeting, monthly leadership check-in |
| Medium (50-249) | Designated AI officer or IT/operations manager | Owns the governance framework, reports to senior leadership, manages supplier assessments, coordinates DPIAs, runs training programme | Quarterly board report, monthly operational review |
What matters is that someone owns this. AI governance without a named owner is governance in theory only. That person doesn't need to be a specialist. They need to be senior enough to make decisions, practical enough to keep things right-sized, and given enough time to do the job properly.
Your governance review calendar
AI governance isn't a one-off project. The tools change, the regulations evolve, and your team's AI usage will shift over time. A structured review cycle keeps governance relevant without turning it into a full-time occupation.
- Monthly: flag any new AI tools adopted or requested. Note any incidents, near-misses or employee concerns. Check for regulatory announcements that might affect your approach.
- Quarterly: update the AI tools inventory. Review the risk register for any changes in tool risk profiles. Assess whether the AI use policy still reflects actual practice. Review any DPIAs or supplier assessments due. Check training completion and plan next sessions.
- Annually: review the entire governance framework against business objectives. Assess regulatory developments (DUAA implementation, ICO guidance updates, EU AI Act changes). Evaluate whether governance resources and roles still match AI complexity. Compare your approach against what others in your sector are doing. Update supplier contracts and data processing agreements.
Record the outcomes of each review. This documentation becomes your evidence of ongoing governance, which is exactly what a regulator or client audit would ask to see.
Measuring what matters: AI pulse survey questions
Reviews and audits tell you what's happening with your tools and policies. They don't tell you what's happening with your people. If your staff are confused, anxious or quietly working around the rules, a governance review won't catch it. A staff survey will.
Add these questions to your next engagement pulse survey. They're designed to measure the things that governance reviews miss: whether people actually understand the policy, whether they feel safe asking questions, and whether the gap between policy and practice is growing.
| Question | What it measures |
|---|---|
| I understand which AI tools I'm allowed to use at work and how I'm expected to use them. | Policy clarity. Our research found 41% of UK workers have no clear AI policy - this question tells you whether yours are among them. |
| I feel confident that I could raise a question or concern about AI use without it reflecting badly on me. | Psychological safety around AI. Workers who don't feel safe asking questions default to either avoidance or covert use. |
| I have received enough training to use AI tools effectively in my role. | Training adequacy. A "no" here combined with a "yes" on policy clarity points to an execution gap rather than a communication gap. |
| I know who to ask if I'm unsure whether a particular use of AI is appropriate. | Escalation clarity. If people don't know who to ask, they either guess or avoid. Both are governance failures. |
| There are tasks in my role where AI tools could save me significant time, but I'm not currently using them. | Suppressed demand. A high "agree" rate here is the clearest signal that your policy or training isn't converting awareness into action. |
| My manager actively supports our team's use of approved AI tools. | Manager enablement. Policy set at the top means nothing if line managers aren't reinforcing it. This question catches the gap between leadership intent and daily reality. |
Six questions, five-point agreement scale (strongly disagree to strongly agree), plus one optional free-text box: "Is there anything else you'd like to tell us about AI at work?" That free-text response is often where the most useful insights come from.
Run this quarterly, timed to land a week before your governance review meeting. That way the results feed directly into the review discussion and your governance cycle has both a technical lens (the audit) and a human lens (the survey).
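The quarterly summary can be as simple as an average per question with a flag for anything that needs discussion. The sketch below assumes a 3.0 threshold, shorthand question keys and made-up responses purely for illustration; any survey tool or spreadsheet can produce the same numbers.

```python
# Illustrative only: summarise pulse survey scores (1 = strongly disagree,
# 5 = strongly agree) and flag questions for the governance review.
# The threshold, question keys and responses are assumptions for the example.

FLAG_BELOW = 3.0  # averages under "neither agree nor disagree" warrant discussion

def summarise(responses):
    """Return {question: (average, flagged)} for each survey question."""
    return {
        q: (round(sum(vals) / len(vals), 2), sum(vals) / len(vals) < FLAG_BELOW)
        for q, vals in responses.items()
    }

# Hypothetical responses from a five-person team
responses = {
    "policy_clarity": [4, 5, 3, 4, 2],
    "psych_safety":   [2, 3, 2, 1, 3],
    "escalation":     [3, 4, 4, 3, 4],
}

for question, (avg, flagged) in summarise(responses).items():
    print(f"{question}: {avg} {'FLAG for review' if flagged else 'ok'}")
```

One caveat: the suppressed-demand question is reverse-scored, since high agreement there signals a problem, so flip those scores (6 minus the response) before applying the same threshold.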
Common governance mistakes (and how to avoid them)
Over-governance: killing innovation with bureaucracy
The most common mistake medium-sized businesses make is importing enterprise governance wholesale. When every AI tool request requires a three-month committee review, employees stop asking and start using tools covertly. Over-governance doesn't reduce risk. It pushes AI underground where it's invisible and uncontrolled.
Fix: match governance intensity to risk. Low-risk embedded AI features (spell-check, auto-formatting) need only a blanket approval. Medium-risk standalone tools need a lightweight assessment. Reserve detailed reviews for high-risk tools processing personal or sensitive data.
Under-governance: hoping the problem goes away
On the flip side, plenty of small businesses do nothing at all. The assumption that "we're too small for AI governance" ignores the reality that employees are already using AI. A business with ten employees and no AI policy still has AI governance. It's just governance by absence, where every employee makes their own rules.
Fix: start with the one-page policy and the tools inventory. Even the simplest governance is better than none. You can always add depth later as your AI usage grows.
Policy without communication
Writing a policy and filing it in a shared drive achieves nothing. If your team doesn't know the rules exist, the rules don't exist. Our research found that 41% of UK workers operate in an AI policy vacuum, and many of those employers may technically have a policy somewhere. The gap between having a policy and communicating it is where the conscientious worker penalty takes hold: good employees who would follow the rules can't follow rules they've never been told about.
Fix: don't announce your AI policy by email and call it done. Brief your leadership team first so they can answer questions. Then brief line managers, giving them a short script and FAQ so they're not caught out. Then hold team sessions where people can ask questions in a smaller setting. Only after that, publish the written policy. This sequence - leadership, managers, teams, document - means nobody hears about the policy for the first time from a PDF.
Fear-based governance
Governance framed as "here's everything you mustn't do" creates anxiety and suppresses useful AI adoption. The most effective governance frameworks are enabling: they tell employees what they can do, how to do it safely, and where to get help.
Fix: lead with permission, not prohibition. Structure your policy around approved tools and safe practices first, then add the necessary restrictions. The goal is confident, safe AI use, not fearful avoidance.
Sector-specific considerations
The five governance components apply to every SME, but the depth and emphasis shift depending on your sector.
Financial services
The FCA requires firms to demonstrate that AI used in regulated activities (credit decisions, financial promotions, client suitability assessments) is explainable, fair and subject to appropriate oversight. AI tools that influence financial decisions about customers will almost certainly require DPIAs, documented human oversight, and audit trails. Your governance framework should include specific FCA-aligned controls for any AI touching client-facing financial processes.
Healthcare and life sciences
Processing health data with AI triggers the highest level of UK GDPR protection (special-category data). Any AI system that contributes to clinical decisions may also fall under medical device regulation. NHS Digital and the MHRA have published specific AI guidance. SMEs in health tech should build governance around explicit consent mechanisms, rigorous DPIAs, clinical validation of AI outputs, and clear escalation routes for AI-influenced clinical decisions.
Legal services
The SRA requires solicitors to maintain competence and supervision over all work, including AI-assisted work. Entering client-privileged information into consumer AI tools is a professional conduct risk. Law firms should prohibit consumer AI for client matters, approve only tools with appropriate data protections, and require human review of all AI-generated legal content before it reaches clients.
Recruitment and HR
AI screening of job applicants is one of the highest-risk AI use cases for SMEs. The Equality Act 2010 applies to AI-driven hiring decisions, and the ICO has flagged recruitment AI as an enforcement priority. If you use AI tools that filter, score or rank candidates, you need documented fairness testing, human oversight of every consequential decision, and a clear process for candidates to challenge automated outcomes.
Retail and e-commerce
AI in retail typically involves lower regulatory risk (product recommendations, demand forecasting, chatbots) unless you're using AI for dynamic pricing that could be considered unfair under consumer protection law, or processing customer data at scale. The CMA has signalled interest in AI-driven pricing practices. Ensure your governance covers transparency in customer-facing AI interactions and fair treatment in automated pricing.
Free tools and resources to get you started
You don't need to start from scratch or pay a consultant to build your governance basics. The UK government and regulators provide practical tools designed for businesses without specialist AI expertise.
ICO AI and Data Protection Toolkit
Free risk assessment toolkit to help you identify and manage data protection risks from AI. Use this as your compliance baseline. Includes audit methodology, DPIA guidance, and practical checklists. Access the ICO AI toolkit.
AIME self-assessment (GOV.UK)
The AI Management Essentials tool evaluates your organisation across three areas: internal processes, managing risks, and communication. Designed for organisations of any size, including SMEs and start-ups. Use this to benchmark your current governance maturity and identify gaps. Access the AIME tool on GOV.UK.
AI.GOV.UK knowledge hub
The UK government's AI knowledge hub includes practical how-to governance resources, the five AI principles framework, and the Data and AI Ethics Framework for documenting responsibilities across the AI lifecycle. Browse the AI governance resources.
Red Eagle Tech AI policy template
Our free AI acceptable use policy template gives you a ready-to-adapt starting document covering approved tools, data handling, prohibited uses, and incident reporting. Grab a copy, customise it, and share with your team.
Red Eagle Tech AI readiness assessment
Not sure where to start? Our free AI readiness assessment scores your organisation across five dimensions including policy, data governance and technical readiness. Takes five minutes and gives you a clear picture of where the gaps are.
Funding and support
BridgeAI (via Innovate UK) offers grant funding from £25,000 to £50,000 for AI projects, plus free scientific advice. The Small Business Charter AI Upskilling Fund has made £6.4 million available for SME AI training. AI Adoption Vouchers for SMEs are expected from Q2 2026. Check eligibility through Innovate UK Business Connect.
Getting started: your first 30 days
If you've read this far and your business doesn't yet have AI governance in place, here's a practical 30-day plan to get the basics sorted.
Week 1: Run an AI tools audit. Ask every team to list the AI tools they use, including personal accounts, browser extensions, and AI features embedded in existing software. Record each tool's name, purpose, data it processes, and whether it has enterprise-grade data protections. You'll almost certainly find tools you didn't know about.
Week 2: Write (or adapt) your AI acceptable use policy. Use our free template as a starting point. Cover approved tools, prohibited uses, data handling rules, the tool approval process, and incident reporting. Keep it to one page. Share it with your team and discuss it in a meeting.
Week 3: Assign your AI governance owner. Create your risk register by categorising each tool from the audit as low, medium, or high risk. Implement immediate controls: block special-category personal data on consumer AI tools and require enterprise accounts for any tool processing business-sensitive information. If any AI tools are making automated decisions about people, flag them for DPIA assessment.
Week 4: Run your rollout in the right order. Start with a 15-minute leadership briefing: here's the policy, here's why, here's what you'll be asked. Then brief line managers with a short FAQ sheet so they can handle questions from their teams. Then hold 30-minute team sessions covering the policy, the approved tools list, the dos and don'ts, and how to request new tools. Keep the tone enabling, not restrictive: lead with what people can do, not what they can't. After the team sessions, publish the written policy on your intranet or shared drive. Schedule your first quarterly governance review. Set up a simple reporting channel (a shared email alias or a form) where staff can flag concerns or suggest new tools. Well done, you now have a working AI governance framework.
This 30-day plan gets you from zero to a working governance framework. It won't be perfect, and it doesn't need to be. The quarterly review cycle is where you refine, adjust and build depth over time. The important thing is that you've started.
Need help getting started?
Red Eagle Tech helps UK businesses put practical AI governance frameworks in place, from policy creation to tool assessment to staff training. If you'd rather have expert support than go it alone, give us a shout.
Not sure where you stand?
Our free AI readiness assessment takes five minutes and scores your organisation across policy, data, technical, people and governance dimensions. It's a quick way to spot the gaps before you start building your framework.
Sources
- Red Eagle Tech / Pollfish (February 2026). UK Workplace AI Usage Survey. n=200 UK desk-based workers.
- Trustmarque (2025). UK AI Index: 93% of UK organisations now use AI but only 7% have fully embedded governance.
- UK Government (2025). AI Management Essentials (AIME) Self-Assessment Tool.
- ICO (2024-2026). Guidance on AI and Data Protection; AI and Data Protection Risk Toolkit; AI Audit Guide.
- ICO (January 2026). AI'll Get That: Tech Futures Report on Agentic AI.
- ICO (2024). AI Tools in Recruitment Audit: 296 recommendations on fair processing, DPIAs and transparency.
- UK Government (2025). Data Use and Access Act 2025 (DUAA). Royal Assent 19 June 2025.
- UK Government (2023-2025). A Pro-Innovation Approach to AI Regulation: Five AI Principles.
- UK Government (2025). Data and AI Ethics Framework.
- UK Government (January 2026). AI Opportunities Action Plan.
- British Chambers of Commerce (September 2025). BCC AI Business Survey, June-July 2025.
- EY (October 2025). Responsible AI Governance and Business Performance.
- Skadden (February 2026). Recent ICO Data Breach Enforcement (Capita, Advanced Computer Software Group, LastPass).
- Giskard (November 2025). When AI Financial Advice Goes Wrong: UK Consumer AI Chat Failures.
- City AM (2026). Businesses Are Suffering Financial Losses from Faulty AI Advice.
- Small Business Charter (2026). £6.4 Million AI Upskilling Fund Launched for SMEs.
- LogiSam (2025). AI Governance for SMEs: 5 Steps to Govern AI Safely.
- International Compliance Association (2025). A Practical Guide to AI Governance.
- NIST (2024). AI Risk Management Framework Version 2.0.
- EU AI Act (2024). Regulation (EU) 2024/1689. High-risk obligations enforceable from 2 August 2026.