Free AI policy template for UK businesses.
We've created a practical AI acceptable use policy template designed specifically for UK businesses. Unlike generic templates, ours is informed by original research data and addresses the real behavioural dynamics that determine whether AI policies actually work:
Free download, no signup required. Available in Word and HTML formats.
According to Microsoft research, 71% of UK employees have used unapproved AI tools at work. Among executives and senior managers, the figure is 93%. Data breaches involving shadow AI cost organisations £670,000 more than standard incidents (IBM). Yet fewer than 1 in 5 UK SMEs have a formal AI governance framework.
Our research found that 41% of UK desk workers operate in a total policy vacuum - no official AI guidance at all. When employers leave that vacuum unfilled, the workforce splits: risk-takers use shadow AI to get ahead, while conscientious employees abstain entirely - losing over 400 hours of productivity per year.
This template is designed to close that gap. Every section provides explicit, affirmative guidance on what employees can do, not just what they can't.
A well-structured AI policy follows a logical flow from scope and permissions through to ongoing governance. Our template includes all 10 sections recommended by the ICO, CIPD, and leading employment law practitioners:
Why the policy exists, who it applies to, and which tools and activities are covered. Includes contractors and temporary workers.
Register of sanctioned tools with subscription tier requirements, plus a lightweight process for requesting new tool approvals.
Clear, specific prohibitions with concrete examples. What must never be entered into any AI tool, regardless of subscription tier.
Green/amber/red framework mapping data categories to what can and cannot be entered into AI tools, with anonymisation examples.
Where human review of AI outputs is mandatory before use, publication, or client delivery. Addresses automation bias.
When employees must disclose AI use to clients, colleagues, or the public. Intellectual property and attribution guidance.
How to report policy violations, data exposures, or AI-generated misinformation. Designed to encourage self-reporting.
Mandatory AI foundations training, role-specific guidance, and annual refresher requirements with completion tracking.
Six-monthly review cycle, version numbering, change log, and named policy owner. Keeps your policy current as AI evolves.
Week-by-week rollout guide covering audit, configuration, training, and verification. Move from template to working policy in a month.
Most AI policy templates available online are either generic compliance checklists or dense legal documents that no employee will read. This template is different because it's designed by technology consultants who understand how AI is actually used in UK businesses:
Informed by original UK research data on AI adoption patterns, including the finding that 71% of UK employees have used unapproved AI tools at work.
Includes transition guidance for moving from unapproved to approved tools, rather than pretending shadow AI doesn't exist.
Provides explicit, affirmative permission for common use cases. Addresses the Conscientious Worker Penalty - where the 41% of workers left in a policy vacuum split between covert AI users and conscientious employees paralysed by ambiguity.
Aligned with UK GDPR, the Data Use and Access Act 2025, ICO guidance, and the principles from the UK Government's AI White Paper.
Tip: Don't aim for perfection on the first version. The best AI policies are living documents that improve through use:
The template appendix includes a detailed week-by-week plan. Here's the overview:
Survey current AI tool usage across all departments. Identify shadow AI. Review existing policies for conflicts.
Customise the template. Populate approved tools list and data classifications. Review with leadership and legal.
Launch the policy with all-hands communication. Deliver role-specific training. Establish reporting channels.
Check compliance with department heads. Address questions and edge cases. Document lessons learned for v1.1.
Research shows that poorly designed AI policies can be worse than having no policy at all. Avoid these common mistakes:
Blanket bans drive AI use underground. 71% of UK employees have already used unapproved AI tools. A restrictive policy doesn't stop usage - it stops visible usage, which is worse.
"Use AI responsibly" tells employees nothing. Specify which tools, which data, and which use cases are permitted. Ambiguity punishes your most conscientious employees.
A free ChatGPT account and ChatGPT Enterprise have fundamentally different data retention policies. Your policy must specify which tier is approved, not just which tool.
AI tools evolve monthly. A policy without a review schedule and version control becomes obsolete within weeks. Plan for six-monthly reviews at minimum.
Read the complete template below, or download it in HTML or Word format.
This policy establishes clear guidelines for the acceptable use of artificial intelligence (AI) tools within [Organisation Name]. Its purpose is to:
This policy applies to all:
This policy covers all AI tools and services, including but not limited to:
This policy should be read alongside:
| Edge Case | Your Organisation's Position |
|---|---|
| Contractors and freelancers | [e.g., Must comply with this policy. AI policy compliance to be included in contractor agreements. Contractors may not use personal AI tool subscriptions for work involving our data.] |
| Personal devices (BYOD) | [e.g., Approved AI tools may be used on personal devices only if the device meets our minimum security requirements. Free-tier AI tools must not be used on any device for work purposes.] |
| Remote workers | [e.g., Same policy applies regardless of location. Remote workers must use VPN when accessing AI tools with amber-classified data.] |
| Embedded AI features | [e.g., AI features embedded in approved software (e.g., Smart Compose in Gmail, Copilot in M365) are approved by default under the same data classification rules as the host software.] |
| AI for personal learning | [e.g., Employees may use approved AI tools for professional development during work hours, provided no confidential data is entered. We encourage completion of the UK Government's free AI Skills Boost programme.] |
| Client-facing vs internal use | [e.g., Higher review requirements apply to client-facing AI outputs. All client-facing content must be reviewed by account lead before delivery. Internal use follows standard review requirements.] |
| International teams | [e.g., Team members operating in the EU must also comply with EU AI Act requirements. The most stringent applicable requirement applies. Contact compliance team for guidance.] |
| Tool | Subscription Tier | Approved For | Data Restrictions | Review Required? |
|---|---|---|---|---|
| [e.g., Microsoft 365 Copilot] | [e.g., Business Premium licence] | [e.g., All departments] | [e.g., Green and amber data only] | [e.g., Yes, for client-facing outputs] |
| [e.g., ChatGPT] | [e.g., Team account only - NOT personal free accounts] | [e.g., All departments] | [e.g., Green data only] | [e.g., Yes, for external communications] |
| [e.g., GitHub Copilot] | [e.g., Business licence] | [e.g., Engineering only] | [e.g., Internal code only, no client code] | [e.g., Yes, all outputs must pass code review] |
| [e.g., Midjourney] | [e.g., Pro plan] | [e.g., Marketing only] | [e.g., Non-client-facing concept work only] | [e.g., No, for internal concepts; yes, for published content] |
Any AI tool not listed in Section 2.1 is not approved for use with any work-related data or tasks. If you believe a tool should be added, submit an evaluation request using the process below.
To request approval for a new AI tool:
Evaluation criteria include:
We recognise that in the absence of a clear policy, employees may have used unapproved AI tools to maintain productivity. For the first 30 days following the publication of this policy, we are operating an amnesty period:
The following uses of AI tools are prohibited under all circumstances, regardless of which tool or subscription tier:
| Prohibition | Examples | Why |
|---|---|---|
| Entering special category personal data | Health records, ethnic origin, political opinions, religious beliefs, trade union membership, biometric data, sexual orientation | UK GDPR Article 9 imposes heightened protections; DUAA 2025 maintains restrictions for special category data in automated decisions |
| Entering customer financial data | Bank account numbers, payment card details, credit scores, salary information | Financial data requires specific legal bases for processing and heightened security controls |
| Entering authentication credentials | Passwords, API keys, access tokens, encryption keys, security certificates | Credentials entered into AI tools may be stored, logged, or exposed through data breaches |
| Entering legally privileged material | Legal advice, litigation strategy, settlement discussions, regulatory correspondence | Legal privilege may be waived if privileged material is shared with third-party AI tools |
| Making automated decisions about people without human review | Hiring decisions, performance ratings, disciplinary actions, credit decisions, customer risk scoring | UK GDPR and DUAA 2025 require meaningful human involvement in decisions significantly affecting individuals |
| Creating misleading or deceptive content | Deepfakes, impersonation of real people, fabricated testimonials, fake reviews | Legal liability under consumer protection, defamation, and fraud laws |
| Using AI outputs in regulatory submissions without senior review | Tax returns, regulatory filings, statutory declarations, compliance reports | AI tools generate plausible but sometimes inaccurate content; regulatory submissions require verification |
The following uses require manager approval before proceeding:
If you are unsure whether a specific use case is permitted, do not avoid the tool entirely. Instead, contact [designated contact / email / channel] for guidance. We aim to respond within [1 business day].
| Classification | Description | Examples | AI Tool Rules |
|---|---|---|---|
| GREEN (Open) | Information that is already public or has no confidentiality requirement | Published marketing content, press releases, public website copy, job adverts, general industry knowledge | May be entered into any approved AI tool, any tier. No prior approval needed. |
| AMBER (Restricted) | Internal information that could cause harm if disclosed but is not subject to heightened legal protection | Internal meeting notes, project plans, process documentation, anonymised performance data, draft strategies, internal communications, standard B2B business contact information (e.g., a client's work email address) | May be entered into approved business-grade tools only (e.g., Microsoft Copilot Enterprise, ChatGPT Team/Enterprise). Not free-tier tools. Manager approval recommended for sensitive items. |
| RED (Prohibited) | Information subject to heightened legal protection, contractual confidentiality, or significant business sensitivity | Sensitive personal data (e.g., consumer details, home addresses), special category data, customer financial data, health records, employee personal data, trade secrets, proprietary algorithms, passwords, legal advice, contractual terms | Must never be entered into any AI tool under any circumstances. If AI analysis is needed, data must first be fully anonymised (see Section 4.2). |
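Teams that want to enforce these rules in their tooling can encode the traffic-light logic as a simple pre-flight check. The sketch below is illustrative only: the tier names and rule mapping are assumptions, and your approved tools register in Section 2.1 remains the source of truth.

```python
# Illustrative encoding of the traffic-light rules above as a pre-flight
# check. Tier names and the rule mapping are assumptions to be replaced
# with your organisation's approved tools register (Section 2.1).
RULES: dict[str, set[str]] = {
    "green": {"free", "business"},  # any approved tool, any tier
    "amber": {"business"},          # approved business-grade tools only
    "red": set(),                   # never enter into any AI tool
}

def may_enter(classification: str, tool_tier: str) -> bool:
    """Return True if data of this classification may be entered into a tool of this tier."""
    return tool_tier in RULES[classification.lower()]

assert may_enter("green", "free")
assert may_enter("amber", "business")
assert not may_enter("amber", "free")    # amber data never in free-tier tools
assert not may_enter("red", "business")  # red data never in any AI tool
```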
You may use AI tools to analyse data that would otherwise be classified as red, provided you first remove all identifying information. Effective anonymisation means a reasonable person could not identify any individual or organisation from the data.
| Before Anonymisation (DO NOT enter) | After Anonymisation (May enter into approved tools) |
|---|---|
| "Customer John Smith (account 12345) complained about delivery delays to Manchester on 15 March" | "A customer complained about delivery delays to a northern city in mid-March" |
| "Sarah Jones in Finance earned £45,000 last year and received a 3% raise" | "A mid-level finance employee earned between £40k-50k and received a standard annual increase" |
| "Our contract with Acme Ltd includes a 12% discount on orders over £50,000" | "Our supplier agreements typically include volume discounts of 10-15% on large orders" |
If you are unsure how to classify data, use this quick test:
AI-generated content must be reviewed by a qualified human before being:
| Output Type | Reviewer | Review Requirements |
|---|---|---|
| Content sent to clients or external stakeholders | [e.g., Line manager or account lead] | Check accuracy, tone, and appropriateness. Verify any factual claims against primary sources. |
| Published content (website, social media, marketing) | [e.g., Marketing manager or content lead] | Full editorial review including brand alignment, factual accuracy, and disclosure requirements. |
| Financial analysis or recommendations | [e.g., Finance director or qualified accountant] | Verify calculations independently. Cross-check against source data. AI-generated financial figures must never be trusted without verification. |
| Legal or compliance content | [e.g., Legal counsel or compliance officer] | Senior review required. AI-generated legal advice is not a substitute for qualified legal counsel. |
| Decisions affecting individuals (hiring, performance, customer scoring) | [e.g., Senior manager with HR] | Document whether AI recommendation was accepted, modified, or overridden, with reasoning. Required under DUAA 2025. |
| Code and technical outputs | [e.g., Senior developer] | Standard code review process. Check for security vulnerabilities, licensing issues, and correctness. |
| Autonomous actions (Agentic AI) executing workflows (e.g., sending emails, making purchases, altering database records) | [e.g., IT security lead] | AI agents must never be granted autonomous "write" or "send" permissions without explicit IT security clearance. All AI agents must operate in "human-in-the-loop" (draft only) mode by default. |
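To make the "draft only" requirement concrete, here is a minimal sketch of a human-in-the-loop gate, assuming a hypothetical in-house agent wrapper rather than any particular framework: the agent can queue proposed actions, but nothing executes without a recorded human approval.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of "human-in-the-loop (draft only)" mode: the agent may
# propose actions, but nothing executes until a named human approves it.
# Class and field names are illustrative, not a real framework API.
@dataclass
class ProposedAction:
    kind: str                       # e.g. "send_email", "update_record"
    payload: dict
    approved_by: Optional[str] = None

class DraftOnlyAgent:
    def __init__(self) -> None:
        self.outbox: list[ProposedAction] = []

    def propose(self, kind: str, payload: dict) -> ProposedAction:
        """Agent output lands in a review queue instead of executing."""
        action = ProposedAction(kind, payload)
        self.outbox.append(action)
        return action

    def execute(self, action: ProposedAction) -> None:
        """Hard gate: refuse any action that lacks a recorded human approval."""
        if action.approved_by is None:
            raise PermissionError("Autonomous execution blocked: human approval required")
        print(f"Executing {action.kind} (approved by {action.approved_by})")

agent = DraftOnlyAgent()
draft = agent.propose("send_email", {"to": "client@example.com", "body": "..."})
draft.approved_by = "account.lead@example.com"  # the human review step
agent.execute(draft)
```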
The following uses of approved AI tools do not require prior review:
Automation bias occurs when human reviewers defer to AI recommendations without meaningful scrutiny. When reviewing AI outputs, you should:
You must disclose that AI tools were used in the following situations:
You do not need to disclose AI use for:
All AI-generated content created using [Organisation Name]'s tools, accounts, or data is the property of [Organisation Name], not the individual employee.
Employees should be aware that:
Report any of the following to [email address / person / channel]:
You can report AI-related incidents through any of these channels:
| Training | Who | When | Duration |
|---|---|---|---|
| AI Foundations Briefing | All staff | Within 30 days of this policy's effective date (or within 30 days of joining) | [e.g., 30 minutes] |
| Role-Specific AI Training | Staff in departments using AI tools | Within 60 days of this policy's effective date | [e.g., 1 hour per department] |
| Annual Refresher | All staff | Annually, aligned with policy review cycle | [e.g., 20 minutes] |
| Data Classification Workshop | Managers and team leads | Within 30 days of this policy's effective date | [e.g., 45 minutes] |
The AI Foundations Briefing covers:
Role-Specific Training covers:
The UK Government offers free AI foundations training as part of its goal to upskill 10 million workers by 2030. Employees are encouraged to supplement organisational training with these resources for professional development.
This policy will be formally reviewed:
This policy is owned by [Name, Role], who is responsible for maintaining, reviewing, and communicating updates. Feedback and improvement suggestions should be directed to [email address].
| Version | Date | Author | Summary of Changes |
|---|---|---|---|
| 1.0 | [Date] | [Name] | Initial policy published |
| [1.1] | [Date] | [Name] | [Summary of changes] |
| Level | Situation | Typical Response |
|---|---|---|
| Level 1: Unintentional | First-time violation due to misunderstanding, lack of training, or genuine mistake. Self-reported. | Informal guidance, additional training on relevant policy section, documentation of incident for learning purposes. |
| Level 2: Repeated or Negligent | Second violation, or first violation where the employee should reasonably have known the policy. Not self-reported. | Formal written warning, mandatory policy retraining, temporary restriction on AI tool access pending completion of training. |
| Level 3: Serious | Deliberate violation, or any violation involving regulated personal data, customer harm, or significant commercial risk. | Formal disciplinary action in accordance with [Organisation Name]'s disciplinary procedure, which may include suspension or termination of employment. |
| Day | Action | Owner | Output |
|---|---|---|---|
| 1-2 | Survey all departments: which AI tools are currently in use (including personal/unapproved tools)? | [IT / Compliance] | AI tool usage inventory |
| 3-4 | Review existing policies (data protection, IT acceptable use, confidentiality) for potential conflicts with AI policy | [Compliance / Legal] | Policy conflict analysis |
| 5-7 | Classify organisational data into green/amber/red categories. Identify which data types each department handles. | [Department heads + IT] | Completed data classification table |
| Day | Action | Owner | Output |
|---|---|---|---|
| 8-10 | Customise this template: populate approved tools register, data classifications, and organisation-specific sections | [IT + Compliance] | Draft AI policy v0.1 |
| 11-12 | Review draft with leadership team. Check alignment with business objectives and risk appetite. | [Senior leadership] | Leadership-approved draft v0.2 |
| 13-14 | Legal review (if applicable). Ensure consistency with employment contracts and regulatory obligations. | [Legal advisor] | Final policy v1.0 |
| Day | Action | Owner | Output |
|---|---|---|---|
| 15 | All-hands communication: announce the new policy. Explain the purpose (enable, not restrict). Share the document. | [Senior leader / CEO] | Policy announcement |
| 16-18 | Deliver AI Foundations Briefing to all staff (can be done in department groups) | [IT / Training lead] | Training completion records |
| 19-21 | Deliver role-specific training to departments that use AI tools regularly | [Department leads + IT] | Department-specific training completion |
| Day | Action | Owner | Output |
|---|---|---|---|
| 22-24 | Department heads verify: all team members have completed training, approved tools are correctly configured, reporting channels are operational | [Department heads] | Compliance verification checklist |
| 25-27 | Collect feedback from employees: what's unclear? What's impractical? What use cases aren't covered? | [Compliance / HR] | Feedback log |
| 28-30 | Address feedback, update policy if needed (issue v1.1), document lessons learned, set next review date | [Policy owner] | Updated policy + review calendar |
| Field | Response |
|---|---|
| Requested by | [Name, Department] |
| Date requested | [Date] |
| Tool name and provider | [e.g., Claude by Anthropic] |
| Subscription tier requested | [e.g., Team plan] |
| Intended use case | [What will it be used for?] |
| Department(s) that will use it | [Which teams?] |
| Estimated number of users | [Number] |
| What data will be entered? | [Green / Amber / Red classification] |
| Why can't an existing approved tool meet this need? | [Justification] |
| Does the provider retain user input for training? | [Yes / No / Depends on tier] |
| Provider security certifications | [SOC 2, ISO 27001, etc.] |
| Cost per user per month | [Amount] |
| Decision | [Approved / Conditionally Approved / Not Approved] |
| Conditions (if applicable) | [Any restrictions or requirements] |
| Evaluated by | [Name, Role] |
| Date decided | [Date] |
| Term | Definition |
|---|---|
| AI (Artificial Intelligence) | Technology that enables computers to perform tasks that typically require human intelligence, such as understanding language, generating text, analysing data, or creating images. |
| Generative AI | AI systems that can create new content (text, images, code, audio) based on patterns learned from training data. Examples include ChatGPT, Claude, Midjourney, and Microsoft Copilot. |
| Shadow AI | The use of AI tools by employees without organisational knowledge or approval. Often driven by lack of approved alternatives or overly restrictive policies. |
| Hallucination | When an AI tool generates information that sounds authoritative but is factually incorrect. AI tools do not "know" things - they predict likely responses, which can include plausible but false statements. |
| Automation Bias | The tendency for humans to defer to AI recommendations without sufficient critical scrutiny, particularly when the AI output is presented with confidence. |
| Data Anonymisation | The process of removing personally identifiable information from data so that individuals cannot be identified from the remaining information. |
| UK GDPR | The UK General Data Protection Regulation, the primary UK law governing the processing of personal data. Retained from EU law and supplemented by the Data Protection Act 2018. |
| DUAA 2025 | The Data Use and Access Act 2025, UK legislation that reformed automated decision-making requirements, shifting from prohibition-based to principles-based governance with mandatory ex-post safeguards. |
| Special Category Data | Personal data revealing racial or ethnic origin, political opinions, religious beliefs, trade union membership, genetic or biometric data, health data, or sexual orientation. Subject to heightened protections under UK GDPR Article 9. |
| Conscientious Worker Penalty | The phenomenon where vague AI policies cause cautious employees to avoid AI entirely while less cautious employees use it without safeguards, resulting in worse outcomes than a clear, permissive policy. |
| Agentic AI | AI systems that can autonomously plan, execute, and adapt actions to achieve goals with minimal human intervention. The ICO confirmed in January 2026 that agentic systems remain fully subject to UK GDPR. Policies should be reviewed as agentic AI becomes more prevalent in workplace tools. |
By signing below, the parties confirm that they have reviewed this AI Acceptable Use Policy and approve it for implementation across [Organisation Name].
| Name: | |
|---|---|
| Role: | |
| Signature: | |
| Date: | |

| Name: | |
|---|---|
| Role: | |
| Signature: | |
| Date: | |
Whether you need help customising this template, training your team, or implementing technical controls, we can help you get AI governance right first time.
Get in touch