AI Acceptable Use Policy template download


Free AI policy template for UK businesses.


What this template includes

We've created a practical AI acceptable use policy template designed specifically for UK businesses. Unlike generic templates, ours is informed by original research data and addresses the real behavioural dynamics that determine whether AI policies actually work:

  • 10 essential sections covering everything from approved tools to incident reporting
  • Data classification framework with a clear green/amber/red system for what data can enter which AI tools
  • Approved tools register with a lightweight process for evaluating new tools
  • 30-day implementation rollout plan so you can move from template to working policy quickly
  • UK-specific guidance covering GDPR, the Data Use and Access Act 2025, and ICO expectations
  • Guidance boxes in every section explaining what to include and why
  • Shadow AI transition guidance for moving from unapproved to approved tool usage

Free download, no signup required. Available in Word and HTML formats.

A clear AI policy closes the gap between shadow AI risk and lost productivity

The shadow AI problem

According to Microsoft research, 71% of UK employees have used unapproved AI tools at work. Among executives and senior managers, the figure is 93%. Data breaches involving shadow AI cost organisations £670,000 more than standard incidents (IBM). Yet fewer than 1 in 5 UK SMEs have a formal AI governance framework.

The Conscientious Worker Penalty

Our research found that 41% of UK desk workers operate in a total policy vacuum - no official AI guidance at all. When employers leave that vacuum, the workforce splits: risk-takers use shadow AI to get ahead, while conscientious employees abstain entirely - losing over 400 hours of productivity per year.

This template is designed to close that gap. Every section provides explicit, affirmative guidance on what employees can do, not just what they can't.

Download the AI policy template

A complete template with all 10 essential sections, data classification guidance, and a 30-day implementation plan. Free, no signup required. Choose your preferred format below.

Which format should you use?

Word format (.docx)

Best for:

  • Editing and customising the template for your organisation
  • Adding your approved tools and data classifications
  • Sharing with leadership for sign-off
  • Version-controlled updates as your policy evolves

HTML format

Best for:

  • Viewing the template structure before downloading
  • Reference while customising in another tool
  • Printing a clean, formatted copy
  • Teams that prefer browser-based documents

The 10 essential sections

A well-structured AI policy follows a logical flow from scope and permissions through to ongoing governance. Our template includes all 10 sections recommended by the ICO, CIPD, and leading employment law practitioners:

1. Purpose and scope

Why the policy exists, who it applies to, and which tools and activities are covered. Includes contractors and temporary workers.

2. Approved AI tools list

Register of sanctioned tools with subscription tier requirements, plus a lightweight process for requesting new tool approvals.

3. Prohibited uses

Clear, specific prohibitions with concrete examples. What must never be entered into any AI tool, regardless of subscription tier.

4. Data classification guidance

Green/amber/red framework mapping data categories to what can and cannot be entered into AI tools, with anonymisation examples.

5. Human oversight requirements

Where human review of AI outputs is mandatory before use, publication, or client delivery. Addresses automation bias.

6. Transparency and disclosure

When employees must disclose AI use to clients, colleagues, or the public. Intellectual property and attribution guidance.

7. Incident reporting procedure

How to report policy violations, data exposures, or AI-generated misinformation. Designed to encourage self-reporting.

8. Training requirements

Mandatory AI foundations training, role-specific guidance, and annual refresher requirements with completion tracking.

9. Review schedule and version control

Six-monthly review cycle, version numbering, change log, and named policy owner. Keeps your policy current as AI evolves.

10. 30-day implementation plan

Week-by-week rollout guide covering audit, configuration, training, and verification. Move from template to working policy in a month.

Why this template is different

Most AI policy templates available online are either generic compliance checklists or dense legal documents that no employee will read. This template is different because it's designed by technology consultants who understand how AI is actually used in UK businesses:

Research-backed

Informed by original UK research data on AI adoption patterns, including the finding that 71% of UK employees have used unapproved AI tools at work.

Addresses shadow AI

Includes transition guidance for moving from unapproved to approved tools, rather than pretending shadow AI doesn't exist.

Enables, not restricts

Provides explicit, affirmative permission for common use cases. Addresses the Conscientious Worker Penalty - where 41% of workers in a policy vacuum are split between covert AI users and conscientious employees paralysed by ambiguity.

UK-specific

Aligned with UK GDPR, the Data Use and Access Act 2025, ICO guidance, and the principles from the UK Government's AI White Paper.

How to use the template

  1. Audit current AI use first - survey your teams to understand what tools are already being used (including shadow AI) before customising the template
  2. Download the Word version and customise the placeholder sections with your organisation's specific tools, data classifications, and approval processes
  3. Complete the data classification table - categorise your data into green (open), amber (restricted), and red (prohibited) for AI tool usage
  4. Populate the approved tools register - list the specific tools, subscription tiers, and approved use cases for each
  5. Review with leadership and legal - ensure the policy aligns with your employment contracts and existing policies
  6. Follow the 30-day rollout plan in the appendix to move from draft to operational policy
  7. Delete the guidance boxes before distributing the finalised policy to staff

Tip: Don't aim for perfection on the first version. The best AI policies are living documents that improve through use:

  1. Start with the core sections (scope, approved tools, data classification, prohibited uses)
  2. Launch with a "v1.0" and commit to reviewing after 3 months
  3. Gather feedback from employees on what's unclear or impractical
  4. Update based on real usage patterns and incidents

30-day implementation roadmap

The template appendix includes a detailed week-by-week plan. Here's the overview:

Week 1: Audit and discovery

Survey current AI tool usage across all departments. Identify shadow AI. Review existing policies for conflicts.

Week 2: Draft and review

Customise the template. Populate approved tools list and data classifications. Review with leadership and legal.

Week 3: Communication and training

Launch the policy with all-hands communication. Deliver role-specific training. Establish reporting channels.

Week 4: Verify and refine

Check compliance with department heads. Address questions and edge cases. Document lessons learned for v1.1.

Common pitfalls to avoid

Research shows that poorly designed AI policies can be worse than having no policy at all. Avoid these common mistakes:

Creating a "Department of No"

Blanket bans drive AI use underground. 71% of UK employees have already used unapproved AI tools. A restrictive policy doesn't stop usage - it stops visible usage, which is worse.

Being too vague

"Use AI responsibly" tells employees nothing. Specify which tools, which data, and which use cases are permitted. Ambiguity punishes your most conscientious employees.

Ignoring subscription tiers

A free ChatGPT account and ChatGPT Enterprise have fundamentally different data retention policies. Your policy must specify which tier is approved, not just which tool.

Treating it as a one-off

AI tools evolve monthly. A policy without a review schedule and version control becomes obsolete within weeks. Plan for six-monthly reviews at minimum.

When to use this template

  • Starting from scratch - your organisation has no AI policy and needs one before employees create their own rules
  • Replacing a blanket ban - you've been saying "don't use AI" but know employees are using it anyway
  • After an incident - a data exposure or compliance concern has highlighted the need for formal governance
  • Regulatory preparation - you want to demonstrate compliance with ICO expectations and the Data Use and Access Act 2025
  • Before an AI rollout - you're about to deploy Microsoft Copilot, ChatGPT Team, or other enterprise AI tools and need governance in place first
  • Board or client requirement - your board, insurers, or clients are asking for evidence of AI governance

Frequently asked questions

Is an AI policy a legal requirement in the UK?

There is no single UK law that mandates an AI policy by name. However, UK GDPR requires documented governance for any processing of personal data, and the Data Use and Access Act 2025 requires organisations to implement safeguards for automated decision-making including transparency, contestability, and human review. The ICO expects every organisation using AI tools to have documented governance frameworks. In practice, not having an AI policy creates regulatory, employment, and data protection risks that are straightforward to avoid.

How long should an AI policy be?

The ICO's own internal AI policy is approximately 10 pages. Research shows that policies over 15-20 pages are rarely read in full. Aim for under 10 pages of core policy, with supplementary appendices (like the implementation plan and approved tools register) kept separate. A policy that nobody reads provides zero protection - brevity is a feature, not a limitation.

Should we ban AI tools or allow them?

Allow specific tools with clear guardrails. Research consistently shows that blanket bans are counterproductive: 71% of UK employees have already used unapproved AI tools at work. A ban doesn't stop usage; it drives it underground where there are no safeguards at all. The most effective approach is to build an approved tools list with specific subscription tiers and clear data handling rules for each.

What data should never be entered into any AI tool?

Regardless of which tool or subscription tier, employees should never enter: special category personal data (health records, ethnic origin, political opinions, trade union membership), customer financial data (bank details, payment card numbers), authentication credentials (passwords, API keys, access tokens), legally privileged communications, and any data subject to specific contractual confidentiality obligations. The template includes a complete data classification table with clear green/amber/red categories.

How often should the policy be reviewed?

Every six months at minimum, with interim updates when new AI tools are deployed, regulatory guidance changes, or significant incidents occur. The template includes a version control table and review schedule to help you maintain an audit trail of changes. AI tools evolve rapidly - a policy written in January may be outdated by July if it doesn't account for new capabilities and risks.

Does this template cover the EU AI Act?

This template is designed primarily for UK-based organisations operating under UK law (UK GDPR, Data Use and Access Act 2025, ICO guidance). If your organisation also operates in the EU or serves EU customers, you may need to consider additional EU AI Act requirements, particularly around risk classification and conformity assessments for high-risk AI systems. The UK's principles-based approach differs from the EU's prescriptive risk-based framework, so organisations operating across both jurisdictions may need supplementary guidance.

Does the template work for larger organisations?

The template is designed for UK SMEs (typically under 250 employees) but the framework scales well to larger organisations. Enterprises may need to add department-specific appendices, integrate with existing governance frameworks (ISO 27001, SOC 2), and implement more sophisticated technical controls such as data loss prevention policies in Microsoft Purview. The core 10-section structure remains appropriate at any organisational scale.

Full template preview

Read the complete template below, or download it in HTML or Word format.

1. Purpose and Scope

1.1 Purpose

Guidance: Explain why this policy exists. Connect it to your organisational values and business objectives. The most effective purpose statements frame the policy as enabling responsible AI use, not restricting it.

This policy establishes clear guidelines for the acceptable use of artificial intelligence (AI) tools within [Organisation Name]. Its purpose is to:

  • Enable employees to use AI tools productively and confidently, with clear boundaries
  • Protect customer data, employee privacy, intellectual property, and commercially sensitive information
  • Ensure compliance with UK data protection legislation (UK GDPR, Data Protection Act 2018, Data Use and Access Act 2025)
  • Address the ICO's five core principles for AI governance: safety, transparency, fairness, accountability, and contestability
  • Prevent shadow AI by providing clear, approved alternatives to unapproved tools

Why this matters: Research shows that 41% of UK desk workers operate in a total policy vacuum - no official AI guidance at all. This vacuum creates a two-tier workforce: risk-takers use AI tools without guardrails, while conscientious employees abstain entirely, losing over 400 hours of productivity per year. This policy exists to close that gap - providing explicit permission for bounded use cases so that all employees can benefit from AI within safe limits.

1.2 Scope

Guidance: Be specific about who and what this policy covers. Include all worker categories and all types of AI tools, including embedded features within existing software.

1.2.1 Who This Policy Applies To

This policy applies to all:

  • Full-time and part-time employees
  • Contractors and freelancers working on behalf of [Organisation Name]
  • Temporary and agency workers
  • Directors and senior leadership
  • Volunteers (where applicable)

1.2.2 What This Policy Covers

This policy covers all AI tools and services, including but not limited to:

  • Standalone AI applications: ChatGPT, Claude, Google Gemini, Midjourney, DALL-E, and similar services accessed via web browser or app
  • Embedded AI features: Microsoft 365 Copilot, Grammarly, Adobe Firefly, GitHub Copilot, and AI features built into existing software
  • Custom AI tools: Any AI models, chatbots, or automation tools built or deployed specifically for [Organisation Name]
  • AI-powered search: AI-enhanced search features in Bing, Google, and other platforms

1.2.3 Related Policies

Guidance: List your existing policies that intersect with AI governance. This prevents conflicts and shows employees where to find related guidance.

This policy should be read alongside:

  • [Data Protection / Privacy Policy]
  • [IT Acceptable Use Policy]
  • [Information Security Policy]
  • [Intellectual Property Policy]
  • [Confidentiality / NDA Agreements]
  • [Social Media Policy]

1.3 Edge Cases and Special Circumstances

Guidance: These common edge cases should be addressed during your implementation. Decide your organisation's position on each and document it here. These are core policy rules, not optional extras.
  • Contractors and freelancers: [e.g., Must comply with this policy. AI policy compliance to be included in contractor agreements. Contractors may not use personal AI tool subscriptions for work involving our data.]
  • Personal devices (BYOD): [e.g., Approved AI tools may be used on personal devices only if the device meets our minimum security requirements. Free-tier AI tools must not be used on any device for work purposes.]
  • Remote workers: [e.g., Same policy applies regardless of location. Remote workers must use VPN when accessing AI tools with amber-classified data.]
  • Embedded AI features: [e.g., AI features embedded in approved software (e.g., Smart Compose in Gmail, Copilot in M365) are approved by default under the same data classification rules as the host software.]
  • AI for personal learning: [e.g., Employees may use approved AI tools for professional development during work hours, provided no confidential data is entered. We encourage completion of the UK Government's free AI Skills Boost programme.]
  • Client-facing vs internal use: [e.g., Higher review requirements apply to client-facing AI outputs. All client-facing content must be reviewed by account lead before delivery. Internal use follows standard review requirements.]
  • International teams: [e.g., Team members operating in the EU must also comply with EU AI Act requirements. The most stringent applicable requirement applies. Contact compliance team for guidance.]

2. Approved AI Tools

Guidance: List every AI tool your organisation has sanctioned. Critically, specify the subscription tier - a free ChatGPT account has fundamentally different data retention from ChatGPT Enterprise. Be specific about which departments can use which tools and for what purposes.

2.1 Approved Tools Register

  • Tool: [e.g., Microsoft 365 Copilot]. Tier: [e.g., Business Premium licence]. Approved for: [e.g., All departments]. Data restrictions: [e.g., Green and amber data only]. Review required: [e.g., Yes, for client-facing outputs]
  • Tool: [e.g., ChatGPT]. Tier: [e.g., Team account only - NOT personal free accounts]. Approved for: [e.g., All departments]. Data restrictions: [e.g., Green data only]. Review required: [e.g., Yes, for external communications]
  • Tool: [e.g., GitHub Copilot]. Tier: [e.g., Business licence]. Approved for: [e.g., Engineering only]. Data restrictions: [e.g., Internal code only, no client code]. Review required: [e.g., Yes, all outputs must pass code review]
  • Tool: [e.g., Midjourney]. Tier: [e.g., Pro plan]. Approved for: [e.g., Marketing only]. Data restrictions: [e.g., Non-client-facing concept work only]. Review required: [e.g., No, for internal concepts; yes, for published content]

Important: Free-tier AI tools (such as the free version of ChatGPT) typically retain user inputs for model training. Enterprise or business tiers typically operate under contractual commitments not to train on customer data. Your policy must specify which tier is approved - "ChatGPT" alone is not specific enough.
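The tier rule above lends itself to a machine-readable register. A minimal sketch, assuming the example tools and tiers from the register (all names and limits are illustrative placeholders, not vendor guidance):

```python
# Illustrative approved-tools register: an exact (tool, tier) pairing maps
# to the highest data classification it may receive. Placeholders only.
APPROVED_TOOLS = {
    ("microsoft 365 copilot", "business premium"): "amber",
    ("chatgpt", "team"): "green",
    ("github copilot", "business"): "amber",
}

RANK = {"green": 0, "amber": 1}  # red data never enters any tool (Section 4)

def is_use_approved(tool: str, tier: str, data_class: str) -> bool:
    """Approve only an exact (tool, tier) pairing - 'ChatGPT' alone is
    never specific enough - and only up to that pairing's data limit."""
    limit = APPROVED_TOOLS.get((tool.lower(), tier.lower()))
    if limit is None or data_class.lower() not in RANK:
        return False
    return RANK[data_class.lower()] <= RANK[limit]

# The same tool on the wrong tier is rejected:
assert is_use_approved("ChatGPT", "Team", "green")
assert not is_use_approved("ChatGPT", "Free", "green")
assert not is_use_approved("Microsoft 365 Copilot", "Business Premium", "red")
```

Keying the lookup on the tool-and-tier pair, rather than the tool name, is what enforces the point in the note above: "ChatGPT" on a free account fails the check even though ChatGPT appears in the register.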

2.2 Tools Not on This List

Any AI tool not listed in Section 2.1 is not approved for use with any work-related data or tasks. If you believe a tool should be added, submit an evaluation request using the process below.

2.3 New Tool Approval Process

Guidance: Make this process lightweight. If requesting a new tool requires three approvals and a two-week wait, employees will simply use unapproved tools instead. A simple form, a clear timeframe, and a written decision builds trust.

To request approval for a new AI tool:

  1. Complete the AI Tool Evaluation Form (Appendix B)
  2. Submit to [email address / form link / person]
  3. Evaluation will be completed within [5 business days]
  4. You will receive a written decision: approved, conditionally approved, or not approved, with reasoning

Evaluation criteria include:

  • Data handling and retention policies of the tool provider
  • Whether the tool trains on user input data
  • Compliance with UK GDPR requirements
  • Security certifications (SOC 2, ISO 27001, or equivalent)
  • Business justification and expected benefit
  • Whether an existing approved tool can meet the same need
  • Data residency and transfer locations (e.g., does the tool process or store data outside the UK/EEA without a valid transfer mechanism?)

2.4 Transitioning Existing Unapproved Tools (The Amnesty Period)

Guidance: If you are publishing this policy for the first time, your employees may already be using unapproved AI tools. An amnesty period encourages honest disclosure so you can assess risk and transition workflows safely. Without an amnesty, employees will hide existing usage to avoid disciplinary action under Section 10.

We recognise that in the absence of a clear policy, employees may have used unapproved AI tools to maintain productivity. For the first 30 days following the publication of this policy, we are operating an amnesty period:

  • You may declare any unapproved AI tools you have been using to [IT contact / email / form] without risk of disciplinary action
  • IT will evaluate declared tools for official approval and help you transition your workflows to approved alternatives where needed
  • Any data exposure risks identified during this period will be treated as learning opportunities, not policy breaches
  • After the amnesty period ends on [date], continued use of unapproved tools will be subject to the breach consequences in Section 10

Why we're doing this: The purpose of this amnesty is to get a complete picture of AI tool usage across the organisation so we can manage risk properly. Punishing people for past behaviour that wasn't governed by any policy would be unfair and would discourage the honest reporting we need.

3. Prohibited Uses

Guidance: Be specific and concrete. "Do not enter restricted data" is too vague - employees can't determine what counts as restricted. Provide explicit examples so employees can make confident decisions. These prohibitions apply regardless of which tool or subscription tier is used.

3.1 Absolute Prohibitions

The following uses of AI tools are prohibited under all circumstances, regardless of which tool or subscription tier:

  • Entering special category personal data. Examples: health records, ethnic origin, political opinions, religious beliefs, trade union membership, biometric data, sexual orientation. Why: UK GDPR Article 9 imposes heightened protections; DUAA 2025 maintains restrictions for special category data in automated decisions.
  • Entering customer financial data. Examples: bank account numbers, payment card details, credit scores, salary information. Why: financial data requires specific legal bases for processing and heightened security controls.
  • Entering authentication credentials. Examples: passwords, API keys, access tokens, encryption keys, security certificates. Why: credentials entered into AI tools may be stored, logged, or exposed through data breaches.
  • Entering legally privileged material. Examples: legal advice, litigation strategy, settlement discussions, regulatory correspondence. Why: legal privilege may be waived if privileged material is shared with third-party AI tools.
  • Making automated decisions about people without human review. Examples: hiring decisions, performance ratings, disciplinary actions, credit decisions, customer risk scoring. Why: UK GDPR and DUAA 2025 require meaningful human involvement in decisions significantly affecting individuals.
  • Creating misleading or deceptive content. Examples: deepfakes, impersonation of real people, fabricated testimonials, fake reviews. Why: legal liability under consumer protection, defamation, and fraud laws.
  • Using AI outputs in regulatory submissions without senior review. Examples: tax returns, regulatory filings, statutory declarations, compliance reports. Why: AI tools generate plausible but sometimes inaccurate content; regulatory submissions require verification.

Real-world example: In 2023, Samsung engineers entered proprietary source code and internal meeting notes into ChatGPT's free tier on three separate occasions. The data became part of OpenAI's training data and could not be recovered. Samsung banned all employees from public generative AI tools and began developing an in-house alternative. Clear data classification and approved tool policies would have prevented this entirely.

3.2 Conditional Restrictions

The following uses require manager approval before proceeding:

  • Entering confidential (amber-classified) data into approved business-grade tools
  • Using AI-generated content in client deliverables
  • Using AI to draft communications to regulators or government bodies
  • Using AI to analyse employee performance data (even if anonymised)
  • [Add organisation-specific conditional restrictions]

3.3 When in Doubt

If you are unsure whether a specific use case is permitted, do not avoid the tool entirely. Instead, contact [designated contact / email / channel] for guidance. We aim to respond within [1 business day].

Note: This "when in doubt" process exists because research shows that policy ambiguity creates a two-tier workforce - conscientious employees abstain entirely while less cautious colleagues proceed without safeguards. Asking for guidance is always the right call - it will never result in disciplinary action.

4. Data Classification

Guidance: This section is the most operationally important part of the policy. Employees need a simple mental model to decide what data they can enter into AI tools. The green/amber/red framework below provides that model. Customise the examples for your organisation.

4.1 Data Classification Framework

GREEN (Open)

  • Description: Information that is already public or has no confidentiality requirement
  • Examples: Published marketing content, press releases, public website copy, job adverts, general industry knowledge
  • AI tool rules: May be entered into any approved AI tool, any tier. No prior approval needed.

AMBER (Restricted)

  • Description: Internal information that could cause harm if disclosed but is not subject to heightened legal protection
  • Examples: Internal meeting notes, project plans, process documentation, anonymised performance data, draft strategies, internal communications, standard B2B business contact information (e.g., a client's work email address)
  • AI tool rules: May be entered into approved business-grade tools only (e.g., Microsoft Copilot Enterprise, ChatGPT Team/Enterprise). Not free-tier tools. Manager approval recommended for sensitive items.

RED (Prohibited)

  • Description: Information subject to heightened legal protection, contractual confidentiality, or significant business sensitivity
  • Examples: Sensitive personal data (e.g., consumer details, home addresses), special category data, customer financial data, health records, employee personal data, trade secrets, proprietary algorithms, passwords, legal advice, contractual terms
  • AI tool rules: Must never be entered into any AI tool under any circumstances. If AI analysis is needed, data must first be fully anonymised (see Section 4.2).

4.2 Anonymisation Guidance

Guidance: Employees can often benefit from AI analysis of sensitive data by anonymising it first. Provide concrete examples so they know what effective anonymisation looks like.

You may use AI tools to analyse data that would otherwise be classified as red, provided you first remove all identifying information. Effective anonymisation means a reasonable person could not identify any individual or organisation from the data.

  • Before (DO NOT enter): "Customer John Smith (account 12345) complained about delivery delays to Manchester on 15 March"
    After (may enter into approved tools): "A customer complained about delivery delays to a northern city in mid-March"
  • Before (DO NOT enter): "Sarah Jones in Finance earned £45,000 last year and received a 3% raise"
    After (may enter into approved tools): "A mid-level finance employee earned between £40k-50k and received a standard annual increase"
  • Before (DO NOT enter): "Our contract with Acme Ltd includes a 12% discount on orders over £50,000"
    After (may enter into approved tools): "Our supplier agreements typically include volume discounts of 10-15% on large orders"

Caution: Simply removing names is not always sufficient. If the remaining data includes enough detail to identify someone (e.g., "the only female director" or "the client in our Leeds office"), it has not been effectively anonymised. Consider whether the combination of remaining details could identify an individual. Under UK GDPR, pseudonymised data (where identifiers are replaced with codes) is still personal data and still subject to data protection requirements.
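A short sketch of why automated redaction alone is not enough. The patterns below are illustrative assumptions applied to the first example above; the residue they leave is the point of the caution.

```python
import re

# Crude redaction pass: strip account numbers and naive "First Last" names.
# These patterns are illustrative, not a recommended redaction ruleset.
PATTERNS = {
    "account number": re.compile(r"\baccount\s+\d+\b", re.IGNORECASE),
    "name": re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"),  # naive "First Last"
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

before = ("Customer John Smith (account 12345) complained about "
          "delivery delays to Manchester on 15 March")
after = redact(before)
# "John" and "12345" are gone, but "Smith" and "Manchester" survive the
# naive patterns - automated redaction still needs a human check.
assert "John" not in after and "12345" not in after
assert "Smith" in after and "Manchester" in after
```

The surviving surname and location illustrate the caution above: removing obvious identifiers is not the same as effective anonymisation, because the remaining details may still identify someone in combination.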

4.3 Common Anonymisation Mistakes

Guidance: These are the most frequent errors organisations make when anonymising data for AI use. Share these with your team to improve data handling practices.

  • Removing only names - dates, locations, job titles, and transaction amounts can still identify individuals when combined
  • Confusing pseudonymisation with anonymisation - replacing "John Smith" with "Employee A" is pseudonymisation, not anonymisation; the data is still personal data under UK GDPR
  • Anonymising one dataset but not another - if your anonymised dataset can be cross-referenced with another dataset to identify individuals, it has not been effectively anonymised
  • Assuming AI won't infer identity - AI tools can sometimes infer identity from contextual clues that humans might not notice

4.4 Data Classification Decision Aid

If you are unsure how to classify data, use this quick test:

  1. Is it already publicly available? → GREEN
  2. Could it identify a specific person? → RED (unless anonymised)
  3. Is it subject to a confidentiality agreement? → RED
  4. Would disclosure cause significant commercial harm? → RED
  5. Is it internal but not sensitive? → AMBER
  6. Still unsure? → Treat as AMBER and seek guidance from [designated contact]
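The six-step test above can be sketched as a simple function. The question names and ordering follow the list; the boolean answers are assumed to be supplied honestly by the employee.

```python
# Sketch of the data classification decision aid (Section 4.4).
def classify(is_public: bool, identifies_person: bool, is_anonymised: bool,
             under_nda: bool, commercially_harmful: bool) -> str:
    if is_public:
        return "GREEN"   # 1. already publicly available
    if identifies_person and not is_anonymised:
        return "RED"     # 2. could identify a specific person
    if under_nda:
        return "RED"     # 3. subject to a confidentiality agreement
    if commercially_harmful:
        return "RED"     # 4. disclosure would cause significant harm
    return "AMBER"       # 5/6. internal but not sensitive, or unsure

assert classify(True, False, False, False, False) == "GREEN"
assert classify(False, True, False, False, False) == "RED"
assert classify(False, True, True, False, False) == "AMBER"   # anonymised
assert classify(False, False, False, False, False) == "AMBER"
```

Note that "unsure" defaults to AMBER rather than GREEN, matching step 6: the safe fallback is the restricted tier plus a request for guidance, not open use.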

5. Human Oversight Requirements

Guidance: AI outputs can be convincingly wrong. This section establishes where human review is mandatory. Be specific about who must review what, and what "review" actually means in practice. This section also addresses the Data Use and Access Act 2025's requirement for meaningful human involvement in significant automated decisions.

5.1 Mandatory Human Review

AI-generated content must be reviewed by a qualified human before being:

  • Content sent to clients or external stakeholders. Reviewer: [e.g., line manager or account lead]. Requirements: check accuracy, tone, and appropriateness; verify any factual claims against primary sources.
  • Published content (website, social media, marketing). Reviewer: [e.g., marketing manager or content lead]. Requirements: full editorial review including brand alignment, factual accuracy, and disclosure requirements.
  • Financial analysis or recommendations. Reviewer: [e.g., finance director or qualified accountant]. Requirements: verify calculations independently; cross-check against source data; AI-generated financial figures must never be trusted without verification.
  • Legal or compliance content. Reviewer: [e.g., legal counsel or compliance officer]. Requirements: senior review required; AI-generated legal advice is not a substitute for qualified legal counsel.
  • Decisions affecting individuals (hiring, performance, customer scoring). Reviewer: [e.g., senior manager with HR]. Requirements: document whether the AI recommendation was accepted, modified, or overridden, with reasoning; required under DUAA 2025.
  • Code and technical outputs. Reviewer: [e.g., senior developer]. Requirements: standard code review process; check for security vulnerabilities, licensing issues, and correctness.
  • Autonomous actions (agentic AI) executing workflows (e.g., sending emails, making purchases, altering database records). Reviewer: [e.g., IT security lead]. Requirements: AI agents must never be granted autonomous "write" or "send" permissions without explicit IT security clearance; all AI agents must operate in "human-in-the-loop" (draft only) mode by default.

5.2 Uses That Do Not Require Prior Review

Guidance: It is equally important to state what does NOT require review. If every use of AI requires approval, employees will either avoid AI entirely (the Conscientious Worker Penalty) or bypass the process. Explicitly permitting low-risk uses builds trust and compliance.

The following uses of approved AI tools do not require prior review:

  • Brainstorming and idea generation for internal purposes
  • Drafting internal emails or messages (where the employee reviews before sending)
  • Researching general (non-confidential) topics
  • Summarising publicly available information
  • Grammar and spelling checks on your own writing
  • Learning and professional development activities
  • [Add organisation-specific permitted uses]

5.3 Guarding Against Automation Bias

Automation bias occurs when human reviewers defer to AI recommendations without meaningful scrutiny. When reviewing AI outputs, you should:

  • Treat AI outputs as a first draft, not a finished product
  • Verify factual claims against primary sources - AI tools can present false information confidently
  • Question whether the AI's recommendation makes sense given your professional knowledge
  • Be aware that AI outputs may contain outdated information, biases from training data, or logical errors
  • Document your review, including any changes made, for audit purposes
  • If you almost never override an AI recommendation, examine whether you are conducting genuine review or simply approving outputs by default

Automation bias warning: Research shows that humans are particularly susceptible to automation bias when under time pressure, when they lack training on the AI system's limitations, or when organisational culture discourages overriding AI recommendations. Meaningful review requires adequate time, training, and genuine authority to disagree with AI outputs.

6. Transparency and Disclosure

Guidance: This section addresses when employees must disclose AI use. Be pragmatic: requiring disclosure for every email spell-checked by AI is impractical; requiring disclosure for AI-generated client deliverables is reasonable. Balance transparency with operational practicality.

6.1 When Disclosure Is Required

You must disclose that AI tools were used in the following situations:

  • Client deliverables where AI substantially contributed to the content (not just minor editing or grammar checks)
  • Published content (blogs, reports, white papers) where AI generated significant portions of the text
  • Recruitment decisions where AI tools were used to screen, score, or rank candidates
  • Any decision affecting an individual where automated processing contributed to the outcome (required under DUAA 2025)
  • [Add organisation-specific disclosure requirements]

6.2 When Disclosure Is Not Required

You do not need to disclose AI use for:

  • Grammar and spelling checks
  • Internal brainstorming and idea generation
  • Research and information gathering for your own use
  • Minor editing or reformatting of your own writing

6.3 Intellectual Property

All AI-generated content created using [Organisation Name]'s tools, accounts, or data is the property of [Organisation Name], not the individual employee.

Employees should be aware that:

  • While UK law (CDPA 1988 s.9(3)) provides some protection for computer-generated works, content generated entirely by AI may not be eligible for copyright protection if it lacks sufficient human originality, and this protection is often not recognised internationally
  • AI tools may generate content that inadvertently infringes third-party intellectual property. As the Getty Images v Stability AI litigation illustrates, prompting AI to reproduce copyrighted works or styles creates infringement risk
  • Human review of AI outputs should include a check for potential IP issues (e.g., content that closely resembles known copyrighted works)
  • Do not rely on AI-generated content as core intellectual property without understanding that ownership may be contested

7. Incident Reporting

Guidance: The incident reporting process should encourage self-reporting, not punish it. If employees fear severe consequences for reporting mistakes, they will hide them. Violations discovered through self-reporting should be treated more favourably than violations discovered through audit. The goal is to learn and improve, not to catch people out.

7.1 What to Report

Report any of the following to [email address / person / channel]:

  • Accidental entry of red-classified data into an AI tool
  • Discovery that an AI output contained inaccurate information that was or could have been shared externally
  • Use of an unapproved AI tool for work purposes
  • Suspicion that AI-generated content has been published without required review
  • Discovery of a data breach related to AI tool usage
  • Any situation where you believe this policy has been violated (by yourself or others)

7.2 How to Report

You can report AI-related incidents through any of these channels:

  • Email: [incident reporting email]
  • Form: [link to incident report form]
  • In person: Speak confidentially with [designated person / department]

7.3 What Happens After a Report

  1. All reports will be acknowledged within [1 business day]
  2. An investigation will assess what happened, the potential impact, and root causes
  3. Findings will be documented and, where appropriate, used to update this policy or improve training
  4. The reporter will be informed of the outcome and any actions taken
  5. Significant incidents will be logged in a central register for pattern analysis

Our commitment: Self-reporting a genuine mistake will always be treated more favourably than a violation discovered through audit. We encourage a culture of transparency where reporting mistakes helps everyone learn and improve our practices.
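Organisations that want to keep the central register from step 5 in machine-readable form could start from a simple record structure. The field names below are illustrative assumptions to adapt, not part of the policy template itself:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative field names only - adapt to your own register.
@dataclass
class IncidentReport:
    reported_on: date
    reporter: str
    channel: str          # e.g. "email", "form", "in person"
    category: str         # e.g. "red-data entry", "unapproved tool"
    description: str
    self_reported: bool
    outcome: str = ""     # completed after investigation

# The central register is just a list of reports, which can later be
# filtered for pattern analysis (step 5 above).
register: list[IncidentReport] = []
register.append(IncidentReport(
    reported_on=date(2026, 3, 2),
    reporter="A. Employee",
    channel="form",
    category="red-data entry",
    description="Customer postcode pasted into an unapproved chatbot.",
    self_reported=True,
))

# Example pattern query: how many incidents were self-reported?
self_reported_count = sum(1 for r in register if r.self_reported)
```

Even a spreadsheet with these columns is enough; the point is that every report lands in one place with consistent fields, so trends (for example, repeated red-data entries from one department) become visible.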

8. Training Requirements

Guidance: Training is the single most important factor in policy compliance. Organisations with role-specific AI training see compliance rates of 50-60%, compared to 15-25% for organisations that distribute policies without training. Specify what training is required, for whom, and how often.

8.1 Mandatory Training

Training | Who | When | Duration
AI Foundations Briefing | All staff | Within 30 days of this policy's effective date (or within 30 days of joining) | [e.g., 30 minutes]
Role-Specific AI Training | Staff in departments using AI tools | Within 60 days of this policy's effective date | [e.g., 1 hour per department]
Annual Refresher | All staff | Annually, aligned with policy review cycle | [e.g., 20 minutes]
Data Classification Workshop | Managers and team leads | Within 30 days of this policy's effective date | [e.g., 45 minutes]

8.2 Training Content

The AI Foundations Briefing covers:

  • What AI tools are, how they work at a high level, and their limitations
  • This policy's key requirements: approved tools, data classification, prohibited uses
  • How to identify and report policy concerns
  • Where to get help and ask questions

Role-Specific Training covers:

  • Which approved tools are relevant to the department's work
  • Specific use cases and worked examples for the department
  • Data classification decisions common to the department's data
  • Review requirements specific to the department's outputs

8.3 Free Resources

The UK Government offers free AI foundations training as part of its goal to upskill 10 million workers by 2030. Employees are encouraged to supplement organisational training with these resources for professional development.

9. Review Schedule and Version Control

9.1 Review Cycle

This policy will be formally reviewed:

  • Every six months (at minimum), with the next scheduled review on [date]
  • Whenever a new AI tool is deployed across the organisation
  • Whenever relevant UK legislation or ICO guidance changes
  • Following any significant AI-related incident

9.2 Policy Owner

This policy is owned by [Name, Role], who is responsible for maintaining, reviewing, and communicating updates. Feedback and improvement suggestions should be directed to [email address].

9.3 Change Log

Version | Date | Author | Summary of Changes
1.0 | [Date] | [Name] | Initial policy published
[1.1] | [Date] | [Name] | [Summary of changes]

10. Breach Consequences

Guidance: A graduated framework maintains fairness while creating incentives for compliance. Policies that threaten immediate termination for any violation are rarely followed because employees perceive them as unreasonable. Recognise that mistakes happen, especially with new technology, and distinguish between genuine errors and deliberate violations.

10.1 Graduated Response Framework

Level | Situation | Typical Response
Level 1: Unintentional | First-time violation due to misunderstanding, lack of training, or genuine mistake. Self-reported. | Informal guidance, additional training on the relevant policy section, documentation of the incident for learning purposes.
Level 2: Repeated or Negligent | Second violation, or first violation where the employee should reasonably have known the policy. Not self-reported. | Formal written warning, mandatory policy retraining, temporary restriction on AI tool access pending completion of training.
Level 3: Serious | Deliberate violation, or any violation involving regulated personal data, customer harm, or significant commercial risk. | Formal disciplinary action in accordance with [Organisation Name]'s disciplinary procedure, which may include suspension or termination of employment.

Important: Self-reporting a policy violation will always be considered a mitigating factor. We recognise that AI is a new and evolving domain, and our primary goal is to improve practices through learning, not to punish people for genuine mistakes.
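The framework above is ultimately a human judgement, but its core logic can be sketched as a simple decision rule. This is an illustrative simplification, not part of the template: the "should reasonably have known" test in Level 2 requires human assessment, and is approximated here by whether the incident was self-reported.

```python
def response_level(deliberate: bool,
                   serious_impact: bool,
                   prior_violations: int,
                   self_reported: bool) -> int:
    """Map a violation to Levels 1-3 of the graduated framework.

    'serious_impact' stands in for regulated personal data, customer
    harm, or significant commercial risk. This sketch approximates the
    'should reasonably have known' test with the self-reported flag.
    """
    if deliberate or serious_impact:
        return 3                       # Level 3: Serious
    if prior_violations > 0 or not self_reported:
        return 2                       # Level 2: Repeated or Negligent
    return 1                           # Level 1: Unintentional

# A first-time, self-reported mistake lands at Level 1.
first_mistake = response_level(deliberate=False, serious_impact=False,
                               prior_violations=0, self_reported=True)
```

Note how self-reporting acts as a mitigating factor by keeping a first genuine mistake at Level 1, which mirrors the commitment stated in section 7.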

Appendix A: 30-Day Implementation Plan

Guidance: This plan provides a practical week-by-week framework for moving from template to operational policy. Adapt timeframes to your organisation's size and complexity. For organisations under 50 employees, the plan may be achievable in 2-3 weeks; for larger organisations, allow the full 30 days.

Week 1: Audit and Discovery (Days 1-7)

Day | Action | Owner | Output
1-2 | Survey all departments: which AI tools are currently in use (including personal/unapproved tools)? | [IT / Compliance] | AI tool usage inventory
3-4 | Review existing policies (data protection, IT acceptable use, confidentiality) for potential conflicts with AI policy | [Compliance / Legal] | Policy conflict analysis
5-7 | Classify organisational data into green/amber/red categories. Identify which data types each department handles. | [Department heads + IT] | Completed data classification table

Week 2: Draft and Review (Days 8-14)

Day | Action | Owner | Output
8-10 | Customise this template: populate approved tools register, data classifications, and organisation-specific sections | [IT + Compliance] | Draft AI policy v0.1
11-12 | Review draft with leadership team. Check alignment with business objectives and risk appetite. | [Senior leadership] | Leadership-approved draft v0.2
13-14 | Legal review (if applicable). Ensure consistency with employment contracts and regulatory obligations. | [Legal advisor] | Final policy v1.0

Week 3: Communication and Training (Days 15-21)

Day | Action | Owner | Output
15 | All-hands communication: announce the new policy. Explain the purpose (enable, not restrict). Share the document. | [Senior leader / CEO] | Policy announcement
16-18 | Deliver AI Foundations Briefing to all staff (can be done in department groups) | [IT / Training lead] | Training completion records
19-21 | Deliver role-specific training to departments that use AI tools regularly | [Department leads + IT] | Department-specific training completion

Week 4: Verify and Refine (Days 22-30)

Day | Action | Owner | Output
22-24 | Department heads verify: all team members have completed training, approved tools are correctly configured, reporting channels are operational | [Department heads] | Compliance verification checklist
25-27 | Collect feedback from employees: what's unclear? What's impractical? What use cases aren't covered? | [Compliance / HR] | Feedback log
28-30 | Address feedback, update policy if needed (issue v1.1), document lessons learned, set next review date | [Policy owner] | Updated policy + review calendar

Appendix B: AI Tool Evaluation Form

Guidance: Use this form when an employee requests approval for a new AI tool. Keep the process lightweight to avoid driving employees to use unapproved tools instead.

Field | Response
Requested by | [Name, Department]
Date requested | [Date]
Tool name and provider | [e.g., Claude by Anthropic]
Subscription tier requested | [e.g., Team plan]
Intended use case | [What will it be used for?]
Department(s) that will use it | [Which teams?]
Estimated number of users | [Number]
What data will be entered? | [Green / Amber / Red classification]
Why can't an existing approved tool meet this need? | [Justification]
Does the provider retain user input for training? | [Yes / No / Depends on tier]
Provider security certifications | [SOC 2, ISO 27001, etc.]
Cost per user per month | [Amount]

Evaluation Decision

Decision | [Approved / Conditionally Approved / Not Approved]
Conditions (if applicable) | [Any restrictions or requirements]
Evaluated by | [Name, Role]
Date decided | [Date]
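Teams that track requests in a shared system rather than a document could mirror the form as a record type. The structure below is a hypothetical sketch of the same fields, not a prescribed schema:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record mirroring the evaluation form fields above.
@dataclass
class ToolEvaluationRequest:
    requested_by: str
    tool_name: str
    provider: str
    intended_use: str
    departments: list[str]
    estimated_users: int
    data_classification: str            # "green", "amber" or "red"
    existing_tool_gap: str              # why approved tools don't suffice
    retains_input_for_training: bool
    security_certifications: list[str]  # e.g. ["SOC 2", "ISO 27001"]
    monthly_cost_per_user: float
    decision: Optional[str] = None      # filled in after evaluation

request = ToolEvaluationRequest(
    requested_by="J. Smith, Marketing",
    tool_name="Claude",
    provider="Anthropic",
    intended_use="Drafting campaign copy",
    departments=["Marketing"],
    estimated_users=6,
    data_classification="amber",
    existing_tool_gap="No approved tool supports long-form drafting",
    retains_input_for_training=False,
    security_certifications=["SOC 2"],
    monthly_cost_per_user=25.0,
)

# A simple screening rule: red-classified data always needs escalation.
needs_escalation = request.data_classification == "red"
```

Keeping the register structured makes the lightweight process easier to audit: every approved tool has a recorded justification, data classification, and decision date.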

Appendix C: Glossary

Term | Definition
AI (Artificial Intelligence) | Technology that enables computers to perform tasks that typically require human intelligence, such as understanding language, generating text, analysing data, or creating images.
Generative AI | AI systems that can create new content (text, images, code, audio) based on patterns learned from training data. Examples include ChatGPT, Claude, Midjourney, and Microsoft Copilot.
Shadow AI | The use of AI tools by employees without organisational knowledge or approval. Often driven by lack of approved alternatives or overly restrictive policies.
Hallucination | When an AI tool generates information that sounds authoritative but is factually incorrect. AI tools do not "know" things - they predict likely responses, which can include plausible but false statements.
Automation Bias | The tendency for humans to defer to AI recommendations without sufficient critical scrutiny, particularly when the AI output is presented with confidence.
Data Anonymisation | The process of removing personally identifiable information from data so that individuals cannot be identified from the remaining information.
UK GDPR | The UK General Data Protection Regulation, the primary UK law governing the processing of personal data. Retained from EU law and supplemented by the Data Protection Act 2018.
DUAA 2025 | The Data Use and Access Act 2025, UK legislation that reformed automated decision-making requirements, shifting from prohibition-based to principles-based governance with mandatory ex-post safeguards.
Special Category Data | Personal data revealing racial or ethnic origin, political opinions, religious beliefs, trade union membership, genetic or biometric data, health data, or sexual orientation. Subject to heightened protections under UK GDPR Article 9.
Conscientious Worker Penalty | The phenomenon where vague AI policies cause cautious employees to avoid AI entirely while less cautious employees use it without safeguards, resulting in worse outcomes than a clear, permissive policy.
Agentic AI | AI systems that can autonomously plan, execute, and adapt actions to achieve goals with minimal human intervention. The ICO confirmed in January 2026 that agentic systems remain fully subject to UK GDPR. Policies should be reviewed as agentic AI becomes more prevalent in workplace tools.

Appendix D: Sign-Off

By signing below, the parties confirm that they have reviewed this AI Acceptable Use Policy and approve it for implementation across [Organisation Name].

Policy Owner Sign-Off

Name:
Role:
Signature:
Date:

Senior Leadership Sign-Off

Name:
Role:
Signature:
Date:

Related tools

Digital maturity assessment

Assess your organisation's digital readiness across five key dimensions: technology, skills, processes, data, and strategy.

Software requirements specification (SRS) template

A comprehensive SRS template with all 8 essential sections, guidance notes, and example content. Perfect for commissioning bespoke software.

Need help implementing AI governance?

Whether you need help customising this template, training your team, or implementing technical controls, we can help you get AI governance right first time.