AI Acceptable Use Policy

[Organisation Name]

Policy Owner: [Name / Role]
Approved by: [Senior Leader / Board]
Version: 1.0
Effective Date: [Date]
Next Review Date: [Date + 6 months]

Document History

Version Date Author Changes
0.1 [Date] [Name] Initial draft
0.2 [Date] [Name] Reviewed by legal / leadership
1.0 [Date] [Name] Approved and published

Table of Contents

1. Purpose and Scope (including Edge Cases)

2. Approved AI Tools (including Shadow AI Amnesty)

3. Prohibited Uses

4. Data Classification

5. Human Oversight Requirements

6. Transparency and Disclosure

7. Incident Reporting

8. Training Requirements

9. Review Schedule and Version Control

10. Breach Consequences

Appendix A: 30-Day Implementation Plan

Appendix B: AI Tool Evaluation Form

Appendix C: Glossary

Appendix D: Sign-Off

1. Purpose and Scope

1.1 Purpose

Guidance: Explain why this policy exists. Connect it to your organisational values and business objectives. The most effective purpose statements frame the policy as enabling responsible AI use, not restricting it.

This policy establishes clear guidelines for the acceptable use of artificial intelligence (AI) tools within [Organisation Name]. Its purpose is to:

  - Enable employees to use AI productively within clear, safe boundaries
  - Protect confidential, personal, and commercially sensitive data
  - Ensure compliance with UK GDPR, the Data Protection Act 2018, and DUAA 2025
  - Set clear expectations for human review and disclosure of AI use
  - Provide a fast, transparent route for approving new AI tools

Why this matters: Research shows that 41% of UK desk workers operate in a total policy vacuum - no official AI guidance at all. This vacuum creates a two-tier workforce: risk-takers use AI tools without guardrails, while conscientious employees abstain entirely, losing over 400 hours of productivity per year. This policy exists to close that gap - providing explicit permission for bounded use cases so that all employees can benefit from AI within safe limits.

1.2 Scope

Guidance: Be specific about who and what this policy covers. Include all worker categories and all types of AI tools, including embedded features within existing software.

1.2.1 Who This Policy Applies To

This policy applies to all:

  - Permanent and fixed-term employees
  - Contractors and freelancers working with our data or systems
  - Temporary and agency workers
  - Interns, apprentices, and volunteers
  - [Board members and external advisers acting on the organisation's behalf]

1.2.2 What This Policy Covers

This policy covers all AI tools and services, including but not limited to:

  - Generative AI tools (e.g., ChatGPT, Claude, Midjourney)
  - AI features embedded in approved software (e.g., Microsoft 365 Copilot, Smart Compose in Gmail)
  - AI coding assistants (e.g., GitHub Copilot)
  - Agentic AI systems capable of autonomous actions
  - Free browser extensions, plug-ins, and mobile apps with AI features

1.2.3 Related Policies

Guidance: List your existing policies that intersect with AI governance. This prevents conflicts and shows employees where to find related guidance.

This policy should be read alongside:

  - [Data Protection Policy]
  - [IT Acceptable Use Policy]
  - [Confidentiality Policy]
  - [Information Security Policy]
  - [Disciplinary Procedure]

1.3 Edge Cases and Special Circumstances

Guidance: These common edge cases should be addressed during your implementation. Decide your organisation's position on each and document it here. These are core policy rules, not optional extras.
Edge Case Your Organisation's Position
Contractors and freelancers [e.g., Must comply with this policy. AI policy compliance to be included in contractor agreements. Contractors may not use personal AI tool subscriptions for work involving our data.]
Personal devices (BYOD) [e.g., Approved AI tools may be used on personal devices only if the device meets our minimum security requirements. Free-tier AI tools must not be used on any device for work purposes.]
Remote workers [e.g., Same policy applies regardless of location. Remote workers must use VPN when accessing AI tools with amber-classified data.]
Embedded AI features [e.g., AI features embedded in approved software (e.g., Smart Compose in Gmail, Copilot in M365) are approved by default under the same data classification rules as the host software.]
AI for personal learning [e.g., Employees may use approved AI tools for professional development during work hours, provided no confidential data is entered. We encourage completion of the UK Government's free AI Skills Boost programme.]
Client-facing vs internal use [e.g., Higher review requirements apply to client-facing AI outputs. All client-facing content must be reviewed by account lead before delivery. Internal use follows standard review requirements.]
International teams [e.g., Team members operating in the EU must also comply with EU AI Act requirements. The most stringent applicable requirement applies. Contact compliance team for guidance.]

2. Approved AI Tools

Guidance: List every AI tool your organisation has sanctioned. Critically, specify the subscription tier - a free ChatGPT account has fundamentally different data retention from ChatGPT Enterprise. Be specific about which departments can use which tools and for what purposes.

2.1 Approved Tools Register

Tool Subscription Tier Approved For Data Restrictions Review Required?
[e.g., Microsoft 365 Copilot] [e.g., Business Premium licence] [e.g., All departments] [e.g., Green and amber data only] [e.g., Yes, for client-facing outputs]
[e.g., ChatGPT] [e.g., Team account only - NOT personal free accounts] [e.g., All departments] [e.g., Green data only] [e.g., Yes, for external communications]
[e.g., GitHub Copilot] [e.g., Business licence] [e.g., Engineering only] [e.g., Internal code only, no client code] [e.g., Yes, all outputs must pass code review]
[e.g., Midjourney] [e.g., Pro plan] [e.g., Marketing only] [e.g., Non-client-facing concept work only] [e.g., No, for internal concepts; yes, for published content]
Important: Free-tier AI tools (such as the free version of ChatGPT) typically retain user inputs for model training. Enterprise or business tiers typically operate under contractual commitments not to train on customer data. Your policy must specify which tier is approved - "ChatGPT" alone is not specific enough.

2.2 Tools Not on This List

Any AI tool not listed in Section 2.1 is not approved for use with any work-related data or tasks. If you believe a tool should be added, submit an evaluation request using the process below.

2.3 New Tool Approval Process

Guidance: Make this process lightweight. If requesting a new tool requires three approvals and a two-week wait, employees will simply use unapproved tools instead. A simple form, a clear timeframe, and a written decision builds trust.

To request approval for a new AI tool:

  1. Complete the AI Tool Evaluation Form (Appendix B)
  2. Submit to [email address / form link / person]
  3. Evaluation will be completed within [5 business days]
  4. You will receive a written decision: approved, conditionally approved, or not approved, with reasoning

Evaluation criteria include:

  - Whether the provider retains or trains on user inputs
  - Provider security certifications (e.g., SOC 2, ISO 27001)
  - The classification of data the tool will handle (green/amber/red)
  - Whether an existing approved tool already meets the need
  - Cost per user and estimated number of users

2.4 Transitioning Existing Unapproved Tools (The Amnesty Period)

Guidance: If you are publishing this policy for the first time, your employees may already be using unapproved AI tools. An amnesty period encourages honest disclosure so you can assess risk and transition workflows safely. Without an amnesty, employees will hide existing usage to avoid disciplinary action under Section 10.

We recognise that in the absence of a clear policy, employees may have used unapproved AI tools to maintain productivity. For the first 30 days following the publication of this policy, we are operating an amnesty period:

  - Disclose any unapproved AI tools you currently use, or have used, to [designated contact]
  - No disciplinary action will be taken for past use disclosed during the amnesty period
  - Disclosed tools will be evaluated through the process in Section 2.3
  - Where a tool is not approved, we will work with you to transition the workflow to an approved alternative

Why we're doing this: The purpose of this amnesty is to get a complete picture of AI tool usage across the organisation so we can manage risk properly. Punishing people for past behaviour that wasn't governed by any policy would be unfair and would discourage the honest reporting we need.

3. Prohibited Uses

Guidance: Be specific and concrete. "Do not enter restricted data" is too vague - employees can't determine what counts as restricted. Provide explicit examples so employees can make confident decisions. These prohibitions apply regardless of which tool or subscription tier is used.

3.1 Absolute Prohibitions

The following uses of AI tools are prohibited under all circumstances, regardless of which tool or subscription tier:

Prohibition Examples Why
Entering special category personal data Health records, ethnic origin, political opinions, religious beliefs, trade union membership, biometric data, sexual orientation UK GDPR Article 9 imposes heightened protections; DUAA 2025 maintains restrictions for special category data in automated decisions
Entering customer financial data Bank account numbers, payment card details, credit scores, salary information Financial data requires specific legal bases for processing and heightened security controls
Entering authentication credentials Passwords, API keys, access tokens, encryption keys, security certificates Credentials entered into AI tools may be stored, logged, or exposed through data breaches
Entering legally privileged material Legal advice, litigation strategy, settlement discussions, regulatory correspondence Legal privilege may be waived if privileged material is shared with third-party AI tools
Making automated decisions about people without human review Hiring decisions, performance ratings, disciplinary actions, credit decisions, customer risk scoring UK GDPR and DUAA 2025 require meaningful human involvement in decisions significantly affecting individuals
Creating misleading or deceptive content Deepfakes, impersonation of real people, fabricated testimonials, fake reviews Legal liability under consumer protection, defamation, and fraud laws
Using AI outputs in regulatory submissions without senior review Tax returns, regulatory filings, statutory declarations, compliance reports AI tools generate plausible but sometimes inaccurate content; regulatory submissions require verification
Real-world example: In 2023, Samsung engineers entered proprietary source code and internal meeting notes into ChatGPT's free tier on three separate occasions. Once submitted, the data could not be recalled and was at risk of being retained and used for model training. Samsung subsequently banned employees from public generative AI tools and began developing an in-house alternative. Clear data classification and approved tool policies would have prevented this entirely.

3.2 Conditional Restrictions

The following uses require manager approval before proceeding:

  - [e.g., Entering amber-classified data into an approved tool for a new or unusual use case]
  - [e.g., Using AI to draft a client-facing deliverable for the first time in a given engagement]
  - [e.g., Using AI to summarise internal meetings or documents that name individual employees]
  - [e.g., Using AI-generated images or audio in external communications]

3.3 When in Doubt

If you are unsure whether a specific use case is permitted, do not guess either way - do not proceed without checking, and do not abandon the tool entirely. Instead, contact [designated contact / email / channel] for guidance. We aim to respond within [1 business day].

Note: This "when in doubt" process exists because research shows that policy ambiguity creates a two-tier workforce - conscientious employees abstain entirely while less cautious colleagues proceed without safeguards. Asking for guidance is always the right call - it will never result in disciplinary action.

4. Data Classification

Guidance: This section is the most operationally important part of the policy. Employees need a simple mental model to decide what data they can enter into AI tools. The green/amber/red framework below provides that model. Customise the examples for your organisation.

4.1 Data Classification Framework

Classification Description Examples AI Tool Rules
GREEN
(Open)
Information that is already public or has no confidentiality requirement Published marketing content, press releases, public website copy, job adverts, general industry knowledge May be entered into any approved AI tool, any tier. No prior approval needed.
AMBER
(Restricted)
Internal information that could cause harm if disclosed but is not subject to heightened legal protection Internal meeting notes, project plans, process documentation, anonymised performance data, draft strategies, internal communications, standard B2B business contact information (e.g., a client's work email address) May be entered into approved business-grade tools only (e.g., Microsoft Copilot Enterprise, ChatGPT Team/Enterprise). Not free-tier tools. Manager approval recommended for sensitive items.
RED
(Prohibited)
Information subject to heightened legal protection, contractual confidentiality, or significant business sensitivity Sensitive personal data (e.g., consumer details, home addresses), special category data, customer financial data, health records, employee personal data, trade secrets, proprietary algorithms, passwords, legal advice, contractual terms Must never be entered into any AI tool under any circumstances. If AI analysis is needed, data must first be fully anonymised (see Section 4.2).

4.2 Anonymisation Guidance

Guidance: Employees can often benefit from AI analysis of sensitive data by anonymising it first. Provide concrete examples so they know what effective anonymisation looks like.

You may use AI tools to analyse data that would otherwise be classified as red, provided you first remove all identifying information. Effective anonymisation means a reasonable person could not identify any individual or organisation from the data.

Before Anonymisation (DO NOT enter) After Anonymisation (May enter into approved tools)
"Customer John Smith (account 12345) complained about delivery delays to Manchester on 15 March" "A customer complained about delivery delays to a northern city in mid-March"
"Sarah Jones in Finance earned £45,000 last year and received a 3% raise" "A mid-level finance employee earned between £40k-50k and received a standard annual increase"
"Our contract with Acme Ltd includes a 12% discount on orders over £50,000" "Our supplier agreements typically include volume discounts of 10-15% on large orders"
Caution: Simply removing names is not always sufficient. If the remaining data includes enough detail to identify someone (e.g., "the only female director" or "the client in our Leeds office"), it has not been effectively anonymised. Consider whether the combination of remaining details could identify an individual. Under UK GDPR, pseudonymised data (where identifiers are replaced with codes) is still personal data and still subject to data protection requirements.
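For teams that pre-process text before entering it into an approved tool, the table above can be approximated as a first-pass redaction script. This is a minimal sketch with illustrative patterns only - as the caution notes, pattern matching cannot catch everything (the customer's name survives redaction below), so human review remains mandatory.

```python
import re

# Illustrative first-pass patterns only. Pattern matching alone is NOT
# sufficient anonymisation: a human must still confirm that the remaining
# details cannot identify any individual or organisation.
PATTERNS = {
    r"\baccount \d+\b": "account [REDACTED]",
    r"£[\d,]+(?:\.\d{2})?": "[AMOUNT]",
    r"\b\d{1,2} (?:January|February|March|April|May|June|July|August"
    r"|September|October|November|December)\b": "[DATE]",
}

def first_pass_redact(text: str) -> str:
    """Apply simple pattern-based redaction before human review."""
    for pattern, replacement in PATTERNS.items():
        text = re.sub(pattern, replacement, text)
    return text

example = "Customer John Smith (account 12345) complained on 15 March about a £50,000 order"
print(first_pass_redact(example))
# Note: "John Smith" is still present - proof that human review is required.
```

Running this redacts the account number, amount, and date, but leaves the name untouched - exactly the failure mode Section 4.3 warns about.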

4.3 Common Anonymisation Mistakes

Guidance: These are the most frequent errors organisations make when anonymising data for AI use. Share these with your team to improve data handling practices.

  - Removing names but keeping unique identifiers (e.g., a job title, location, or role held by only one person)
  - Replacing names with initials or codes - under UK GDPR this is pseudonymisation, not anonymisation, and the data remains personal data
  - Leaving combinations of details (dates, amounts, locations) that together identify an individual
  - Anonymising the obvious fields but pasting in whole documents without reading them through first

4.4 Data Classification Decision Aid

If you are unsure how to classify data, use this quick test:

  1. Is it already publicly available? → GREEN
  2. Could it identify a specific person? → RED (unless anonymised)
  3. Is it subject to a confidentiality agreement? → RED
  4. Would disclosure cause significant commercial harm? → RED
  5. Is it internal but not sensitive? → AMBER
  6. Still unsure? → Treat as AMBER and seek guidance from [designated contact]
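For teams that want to embed this decision aid in an intranet form or internal chatbot, it can be sketched in code. The function name and flags below are hypothetical; the question order mirrors steps 1-6, so public data is cleared first and identifying or confidential data is caught before the amber fallback.

```python
# Hypothetical sketch of the Section 4.4 decision aid. Question order matters:
# step 1 (public) short-circuits to GREEN; steps 2-4 catch RED cases; anything
# else - including "still unsure" - falls through to AMBER.
def classify(is_public: bool, identifies_person: bool, is_anonymised: bool,
             under_nda: bool, commercially_harmful: bool) -> str:
    if is_public:
        return "GREEN"                       # step 1: already public
    if identifies_person and not is_anonymised:
        return "RED"                         # step 2: identifies a person
    if under_nda or commercially_harmful:
        return "RED"                         # steps 3-4: confidential / harmful
    return "AMBER"                           # steps 5-6: internal or unsure

# A press release is green; a client contract is red; meeting notes are amber.
print(classify(True, False, False, False, False))    # GREEN
print(classify(False, False, False, True, False))    # RED
print(classify(False, False, False, False, False))   # AMBER
```

Note that "unsure" deliberately maps to AMBER, matching step 6: treat as AMBER and seek guidance.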

5. Human Oversight Requirements

Guidance: AI outputs can be convincingly wrong. This section establishes where human review is mandatory. Be specific about who must review what, and what "review" actually means in practice. This section also addresses the Data (Use and Access) Act 2025's requirement for meaningful human involvement in significant automated decisions.

5.1 Mandatory Human Review

AI-generated content must be reviewed by a qualified human before being:

Output Type Reviewer Review Requirements
Content sent to clients or external stakeholders [e.g., Line manager or account lead] Check accuracy, tone, and appropriateness. Verify any factual claims against primary sources.
Published content (website, social media, marketing) [e.g., Marketing manager or content lead] Full editorial review including brand alignment, factual accuracy, and disclosure requirements.
Financial analysis or recommendations [e.g., Finance director or qualified accountant] Verify calculations independently. Cross-check against source data. AI-generated financial figures must never be trusted without verification.
Legal or compliance content [e.g., Legal counsel or compliance officer] Senior review required. AI-generated legal advice is not a substitute for qualified legal counsel.
Decisions affecting individuals (hiring, performance, customer scoring) [e.g., Senior manager with HR] Document whether AI recommendation was accepted, modified, or overridden, with reasoning. Required under DUAA 2025.
Code and technical outputs [e.g., Senior developer] Standard code review process. Check for security vulnerabilities, licensing issues, and correctness.
Autonomous actions (Agentic AI) executing workflows (e.g., sending emails, making purchases, altering database records) [e.g., IT security lead] AI agents must never be granted autonomous "write" or "send" permissions without explicit IT security clearance. All AI agents must operate in "human-in-the-loop" (draft only) mode by default.

5.2 Uses That Do Not Require Prior Review

Guidance: It is equally important to state what does NOT require review. If every use of AI requires approval, employees will either avoid AI entirely (the Conscientious Worker Penalty) or bypass the process. Explicitly permitting low-risk uses builds trust and compliance.

The following uses of approved AI tools do not require prior review:

  - Brainstorming, outlining, and first drafts for your own use
  - Improving the grammar, spelling, or tone of your own writing
  - Summarising or reformatting green-classified information
  - Research assistance, provided you verify facts before relying on them
  - Personal productivity tasks (e.g., structuring notes, drafting agendas) involving no amber or red data

5.3 Guarding Against Automation Bias

Automation bias occurs when human reviewers defer to AI recommendations without meaningful scrutiny. When reviewing AI outputs, you should:

  - Verify factual claims against primary sources rather than assuming they are correct
  - Actively look for errors instead of looking for confirmation that the output is fine
  - Be prepared to reject or substantially rewrite an output, not just lightly edit it
  - Document when and why you override an AI recommendation (required for decisions affecting individuals)

Automation bias warning: Research shows that humans are particularly susceptible to automation bias when under time pressure, when they lack training on the AI system's limitations, or when organisational culture discourages overriding AI recommendations. Meaningful review requires adequate time, training, and genuine authority to disagree with AI outputs.

6. Transparency and Disclosure

Guidance: This section addresses when employees must disclose AI use. Be pragmatic: requiring disclosure for every email spell-checked by AI is impractical; requiring disclosure for AI-generated client deliverables is reasonable. Balance transparency with operational practicality.

6.1 When Disclosure Is Required

You must disclose that AI tools were used in the following situations:

  - Client deliverables where AI generated substantial portions of the content
  - Published content containing AI-generated images, audio, or substantial AI-generated text
  - Any decision affecting an individual where AI analysis or recommendations were used
  - Whenever a client, regulator, or auditor asks whether AI was used

6.2 When Disclosure Is Not Required

You do not need to disclose AI use for:

  - Spelling, grammar, and tone corrections to your own writing
  - Internal brainstorming, outlines, and early drafts
  - Research assistance where you have verified the facts and written the final content yourself
  - Routine internal communications

6.3 Intellectual Property

All AI-generated content created using [Organisation Name]'s tools, accounts, or data is the property of [Organisation Name], not the individual employee.

Employees should be aware that:

  - AI-generated content may not qualify for copyright protection, which can affect our ability to protect client deliverables
  - AI tools can occasionally reproduce material resembling copyrighted training data, creating infringement risk in published work
  - Entering proprietary information into AI tools may weaken trade secret protection (see Section 4)

7. Incident Reporting

Guidance: The incident reporting process should encourage self-reporting, not punish it. If employees fear severe consequences for reporting mistakes, they will hide them. Violations discovered through self-reporting should be treated more favourably than violations discovered through audit. The goal is to learn and improve, not to catch people out.

7.1 What to Report

Report any of the following to [email address / person / channel]:

  - Accidental entry of red-classified data into any AI tool
  - Use of an unapproved AI tool with work-related data (your own or a colleague's)
  - AI-generated content containing significant errors that reached clients, regulators, or the public
  - Suspected data breaches, account compromises, or unusual behaviour involving an AI tool
  - An AI agent taking an autonomous action that was not explicitly authorised

7.2 How to Report

You can report AI-related incidents through any of these channels:

  - Email: [incident reporting email address]
  - Directly to: [designated person / policy owner]
  - Via your line manager
  - [Anonymous reporting channel, if available]

7.3 What Happens After a Report

  1. All reports will be acknowledged within [1 business day]
  2. An investigation will assess what happened, the potential impact, and root causes
  3. Findings will be documented and, where appropriate, used to update this policy or improve training
  4. The reporter will be informed of the outcome and any actions taken
  5. Significant incidents will be logged in a central register for pattern analysis
Our commitment: Self-reporting a genuine mistake will always be treated more favourably than a violation discovered through audit. We encourage a culture of transparency where reporting mistakes helps everyone learn and improve our practices.

8. Training Requirements

Guidance: Training is the single most important factor in policy compliance. Organisations with role-specific AI training see compliance rates of 50-60%, compared to 15-25% for organisations that distribute policies without training. Specify what training is required, for whom, and how often.

8.1 Mandatory Training

Training Who When Duration
AI Foundations Briefing All staff Within 30 days of this policy's effective date (or within 30 days of joining) [e.g., 30 minutes]
Role-Specific AI Training Staff in departments using AI tools Within 60 days of this policy's effective date [e.g., 1 hour per department]
Annual Refresher All staff Annually, aligned with policy review cycle [e.g., 20 minutes]
Data Classification Workshop Managers and team leads Within 30 days of this policy's effective date [e.g., 45 minutes]

8.2 Training Content

The AI Foundations Briefing covers:

  - What this policy permits and prohibits, and where to find it
  - The green/amber/red data classification framework
  - Common AI risks: hallucination, data leakage, and automation bias
  - How to report incidents and request new tools

Role-Specific Training covers:

  - The approved tools and subscription tiers for the department
  - Department-specific examples of green, amber, and red data
  - Review and disclosure requirements for the role's typical outputs
  - Practical, hands-on use of approved tools for common departmental tasks

8.3 Free Resources

The UK Government offers free AI foundations training as part of its goal to upskill 10 million workers by 2030. Employees are encouraged to supplement organisational training with these resources for professional development.

9. Review Schedule and Version Control

9.1 Review Cycle

This policy will be formally reviewed:

  - Every [6 months], in line with the Next Review Date on the cover page
  - After any significant AI-related incident (see Section 7)
  - When a new AI tool or subscription tier is adopted organisation-wide
  - When relevant law, regulation, or guidance changes (e.g., UK GDPR, DUAA 2025, ICO guidance, the EU AI Act for international teams)

9.2 Policy Owner

This policy is owned by [Name, Role], who is responsible for maintaining, reviewing, and communicating updates. Feedback and improvement suggestions should be directed to [email address].

9.3 Change Log

Version Date Author Summary of Changes
1.0 [Date] [Name] Initial policy published
[1.1] [Date] [Name] [Summary of changes]

10. Breach Consequences

Guidance: A graduated framework maintains fairness while creating incentives for compliance. Policies that threaten immediate termination for any violation are rarely followed because employees perceive them as unreasonable. Recognise that mistakes happen, especially with new technology, and distinguish between genuine errors and deliberate violations.

10.1 Graduated Response Framework

Level Situation Typical Response
Level 1
Unintentional
First-time violation due to misunderstanding, lack of training, or genuine mistake. Self-reported. Informal guidance, additional training on relevant policy section, documentation of incident for learning purposes.
Level 2
Repeated or Negligent
Second violation, or first violation where the employee should reasonably have known the policy. Not self-reported. Formal written warning, mandatory policy retraining, temporary restriction on AI tool access pending completion of training.
Level 3
Serious
Deliberate violation, or any violation involving regulated personal data, customer harm, or significant commercial risk. Formal disciplinary action in accordance with [Organisation Name]'s disciplinary procedure, which may include suspension or termination of employment.
Important: Self-reporting a policy violation will always be considered a mitigating factor. We recognise that AI is a new and evolving domain, and our primary goal is to improve practices through learning, not to punish people for genuine mistakes.

Appendix A: 30-Day Implementation Plan

Guidance: This plan provides a practical week-by-week framework for moving from template to operational policy. Adapt timeframes to your organisation's size and complexity. For organisations under 50 employees, the plan may be achievable in 2-3 weeks; for larger organisations, allow the full 30 days.

Week 1: Audit and Discovery (Days 1-7)

Day Action Owner Output
1-2 Survey all departments: which AI tools are currently in use (including personal/unapproved tools)? [IT / Compliance] AI tool usage inventory
3-4 Review existing policies (data protection, IT acceptable use, confidentiality) for potential conflicts with AI policy [Compliance / Legal] Policy conflict analysis
5-7 Classify organisational data into green/amber/red categories. Identify which data types each department handles. [Department heads + IT] Completed data classification table

Week 2: Draft and Review (Days 8-14)

Day Action Owner Output
8-10 Customise this template: populate approved tools register, data classifications, and organisation-specific sections [IT + Compliance] Draft AI policy v0.1
11-12 Review draft with leadership team. Check alignment with business objectives and risk appetite. [Senior leadership] Leadership-approved draft v0.2
13-14 Legal review (if applicable). Ensure consistency with employment contracts and regulatory obligations. [Legal advisor] Final policy v1.0

Week 3: Communication and Training (Days 15-21)

Day Action Owner Output
15 All-hands communication: announce the new policy. Explain the purpose (enable, not restrict). Share the document. [Senior leader / CEO] Policy announcement
16-18 Deliver AI Foundations Briefing to all staff (can be done in department groups) [IT / Training lead] Training completion records
19-21 Deliver role-specific training to departments that use AI tools regularly [Department leads + IT] Department-specific training completion

Week 4: Verify and Refine (Days 22-30)

Day Action Owner Output
22-24 Department heads verify: all team members have completed training, approved tools are correctly configured, reporting channels are operational [Department heads] Compliance verification checklist
25-27 Collect feedback from employees: what's unclear? What's impractical? What use cases aren't covered? [Compliance / HR] Feedback log
28-30 Address feedback, update policy if needed (issue v1.1), document lessons learned, set next review date [Policy owner] Updated policy + review calendar

Appendix B: AI Tool Evaluation Form

Guidance: Use this form when an employee requests approval for a new AI tool. Keep the process lightweight to avoid driving employees to use unapproved tools instead.
Field Response
Requested by [Name, Department]
Date requested [Date]
Tool name and provider [e.g., Claude by Anthropic]
Subscription tier requested [e.g., Team plan]
Intended use case [What will it be used for?]
Department(s) that will use it [Which teams?]
Estimated number of users [Number]
What data will be entered? [Green / Amber / Red classification]
Why can't an existing approved tool meet this need? [Justification]
Does the provider retain user input for training? [Yes / No / Depends on tier]
Provider security certifications [SOC 2, ISO 27001, etc.]
Cost per user per month [Amount]

Evaluation Decision

Decision [Approved / Conditionally Approved / Not Approved]
Conditions (if applicable) [Any restrictions or requirements]
Evaluated by [Name, Role]
Date decided [Date]

Appendix C: Glossary

Term Definition
AI (Artificial Intelligence) Technology that enables computers to perform tasks that typically require human intelligence, such as understanding language, generating text, analysing data, or creating images.
Generative AI AI systems that can create new content (text, images, code, audio) based on patterns learned from training data. Examples include ChatGPT, Claude, Midjourney, and Microsoft Copilot.
Shadow AI The use of AI tools by employees without organisational knowledge or approval. Often driven by lack of approved alternatives or overly restrictive policies.
Hallucination When an AI tool generates information that sounds authoritative but is factually incorrect. AI tools do not "know" things - they predict likely responses, which can include plausible but false statements.
Automation Bias The tendency for humans to defer to AI recommendations without sufficient critical scrutiny, particularly when the AI output is presented with confidence.
Data Anonymisation The process of removing personally identifiable information from data so that individuals cannot be identified from the remaining information.
UK GDPR The UK General Data Protection Regulation, the primary UK law governing the processing of personal data. Retained from EU law and supplemented by the Data Protection Act 2018.
DUAA 2025 The Data (Use and Access) Act 2025, UK legislation that reformed automated decision-making requirements, shifting from prohibition-based to principles-based governance with mandatory ex-post safeguards.
Special Category Data Personal data revealing racial or ethnic origin, political opinions, religious beliefs, trade union membership, genetic or biometric data, health data, or sexual orientation. Subject to heightened protections under UK GDPR Article 9.
Conscientious Worker Penalty The phenomenon where vague AI policies cause cautious employees to avoid AI entirely while less cautious employees use it without safeguards, resulting in worse outcomes than a clear, permissive policy.
Agentic AI AI systems that can autonomously plan, execute, and adapt actions to achieve goals with minimal human intervention. The ICO confirmed in January 2026 that agentic systems remain fully subject to UK GDPR. Policies should be reviewed as agentic AI becomes more prevalent in workplace tools.

Appendix D: Sign-Off

By signing below, the parties confirm that they have reviewed this AI Acceptable Use Policy and approve it for implementation across [Organisation Name].

Policy Owner Sign-Off

Name:
Role:
Signature:
Date:

Senior Leadership Sign-Off

Name:
Role:
Signature:
Date: