Shadow AI in UK workplaces: why your team is using AI behind your back


Published · Ihor Havrysh



Quick answer: Shadow AI is when employees use AI tools at work without their employer's knowledge or approval. Red Eagle Tech research (February 2026, 200 UK desk workers) found that 32% of UK workers use AI covertly, and banning AI doesn't reduce this. Workers under explicit bans use shadow AI at the same rate (33%) as those with enabling policies. The fix isn't tighter restrictions but clearer policies, approved tools, and a culture that turns shadow AI into sanctioned AI.

Your employees are almost certainly using AI tools you haven't approved. Not because they're reckless or disloyal, but because they want to do good work and the tools they need are a browser tab away.

Microsoft UK research found that 71% of UK employees have used unapproved consumer AI tools at work. Our own survey of 200 UK desk workers found that 32% admit to using AI without their employer's knowledge. Interestingly, banning AI doesn't stop it. Among workers whose employers strictly ban AI, 33% still use it in secret.

Most guides on shadow AI are written by security vendors trying to sell you detection tools (which, incidentally, we'd be happy to do... but that's not the point!). This guide is different because we surveyed the people actually using shadow AI and asked them why. What we found goes well beyond the cybersecurity angle: this is a story about workplace culture.

What is shadow AI?

Shadow AI is when employees use artificial intelligence tools at work without the knowledge, approval or oversight of their employer's IT, security or governance teams. The most common example is an employee opening ChatGPT in their browser, logging in with a personal account, and pasting in work data to draft a report, summarise meeting notes or analyse a spreadsheet.

While it's a subset of shadow IT (the broader pattern of employees using unapproved technology), shadow AI carries distinct and elevated risks:

  • Shadow IT (e.g. using an unapproved Trello board or personal Dropbox) primarily risks your data sitting in an unmanaged, isolated system. The data stays where it's put.
  • Shadow AI (e.g. pasting client data into a free consumer chatbot) risks your data being ingested, learned by the model, and potentially exposed in another user's output. The data may be stored on overseas servers and used for model training.
Comparison diagram showing the key differences between shadow IT and shadow AI, highlighting that shadow AI data may be ingested and learned by AI models

The UK's National Cyber Security Centre has warned that queries made to large language models are visible to the organisations running them and may be retained. That makes shadow AI even higher risk than other forms of shadow IT.

Shadow AI takes several forms:

  • Consumer chatbots used via personal accounts (ChatGPT, Claude, Gemini, Perplexity)
  • AI-powered browser extensions that route text through external models
  • Code assistants used on personal subscriptions (GitHub Copilot, Cursor)
  • AI features embedded in SaaS tools that haven't been reviewed by IT (Notion AI, Grammarly, Canva AI)
  • Locally run open-source models and custom agents built without IT involvement

The common thread is that the employer has no visibility over what data goes in, what comes out, or where it's stored.

How common is shadow AI in UK workplaces?

Very common. Every major study from the past twelve months tells the same story: the majority of UK workers use AI tools their employers haven't approved.

  • 71% of UK employees have used unapproved AI at work (Microsoft UK, Oct 2025)
  • 32% admit to using AI without employer knowledge (Red Eagle Tech, Feb 2026)
  • 68% of UK organisations report staff using unapproved AI (SAP / Oxford Economics, Feb 2026)
  • 51% of UK employees use unapproved AI every week (Microsoft UK, Oct 2025)

The gap between Microsoft's 71% and our 32% is worth explaining. Microsoft asked whether employees had ever used unapproved AI. We asked whether they currently use AI without their employer's knowledge or approval, a stricter bar that captures ongoing, concealed behaviour. Both figures paint the same picture: shadow AI isn't a niche concern.

Our survey also found that 63% of UK desk workers use AI at work at least weekly. AI adoption isn't something coming down the road. It's already here, whether employers acknowledge it or not.

SAP and Oxford Economics research, published in February 2026, adds the organisational perspective: 60% of UK businesses say their employees haven't completed comprehensive AI training, even as AI investment is set to rise by 40% over the next two years. The average UK business is already generating around £2.7 million in returns from AI, yet only 7% have an enterprise-wide AI strategy. The result is a lot of AI usage with very little structure around it.

Why employees turn to shadow AI

This is where most shadow AI guides fall short. They frame shadow AI as a security problem to be detected and blocked. But the evidence shows it's primarily a culture and policy problem, so focussing on security tools alone misses the wider picture.


They already know how to use it

Microsoft UK data shows that 41% of employees use shadow AI simply because they're familiar with the tools from personal use. They've been using ChatGPT or Claude at home for months. They know it can write a decent first draft in seconds. So when they sit down at work and face a blank document, they reach for what works. It's not defiance. It's a positive productivity habit.

Their employer hasn't given them an alternative

28% of employees say their employer doesn't provide an approved AI option. If you tell people they can't use AI but don't give them a sanctioned tool, you're effectively asking them to work less efficiently than they could. That's a hard sell when deadlines are real and workloads are growing.

Our survey underscored this: among the 54.5% of workers who lack a clear enabling AI policy (what we call the "Permission Gap"), 64.2% report negative effects on their day-to-day work.

They want to do good work

Within that Permission Gap group, the frustration is measurable. Our survey asked how restricted or absent AI access affects their work:

  • 37.6% spend time on repetitive tasks they know AI could speed up
  • 18.3% worry they're falling behind people at other companies who use AI freely
  • 12.8% feel frustrated or undervalued

Workers who use shadow AI told us why in their own words:

"My employer hasn't dictated to us whether or not we can use AI, but I use it for menial tasks that are repetitive and could be sped up, [like] collating customer emails."

Worker, age 29, South East England (AI never mentioned, uses shadow AI)

"AI already helps me to complete risk assessments faster, [but] my employer never mentions anything about AI."

Worker, age 45, Scotland (AI never mentioned, uses shadow AI)

"[I use AI to] minute meetings and take notes, [but my employer] limits the AI we can use, stopping progress."

Worker, age 42, East of England (AI banned, uses shadow AI)

These workers know the productivity gains are real. They see AI saving hours of work in their personal lives, or hear colleagues at other companies talk about what AI does for them. When their employer offers no approved route to those gains, they take the unofficial one. This isn't rebellion. It's rational behaviour from people who want to do their jobs well and feel their employer hasn't given them the tools they need to succeed.

They're afraid of falling behind

KPMG's Trust in AI report found that 44% of workers have made mistakes at work due to AI errors, and 57% admit to using AI in ways that go against company policy. Meanwhile, Microsoft UK data shows that 57% of employees now describe their feelings about AI as optimistic or confident. When people believe AI helps them do better work but know their employer hasn't sanctioned it, shadow AI becomes the path of least resistance.

Behavioural science backs this up. Self-determination theory suggests that people are driven by three core needs: autonomy (control over how they work), competence (being good at their job) and relatedness (feeling valued by their team). AI tools tick all three boxes. They give workers more control over their output, make them more competent at tasks they previously struggled with, and help them keep up with colleagues. Workplace rule-breaking research shows that people often break rules not from defiance but from what psychologists call "pro-social rule breaking", bending the rules to achieve a good outcome for the team or the organisation.

The policy vacuum problem: Our survey found that a staggering 41% of UK desk workers operate in a complete policy vacuum: they rely on informal chats (22.5%), work at companies where AI has never been mentioned (15.5%), or are genuinely unsure what the rules are (3%). Silence isn't a neutral position. It's an invitation for risk-takers to make their own rules, and for conscientious workers to fall behind.

The Conscientious Worker Penalty

The most striking pattern in our data isn't the shadow AI rate itself — it's how consistent it is across policy types (~31-35%, regardless of bans, informal guidance, or enabling policies). This consistency reveals a deeper problem: when 41% of workers operate in a policy vacuum, the workforce splits. Roughly 30% become risk-takers using shadow AI, while the 70% "conscientious majority" abstain entirely.

Horizontal bar chart showing shadow AI usage rates by employer policy type, demonstrating that banning AI does not reduce shadow AI

Those conscientious workers don't just miss out on AI — they fall behind colleagues who use it covertly, losing an estimated 7.75 hours of productivity per week (Microsoft UK data). We call this the Conscientious Worker Penalty: ambiguous policies don't stop the rule-breakers; they paralyse the rule-followers.

Read our full analysis of the Conscientious Worker Penalty, including the detailed data breakdown and practical steps to fix it.

The real risks of shadow AI

None of this means shadow AI is harmless. The risks are real, but they're governance risks, not technology risks. The danger isn't that employees use AI. It's that they use it without guardrails, training or oversight.

Data leakage and security breaches

When employees paste confidential data into a consumer AI chatbot using a personal account, the organisation loses control of that data. The AI provider may store it, use it for model training, or expose it through other users' outputs. Research from Harmonic Security, analysing 22 million enterprise AI prompts, found that code, legal documents and financial data account for approximately 82% of sensitive shadow AI exposures.

The Samsung warning

In 2023, Samsung semiconductor engineers pasted confidential source code and internal meeting notes into ChatGPT on at least three separate occasions. The data was retained on external servers, creating an irreversible exposure. Samsung responded with a company-wide ban on ChatGPT, but later reversed course, implementing governed internal AI solutions and selective approved access instead. The lesson: the ban came after the damage, and the ban itself proved unsustainable. Governed enablement replaced it.

Samsung isn't an outlier. 44% of UK businesses report data leakage linked to shadow AI, and 43% report security breaches attributable to it. IBM's 2025 Cost of a Data Breach Report found that breaches involving shadow AI cost an average of £500,000 more than those without, and take roughly a week longer to detect and contain. BlackFog's January 2026 research found that 58% of shadow AI users rely on free tool versions, which typically offer weaker data protections than paid enterprise tiers.

Regulatory exposure under UK GDPR

The ICO has been clear: existing data protection law applies to AI. There's no AI exemption. When an employee sends personal data to a consumer AI tool, the organisation must determine whether it's acting as controller, joint controller or processor, and document that decision. In most shadow AI scenarios, the employer is likely the controller because the employee is acting on its behalf.

That means the organisation needs a lawful basis for the processing (legitimate interests is most commonly cited, but it requires a documented balancing test), contractual protections with the AI provider (including clauses on data retention, model training and breach notification), and a Data Protection Impact Assessment if the processing is likely to result in high risk to individuals. The ICO treats use of innovative technologies including AI as a DPIA trigger.

The Data (Use and Access) Act 2025 has introduced some flexibility, including a new "recognised legitimate interests" ground for certain processing and a more flexible automated decision-making regime. But it doesn't displace the core UK GDPR principles. The NCSC has separately warned that queries sent to large language models are visible to the organisations running them and may be retained, making consumer AI tools a potential data-leakage channel regardless of how the data protection analysis plays out.

This matters even more for businesses in regulated sectors like financial services, healthcare and legal, where client data confidentiality is both a regulatory requirement and a commercial expectation. The ICO itself has published its own internal AI use policy requiring staff to use only approved tools and to label AI-generated content.

Inconsistent and unauditable outputs

Shadow AI creates a quality control problem. If a team member drafts client communications using an AI tool that nobody else knows about, those outputs bypass whatever review, brand voice or compliance checks the organisation normally applies. There's no audit trail. If something goes wrong, there's no way to trace what the AI contributed and what the human wrote.

KPMG's research found that nearly 60% of employees admitted to making mistakes due to AI errors. When AI use is hidden, those mistakes are harder to catch and harder to learn from.

The cost of inaction: A data breach at an organisation with high levels of shadow AI costs approximately £500,000 more than one without (IBM, 2025). But the cost of banning AI and driving it underground is harder to quantify: lost productivity, employee frustration, and talent attrition as workers leave for employers who enable rather than restrict AI. Our survey found that 66.5% of workers say their employer's AI approach would influence their job decisions.

Worried about shadow AI in your organisation?

Download our free AI policy template for UK businesses. Give your team clear rules on approved tools, data handling and acceptable use, and start reducing shadow AI risk today.


AI acceptable use policy template

A comprehensive AI policy template with 10 essential sections, data classification guidance, and a 30-day implementation rollout plan. Designed for UK businesses.


The leadership paradox

There's an uncomfortable pattern in the data: senior leaders are often the worst offenders when it comes to shadow AI.

BlackFog's January 2026 research found that 69% of presidents and C-level executives believe speed outweighs privacy and security concerns when using unapproved AI tools. Among directors and senior vice presidents, the figure is 66%. A separate study from ChannelBuzz found that 68% of security leaders themselves admit to using unauthorised AI.

Why? Leaders face the same productivity pressures as everyone else, but with two additional factors: they've got more autonomy (nobody's checking their browser history) and they face higher-stakes decisions where AI's speed advantage is particularly attractive. When a board paper needs writing at short notice or a strategic analysis needs completing before a Monday morning meeting, the temptation to "just quickly use ChatGPT" is strong.

The problem is that leadership behaviour sets the cultural tone. If the CEO uses unapproved AI tools while the company policy says otherwise, that gap between stated values and actual behaviour corrodes trust. Employees notice. And it makes enforcing any AI policy significantly harder.

Fix it from the top: Leaders need to be the first to complete AI training, the first to use approved tools, and the first to demonstrate that the policy applies to everyone. Any shadow AI strategy that doesn't address leadership behaviour will struggle to gain credibility with the wider workforce.

Shadow AI across UK sectors

Shadow AI looks different depending on the industry. The stakes vary, but the pattern is the same: employees reach for AI tools because the approved alternatives are missing, slow, or don't exist.

  • Financial services: drafting client communications, contract review, underwriting analysis, trading research. Key risks: FCA regulatory exposure, client data confidentiality, potential EU AI Act high-risk classification.
  • Healthcare / NHS: drafting clinical notes, summarising patient histories, diagnostic research. Key risks: patient data confidentiality under GDPR, clinical safety implications.
  • Legal: case-law research, contract drafting, document summarisation. Key risks: legal privilege exposure, client confidentiality obligations.
  • Education: lesson planning, marking assistance, student support content. Key risks: student data protection, academic integrity concerns.

Some sectors are further ahead in addressing it. Barclays has created a Centre of Excellence for generative AI, providing enablement, education and tools rather than relying on bans. HSBC has implemented a group-wide AI governance framework with mandatory responsible-AI training. Law firm A&O Shearman developed a legal-risk assessment toolkit covering US, UK and EU requirements and won an innovation award for its approach to governed AI adoption.

While global banks have the budget to build dedicated "Centres of Excellence", UK SMEs can achieve the same governance outcomes on a much smaller scale. A 50-person business doesn't need an AI committee. It needs a clear one-page policy, a short list of approved tools, and a reliable IT partner to handle the data integration. The principles are identical; the implementation is just leaner. Our AI governance guide for UK SMEs walks through exactly how to build that lean framework step by step.

The common thread? Every organisation that has successfully managed shadow AI did it by making the approved path easier and faster than the unapproved one, not by making the unapproved path harder.

How to fix shadow AI without banning it

The data is clear: banning AI doesn't work. Workers under explicit bans use shadow AI at virtually the same rate (33.3%) as those with enabling policies (33.0%). The difference is that banned workers use AI without guardrails, making their usage riskier, not safer.

Effective shadow AI management treats the problem as a governance and culture challenge, not a technology one. Here's what the evidence supports:

1. Publish a clear, specific AI policy

Only 45.5% of workers in our survey have a clear enabling AI policy. The other 54.5% are operating in a vacuum. A good AI policy doesn't need to be lengthy, but it does need to answer three questions every employee has:

  • What tools am I allowed to use? Name them. A list of approved tools is far more useful than a vague statement about "responsible AI use".
  • What data can I put into them? Be specific. "No personal data" is clearer than "exercise judgement". Provide examples relevant to your industry.
  • What do I do if I need a tool that isn't on the list? Create a fast-track approval pathway. If requesting a new tool takes six weeks, people won't bother asking.

For a ready-to-use template, we've published a free AI policy template for UK businesses that covers these areas.

2. Provide approved tools that are genuinely useful (the integration paradox)

28% of shadow AI users say their employer doesn't provide an approved option. But here's the counter-intuitive finding that most AI governance guides miss: even among workers with a clear enabling policy, 33.0% still use shadow AI. Why?

Because a generic corporate AI subscription isn't enough. If you give an employee an "approved" ChatGPT Enterprise account but they still have to manually export data from your CRM, format it in a spreadsheet, and copy-paste it into the chatbot, they will inevitably revert to using unapproved browser extensions or consumer tools that do it automatically. The approved path is only the easiest path when it connects to the systems your team actually uses every day.

To genuinely eliminate shadow AI, businesses need to move beyond standalone AI subscriptions and invest in custom AI solutions that integrate securely with their existing CRMs, databases and workflows. When the approved tool does the job better than the unapproved one, shadow AI disappears on its own.

The key word is "genuinely useful". If the approved tool is locked down to the point of being unhelpful, employees will work around it. Make the approved path the easiest path.

3. Invest in training (most organisations haven't)

SAP and Oxford Economics found that 60% of UK businesses say their employees haven't completed comprehensive AI training. WalkMe's research puts it more starkly: only 7.5% of workers have had extensive AI training, while 23% have had none at all. Half report receiving conflicting guidance on what they should and shouldn't do.

Training should cover practical topics: how to use approved tools effectively, what data classification means in practice, how to spot AI hallucinations, and what to do if they accidentally share sensitive data with an external tool. One-off e-learning modules aren't enough. This needs to be ongoing and relevant to the actual tools people use.

Need help getting your team up to speed? We offer Claude AI and Claude Code training designed for UK businesses. Our hands-on sessions cover practical prompt techniques, data handling best practices and how to get the most from AI tools safely and effectively.

4. Audit and reclassify, don't just block

Rather than trying to block all unapproved AI, start by finding out what your team is actually using. Run an anonymous survey asking which tools people use, what they use them for, and what data they put in. Combine survey results with technical discovery (network traffic to known AI endpoints, browser extension audits, OAuth token reviews) to build a full picture. Research suggests surveys alone undercount actual usage by roughly half.
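To make the technical-discovery side concrete, here's a minimal sketch of the kind of script an IT team might run over web proxy logs. It assumes a JSON Lines log with user, host and bytes_out fields and a hand-maintained list of consumer AI endpoints; neither assumption comes from our survey, so adapt both to whatever your proxy or secure web gateway actually exports.

```python
# Minimal sketch of the technical-discovery step described above.
# Assumptions (illustrative only): the proxy exports a JSON Lines log with
# "user", "host" and "bytes_out" fields, and the AI endpoint list is maintained
# by hand. Adapt both to your own tooling.
import json
from collections import Counter, defaultdict

# Hand-maintained list of consumer AI endpoints to look for (extend as needed).
AI_ENDPOINTS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "www.perplexity.ai": "Perplexity",
}

def summarise_ai_traffic(log_path: str):
    """Count requests and outbound bytes per user per AI tool from a proxy log."""
    requests = Counter()
    bytes_out = defaultdict(int)
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines rather than failing the whole run
            tool = AI_ENDPOINTS.get(event.get("host", ""))
            if tool:
                key = (event.get("user", "unknown"), tool)
                requests[key] += 1
                bytes_out[key] += int(event.get("bytes_out", 0))
    return requests, bytes_out

if __name__ == "__main__":
    reqs, sent = summarise_ai_traffic("proxy_log.jsonl")
    for (user, tool), count in reqs.most_common():
        print(f"{user:<20} {tool:<12} {count:>5} requests, {sent[(user, tool)]:>10} bytes out")
```

Treat the output as input to the reclassification conversation below, not as evidence for disciplinary action.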

Then categorise each tool into one of three buckets: approve it as-is, transition users onto a governed equivalent (such as an enterprise tier of the same tool), or retire it and provide an approved replacement that does the same job.

Three-step framework for reclassifying shadow AI tools: approve, transition, or retire with replacement

We recommend employers run a 30-day "Shadow AI Amnesty". Announce that employees can declare unapproved tools to IT without risk of disciplinary action. This turns a confrontational process ("we caught you using banned tools") into a collaborative one ("help us understand what you need so we can provide it safely"). It also uncovers your true risk exposure instantly, giving IT a clear picture of what data has been going where.

Frame the conversation with managers using language like: "This is about protecting your work and giving you safe tools." Ask open questions: "Which tools help you most? What data do you need to process?" Invite employees to participate in building the approved list. The organisations that do this well (Barclays, HSBC, A&O Shearman) all took a collaborative approach.

5. Monitor without creating a surveillance culture

Technical monitoring has a role, but it needs to be proportionate. Endpoint monitoring, browser extension audits, and network traffic analysis can identify high-risk shadow AI usage such as sensitive data being sent to consumer AI endpoints. But heavy-handed surveillance risks damaging trust and driving AI usage to personal devices and mobile networks where you have zero visibility.

The technical controls you need will vary by sector. In highly regulated industries like financial services, healthcare and legal, you'd typically expect company-managed devices, data loss prevention (DLP) policies, network-level blocking of unapproved AI endpoints, and strict access controls that ensure corporate data is only accessible on approved hardware. In less regulated environments with hybrid working and bring-your-own-device policies, the baseline is lighter but the blindspots are bigger. The cultural principle, however, applies everywhere: people who trust the system are far less likely to work around it.
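As an illustration of what a lightweight DLP-style check looks like in practice, here's a deliberately simplified sketch that flags obviously sensitive patterns in outbound text. The patterns (a rough UK National Insurance number format, email addresses, card-like numbers) are crude examples rather than production rules, and in a real deployment this logic would sit inside your existing DLP or proxy tooling rather than a standalone script.

```python
# Illustrative sketch only: a simplified DLP-style check of the kind mentioned
# above, flagging obviously sensitive patterns in text before it leaves the
# organisation. The patterns are deliberately crude examples, not production rules.
import re

SENSITIVE_PATTERNS = {
    # UK National Insurance number, e.g. AB 12 34 56 C (simplified format check)
    "ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b", re.I),
    # Email addresses (a rough match is enough for flagging)
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    # 16-digit card-like numbers, with or without spaces or dashes
    "card_number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def classify_prompt(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Please summarise: Jane Doe, NI AB123456C, jane.doe@example.co.uk"
    hits = classify_prompt(prompt)
    if hits:
        print("Flagged for review - contains:", ", ".join(hits))
    else:
        print("No sensitive patterns detected")
```

Even a rough check like this is better used to prompt a warning ("are you sure this should leave the building?") than to silently block, which tends to push usage onto personal devices.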

Be aware that traditional network monitoring has severe blindspots in hybrid environments. If an employee uses an AI app on their personal smartphone while working from home, your corporate firewall won't see it. Even in locked-down environments, determined users find workarounds. This is why cultural alignment and providing genuinely superior integrated tools is vastly more effective than attempting to build a digital fence alone. Technology can help you spot risks, but it can't replace a culture where people want to use the approved tools.

The goal is to identify and mitigate risks, not to catch people out. Be transparent about what you monitor and why. Frame it as protecting the organisation and its clients, not policing individual behaviour.

Need help getting the balance right? The right monitoring setup depends on your industry, your risk profile and how your team works. Our IT operations team can help you find the balance, from endpoint protection and data loss prevention through to network monitoring, access controls and AI governance policies tailored to your sector.

6. Treat shadow AI as a governance signal, not an employee problem

Shadow AI at scale isn't an employee discipline issue. It's a signal that your technology governance hasn't kept pace with how your team actually works. Employees are telling you, through their actions, what tools they need and what your current provision is missing.

Organisations that listen to that signal and respond with enabling policies, approved tools and practical training will reduce shadow AI far more effectively than those that respond with bans and blocking.

The business case in one paragraph

IBM's 2025 Cost of a Data Breach Report found that organisations with high shadow AI activity pay an average of £500,000 more per breach, and those breaches take roughly a week longer to detect and contain. Meanwhile, Microsoft UK data shows AI users save an average of 7.75 hours per week. The maths is simple: governed AI delivers the productivity gains while reducing the breach risk. The cost of a proper AI governance programme (policy, training, approved tools, monitoring) is a fraction of a single data breach.
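If it helps to make that maths explicit, here's a back-of-envelope version. The 7.75 hours and £500,000 figures come from the research cited above; the headcount, hourly cost and working weeks are placeholder assumptions to swap for your own numbers.

```python
# Back-of-envelope version of the business case above. The hours-saved figure and
# the £500k breach premium come from the cited research; the headcount, hourly
# cost and working weeks are assumptions to replace with your own numbers.
HOURS_SAVED_PER_WEEK = 7.75      # Microsoft UK estimate for AI users
BREACH_PREMIUM_GBP = 500_000     # extra breach cost where shadow AI is prevalent (IBM, 2025)

employees_using_ai = 50          # assumption: adjust for your organisation
hourly_cost_gbp = 35             # assumption: fully loaded cost per hour
working_weeks = 46               # assumption: allowing for leave and bank holidays

annual_value = employees_using_ai * HOURS_SAVED_PER_WEEK * hourly_cost_gbp * working_weeks
print(f"Estimated annual productivity value: £{annual_value:,.0f}")
print(f"Extra cost of one shadow-AI-linked breach: £{BREACH_PREMIUM_GBP:,.0f}")
```

Swap in your own figures; the point is that both the productivity upside and the breach-cost downside are large enough to justify doing governance properly rather than banning or ignoring AI.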

What's next: shadow AI and agentic AI

Shadow AI isn't going away. If anything, it's about to get harder to manage. The rise of agentic AI (AI systems that can plan, execute multi-step tasks and interact with other tools autonomously) adds a new layer of complexity. Gartner named shadow AI as a material emerging risk for 2025, and the agentic trend makes it more urgent.

Today's shadow AI is mostly an employee opening ChatGPT in a browser tab. Tomorrow's shadow AI could be an employee building an autonomous agent that authenticates to company systems, pulls data from multiple sources, and processes it through an external model without anyone in IT knowing. The shift from one-off copy-paste to persistent, automated data flows raises the stakes considerably.

The detection and governance tooling is catching up. Products launched in late 2025 and early 2026 include AI-specific discovery tools (Credo AI's Shadow AI Discovery, JFrog's shadow AI detection), model-aware data loss prevention from Nightfall AI, and shadow AI dashboards built into cloud access security brokers like Cato Networks. These tools can identify which AI endpoints employees are sending data to, what type of data is being shared, and which browser extensions or API integrations are routing data to external models.

But technology alone won't solve this. The organisations that manage shadow AI most effectively will be those that combine technical visibility with the cultural and policy changes discussed in this guide. The goal isn't zero shadow AI. The goal is a workplace where employees can access AI tools that are powerful, easy to use, and governed in a way that protects the business, its clients and its people.

Frequently asked questions

What is shadow AI?

Shadow AI is when employees use artificial intelligence tools at work without the knowledge, approval or oversight of their employer's IT, security or governance teams. Common examples include using ChatGPT, Claude or other consumer AI chatbots with a personal account to draft work documents, analyse business data or write code. It's a subset of shadow IT but carries distinct risks because AI tools can retain, learn from or expose the data employees paste into them.

How common is shadow AI in UK workplaces?

Shadow AI is widespread in UK workplaces. Microsoft UK research (October 2025) found that 71% of UK employees have used unapproved consumer AI tools at work, with 51% doing so every week. A Red Eagle Tech survey of 200 UK desk workers (February 2026) found that 32% admit to using AI at work without their employer's knowledge or approval. SAP and Oxford Economics research (February 2026) found that 68% of UK organisations report staff using unapproved AI at least occasionally.

Why do employees use shadow AI?

Employees use shadow AI primarily because they want to do their jobs better and faster. Microsoft UK data shows 41% use tools they already know from personal use, while 28% say their employer doesn't provide an approved alternative. Other drivers include official tools being too slow or restrictive, a growing sense that AI is essential for keeping up (57% of UK workers describe their AI feelings as optimistic or confident), and a perception that employers will turn a blind eye as long as the work gets done.

Does banning AI stop shadow AI?

No. Red Eagle Tech survey data shows that banning AI doesn't reduce shadow AI usage. Among workers whose employers strictly limit or ban AI, 33.3% still use it covertly. This is virtually identical to the 33.0% shadow AI rate among workers with clear enabling policies. The difference is that banned workers use AI without any guardrails, training or data-handling guidance, making their usage riskier than it needs to be.

What are the risks of shadow AI?

The main risks of shadow AI include data leakage (44% of UK businesses report data leakage linked to shadow AI), security breaches (43% report breaches attributable to shadow AI), regulatory non-compliance under UK GDPR when personal or sensitive data is entered into consumer AI tools, inconsistent or inaccurate work outputs, and loss of intellectual property. A breach linked to high levels of shadow AI costs approximately £500,000 more than one without.

How can employers reduce shadow AI?

The most effective approach is to enable safe AI use rather than trying to ban it. Practical steps include publishing a clear AI-use policy with specific guidance on what data can and can't be used with AI tools, providing approved AI tools that are genuinely useful and easy to access, offering focused training (currently only 7.5% of workers have had extensive AI training), creating a fast-track approval pathway for teams that need tools not yet on the approved list, and monitoring usage to identify risks without creating a surveillance culture.

Is using ChatGPT at work shadow AI?

Using ChatGPT at work is shadow AI if your employer hasn't approved it or you're using a personal account rather than a company-provisioned one. Even if your employer hasn't explicitly banned AI, using ChatGPT with business data on a personal account means your organisation has no control over data retention, no audit trail, and no contractual protections with OpenAI. If your employer provides an approved ChatGPT Enterprise or Microsoft Copilot account, using that approved version wouldn't be shadow AI.

Sources

  • Red Eagle Tech / Pollfish (February 2026). UK Workplace AI Usage Survey. n=200 UK desk-based workers at a cross-section of UK micro, SME, and enterprise businesses (1 to 250+ employees).
  • Microsoft UK / Censuswide (October 2025). Rise in Shadow AI tools raising security concerns for UK. n=2,003 UK employees.
  • SAP / Oxford Economics (February 2026). The Value of AI in the UK: Growth, People & Data. n=200 UK senior executives.
  • BlackFog (January 2026). Shadow AI Threat Research. n=2,000 UK and US employees.
  • KPMG (2025). UK Attitudes to AI.
  • WalkMe (August 2025). Shadow AI and Training Gaps Survey.
  • Harmonic Security (2025). What 22 Million Enterprise AI Prompts Reveal About Shadow AI.
  • IBM / Ponemon Institute (2025). Cost of a Data Breach Report 2025.
  • Reco AI (2025). The State of Shadow AI Report.

About the author

Ihor Havrysh

Software Engineer

Software Engineer at Red Eagle Tech with expertise in cybersecurity, Power BI, and modern software architecture. I specialise in building secure, scalable solutions and helping businesses navigate complex technical challenges with practical, actionable insights.

Read more about Ihor
