Here's an uncomfortable truth: your employees are already using AI at work. They're pasting client emails into ChatGPT to draft responses. They're uploading spreadsheets to get analysis. They're feeding confidential documents into AI tools to generate summaries. And most of them don't think there's anything wrong with it.
This is shadow AI, and it's happening in every organisation that hasn't explicitly addressed it.
The Scale of Shadow AI
Surveys consistently find that 60-70% of knowledge workers use AI tools at work, but only a fraction of organisations have a formal AI policy. The gap between usage and governance is where risk lives.
Without a policy, every employee makes their own judgement about what's safe to put into AI. Some will be cautious. Many won't. All it takes is one person pasting a client contract into ChatGPT to trigger a potential data breach.
What an AI Policy Should Cover
1. Approved Tools
List exactly which AI tools are approved for use. If you have a private AI deployment, make it the default. If public tools are allowed for certain tasks, specify which ones and under what conditions.
2. Data Classification
Define what data can and cannot be entered into AI tools (a rough enforcement sketch follows the list):
- Never: Client data, patient records, financial information, passwords, personal data, proprietary code
- With caution: Internal processes, general business information
- Freely: Publicly available information, general knowledge queries
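To make the tiers concrete, here is a minimal sketch of a pre-submission check in Python. The patterns and the classify function are illustrative assumptions for this post, not a real implementation; a production setup would use a proper data loss prevention (DLP) tool rather than a regex list.

```python
import re

# Illustrative patterns only (assumptions for this sketch).
# A production deployment would use a proper DLP tool, not a regex list.
NEVER_SHARE = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "password": re.compile(r"(?i)\bpassword\s*[:=]"),
}

def classify(text: str) -> str:
    """Return the policy tier for a piece of text.

    Anything matching a 'never' pattern is blocked outright;
    everything else defaults to the cautious middle tier, because
    a safe default matters more than a perfect classifier.
    """
    for label, pattern in NEVER_SHARE.items():
        if pattern.search(text):
            return f"never ({label})"
    return "with caution"

print(classify("Card on file: 4111 1111 1111 1111"))  # never (card_number)
print(classify("Summarise our onboarding process"))   # with caution
```

The design choice worth copying is the default: anything not explicitly public falls into "with caution", never "freely".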
3. Accountability
AI output must be reviewed by a human before being used in any client-facing or decision-making context. The person using the AI is responsible for the accuracy and appropriateness of the output.
4. Disclosure
Define when AI usage must be disclosed. If AI drafted a legal document, does the client need to know? If AI generated a financial report, does the regulator need to know? These questions need clear answers.
5. Training
Everyone in the organisation should understand the policy. Not just a document on the intranet — actual training on what's allowed, what's not, and why.
An AI policy isn't about restricting your team. It's about giving them clear guidelines so they can use AI confidently and safely.
The Better Solution: Give Them a Safe Alternative
Banning AI doesn't work. People will use it anyway; they'll just hide it. The effective approach is to provide a secure alternative that's just as easy to use (a brief sketch follows the list):
- Deploy a private LLM that your team can access like ChatGPT
- Connect it to internal systems via MCP (Model Context Protocol) servers so it's actually more useful than public tools
- Make it the path of least resistance — if the private tool is easier and better, people will use it naturally
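Here's what "path of least resistance" can look like in practice. Most private deployments (vLLM, Ollama, and similar gateways) expose an OpenAI-compatible API, so standard client libraries just point at your own endpoint. The URL, model name, and token below are placeholders, not a prescribed setup:

```python
from openai import OpenAI

# Placeholder endpoint and model name: substitute your own deployment.
# vLLM, Ollama, and most private LLM gateways speak the same API shape,
# so pointing at your own base_url is the only change tooling needs.
client = OpenAI(
    base_url="https://ai.internal.example.com/v1",  # your private gateway
    api_key="internal-token",                       # issued by IT, not OpenAI
)

response = client.chat.completions.create(
    model="your-deployed-model",
    messages=[
        {"role": "user", "content": "Draft a reply to this client email."},
    ],
)
print(response.choices[0].message.content)
```

If that call is as fast and as good as the public tool, nobody has a reason to paste a contract into ChatGPT again.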
When your team has a private AI tool that understands your business, knows your data, and is approved for all use cases, shadow AI disappears on its own.
Start Today
You don't need a perfect policy to start. A simple one-page document covering approved tools, data restrictions, and accountability is better than nothing. Refine it over time as your AI usage matures.
And if you want to eliminate the risk entirely, talk to us about deploying private AI that makes the policy question much simpler: everything goes through your secure system.