Most AI policies I've seen from SMBs are either non-existent ("we just tell people to be careful") or seventeen pages long and written in legalese that nobody reads. Both fail. Here's how to write one that actually works.
Why most policies fail before they're published
Long policies get skimmed, then ignored. Vague policies get interpreted differently by every person who reads them. Policies written by lawyers, for lawyers, don't address the operational questions your team is actually asking ("can I use ChatGPT to summarize this client meeting?").
A policy that works needs to be short enough to read in five minutes, specific enough to answer real questions, and authoritative enough that people know there are real consequences for ignoring it. That's the target.
Step 1: Inventory what tools are already in use
Before writing a single word of policy, spend two days finding out what your team is actually doing. Ask in a team meeting, send a quick form, or just ask people directly. You'll find tools you didn't know about.
Common ones that appear: ChatGPT (personal accounts), Microsoft Copilot (sometimes bundled with M365 without IT involvement), Grammarly, Otter.ai for meeting transcription, Google Gemini in Workspace.
Your policy can only govern what it acknowledges. If it doesn't mention Grammarly, and Grammarly is now scanning every email your team writes—including the one with the client contract attached—you have a gap.
Sample wording: "The following AI tools are approved for business use at [Company]: [list]. Any AI tool not on this list requires written approval from [role] before use on company or client data."
Step 2: Define what data is off-limits
This is the most important section of the policy, and it should be written in plain language. Give concrete examples. "Confidential information" means nothing without examples; "client Social Security numbers" means something.
Categories that belong here for most SMBs:
- Client personal information (names + anything sensitive: account numbers, medical info, immigration status, financial details)
- Internal financial data (payroll, bank account numbers, tax records)
- Active contract terms, especially when NDAs are in play
- Source code with credentials or proprietary logic
- HR data (performance reviews, compensation, disciplinary matters)
Sample wording: "Do not input the following into any AI tool, including approved tools: [list above]. When in doubt, remove client-specific details before using AI assistance and add them back manually."
Step 3: Set a clear approval process for new tools
AI tools multiply fast. If your policy only covers today's approved list, it's obsolete by next quarter. Build in a process.
What works: a lightweight intake form that anyone can submit when they find a tool they want to use. Questions on the form: what does the tool do, what data will flow through it, who is the vendor, where is the data stored, and does the vendor train its models on customer data?
Someone (could be you, could be an office manager, could be a fractional IT contact) reviews it and gives a yes/no in writing. That decision gets logged. That log becomes your audit trail.
Sample wording: "To request approval for a new AI tool, complete the AI Tool Request form [link]. Expect a decision within five business days. Approval is per-tool; using an approved tool for a different purpose than described in the request requires a new submission."
Step 4: Write the incident-reporting requirement
People will make mistakes. Someone will paste a client email into a personal ChatGPT account, realize what they did, and then stay quiet about it because they're worried about consequences. That silence is where small mistakes turn into serious incidents.
Your policy needs to make reporting safe and mandatory. The reporting path should be simple—a Slack message, an email to one person, a form. The response should be proportionate and not punitive for honest disclosures.
Sample wording: "If you believe you may have shared restricted data with an AI tool, report it to [contact] within 24 hours of discovery. Early disclosure allows us to assess and respond appropriately. Reports made in good faith will not result in disciplinary action for the disclosure itself."
Step 5: Set a review cadence and stick to it
AI tools change fast. A policy written in January may be meaningfully out of date by June—not because your rules changed, but because the tools themselves changed their data practices, added new features, or got acquired.
Build in a quarterly review. Even a fifteen-minute pass that asks "has anything changed about the tools on our approved list, and are there new tools we need to address?" keeps the policy from becoming fiction.
Sample wording: "This policy is reviewed quarterly by [role]. The next scheduled review is [date]. Employees may submit suggested changes to [contact] at any time."
If you'd rather steal a starting-point template, our AI Acceptable-Use Policy template (free download) is what we hand to every Audit client. It includes the sample wording above, a tool-approval intake form, and an incident-reporting log template. Takes about thirty minutes to customize for your business.