Resource · Operator-grade · Last updated 2026-05-20

AI acceptable use policy template.
The version we use with clients.

A free, operator-grade AI acceptable use policy template — short version for companies under 500, enterprise version for everyone else. Includes the approved-tool list, prohibited uses, data-handling rules, the vendor approval matrix, the rollout email, and the employee FAQ. Used inside real client engagements. Not legal advice.

Short + enterprise versions · IT approval matrix · Rollout email · Employee FAQ · CC BY 4.0

What is an AI acceptable use policy?

An AI acceptable use policy is a one- to four-page document that defines how employees may use generative AI tools — ChatGPT, Claude, Gemini, Copilot, custom agents — at work. It names approved tools, prohibits specific uses, sets rules for handling confidential data, requires human review of AI-generated work, and names the executive who owns enforcement. A useful template ships with a short version, an enterprise version, a vendor approval matrix, a rollout email, and an employee FAQ. The template below is the version Paul Okhrem uses inside client engagements, published under CC BY 4.0, refreshed quarterly. It is not legal advice.
Why companies write one

The three reasons most companies write an AI policy — and the one that actually matters.

Most policy work starts because of regulatory pressure, data exposure, or a board question. All three matter; one of them is the reason the policy actually survives implementation.

Reason 01

Regulatory exposure.

The EU AI Act’s high-risk system obligations come into force on 2 August 2026, ten weeks from this update. Article 4 AI literacy obligations are already binding for any company placing AI on the EU market. In the US, federal AI policy has been fragmented since the January 2025 revocation of Executive Order 14110 — sector-specific frameworks (NIST AI RMF, FTC enforcement, sector regulators) now carry the weight. A documented internal policy is one of the foundational artifacts every regulator and acquirer will ask to see.

Reason 02

Data exposure.

Employees paste customer data, source code, and unreleased financials into consumer-tier AI tools every day, in every company without a policy and most companies with one. The policy is what turns “we didn’t know” into “we told them, in writing, and the violation is the violation.”

Reason 03 · The one that matters

Defensibility.

When something goes wrong — a hallucinated client deliverable, a leaked dataset, an IP dispute over training data, a customer who notices an AI agent made a decision that affected them — the first question from counsel, the regulator, the acquirer, or the press is what your policy said. A policy that names tools, names uses, names humans, and names the executive who owns it shifts the conversation from “your company didn’t know what AI was doing” to “your company made a documented choice.” That is the difference between a manageable incident and a structural one.

What you get

What’s in this template.

Six components. All free. Use any of them; use all of them.

01

The short version policy.

One page. Designed for companies under 500 people without a dedicated legal or compliance function. Drop in, adapt three fields, ship.

02

The enterprise version policy.

Five to seven pages depending on sector. Structured for companies with formal governance, risk, and compliance functions. Covers scope, role-specific permissions, data classification, vendor approval, incident response, training, and review.

03

The IT and Legal approval matrix.

A simple table format that documents which tools are approved for which use cases, who approved them, and when the approval expires.

04

The internal rollout email.

What the CEO or COO sends on launch day. Two paragraphs. Designed to be sent, not redrafted.

05

The employee FAQ.

The 12 questions employees actually ask. Most policy violations come from ambiguity, not defiance. Answering the questions before they are asked is the single highest-leverage move in adoption.

06

The 30-day implementation plan.

Day-by-day, with named owners, so the policy actually ships instead of becoming a draft Google Doc that nobody opens.

Component 01

The short version. One page. Adapt three fields. Ship.

Designed for companies under 500 people without a dedicated legal or compliance function. The three fields to fill in:

  • [COMPANY]: Your company name.
  • [OWNER]: The named executive who owns the policy (typically COO, General Counsel, or a Chief AI Officer).
  • [APPROVED TOOLS]: The specific AI tools and tiers you have approved (e.g. ChatGPT Enterprise, Claude Team, GitHub Copilot Business, Microsoft 365 Copilot).

[COMPANY] — AI Acceptable Use Policy (Short Version)

Last reviewed: [DATE]. Owner: [OWNER]. Next review: [DATE + 90 DAYS].

1. Purpose. This policy governs how [COMPANY] employees, contractors, and authorized third parties use generative AI tools at work. It applies to all generative AI tools, whether company-provided, third-party, or personal.

2. Approved tools. The following AI tools are approved for use on [COMPANY] work and data: [APPROVED TOOLS]. Any other AI tool — including the free or personal-tier version of an approved tool — is not approved for use with [COMPANY] confidential, customer, employee, or financial data.

3. Prohibited uses. You may not (a) enter customer data, employee data, source code, financial data, or any other confidential information into a non-approved AI tool; (b) use AI to produce deliverables for clients without disclosing the use of AI where the client has requested disclosure; (c) use AI to make decisions that materially affect a customer, employee, or candidate without a human review and signoff; (d) use AI to impersonate a person; (e) use AI to generate content that would violate [COMPANY]’s existing code of conduct, anti-harassment, or non-discrimination policies.

4. Human review. Any AI-generated content used in a deliverable — internal or external — must be reviewed by a human before release. The human reviewer is accountable for the output, not the tool.

5. Attribution. When AI was used materially in producing a deliverable, note it in the deliverable. “Materially” means the AI did work that would otherwise have taken a person more than 15 minutes.

6. Data handling. Treat AI tools the same way you treat any other third-party processor. Do not paste anything into an AI tool that you would not paste into an email to a vendor under standard NDA. If in doubt, ask [OWNER] before pasting.

7. New tool requests. To request approval for a new AI tool, send the tool name, intended use case, vendor, and data sensitivity to [OWNER]. New tools are not approved by default.

8. Reporting incidents. If you suspect AI was used outside this policy — by yourself, a colleague, a contractor, or an AI agent — report it to [OWNER] within 24 hours. There is no penalty for reporting; there is a penalty for not reporting.

9. Review. This policy is reviewed quarterly. The current version is dated above.

Adapt before adoption. The three fields above are the minimum. If you operate in a regulated sector or in the EU, add the sector-specific provisions from the enterprise version below and route the draft through your counsel.

Component 02

The enterprise version. For companies with governance.

The enterprise version is structured around the assumption that your company has a Legal function, an IT function, a Privacy or Data Protection function, and a defined risk-management framework. If you do not have those, use the short version.

[COMPANY] — Generative AI Acceptable Use Policy (Enterprise Version)

Version: 1.0. Effective: [DATE]. Owner: [OWNER]. Reviewer: [LEGAL/PRIVACY]. Next review: [DATE + 90 DAYS].

1. Purpose and scope

1.1 This policy governs the use of generative artificial intelligence systems (“AI tools”) by [COMPANY] personnel (“Users”), including full-time and part-time employees, contractors, consultants, interns, and authorized third parties acting on [COMPANY]’s behalf.

1.2 This policy applies to all generative AI tools, whether (a) procured and provided by [COMPANY]; (b) accessed by Users on their own; or (c) integrated into third-party products and services used by [COMPANY]. It applies to use of AI tools on [COMPANY] data, on [COMPANY] systems, or in the course of [COMPANY] business, regardless of device.

1.3 This policy operates alongside, and does not replace, [COMPANY]’s existing policies on data classification, information security, vendor management, intellectual property, code of conduct, anti-discrimination, and incident response. Where this policy is silent, the relevant existing policy governs.

2. Ownership and governance

2.1 The owner of this policy is [OWNER]. The owner is accountable to the executive committee for policy maintenance, enforcement, and quarterly review.

2.2 The reviewing functions are [LEGAL], [IT/INFOSEC], and [PRIVACY/DPO]. Material changes to this policy require sign-off from all three plus the owner.

2.3 An AI Review Committee, chaired by the owner, meets at least quarterly to review (a) the approved-tool list, (b) the incident log, (c) regulatory developments, and (d) any pending new-tool requests.

3. Definitions

3.1 AI tool: any software system or service that uses machine learning to generate text, code, images, audio, video, or structured data in response to a user prompt or input. Includes large language models, image generators, code assistants, voice synthesis tools, agentic systems, and any third-party product that wraps such systems.

3.2 Confidential data: data classified as [COMPANY]’s Confidential, Restricted, or Highly Restricted under the [COMPANY] data classification policy. Includes customer personal data, employee data, source code, unreleased financial information, M&A material, legal communications, and any data subject to contractual confidentiality obligations.

3.3 Approved AI tool: an AI tool that has been reviewed by IT, Legal, and Privacy and added to the approved-tool list (Appendix A), at the data-sensitivity level for which it was approved.

3.4 Materially used: where the AI tool performed work that would otherwise have taken a person 15 minutes or more.

4. Approved tools and data sensitivity levels

4.1 Only approved AI tools may be used on [COMPANY] data. The approved-tool list is maintained at Appendix A and reviewed at least quarterly.

4.2 Each approved tool has an assigned maximum data sensitivity level. Users may not enter data at a sensitivity level above the tool’s approval. For example, a tool approved for Internal data may not be used with Confidential data.

4.3 The consumer or free tier of a tool is a separate tool from the enterprise or team tier of the same vendor. Approval of the enterprise tier does not approve the consumer tier.

5. Prohibited uses

Users may not:

5.1 Enter Confidential, Restricted, or Highly Restricted data into any AI tool that is not approved for that sensitivity level.

5.2 Use AI to produce client-facing deliverables without (a) human review and signoff, and (b) disclosure to the client where the client has requested disclosure or where the engagement contract requires it.

5.3 Use AI to make decisions that materially affect a customer, employee, or candidate — including hiring decisions, credit decisions, pricing decisions, performance decisions, or compliance decisions — without documented human review and signoff by a person accountable for the decision.

5.4 Use AI to impersonate any person, real or composite, in a way that could mislead the recipient about the identity of the speaker.

5.5 Use AI to generate content that violates [COMPANY]’s code of conduct, anti-harassment, anti-discrimination, or non-disclosure policies.

5.6 Use AI to circumvent any control, approval, or review process that would otherwise apply to the underlying activity.

5.7 Train, fine-tune, or otherwise improve an external AI model using [COMPANY] Confidential or Restricted data unless that training is governed by a written agreement reviewed by Legal.

6. Required practices

6.1 Human review and accountability. Every AI-generated output used in a deliverable — internal or external — must be reviewed by a human. The human reviewer is accountable for the output, not the tool, not the vendor, not the model.

6.2 Attribution. Where AI was materially used in producing a deliverable, the use must be noted in the deliverable, in a form proportionate to the deliverable (e.g. a footnote in a report, a commit message in source code, a paragraph in a board memo).

6.3 Data minimisation. Enter into an AI tool only the data necessary to obtain the output. Anonymize, redact, or summarize where the underlying purpose can be served by less-sensitive input.

6.4 Verification. Treat AI output as unverified until a human has confirmed it against a trustworthy source. This applies particularly to (a) factual claims, (b) citations and references, (c) calculations, (d) legal interpretations, and (e) code that affects production systems.

6.5 Bias and fairness. Where AI output will affect a person — customer, employee, candidate, supplier — the reviewer must consider whether the output reflects bias and, if so, must adjust or override the output before release.

7. Vendor approval matrix

7.1 New tool requests are submitted via the [COMPANY] tool request workflow. The request includes: tool name, vendor, intended use cases, data sensitivity required, requested team, and business case.

7.2 The vendor approval matrix at Appendix A documents, for each approved tool: vendor, contract status, data processing terms in place (yes / no), training and retention opt-out status, maximum data sensitivity level approved, approved use cases, named approver, approval date, and review date.

7.3 Approval requires (a) executed data processing terms acceptable to Legal, (b) a documented retention and training-opt-out posture, (c) IT integration review where the tool processes Confidential data, and (d) a named owner inside the requesting team.

7.4 Approval is granted for a specific tier of a tool. Re-approval is required for new tiers, new features that materially change data handling, or material changes in the vendor’s terms.

8. Incident response

8.1 A Policy Incident is any of: (a) entry of Confidential data into a non-approved AI tool; (b) release of AI-generated work without human review; (c) use of AI in a prohibited use case (Section 5); (d) discovery that an approved tool no longer meets approval conditions.

8.2 Users must report suspected Policy Incidents to the owner within 24 hours of discovery. Reports are made without prejudice; there is no penalty for reporting in good faith, and there is a penalty under [COMPANY]’s disciplinary policy for failing to report a known incident.

8.3 Material incidents — those involving Restricted or Highly Restricted data, customer impact, regulatory exposure, or external disclosure — are escalated to the executive committee and to Legal within one business day.

8.4 The owner maintains an incident log, reviewed quarterly by the AI Review Committee. Patterns in the log drive policy updates.

9. Training and AI literacy

9.1 All Users must complete [COMPANY]’s AI literacy training within 30 days of joining and annually thereafter. New Users may not use AI tools on [COMPANY] data until they have completed the training.

9.2 The training covers: this policy, the approved-tool list, the data classification policy, the prohibited uses, the incident response process, and sector-specific AI risk topics relevant to the User’s role.

9.3 Where required by applicable law — including but not limited to Article 4 of the EU AI Act — training documentation is retained for the period required by that law.

10. Sector-specific provisions

10.1 Where [COMPANY] operates in a regulated sector — including financial services, healthcare, insurance, pharmaceuticals, or public sector — additional sector-specific provisions are documented at Appendix B. Sector-specific provisions take precedence over this policy where they are more restrictive.

11. Review and amendment

11.1 This policy is reviewed at least quarterly by the owner, in consultation with Legal, IT, and Privacy. Material changes are approved by the executive committee.

11.2 The current version and effective date are recorded at the top of this document. Superseded versions are retained in the [COMPANY] policy register.

Appendix A — Approved Tools Matrix (template at Component 03 below)
Appendix B — Sector-Specific Provisions (customise to your sector)

Financial services, pharma & life sciences, and insurance have additional regulatory expectations that should be embedded in Appendix B.
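
Sections 4.1–4.3 of the enterprise version reduce to a mechanical check, which is worth encoding if you gate AI tools through a proxy, a browser extension, or an internal request portal. Below is a minimal sketch in Python, assuming a four-level classification (Public, Internal, Confidential, Restricted); the level names, tool entries, and function names are illustrative placeholders, not part of the template.

from enum import IntEnum

class Sensitivity(IntEnum):
    # Ordered so that a higher value means more sensitive data.
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Illustrative approvals keyed by (tool, tier). Per section 4.3, the tier
# is part of the tool's identity, so the consumer tier gets its own entry.
APPROVED_TOOLS = {
    ("ChatGPT", "Enterprise"): Sensitivity.CONFIDENTIAL,
    ("ChatGPT", "Consumer"): Sensitivity.PUBLIC,
}

def may_use(tool: str, tier: str, data_level: Sensitivity) -> bool:
    """Section 4.2: data may not exceed the tool's approved sensitivity level."""
    approved_level = APPROVED_TOOLS.get((tool, tier))
    if approved_level is None:
        # Section 4.1: a tool that is not on the matrix is not approved.
        return False
    return data_level <= approved_level

Under this sketch, may_use("ChatGPT", "Consumer", Sensitivity.CONFIDENTIAL) returns False while the Enterprise tier passes: exactly the separation section 4.3 requires.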

Component 03

The vendor approval matrix.

Use this as Appendix A. It is a living document. Review at least quarterly; review immediately when a new tool is requested. The matrix is the operational heart of the policy: if a tool is not on the matrix, it is not approved.

Tool | Vendor | Tier approved | Max data sensitivity | Approved use cases | DPA in place | Training / retention opt-out | Approver | Approved | Next review
ChatGPT Enterprise | OpenAI | Enterprise | Confidential | Drafting, summarisation, analysis | Yes | Yes (no training; 30-day retention) | [OWNER] | [DATE] | [DATE + 90 DAYS]
Claude Team | Anthropic | Team | Confidential | Drafting, analysis, coding | Yes | Yes (no training) | [OWNER] | [DATE] | [DATE + 90 DAYS]
Microsoft 365 Copilot | Microsoft | Tenant | Confidential | In-app productivity (Word, Excel, Outlook, Teams) | Covered by M365 DPA | Yes (tenant data not used to train shared model) | [OWNER] | [DATE] | [DATE + 90 DAYS]
GitHub Copilot Business | GitHub / Microsoft | Business | Confidential (source code) | Code completion, code review | Yes | Yes (suggestions not used to train shared model) | [OWNER] | [DATE] | [DATE + 90 DAYS]
Gemini for Workspace | Google | Workspace | Confidential | In-app productivity (Docs, Sheets, Gmail) | Covered by Workspace DPA | Yes | [OWNER] | [DATE] | [DATE + 90 DAYS]
ChatGPT (free / Plus) | OpenAI | Consumer | Public only | Personal learning, public-data tasks | No | No | — | Not approved for Confidential data | —

Add or remove rows to match your actual approved tools. The data-sensitivity levels reference the existing [COMPANY] data classification policy.
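
If you wire the matrix into tooling (an intranet page, a proxy allowlist, a request bot), keep it machine-readable and derive approval status from the review date rather than a static flag. A minimal sketch, again in Python, with hypothetical field names that mirror the columns above; the expiry rule encodes this template's quarterly review cycle, not any vendor's actual terms.

from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class MatrixRow:
    # One row of Appendix A; field names mirror the table columns above.
    tool: str
    vendor: str
    tier: str
    max_sensitivity: str            # e.g. "Confidential"
    use_cases: tuple[str, ...]
    dpa_in_place: bool
    training_opt_out: bool
    approver: str
    approved_on: date
    next_review: date

def is_currently_approved(row: MatrixRow, today: date) -> bool:
    """A row past its review date drops off the matrix until re-reviewed;
    section 7.4 also forces re-approval on material vendor changes."""
    return row.dpa_in_place and today <= row.next_review

The payoff is that a stale row fails closed: a tool whose review date has passed stops being approved automatically, instead of silently staying blessed after the vendor's terms change.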

Component 04

The internal rollout email.

Sent by the CEO or COO on launch day. Two paragraphs. Designed to be sent, not redrafted.

Component 05

The employee FAQ. The 12 questions people actually ask.

Most policy violations come from ambiguity, not defiance. Publish this alongside the policy. Train managers on it. The investment pays back inside the first quarter.

1. Can I use ChatGPT for work?

Yes, if you are using a version that is on the approved-tool list (Appendix A) at the sensitivity level you need. The free or personal-tier version of ChatGPT is not approved for confidential data.

2. Can I use AI to write emails to customers?

Yes, with two conditions. First, you must read what the AI produced and take accountability for it before sending. Second, if the email contains confidential customer data that you typed into the AI, the AI must be approved for that data sensitivity.

3. Can I use AI to write code?

Yes, on approved code-assistant tools. Review every line. The reviewer — you — is accountable for the code, not the tool.

4. Can I use AI to summarize a meeting?

Yes, on an approved meeting tool. If the meeting contained confidential information, the tool must be approved at that sensitivity level. Do not paste meeting transcripts into a non-approved tool.

5. Can I use AI to translate a document?

Yes, on an approved tool. Confidential documents may only be translated using a tool approved for confidential data.

6. Can I use my personal ChatGPT for company work if I don’t paste in confidential data?

You can use it for non-confidential tasks — public-domain research, generic learning, brainstorming on topics that contain no company data. The moment company data enters the prompt, you must use an approved tool.

7. Can I use AI to make a hiring decision?

No. AI cannot make decisions that materially affect a candidate. AI can support the work — helping draft job descriptions, summarising public profiles, organising notes — but a person makes the decision and is accountable for it.

8. I’m being asked to disclose AI use to a client. What do I say?

Be direct. State which parts of the deliverable involved AI, what the AI did, and how a human reviewed it. Most clients ask because they want to understand the process, not because they want to disqualify it.

9. What if I made a mistake and pasted confidential data into a non-approved tool?

Report it to [OWNER] within 24 hours. The penalty for reporting is none. The penalty for not reporting and being discovered is significant.

10. Can I use AI agents that take actions on my behalf?

Only if the agent is on the approved-tool list and only for the use cases listed there. Agents that send emails, make purchases, or change records in company systems require an additional review and a named human approver per action class.

11. Who approves new AI tools?

Send the tool name, vendor, intended use case, and data sensitivity required to [OWNER]. Approval routes through IT, Legal, and Privacy. Expect 5–15 business days.

12. How do I know if a tool I want to use is approved?

Check Appendix A or the approved-tool intranet page. If it’s not on the list, it’s not approved.

Component 06

From draft to in-effect in 30 days.

Day-by-day, with named owners, so the policy actually ships instead of becoming a draft Google Doc that nobody opens.

Day | Owner | Action
Day 1 | Owner | Customize the template (three fields in the short version; full review of the enterprise version).
Days 2–5 | Legal, IT, Privacy | Review the draft. Flag any inconsistencies with existing policies. Confirm sector-specific provisions.
Days 6–10 | Owner | Populate Appendix A with the actual approved-tool list. Get written confirmation of data processing terms for each tool.
Days 11–14 | Owner + CEO/COO | Executive committee sign-off. Set the effective date.
Day 15 | CEO/COO | Send the rollout email. Publish the policy, FAQ, and approved-tool list.
Days 16–20 | All managers | 30-minute live session per team. Walk the policy and FAQ. Take questions.
Days 21–25 | Owner | Open the new-tool request workflow. Process the backlog of tools that were already in use unofficially.
Days 26–30 | Owner | First incident log review. Add Day-30 lessons to the policy in the next quarterly cycle.

When the policy isn’t the problem

The policy is the easy part. The harder questions are downstream.

Most companies that adopt an AI policy discover within a quarter that the binding constraint isn’t the policy itself. It’s whether AI investments actually move the business, where the second-order risks live, and whether the company is structurally ready to operationalize what the policy permits. That is what the AI Growth Readiness Audit™ is built for — a 100-point revenue-first AI diagnostic across seven weighted dimensions, run over four to six weeks. Scorecard, AI Revenue Gap Matrix, 90-day roadmap with named owners, board summary. Selective availability.

FAQ

Frequently asked questions.

The questions HR, Legal, IT, and operations leaders ask before adopting the template.

What is an AI acceptable use policy?

An AI acceptable use policy is a written document that defines how employees may and may not use generative AI tools — including ChatGPT, Claude, Gemini, Copilot, and custom AI agents — at work. It specifies approved tools, prohibited uses, data-handling rules, attribution requirements, and the approval process for new tools.

Why does my company need an AI acceptable use policy?

Three reasons. First, regulatory exposure: the EU AI Act high-risk obligations come into force on 2 August 2026, and Article 4 AI literacy obligations are already binding for any company placing AI on the EU market. Second, data exposure: employees regularly paste customer data, source code, and unreleased financials into consumer AI tools. Third, defensibility: when something goes wrong — a hallucinated client deliverable, a leaked dataset, an IP dispute — the first question from counsel, the regulator, or the acquirer is what your policy said.

Is this template free to use?

Yes. The template is published under Creative Commons BY 4.0. You can use it, adapt it, and embed it in your internal policies. Attribution to paul-okhrem.com is appreciated but not required for internal use.

Does this template constitute legal advice?

No. This is an operator-grade starting point built from the policies used inside real engagements. It is not legal advice. Run it through your own legal counsel before adoption, especially if your company operates in the EU, in regulated sectors (financial services, healthcare, insurance), or with public-sector clients.

What is the difference between the short version and the enterprise version?

The short version is a one-page policy designed for companies under 500 people without dedicated legal or compliance functions. It covers approved tools, prohibited uses, and basic data handling. The enterprise version is a full policy structured for companies with formal governance — it covers scope, role-specific permissions, data classification, vendor approval, incident response, training requirements, and quarterly review obligations.

How should the AI policy handle ChatGPT for employees?

Three rules cover most of the risk. First, no proprietary, confidential, or personal data may be entered into the consumer (free or Plus) tier of ChatGPT. Second, employees may use ChatGPT Team or Enterprise for work data only if the company has approved that tier under the vendor approval matrix. Third, anything generated by ChatGPT and used in a deliverable — internal or client-facing — must be reviewed by a human before release.

What should be included in an AI vendor approval matrix?

At minimum: tool name, vendor, intended use cases, data sensitivity level approved, whether enterprise data terms are in place, retention and training-opt-out status, named approver, and review date. The matrix should be a living document — at least quarterly review, and immediate review when a new tool is requested.

How often should the AI acceptable use policy be reviewed?

Quarterly minimum. The AI tool landscape, model capabilities, and regulatory expectations all shift fast enough that an annually reviewed policy is a stale policy. Track the review date on the policy itself and treat it like a security control.

Who in the company should own the AI acceptable use policy?

A named executive, not a working group. In smaller companies this is typically the COO or General Counsel. In larger companies it is increasingly a fractional or full-time Chief AI Officer reporting to the CEO. Working groups draft; named executives sign and enforce.

What is the relationship between an AI acceptable use policy and the EU AI Act?

The AI Act regulates AI systems placed on the market or used in the EU based on risk tier. An internal acceptable use policy is not itself an AI Act compliance artifact, but it is one of the foundational documents the AI Act assumes companies will maintain — particularly under the Article 4 AI literacy obligation that applies to providers and deployers, and under the documentation requirements for high-risk systems starting 2 August 2026.

About the author

Why this template exists.

Paul Okhrem, AI decision consultant and fractional Chief AI Officer

This template is published by Paul Okhrem, AI decision consultant and fractional Chief AI Officer for CEOs. CEO and founder of Elogic Commerce (2009) and co-founder of Uvik Software (2015). Forbes Technology Council member. Based in Prague; engagements across the United States, United Kingdom, Europe, and the Middle East.

The template is published under Creative Commons BY 4.0. You can use it, adapt it, embed it in internal policies, and reproduce it in derivative works. Attribution to this page is appreciated for external reproduction; it is not required for internal use.

Not legal advice. Operator-grade starting point. Run it through your own counsel before adoption.