[COMPANY] — Generative AI Acceptable Use Policy (Enterprise Version)
Version: 1.0. Effective: [DATE]. Owner: [OWNER]. Reviewer: [LEGAL/PRIVACY]. Next review: [DATE + 90 DAYS].
1. Purpose and scope
1.1 This policy governs the use of generative artificial intelligence systems (“AI tools”) by [COMPANY] personnel (“Users”), including full-time and part-time employees, contractors, consultants, interns, and authorized third parties acting on [COMPANY]’s behalf.
1.2 This policy applies to all generative AI tools, whether (a) procured and provided by [COMPANY]; (b) accessed by Users on their own; or (c) integrated into third-party products and services used by [COMPANY]. It applies to use of AI tools on [COMPANY] data, on [COMPANY] systems, or in the course of [COMPANY] business, regardless of device.
1.3 This policy operates alongside, and does not replace, [COMPANY]’s existing policies on data classification, information security, vendor management, intellectual property, code of conduct, anti-discrimination, and incident response. Where this policy is silent, the relevant existing policy governs.
2. Ownership and governance
2.1 The owner of this policy is [OWNER]. The owner is accountable to the executive committee for policy maintenance, enforcement, and quarterly review.
2.2 The reviewing functions are [LEGAL], [IT/INFOSEC], and [PRIVACY/DPO]. Material changes to this policy require sign-off from all three plus the owner.
2.3 An AI Review Committee, chaired by the owner, meets at least quarterly to review (a) the approved-tool list, (b) the incident log, (c) regulatory developments, and (d) any pending new-tool requests.
3. Definitions
3.1 AI tool: any software system or service that uses machine learning to generate text, code, images, audio, video, or structured data in response to a user prompt or input. Includes large language models, image generators, code assistants, voice synthesis tools, agentic systems, and any third-party product that wraps such systems.
3.2 Confidential data: data classified as [COMPANY]’s Confidential, Restricted, or Highly Restricted under the [COMPANY] data classification policy. Includes customer personal data, employee data, source code, unreleased financial information, M&A material, legal communications, and any data subject to contractual confidentiality obligations.
3.3 Approved AI tool: an AI tool that has been reviewed by IT, Legal, and Privacy and added to the approved-tool list (Appendix A), at the data-sensitivity level for which it was approved.
3.4 Materially used: an AI tool is materially used in producing a deliverable where it performed work that would otherwise have taken a person 15 minutes or more.
4. Approved tools and data sensitivity levels
4.1 Only approved AI tools may be used on [COMPANY] data. The approved-tool list is maintained at Appendix A and reviewed at least quarterly.
4.2 Each approved tool has an assigned maximum data sensitivity level. Users may not enter data at a sensitivity level above the tool’s approval. For example, a tool approved for Internal data may not be used with Confidential data.
4.3 The consumer or free tier of a tool is a separate tool from the enterprise or team tier of the same vendor. Approval of the enterprise tier does not approve the consumer tier.
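For teams that enforce Section 4.2 programmatically (for example in a data loss prevention hook), the rule reduces to an ordered comparison: the data's sensitivity may not exceed the tool's approved maximum. A minimal Python sketch, assuming a hypothetical five-level ladder; the level names should be replaced with those in the [COMPANY] data classification policy:

```python
from enum import IntEnum

# Hypothetical sensitivity ladder; mirror the levels named in the
# [COMPANY] data classification policy (Section 3.2).
class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3
    HIGHLY_RESTRICTED = 4

def may_use(tool_max: Sensitivity, data: Sensitivity) -> bool:
    """Section 4.2: data may not exceed the tool's approved level."""
    return data <= tool_max

# A tool approved for Internal data may not see Confidential data.
assert may_use(Sensitivity.INTERNAL, Sensitivity.INTERNAL)
assert not may_use(Sensitivity.INTERNAL, Sensitivity.CONFIDENTIAL)
```

Because the levels form a strict order, one integer comparison covers every pairing; adding a level means extending the enum, not rewriting the check.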
5. Prohibited uses
Users may not:
5.1 Enter Confidential, Restricted, or Highly Restricted data into any AI tool that is not approved for that sensitivity level.
5.2 Use AI to produce client-facing deliverables without (a) human review and sign-off, and (b) disclosure to the client where the client has requested disclosure or where the engagement contract requires it.
5.3 Use AI to make decisions that materially affect a customer, employee, or candidate — including hiring decisions, credit decisions, pricing decisions, performance decisions, or compliance decisions — without documented human review and sign-off by a person accountable for the decision.
5.4 Use AI to impersonate any person, real or composite, in a way that could mislead the recipient about the identity of the speaker.
5.5 Use AI to generate content that violates [COMPANY]’s code of conduct, anti-harassment, anti-discrimination, or non-disclosure policies.
5.6 Use AI to circumvent any control, approval, or review process that would otherwise apply to the underlying activity.
5.7 Train, fine-tune, or otherwise improve an external AI model using [COMPANY] Confidential or Restricted data unless that training is governed by a written agreement reviewed by Legal.
6. Required practices
6.1 Human review and accountability. Every AI-generated output used in a deliverable — internal or external — must be reviewed by a human. Accountability for the output rests with the human reviewer, not with the tool, the vendor, or the model.
6.2 Attribution. Where AI was materially used in producing a deliverable, the use must be noted in the deliverable, in a form proportionate to the deliverable (e.g. a footnote in a report, a commit message in source code, a paragraph in a board memo).
6.3 Data minimization. Enter into an AI tool only the data necessary to obtain the output. Anonymize, redact, or summarize where the underlying purpose can be served by less-sensitive input.
6.4 Verification. Treat AI output as unverified until a human has confirmed it against a trustworthy source. This applies particularly to (a) factual claims, (b) citations and references, (c) calculations, (d) legal interpretations, and (e) code that affects production systems.
6.5 Bias and fairness. Where AI output will affect a person — customer, employee, candidate, supplier — the reviewer must consider whether the output reflects bias and, if so, must adjust or override the output before release.
7. Vendor approval matrix
7.1 New tool requests are submitted via the [COMPANY] tool request workflow. The request includes: tool name, vendor, intended use cases, data sensitivity required, requested team, and business case.
7.2 The vendor approval matrix at Appendix A documents, for each approved tool: vendor, contract status, data processing terms in place (yes / no), training and retention opt-out status, maximum data sensitivity level approved, approved use cases, named approver, approval date, and review date.
7.3 Approval requires (a) executed data processing terms acceptable to Legal, (b) a documented retention and training-opt-out posture, (c) IT integration review where the tool processes Confidential data, and (d) a named owner inside the requesting team.
7.4 Approval is granted for a specific tier of a tool. Re-approval is required for new tiers, new features that materially change data handling, or material changes in the vendor’s terms.
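Appendix A can be maintained as structured data rather than a spreadsheet, which makes the quarterly review (Section 2.3) easy to automate. A minimal Python sketch of one matrix row, using the fields listed in Section 7.2; the class and field names are illustrative, not mandated by this policy:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record capturing the Appendix A fields listed in
# Section 7.2. Approval is per tier (Section 7.4), so the tier is
# part of the record, not an attribute of the vendor.
@dataclass
class ToolApproval:
    tool_name: str
    tier: str                     # e.g. "Enterprise"; consumer tier is a separate record
    vendor: str
    contract_status: str
    dpa_in_place: bool            # data processing terms in place (yes / no)
    training_opt_out: bool        # training and retention opt-out status
    max_sensitivity: str          # e.g. "Internal", "Confidential"
    approved_use_cases: list[str]
    named_approver: str
    approval_date: date
    review_date: date

    def review_due(self, today: date) -> bool:
        """Flag rows whose review date has passed for the quarterly review."""
        return today >= self.review_date
```

Keeping one record per tier enforces Section 4.3 by construction: approving the enterprise tier creates no row for the consumer tier.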
8. Incident response
8.1 A Policy Incident is any of: (a) entry of Confidential data into a non-approved AI tool; (b) release of AI-generated work without human review; (c) use of AI in a prohibited use case (Section 5); (d) discovery that an approved tool no longer meets approval conditions.
8.2 Users must report suspected Policy Incidents to the owner within 24 hours of discovery. Reports are made without prejudice: there is no penalty for reporting in good faith, but failing to report a known incident is subject to [COMPANY]’s disciplinary policy.
8.3 Material incidents — those involving Restricted or Highly Restricted data, customer impact, regulatory exposure, or external disclosure — are escalated to the executive committee and to Legal within one business day.
8.4 The owner maintains an incident log, reviewed quarterly by the AI Review Committee. Patterns in the log drive policy updates.
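Teams that log incidents in a ticketing system may find it useful to encode the Section 8.1 taxonomy and the Section 8.3 materiality test directly, so that escalation is triggered consistently. A minimal Python sketch; the enum and function names are illustrative, not mandated by this policy:

```python
from enum import Enum

# Hypothetical taxonomy mirroring Section 8.1 (a)-(d).
class IncidentType(Enum):
    UNAPPROVED_TOOL_CONFIDENTIAL_DATA = "8.1(a)"
    UNREVIEWED_RELEASE = "8.1(b)"
    PROHIBITED_USE = "8.1(c)"
    TOOL_NO_LONGER_COMPLIANT = "8.1(d)"

def is_material(restricted_data: bool, customer_impact: bool,
                regulatory_exposure: bool, external_disclosure: bool) -> bool:
    """Section 8.3: any one factor makes the incident material,
    triggering escalation within one business day."""
    return any([restricted_data, customer_impact,
                regulatory_exposure, external_disclosure])
```

Encoding the four materiality factors as explicit flags keeps the escalation decision auditable: the logged flags show why an incident was, or was not, escalated.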
9. Training and AI literacy
9.1 All Users must complete [COMPANY]’s AI literacy training within 30 days of joining and annually thereafter. New Users may not use AI tools on [COMPANY] data until they have completed the training.
9.2 The training covers: this policy, the approved-tool list, the data classification policy, the prohibited uses, the incident response process, and sector-specific AI risk topics relevant to the User’s role.
9.3 Where required by applicable law — including but not limited to Article 4 of the EU AI Act — training documentation is retained for the period required by that law.
10. Sector-specific provisions
10.1 Where [COMPANY] operates in a regulated sector — including financial services, healthcare, insurance, pharmaceuticals, or public sector — additional sector-specific provisions are documented at Appendix B. Sector-specific provisions take precedence over this policy where they are more restrictive.
11. Review and amendment
11.1 This policy is reviewed at least quarterly by the owner, in consultation with Legal, IT, and Privacy. Material changes are approved by the executive committee.
11.2 The current version and effective date are recorded at the top of this document. Superseded versions are retained in the [COMPANY] policy register.
Appendix A — Approved Tools Matrix (template fields listed at Section 7.2)
Appendix B — Sector-Specific Provisions (customize to the applicable sector)