Policy Templates

Ready-to-use acceptable use policies, security baselines, and procurement checklists for AI coding tools

Read time: 12 min

title: "Policy Templates" description: "Ready-to-use acceptable use policies, security baselines, and procurement checklists for AI coding tools" section: "Adoption" readTime: "12 min"

Policy Templates

Before rolling out AI coding tools, your organization needs written policies. These templates are starting points — adapt them to your security posture, industry regulations, and legal requirements. Have your legal and security teams review before publishing.


Template 1: Acceptable Use Policy

# AI Coding Tools — Acceptable Use Policy
Version: 1.0 | Effective: [DATE] | Owner: [SECURITY TEAM]
 
## Purpose
This policy governs the use of AI-assisted coding tools (including but not limited to 
GitHub Copilot, Claude Code, and Cursor) by engineering staff.
 
## Permitted Uses
- Code completion and generation for company projects
- Code review, refactoring, and debugging assistance
- Documentation and test generation
- Learning and skills development
 
## Prohibited Uses
- Sending customer PII, PHI, or financial data to AI APIs
- Sharing proprietary algorithms classified as trade secrets
- Bypassing code review requirements by passing off AI output as human-reviewed
- Using personal AI tool accounts for company work (use company-provisioned accounts)
- Submitting AI-generated code to open source projects without disclosure per project norms
 
## Data Classification
| Data Type | May Send to AI? |
|---|---|
| Public documentation | Yes |
| Internal code (no trade secrets) | Yes, with approved tools |
| Customer PII / PHI | No |
| Financial data | No |
| Security credentials / keys | No |
| Trade-secret algorithms | No — escalate to security team |
 
## Code Review Requirements
AI-generated code is subject to the same review requirements as human-written code. 
Engineers are responsible for understanding and verifying all code they commit, 
regardless of how it was generated.
 
## Security Requirements
- All secrets must be in approved secrets managers, never in prompts (see the runtime-lookup sketch after this template)
- Report suspected prompt injection incidents to security@[company].com
- Report AI-suggested code that appears malicious to security@[company].com
 
## Compliance
For regulated workloads (SOC 2, HIPAA, PCI-DSS, ISO 27001), consult the security team 
before using AI tools. Additional data residency and audit requirements may apply.
 
## Violations
Violations of this policy may result in access revocation and disciplinary action 
per the standard HR process.
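
One way to honor the "never in prompts" rule above is to resolve secrets from the approved manager at runtime, so no literal value appears in code, prompts, or shell history. A minimal sketch, assuming AWS Secrets Manager and the AWS CLI; the secret name prod/db-password is a hypothetical placeholder, and the same pattern applies to whichever manager your organization has approved.

```bash
# Fetch a database credential from the approved manager at deploy time,
# instead of hardcoding it or pasting it into an AI prompt.
# "prod/db-password" is a hypothetical secret name; use your own scheme.
DB_PASSWORD="$(aws secretsmanager get-secret-value \
  --secret-id prod/db-password \
  --query SecretString \
  --output text)"
export DB_PASSWORD   # the app reads it from the environment; nothing is committed
```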

Template 2: Security Baseline Checklist

# AI Coding Tools Security Baseline
Complete before granting team access.
 
## Pre-Deployment
- [ ] Vendor DPA (Data Processing Agreement) reviewed and signed
- [ ] Data residency requirements confirmed (EU? US only?)
- [ ] Enterprise plan evaluated for code telemetry opt-out
- [ ] API key management via secrets manager (not .env files in repos)
- [ ] Network policy updated if outbound API calls need allowlisting
 
## IDE / Tool Configuration
- [ ] Copilot: "Allow GitHub to use my code snippets for product improvements" → DISABLED
  (Settings → Copilot → Privacy)
- [ ] Claude Code: Review ~/.claude/settings.json for any telemetry settings (spot-checked in the setup sketch after this checklist)
- [ ] Cursor: Disable "Codebase indexing" if code sensitivity requires it
  (Cursor Settings → Privacy → Indexing)
 
## Repository Hardening  
- [ ] .gitignore includes: .env, *.key, *.pem, secrets.*, credentials.*
- [ ] git-secrets or detect-secrets pre-commit hook installed (see the setup sketch after this checklist)
- [ ] Branch protection rules prevent direct push to main
- [ ] CLAUDE.md / copilot-instructions committed with no-secrets guidance
 
## Ongoing
- [ ] Monthly review of API usage logs for anomalies
- [ ] Quarterly policy review for new tool features that may change data exposure
- [ ] Annual penetration test includes AI tool integrations
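
Most of the IDE-configuration and repository-hardening items above can be verified or applied with a short script. A minimal sketch, assuming jq, awslabs/git-secrets, and Yelp's detect-secrets are installed; the jq filter over ~/.claude/settings.json is illustrative only, since telemetry-related key names vary by Claude Code version.

```bash
#!/usr/bin/env bash
# Sketch of the baseline checklist above; run from the repository root.
set -euo pipefail

# 1. IDE spot-check: surface telemetry-related keys in the Claude Code
#    settings file. Key names vary by version, so review the output manually.
if [ -f "$HOME/.claude/settings.json" ]; then
  jq 'to_entries | map(select(.key | test("telemetry"; "i")))' \
    "$HOME/.claude/settings.json"
fi

# 2. Append the checklist's .gitignore patterns if they are missing.
for pattern in '.env' '*.key' '*.pem' 'secrets.*' 'credentials.*'; do
  grep -qxF "$pattern" .gitignore 2>/dev/null || echo "$pattern" >> .gitignore
done

# 3. Install git-secrets hooks plus its built-in AWS credential patterns.
git secrets --install --force
git secrets --register-aws

# 4. Record a detect-secrets baseline so future scans flag only new findings.
detect-secrets scan > .secrets.baseline
```

Branch protection for main still has to be configured on the hosting platform; the script covers only the local repository.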

Template 3: Procurement Checklist

Use this when evaluating AI coding tool vendors:

# AI Tool Procurement Evaluation Checklist
 
## Data Privacy
- [ ] Does the vendor train on customer code? (The answer must be no for the enterprise tier)
- [ ] Where is data processed? (Region / data residency)
- [ ] Is there a Data Processing Agreement available?
- [ ] What is the data retention period for prompts and completions?
- [ ] Can we audit what data was sent to the vendor?
 
## Security
- [ ] SOC 2 Type II report available?
- [ ] Pen test results available under NDA?
- [ ] SSO / SAML support for identity management?
- [ ] Role-based access control (admin vs. user)?
- [ ] Audit logs available for admin review?
 
## Compliance
- [ ] GDPR compliant (if EU data involved)?
- [ ] HIPAA BAA available (if healthcare data)?
- [ ] FedRAMP authorized (if US government work)?
- [ ] ISO 27001 certified?
 
## Operability
- [ ] API available for CI/CD integration?
- [ ] Offline / self-hosted deployment option available?
- [ ] SLA uptime guarantee?
- [ ] Support coverage during your business hours?
 
## Commercial
- [ ] Per-seat vs. usage-based pricing?
- [ ] Volume discounts available?
- [ ] Contract term flexibility (monthly vs. annual)?
- [ ] Exit / data portability terms?
 
## Vendor Health
- [ ] Funding / runway transparency?
- [ ] Support channel response time?
- [ ] Roadmap published and aligned with our needs?

Template 4: Incident Response Procedure

# AI Tool Security Incident Response
 
## Trigger Conditions
Invoke this procedure if:
1. AI tool suggests code containing a hardcoded secret
2. AI output appears to include another organization's proprietary code
3. Suspected prompt injection in code comments or AI outputs
4. AI tool account credentials compromised
 
## Response Steps
 
### Immediate (within 1 hour)
1. Revoke the affected API key / access token immediately
2. Scan recently committed code for AI-suggested content that may be affected
3. Notify security@[company].com
 
### Investigation (within 24 hours)
1. Identify all repositories where the affected tool was used in the last 30 days
2. Run a full-history secret scan in each of those repos: `git secrets --scan-history`
3. Check vendor's audit logs if available
 
### Remediation
1. Rotate any exposed credentials (assume they are compromised)
2. If secrets were committed, rewrite history to remove them, preferably with git filter-repo (git filter-branch is deprecated); see the sketch after this template
3. Force-push the rewritten history with `git push --force`, coordinating with the team, since every clone must be re-cloned afterwards
 
### Post-Incident
1. Root cause analysis: how did AI suggest this content?
2. Update CLAUDE.md / copilot-instructions to prevent recurrence
3. Notify affected parties if customer data was involved (legal/compliance review)
4. Publish internal post-mortem within 5 business days
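
The history-scan and history-rewrite steps above can be sketched as follows, assuming awslabs/git-secrets and git filter-repo are installed; replacements.txt and the remote URL are hypothetical placeholders. Note that branch protection on main will need to be lifted temporarily for the force-push.

```bash
# Run in a fresh clone: git filter-repo refuses to rewrite a working repo
# unless it looks freshly cloned (or you pass --force).

# 1. Scan the full history for known credential patterns.
git secrets --scan-history

# 2. Strip leaked literals from all history. replacements.txt maps each
#    leaked value to a redaction marker, one per line, e.g.:
#      hunter2==>***REMOVED***
git filter-repo --replace-text replacements.txt

# 3. filter-repo removes the 'origin' remote as a safety measure; re-add it
#    (hypothetical URL below), then force-push the rewritten history.
#    Every collaborator must re-clone afterwards.
git remote add origin git@github.com:example/repo.git
git push --force origin main

# 4. Rotate the credential regardless: rewriting history does not revoke it.
```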

Adapting These Templates

These templates are starting points. Before publishing, have them reviewed by:

  • Your legal team (employment law, IP ownership of AI output)
  • Your security team (alignment with existing security policies)
  • HR (disciplinary language)

Industry-specific additions:

  • Healthcare (HIPAA): Add explicit PHI prohibition examples; require BAA with vendor
  • Finance (PCI-DSS): Prohibit cardholder data in prompts; require audit trail
  • Government (FedRAMP): Require FedRAMP-authorized deployments; no commercial cloud APIs
  • Legal/IP-sensitive: Add section on AI output IP ownership per your jurisdiction