Team Rollout Playbook

Phased rollout plan for introducing AI coding tools to an engineering team — training, metrics, and change management

Read time: 15 min

Rolling out AI coding tools to a team requires more than just installing software. Without structure, you get uneven adoption, security exposure, and frustration from engineers who don't know how to use AI effectively. This playbook gives you a phased approach.

Phase Overview

| Phase | Duration | Goal |
| --- | --- | --- |
| Phase 1: Pilot | Weeks 1–2 | Prove value with 3–5 early adopters |
| Phase 2: Foundation | Weeks 3–4 | Establish shared config, security rules, training |
| Phase 3: Rollout | Weeks 5–8 | Expand to full team with support structures |
| Phase 4: Optimize | Ongoing | Measure, refine, share learnings |

Phase 1: Pilot (Weeks 1–2)

Select Your Pilot Group

Choose 3–5 engineers who are:

  • Curious and self-directed (not just the most senior)
  • Working on a project with clear deliverables (so you can measure output)
  • Willing to share what's working and what isn't

Tool Selection Decision

Before Phase 1 starts, decide which tool to pilot. Don't pilot all three simultaneously.

| Team Profile | Recommended Start |
| --- | --- |
| VS Code-heavy team, GitHub org | GitHub Copilot |
| Proprietary/sensitive code, CLI preference | Claude Code |
| Mixed editors, heavy AI chat use | Cursor |

Week 1 Goals

  • All pilot members have accounts and have completed setup
  • Each person has run at least 3 AI-assisted tasks
  • Weekly 30-min sync scheduled to share findings

Pilot Success Criteria

At end of Week 2, assess:

  • At least 2 pilot members use the tool daily unprompted
  • At least 1 concrete time-saving example documented
  • No security incidents or policy violations
  • Common friction points identified

Phase 2: Foundation (Weeks 3–4)

Shared Configuration

Create a repository of shared config that all engineers will use:

For Claude Code teams:

# CLAUDE.md (commit to every repo)
## Team Conventions
- Use TypeScript strict mode
- All API endpoints require input validation with Zod
- Never log request bodies or user PII
- Commit convention: conventional commits (feat/fix/chore/docs)

For Copilot teams:

# .github/copilot-instructions.md
Shared instructions committed to the repo; see the .github/copilot-instructions.md guide

For Cursor teams:

# .cursorrules (or .cursor/rules/)
Shared rules committed to repo root
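The commit convention listed in the CLAUDE.md example can be enforced mechanically, for instance in a CI check or commit-msg hook. A minimal sketch in TypeScript, assuming the four types from the example above; scope support in parentheses is an assumption on our part:

```typescript
// Hypothetical check for the team's conventional-commit rule.
// Types (feat/fix/chore/docs) come from the CLAUDE.md example; the
// optional "(scope)" segment is an assumed extension.
const COMMIT_RE = /^(feat|fix|chore|docs)(\([\w-]+\))?: .+/;

function isConventionalCommit(message: string): boolean {
  // Only the first line (the subject) is validated.
  return COMMIT_RE.test(message.split("\n")[0]);
}

console.log(isConventionalCommit("feat(auth): add login endpoint")); // true
console.log(isConventionalCommit("updated stuff"));                  // false
```

Wiring this into a commit-msg hook gives immediate feedback, which matters more during a rollout than strict enforcement.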

Security Baseline

Before expanding beyond the pilot, establish:

  1. Data classification: Define which code can go to AI APIs, and confirm your IP and PII concerns are addressed
  2. Secrets policy: Confirm .gitignore covers all secrets; configure git-secrets or similar
  3. Prompt injection awareness: Brief on what prompt injection looks like in code comments
  4. API key management: All keys through secrets manager, never in code

See Security Hardening for the full checklist.
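The secrets policy in step 2 can be backed by a lightweight scan even before git-secrets is rolled out everywhere. A sketch of the idea, with illustrative (not exhaustive) patterns; the function name and pattern list are our own, not from any particular tool:

```typescript
// Minimal secret scan over staged content. Patterns are illustrative
// examples only; a real policy should use git-secrets or a dedicated
// scanner with a maintained ruleset.
const SECRET_PATTERNS: [string, RegExp][] = [
  ["AWS access key", /AKIA[0-9A-Z]{16}/],
  ["Private key header", /-----BEGIN (RSA |EC )?PRIVATE KEY-----/],
  ["Hard-coded API key", /api[_-]?key\s*[:=]\s*['"][A-Za-z0-9]{20,}['"]/i],
];

function findSecrets(content: string): string[] {
  // Return the names of all patterns that match the given content.
  return SECRET_PATTERNS
    .filter(([, re]) => re.test(content))
    .map(([name]) => name);
}

// A staged diff containing a hard-coded key should be flagged:
console.log(findSecrets('const apiKey = "abcd1234abcd1234abcd1234";'));
```

Running a check like this in a pre-commit hook catches the most common leak before an AI-assisted commit lands.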

Training Plan

| Audience | Format | Duration | Topics |
| --- | --- | --- | --- |
| All engineers | Workshop | 2 hours | Setup, basic prompting, safety, what AI can/can't do |
| Senior engineers | Deep-dive | 3 hours | Advanced workflows, CLAUDE.md design, parallelization |
| Tech leads | Leadership session | 1 hour | Policy, ROI measurement, risk management |

Workshop agenda template:

0:00 — Why AI coding tools (concrete examples from pilot)
0:20 — Demo: live feature build with AI assistance
0:50 — Hands-on: each person tries on their own code
1:20 — Common mistakes and how to avoid them
1:40 — Team Q&A and setup support

Phase 3: Full Rollout (Weeks 5–8)

Rollout Checklist

Support Structures

AI Champion Network: Designate 1 champion per team/squad. Their job: answer questions, share tips, escalate issues. Rotate quarterly to spread knowledge.

Weekly Tips Digest:

  • Each week, two team members share their best AI prompt/workflow
  • Keeps adoption high and surfaces tricks others haven't discovered
  • Takes 10 minutes to prepare, high value

Failure Post-Mortems:

  • When AI produces wrong/harmful output, document it
  • Share the failure mode and how to avoid it
  • Builds institutional knowledge faster than success stories

Phase 4: Optimize (Ongoing)

Measure What Matters

Track these weekly (see Measuring ROI for detail):

  • PR cycle time: Time from PR open to merge (should decrease)
  • Completion acceptance rate: Copilot telemetry; target > 30%
  • Test coverage trend: AI should help increase coverage
  • AI-generated churn rate: % of AI lines modified within 7 days (signal of quality)
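Two of the metrics above are simple to compute once you export the data. A sketch, assuming PR timestamps and per-line provenance are available from your Git host; the field names are hypothetical:

```typescript
// Hypothetical export shapes; adapt to your Git host's API.
interface PullRequest { openedAt: Date; mergedAt: Date; }
interface AiLine { landedAt: Date; modifiedAt?: Date; }

// PR cycle time: median hours from open to merge.
function medianCycleTimeHours(prs: PullRequest[]): number {
  const hours = prs
    .map(pr => (pr.mergedAt.getTime() - pr.openedAt.getTime()) / 3_600_000)
    .sort((a, b) => a - b);
  const mid = Math.floor(hours.length / 2);
  return hours.length % 2 ? hours[mid] : (hours[mid - 1] + hours[mid]) / 2;
}

// Churn rate: share of AI-generated lines modified within 7 days of landing.
function aiChurnRate(lines: AiLine[]): number {
  const churned = lines.filter(
    l => l.modifiedAt &&
      l.modifiedAt.getTime() - l.landedAt.getTime() < 7 * 24 * 3_600_000,
  ).length;
  return lines.length ? churned / lines.length : 0;
}
```

Median is preferred over mean for cycle time because a single long-lived PR would otherwise dominate the weekly number.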

Quarterly Reviews

Every quarter:

  1. Survey engineers: satisfaction, friction, perceived time savings
  2. Review security incidents (hopefully zero)
  3. Update shared config based on what the team has learned
  4. Benchmark against new model releases

Change Management Tips

"Augmentation, not replacement" framing: Engineers who fear job displacement adopt poorly. Frame AI as handling the boring parts so they can focus on the interesting problems.

Address the "cheating" concern: Some engineers feel AI assistance is "cheating." Normalize it by comparing to using Stack Overflow, documentation, or autocomplete — all normal tools.

Senior engineer buy-in first: Junior engineers will follow. If senior engineers are skeptical and vocal about it, adoption stalls. Get them use-case wins early.

Don't mandate AI for every task: AI works well for some tasks and poorly for others. Engineers who feel forced to use it for everything will resent it. Let them use judgment.