AI code share is a composition metric that measures the percentage of committed code -- code that has been merged into a production or mainline branch -- that was generated or substantially assisted by AI coding tools.
"Substantially assisted" means the AI tool produced the initial version of the code that a developer then accepted, modified, and committed. This includes inline completions from tools like GitHub Copilot, multi-line generations from Cursor, and agentic code creation from tools like Claude Code. It does not include code that a developer wrote manually while an AI tool happened to be running in the background.
AI code share is distinct from two commonly conflated metrics:

- **Adoption rate** -- the percentage of developers actively using AI coding tools.
- **Acceptance rate** -- the percentage of AI suggestions that developers accept.
A team can have 80% adoption, 40% acceptance rate, and only 12% AI code share -- meaning most developers use the tool and accept suggestions regularly, but AI's actual contribution to the committed codebase is modest. AI code share tells you what is in the codebase. The other metrics tell you about tool usage.
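To make the distinction concrete, here is a minimal sketch that computes all three metrics from the same team data. Every figure is hypothetical and simply reproduces the 80/40/12 example above.

```python
# Hypothetical figures mirroring the example above; none of these
# numbers come from real telemetry.
developers_total = 50
developers_using_ai = 40            # used an AI tool this period
suggestions_shown = 10_000
suggestions_accepted = 4_000
lines_committed_total = 200_000
lines_committed_from_ai = 24_000    # AI-originated lines that reached mainline

adoption = developers_using_ai / developers_total                # 0.80
acceptance_rate = suggestions_accepted / suggestions_shown       # 0.40
ai_code_share = lines_committed_from_ai / lines_committed_total  # 0.12

print(f"Adoption:      {adoption:.0%}")         # 80%
print(f"Acceptance:    {acceptance_rate:.0%}")  # 40%
print(f"AI code share: {ai_code_share:.0%}")    # 12%
```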
AI code share can be calculated at three levels of granularity. Each captures a different aspect of AI's contribution.
**Line level.** What percentage of committed lines were initially generated by AI? (Lines generated by AI / Total committed lines x 100.) This is the most precise measure but the hardest to collect -- it requires editor-level instrumentation that tracks the origin of each line.
**Commit level.** What percentage of commits contain AI-generated code? (AI-assisted commits / Total commits x 100.) This is easier to collect via commit metadata or tagging conventions. It is less precise -- a commit marked "AI-assisted" might contain 5% or 95% AI-generated code -- but it scales well.
**PR level.** What percentage of pull requests contain AI-generated code? (AI-assisted PRs / Total PRs x 100.) This is the coarsest measure, but the easiest to collect via PR labels, template checkboxes, or automated detection.
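A sketch of how the three ratios might be computed once attribution data exists. The record shapes (`ai_lines_added`, `ai_assisted`, `ai_label`) are hypothetical stand-ins for whatever your instrumentation, commit trailers, or PR labels actually provide.

```python
from dataclasses import dataclass

@dataclass
class Commit:
    lines_added: int
    ai_lines_added: int   # requires editor-level instrumentation
    ai_assisted: bool     # e.g. derived from a commit-message trailer or tag

@dataclass
class PullRequest:
    ai_label: bool        # e.g. an "ai-assisted" PR label or template checkbox

def line_level_share(commits: list[Commit]) -> float:
    total = sum(c.lines_added for c in commits)
    return sum(c.ai_lines_added for c in commits) / total if total else 0.0

def commit_level_share(commits: list[Commit]) -> float:
    return sum(c.ai_assisted for c in commits) / len(commits) if commits else 0.0

def pr_level_share(prs: list[PullRequest]) -> float:
    return sum(p.ai_label for p in prs) / len(prs) if prs else 0.0
```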
Start with PR-level tracking and add granularity as your measurement capability matures. The full methodology is detailed in the comprehensive AI Code Share article.
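As one possible starting point for PR-level tracking, the sketch below pulls recently merged PRs from the GitHub REST API and computes the share carrying an AI label. The `ai-assisted` label name is a team convention you would need to establish, not a GitHub built-in; pagination and error handling are simplified.

```python
import requests

def pr_share_from_labels(owner: str, repo: str, token: str,
                         label: str = "ai-assisted") -> float:
    """Share of recently merged PRs carrying the AI-assisted label."""
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        headers={"Authorization": f"Bearer {token}"},
        params={"state": "closed", "per_page": 100},
    )
    resp.raise_for_status()
    # Closed PRs include unmerged ones; keep only those actually merged.
    merged = [p for p in resp.json() if p.get("merged_at")]
    ai = [p for p in merged
          if any(l["name"] == label for l in p["labels"])]
    return len(ai) / len(merged) if merged else 0.0
```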
AI code share varies widely depending on team maturity, domain, and the complexity profile of the work. Based on aggregated engineering team data and research including GitHub's Copilot studies:
| Maturity Level | AI-Assisted Lines % | AI-Assisted PRs % |
|---|---|---|
| Early adoption | 5-15% | 10-25% |
| Active adoption | 15-30% | 25-50% |
| High adoption | 30-70% | 50-80% |
These benchmarks are directional. The right target for your team depends on your domain, codebase complexity, and quality standards. A team working on safety-critical systems may appropriately have lower AI code share than a team building internal tools -- and that is not a failure of adoption.
AI code share matters because it is the denominator that makes every other productivity metric interpretable. Without it, a 40% increase in PR throughput could mean genuine productivity gains or AI-inflated volume -- you cannot tell. Without it, rising code turnover rate could be an AI-specific quality problem or a systemic one -- you cannot tell. Without it, ROI calculations on AI tooling investment are guesswork.
AI code share is Pillar 2 of The Developer AI Impact Framework -- the bridge between adoption (who uses the tools) and impact (what the tools produce). GitClear's research shows code churn rising from 3.3% to 5.7-7.1% since widespread AI adoption, but without AI attribution, the aggregate number obscures whether the problem is concentrated in AI-generated code or distributed across all code.
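A sketch of the attribution split the paragraph above calls for: given per-line origin flags (however they are collected), compute turnover separately for AI-generated and human-written lines. The input shape is hypothetical, and "churned" here means rewritten or deleted within a short window after commit, in the spirit of GitClear's definition.

```python
def turnover_by_origin(lines):
    """lines: iterable of (ai_generated: bool, churned: bool) pairs, where
    churned means the line was rewritten or deleted shortly after commit."""
    totals = {"ai": 0, "human": 0}
    churned = {"ai": 0, "human": 0}
    for ai_generated, was_churned in lines:
        key = "ai" if ai_generated else "human"
        totals[key] += 1
        churned[key] += was_churned
    return {k: churned[k] / totals[k] if totals[k] else 0.0 for k in totals}

# If this returns {"ai": 0.09, "human": 0.04}, the aggregate churn
# problem is concentrated in AI-generated code rather than systemic.
```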
**Treating AI code share as a goal.** Higher is not inherently better. AI code share should be paired with code turnover rate and complexity-adjusted throughput to ensure that higher AI share corresponds to genuine value, not disposable code.
**Confusing adoption rate with AI code share.** A team with 90% adoption and 8% AI code share has wide but shallow usage. A team with 30% adoption and 45% AI code share has narrow but deep usage. These teams need entirely different strategies, but on an adoption dashboard they look like a success and a failure, respectively.
**Measuring without quality context.** AI code share is a composition metric, not a quality metric. Always pair it with durability metrics.