TL;DR
- AI Code Share is the percentage of committed code that was generated or substantially assisted by AI tools, measured at the line, commit, and PR level.
- The industry average for AI-assisted lines sits between 15% and 25%. Top-quartile teams reach 40-60%. Block reports that approximately 95% of their engineers regularly use AI to assist development, with their most intensive users achieving very high AI-assisted code rates.
- Adoption rate tells you who uses AI tools. AI Code Share tells you how much of your codebase AI actually produced. A team can have 70% weekly active usage but only 10% AI code share -- meaning adoption is wide but usage is shallow.
- AI Code Share without quality data is a vanity metric. High code share with no tracking of code turnover, defect rate, or review depth creates a blind spot, not an achievement.
What Is AI Code Share?
AI Code Share is the percentage of committed code in your codebase that was generated or substantially assisted by AI coding tools -- measured at the line, commit, or pull request level.
"Substantially assisted" means the AI tool produced the initial version of the code that a developer then accepted, edited, and committed. This includes inline completions from tools like GitHub Copilot, multi-line generations from Cursor, and agentic code creation from Claude Code. It does not include code that a developer wrote manually while an AI tool happened to be open in the background.
AI Code Share is a composition metric. It answers a specific question: of all the code your team shipped this week, what fraction originated from an AI tool? This is distinct from adoption rate (how many developers use AI tools) and from acceptance rate (how often developers accept AI suggestions). Adoption tells you about tool access. Acceptance tells you about suggestion quality. AI Code Share tells you about your codebase.
Why AI Code Share Matters
Most engineering organizations measure AI tool adoption and stop there. They know 65% of their developers used Copilot this month. They declare the AI rollout a success. But adoption is a usage metric, not an impact metric.
Adoption tells you who is using AI tools. AI Code Share tells you how much of your codebase AI actually produced.
Consider two teams:
- Team A: 70% weekly active usage, 10% AI code share. Most developers have the tool installed and occasionally accept a suggestion. AI's contribution to the codebase is marginal. This is shallow adoption -- high visibility, low impact.
- Team B: 40% weekly active usage, 50% AI code share. Fewer developers use the tool, but the ones who do are power users generating substantial portions of their output with AI assistance. This is deep adoption -- lower visibility, high impact.
Without AI Code Share, these two teams look identical on adoption dashboards. With it, the difference is stark -- and the implications for enablement strategy, licensing spend, and productivity measurement are entirely different.
There is a second, equally important reason AI Code Share matters: you cannot correctly interpret your velocity or quality metrics without it. If your team's PR throughput increased 40% last quarter, that number means something very different depending on whether 15% or 60% of those PRs were AI-assisted. If your code turnover rate is climbing, you need to know whether the churn is concentrated in AI-generated code or human-written code. AI Code Share is the denominator that makes other metrics interpretable.
Three Ways to Measure AI Code Share
AI Code Share can be measured at three levels of granularity. Each captures something different, and the most complete picture comes from tracking all three.
AI-Assisted Lines %
The percentage of committed lines of code that originated from AI suggestions, completions, or generations.
This is the most granular measure. It captures every line where an AI tool produced the initial draft -- whether that was a single-line completion or a 200-line function generation. The data source is typically editor telemetry: Copilot's Metrics API tracks accepted suggestions and their line counts, Cursor's analytics provide similar data, and Claude Code's OpenTelemetry (OTEL) traces log generated content with token-level attribution.
The limitation is that line-level attribution gets fuzzy when developers substantially edit AI-generated code before committing. A 50-line AI generation that a developer rewrites to 30 lines is somewhere between "AI-generated" and "human-written." Most tools count this as AI-assisted, which is the pragmatic choice.
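As a sketch, the line-level calculation reduces to a ratio over committed lines. The `CommitStats` record below is a hypothetical shape for per-commit telemetry, not any specific tool's export format:

```python
from dataclasses import dataclass

@dataclass
class CommitStats:
    """Hypothetical per-commit record exported from editor telemetry."""
    sha: str
    total_lines_added: int
    ai_lines_added: int  # lines whose initial draft came from an AI tool

def ai_assisted_lines_pct(commits: list[CommitStats]) -> float:
    """AI-Assisted Lines % = AI-originated committed lines / all committed lines."""
    total = sum(c.total_lines_added for c in commits)
    ai = sum(c.ai_lines_added for c in commits)
    return 100.0 * ai / total if total else 0.0

week = [
    CommitStats("a1b2c3d", total_lines_added=120, ai_lines_added=60),
    CommitStats("e4f5a6b", total_lines_added=80, ai_lines_added=20),
]
print(ai_assisted_lines_pct(week))  # 40.0
```

Note that, per the caveat above, a heavily edited AI generation still counts in full toward `ai_lines_added` here, matching the pragmatic convention most tools use.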
AI-Assisted PRs %
The percentage of pull requests containing at least one AI-generated segment.
This is a coarser but more practical metric for most teams. A PR is counted as AI-assisted if any portion of its diff was generated by an AI tool, as detected by editor telemetry or developer self-report. This measure is easy to collect, easy to explain to leadership, and correlates well with the line-level metric in practice.
The limitation is that it is binary: a PR with one AI-generated line and a PR that is 95% AI-generated both count equally. Pairing AI-Assisted PRs % with AI-Assisted Lines % resolves this.
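A minimal illustration of why the binary PR metric needs its line-level companion; the PR records here are hypothetical:

```python
def ai_assisted_prs_pct(prs: list[dict]) -> float:
    """Share of PRs containing at least one AI-generated line (binary per PR)."""
    flagged = sum(1 for pr in prs if pr["ai_lines"] > 0)
    return 100.0 * flagged / len(prs) if prs else 0.0

prs = [
    {"id": 101, "ai_lines": 1,   "total_lines": 400},  # one AI-generated line
    {"id": 102, "ai_lines": 380, "total_lines": 400},  # almost entirely AI
    {"id": 103, "ai_lines": 0,   "total_lines": 50},
]
# PRs 101 and 102 count equally in the binary metric...
print(round(ai_assisted_prs_pct(prs), 1))  # 66.7
# ...so pair it with the line-level share, which separates them:
lines_pct = 100.0 * sum(p["ai_lines"] for p in prs) / sum(p["total_lines"] for p in prs)
print(round(lines_pct, 1))  # 44.8
```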
AI-Assisted Commits %
The percentage of commits containing AI-generated content, identified via tool telemetry, commit metadata, or git markers.
Some AI tools embed metadata directly into commits -- Claude Code can be configured to add attribution markers, and organizations can enforce commit message conventions that tag AI-assisted work. This approach integrates AI Code Share directly into the git history, making it auditable and retrospective.
The limitation is that commit-level tracking depends on tooling configuration and developer compliance. Without enforcement, developers may forget to tag AI-assisted commits, resulting in undercounting.
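One way to compute the commit-level share is to scan git history for an attribution marker. The marker strings below are example conventions (substitute whatever your organization enforces); the `git log` format string uses NUL and 0x01 separators so commit bodies may safely contain newlines:

```python
import subprocess

# Example attribution conventions -- adjust to your organization's markers.
AI_MARKERS = ("AI-Assisted:", "Co-Authored-By: Claude")

def ai_assisted_commits_pct(log_text: str) -> float:
    """Percentage of commits whose message carries an AI attribution marker.

    `log_text` is the output of `git log --format=%H%x00%B%x01`:
    NUL separates hash from body, 0x01 terminates each commit.
    """
    commits = [c for c in log_text.split("\x01") if c.strip()]
    flagged = sum(1 for c in commits if any(m in c for m in AI_MARKERS))
    return 100.0 * flagged / len(commits) if commits else 0.0

def repo_ai_commits_pct(path: str = ".") -> float:
    """Run the metric against a local checkout."""
    out = subprocess.run(
        ["git", "-C", path, "log", "--format=%H%x00%B%x01"],
        capture_output=True, text=True, check=True,
    ).stdout
    return ai_assisted_commits_pct(out)

sample = "abc1234\x00fix bug\n\x01def5678\x00add feature\n\nAI-Assisted: true\n\x01"
print(ai_assisted_commits_pct(sample))  # 50.0
```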
Data Sources
The primary data sources for measuring AI Code Share include:
- GitHub Copilot Metrics API -- provides acceptance rates, suggestion counts, and line-level data per user and organization.
- Cursor analytics -- tracks AI-generated code volume, acceptance rates, and session-level telemetry.
- Claude Code OTEL telemetry -- exports OpenTelemetry traces with detailed attribution of AI-generated content, token counts, and tool usage patterns.
- Git commit markers and metadata -- custom commit trailers, branch naming conventions, or CI pipeline tags that identify AI-assisted work.
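As an illustration of working with the first of these sources, the sketch below aggregates accepted versus suggested line counts from one day of a Copilot metrics payload. The nested field names reflect recent versions of the Copilot metrics API but should be verified against current GitHub documentation before use:

```python
def copilot_lines(metrics_day: dict) -> tuple[int, int]:
    """Sum (accepted, suggested) line counts across editors, models, and
    languages in one day's Copilot metrics payload. Field names are as in
    recent versions of the metrics API -- verify against current GitHub docs.
    """
    accepted = suggested = 0
    completions = metrics_day.get("copilot_ide_code_completions", {})
    for editor in completions.get("editors", []):
        for model in editor.get("models", []):
            for lang in model.get("languages", []):
                accepted += lang.get("total_code_lines_accepted", 0)
                suggested += lang.get("total_code_lines_suggested", 0)
    return accepted, suggested

# Abbreviated sample payload in the API's nested shape:
day = {
    "date": "2026-01-15",
    "copilot_ide_code_completions": {
        "editors": [{"name": "vscode", "models": [{"name": "default", "languages": [
            {"name": "python", "total_code_lines_accepted": 300,
             "total_code_lines_suggested": 900},
        ]}]}],
    },
}
print(copilot_lines(day))  # (300, 900)
```

Accepted lines give you the numerator for AI-Assisted Lines %; the denominator (total committed lines) comes from your version control system, not from the tool's API.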
Benchmarks
| Metric | Bottom Quartile | Industry Average | Top Quartile | Elite |
|---|---|---|---|---|
| AI-Assisted Lines % | <10% | 15-25% | 40-60% | >75% |
| AI-Assisted PRs % | <15% | 25-35% | 50-65% | >80% |
| AI-Assisted Commits % | <10% | 20-30% | 45-55% | >70% |
These benchmarks reflect aggregated data from AI-native engineering organizations. Block (formerly Square) reports that approximately 95% of their engineers regularly use AI to assist development[1], with their most intensive users achieving high AI-assisted code rates. GitHub's enterprise data from Copilot usage across large deployments supports the industry average range of 15-25% AI-assisted lines[2].
Context matters when reading these benchmarks. A team working primarily on greenfield web applications will naturally have a higher AI code share than a team maintaining a legacy embedded systems codebase. The benchmarks represent cross-industry averages; your target should be calibrated to your stack, domain, and the complexity profile of your work.
AI Code Share Without Quality Data Is a Vanity Metric
This section is a warning, not a celebration.
A team that reports 70% AI-assisted lines with no corresponding data on code quality is not demonstrating AI maturity. They are demonstrating a blind spot. High AI code share is only meaningful when paired with quality signals:
- Code turnover rate. What percentage of AI-generated code is rewritten or deleted within two weeks? If your AI code share is 60% but 40% of that code churns within a sprint, your effective AI contribution is far lower than the headline number suggests. GitClear's data shows code churn has nearly doubled since widespread AI adoption -- from 3.3% to 5.7-7.1% -- and AI-generated code is disproportionately represented in the churn[3].
- Defect density. Are bugs disproportionately concentrated in AI-generated code? If so, high AI code share is producing technical debt, not velocity.
- Review depth. Are developers rubber-stamping AI-generated PRs, or are they reviewing with the same rigor they apply to human-written code? AI code that ships without meaningful review is a liability regardless of volume.
The rule is straightforward: never report AI Code Share without also reporting code turnover rate for AI-generated code. The two metrics together tell the real story. Either one alone is incomplete.
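The turnover discount described above can be made concrete with a toy model (illustrative only, not an industry-standard formula):

```python
def effective_ai_share(ai_share_pct: float, ai_turnover_pct: float) -> float:
    """Discount headline AI code share by the fraction of AI-generated code
    rewritten or deleted shortly after shipping. Illustrative model only --
    not an industry-standard formula.
    """
    return ai_share_pct * (1.0 - ai_turnover_pct / 100.0)

# 60% headline share, but 40% of that code churns within a sprint:
print(effective_ai_share(60.0, 40.0))  # 36.0
```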
What AI Code Share Reveals About Your Team
The raw percentage is less important than the pattern behind it. Three patterns appear consistently across engineering organizations:
Low code share + high adoption = shallow usage, enablement gap. Your developers have the tools but are not using them deeply. This usually indicates insufficient training, restrictive tool configurations, or workflows that do not naturally integrate AI assistance. The fix is enablement -- not more licenses.
High code share concentrated in boilerplate = AI as typing accelerator, not engineering multiplier. If AI-generated code is overwhelmingly concentrated in Easy work -- configuration files, CRUD endpoints, test scaffolding, documentation -- then AI is saving keystrokes, not augmenting engineering judgment. This is valuable but limited. The opportunity is to extend AI usage into Medium and Hard work through better prompting practices, agentic workflows, and context-rich tool configurations.
High code share across complexity levels = AI deeply integrated into workflows. When AI-assisted code appears in architectural changes, complex algorithm implementations, and cross-system integrations -- not just boilerplate -- the team has moved beyond typing acceleration to genuine engineering augmentation. This is the pattern that correlates with the strongest throughput and quality outcomes.
To distinguish between these patterns, pair AI Code Share with Complexity-Adjusted Throughput (CAT). CAT segments work into Easy, Medium, and Hard tiers, revealing where AI-generated code is actually landing in your complexity distribution.
How AI Code Share Fits the Developer AI Impact Framework
AI Code Share is Pillar 2 of the Developer AI Impact Framework -- the five-pillar system for measuring developer productivity when AI writes the code.
| Pillar | What It Measures | Key Metric |
|---|---|---|
| 1. AI Adoption | Who is using AI tools | Weekly Active Users % |
| 2. AI Code Share | How much code AI produces | AI-Assisted Lines %, PRs %, Commits % |
| 3. Throughput | Is AI accelerating real output | Complexity-Adjusted Throughput (CAT) |
| 4. Quality | Is AI code maintainable | Code Turnover Rate |
| 5. Cost & ROI | Is the investment paying off | Cost per AI-Assisted Complexity Point |
Pillar 1 (Adoption) answers: are developers using AI tools? Pillar 2 (AI Code Share) answers: how much of your codebase did AI produce? The gap between these two pillars is where most organizations lose the thread. They measure adoption, assume code share is proportional, and skip ahead to throughput. It is not proportional. Measuring it directly is what separates data-driven AI strategy from adoption theater.
For comprehensive benchmarks across all five pillars, see the 2026 Developer Productivity Benchmarks.
Frequently Asked Questions
What percentage of code is AI-generated in 2026?
The industry average is 15-25% of committed lines, though this varies dramatically by team and domain. Top-quartile engineering teams report 40-60% AI-assisted lines. Teams working on greenfield applications with common frameworks tend toward the higher end; teams maintaining complex legacy systems or working in specialized domains tend toward the lower end.
How do you measure AI code share?
AI code share is measured using editor telemetry, tool APIs, and commit metadata. The primary data sources are the GitHub Copilot Metrics API, Cursor analytics, and Claude Code's OTEL telemetry -- all of which track when AI generates code that is subsequently committed. Some teams supplement this with git commit markers or CI pipeline tags that identify AI-assisted work. The most reliable approach combines tool telemetry with commit-level metadata for cross-validation.
What is a good AI code share percentage?
A "good" AI code share depends on your quality metrics. A team with 50% AI-assisted lines and low code turnover (below 4%) is performing well. A team with 70% AI-assisted lines and high code turnover (above 7%) has a quality problem masked by a volume metric. Target the top quartile for your domain -- typically 40-60% AI-assisted lines -- but only if your code quality metrics remain stable or improve.
Is high AI code share good or bad?
Neither, in isolation. High AI code share is good when paired with stable or improving code quality, and bad when it correlates with rising churn, increasing defect density, or declining review depth. The metric itself is neutral -- it measures composition, not quality. Organizations that celebrate high AI code share without tracking quality are optimizing for the wrong thing.
How does AI code share differ from AI adoption rate?
AI adoption rate measures how many developers use AI tools. AI code share measures how much of your codebase those tools actually produced. Adoption is a binary question per developer (used it or did not). AI code share is a continuous measure of the codebase itself. A team can have 90% adoption and 10% code share (everyone uses AI tools occasionally) or 30% adoption and 50% code share (a few power users generate most of the AI-assisted output). The two metrics together reveal both the breadth and depth of AI integration.
Footnotes

1. Block engineering blog, "AI-Assisted Development at Block". Reports approximately 95% of engineers regularly using AI to assist development, with top engineers achieving high AI-assisted code rates in production workflows.
2. GitHub, "Research: Quantifying GitHub Copilot's Impact in the Enterprise with Accenture" (2024). Enterprise acceptance rate data and industry-average AI-assisted code percentages.
3. GitClear, "Coding on Copilot: 2023 Data Suggests Downward Pressure on Code Quality" (2024). Analysis of code churn rising from 3.3% (2021 baseline) to 5.7-7.1% (2024), correlating with widespread AI coding tool adoption.

Related Resources
- The Developer AI Impact Framework
- Code Turnover Rate: The AI Quality Metric
- Developer Productivity Benchmarks 2026
- What Is Complexity-Adjusted Throughput?
- AI Coding Benchmarks 2026