TL;DR
- Acceptance rate measures how often developers accept AI code suggestions, but a high rate is not inherently good. When acceptance rate and code turnover are both high, it usually means developers are rubber-stamping AI output without adequate scrutiny.
- The healthy range for acceptance rate is 25-45%. Below 25%, the tool is likely not well-configured for the team's stack or workflow. Above 45%, the team may be accepting suggestions too uncritically -- especially if post-acceptance edit rates and code turnover are also elevated.
- GitHub reports an industry average acceptance rate of approximately 30%. This aligns with Accenture enterprise data showing that developers accept roughly one in three suggestions, a rate that balances productivity gains with appropriate human judgment.
- Acceptance rate becomes diagnostic only when paired with downstream quality metrics -- specifically code turnover rate and post-acceptance edit rate. Without those pairings, acceptance rate is a usage metric, not a quality metric.
What Is AI Suggestion Acceptance Rate?
AI Suggestion Acceptance Rate is the percentage of code suggestions offered by an AI coding tool -- such as GitHub Copilot, Cursor, or similar assistants -- that a developer explicitly accepts. It is calculated as:
Acceptance Rate = (Suggestions Accepted / Suggestions Shown) x 100
A "suggestion" is any code completion, inline generation, or multi-line proposal that the AI tool surfaces in the editor. An "acceptance" occurs when the developer explicitly takes the suggestion -- typically by pressing Tab, Enter, or clicking an accept action. Dismissals, ignores, and manual overrides all count as non-acceptances.
Acceptance rate is tracked per developer, per team, and per organization. Most AI coding tools provide this data natively: the Copilot Metrics API exposes suggestion and acceptance counts at the organization and team level, and Cursor's analytics dashboard provides similar telemetry.
The metric is intuitive, which makes it popular -- and also makes it dangerous. Intuitive metrics invite simplistic interpretation. A VP who sees a 50% acceptance rate assumes the AI tool is working well. A VP who sees a 15% acceptance rate assumes it is not. Both conclusions can be wrong.
Why Acceptance Rate Alone Is Misleading
Acceptance rate measures a moment -- the instant a developer presses Tab. It says nothing about what happens next. Three failure modes illustrate why the metric requires context.
Failure Mode 1: High acceptance, high turnover. A team accepts 55% of AI suggestions. Their code turnover rate for AI-generated code is 28% at 30 days. This means developers are accepting more than half of what the AI proposes, but more than a quarter of that accepted code is being rewritten or deleted within a month. The acceptance rate looks excellent. The code quality is poor. The team is accepting suggestions faster than they can evaluate them, and the rework shows up downstream.
Failure Mode 2: High acceptance, low complexity. A team accepts 60% of AI suggestions. But 85% of those accepted suggestions are single-line completions in boilerplate-heavy files -- import statements, configuration entries, variable declarations. The AI is functioning as a sophisticated autocomplete, and the high acceptance rate reflects that the suggestions are trivially correct. The number tells you nothing about whether AI is contributing to meaningful engineering work.
Failure Mode 3: Low acceptance, high value. A team accepts only 20% of AI suggestions. But the suggestions they do accept are multi-line function implementations, complex algorithm scaffolds, and architectural boilerplate that would take 10-30 minutes to write manually. The acceptance rate looks poor. The productivity impact is substantial. The team is using AI selectively, accepting only high-value suggestions and rejecting noise.
These failure modes share a common lesson: acceptance rate without downstream context is a vanity metric. It measures developer behavior at the moment of suggestion, not the outcome of that behavior.
Benchmarks
These benchmarks synthesize data from GitHub's enterprise Copilot research, Larridin's framework targets, and publicly reported adoption data from engineering organizations.
| Metric | Low | Healthy | Elevated | Red Flag |
|---|---|---|---|---|
| Acceptance Rate | <25% | 25-45% | 45-55% | >55% |
| Post-Acceptance Edit Rate | N/A | 15-30% | 30-45% | >45% |
| AI Code Turnover (30D) | N/A | <15% | 15-25% | >25% |
GitHub reports an industry-average acceptance rate of approximately 30% based on enterprise Copilot deployments, including research conducted with Accenture.[^1] This 30% figure represents a cross-industry average; teams working in well-supported languages and frameworks (TypeScript, Python, React) tend to see higher rates, while teams in niche languages or proprietary frameworks see lower rates.
The healthy range of 25-45% reflects a balance: developers are accepting enough suggestions to gain meaningful productivity benefits, while rejecting enough to indicate active critical evaluation. Below 25%, the tool may be poorly configured -- wrong model, insufficient context, mismatched language support -- or the team may not have received adequate enablement training. Above 45%, the risk of uncritical acceptance rises, particularly when the elevated rate is not accompanied by low code turnover.
The red flag is not high acceptance rate in isolation -- it is high acceptance rate combined with high code turnover or high post-acceptance edit rate. A team at 50% acceptance and 8% AI code turnover is performing well. A team at 50% acceptance and 25% AI code turnover has a quality problem that the acceptance rate is masking.
Post-Acceptance Edit Rate: The Missing Metric
Post-acceptance edit rate measures how much a developer modifies an AI-generated suggestion after accepting it but before committing. It captures the gap between "accepted" and "shipped."
Post-Acceptance Edit Rate = (Lines modified after acceptance / Lines accepted) x 100
A post-acceptance edit rate of 20% means developers are modifying one in five lines of accepted AI code before it reaches a commit. This is healthy -- it suggests developers are reviewing and refining AI output rather than committing it verbatim.
A post-acceptance edit rate below 10% combined with a high acceptance rate is a warning signal. It suggests developers are accepting and committing AI code with minimal modification -- treating the AI as an authority rather than a starting point. When this pattern appears alongside elevated code turnover, the causal chain is clear: accept uncritically, commit without editing, rewrite later.
A post-acceptance edit rate above 45% raises a different question: if developers are rewriting nearly half of what they accept, the acceptance itself is providing limited value. The AI is generating a rough draft that requires substantial human rework. This is not necessarily bad -- a rough draft can still be faster than starting from scratch -- but the acceptance rate overstates the AI's contribution.
The Copilot Metrics API does not currently expose post-acceptance edit rate directly. Teams can approximate it by comparing accepted suggestion content (from editor telemetry) with the final committed diff. Cursor's analytics provide closer approximations for some workflows. This metric is harder to instrument than acceptance rate, but it is substantially more informative.
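One practical approximation is a line-level diff between the text of the accepted suggestion and the corresponding region of the committed file. The sketch below uses Python's difflib for that comparison; it assumes you can already extract both snapshots from editor telemetry and the commit, which is the harder instrumentation problem:

```python
import difflib

def post_acceptance_edit_rate(accepted_lines: list[str], committed_lines: list[str]) -> float:
    """(Lines of the accepted suggestion modified or removed before commit / lines accepted) x 100."""
    if not accepted_lines:
        return 0.0
    matcher = difflib.SequenceMatcher(a=accepted_lines, b=committed_lines)
    unchanged = sum(block.size for block in matcher.get_matching_blocks())
    edited = len(accepted_lines) - unchanged
    return edited / len(accepted_lines) * 100

accepted = [
    "def total(xs):",
    "    return sum(xs)",
    "",
    "print(total([1, 2]))",
]
committed = [
    "def total(xs):",
    "    return sum(x for x in xs if x is not None)",  # developer hardened the AI draft
    "",
    "print(total([1, 2]))",
]
print(f"{post_acceptance_edit_rate(accepted, committed):.0f}%")  # 25%
```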
How to Use Acceptance Rate Diagnostically
Acceptance rate becomes useful when you segment and pair it with other signals.
Segment by complexity
Break acceptance rate into tiers based on suggestion complexity:
- Trivial completions (single-line, imports, declarations): Expect 50-70% acceptance. These are low-risk and high-frequency.
- Medium completions (multi-line, function bodies, standard patterns): Expect 25-40% acceptance. These require more judgment.
- Complex generations (algorithm implementations, architectural scaffolds, cross-file logic): Expect 10-25% acceptance. These should be scrutinized heavily.
If your acceptance rate is uniform across all complexity tiers, developers are not adjusting their scrutiny based on risk. A flat 45% across trivial, medium, and complex suggestions is more concerning than a pattern of 60% / 35% / 15%.
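A sketch of that segmentation, assuming suggestion-level telemetry and using a deliberately crude line-count heuristic as a stand-in for a real complexity classifier:

```python
from collections import defaultdict

def complexity_tier(suggestion_text: str) -> str:
    """Crude heuristic: tier by line count. A production classifier would also
    consider file type, whether the lines are imports/config, nesting depth, etc."""
    lines = suggestion_text.count("\n") + 1
    if lines == 1:
        return "trivial"
    if lines <= 5:
        return "medium"
    return "complex"

def acceptance_by_tier(events: list[tuple[str, bool]]) -> dict[str, float]:
    """events: (suggestion_text, accepted) pairs -> acceptance % per tier."""
    shown: dict[str, int] = defaultdict(int)
    accepted: dict[str, int] = defaultdict(int)
    for text, was_accepted in events:
        tier = complexity_tier(text)
        shown[tier] += 1
        accepted[tier] += int(was_accepted)
    return {tier: accepted[tier] / shown[tier] * 100 for tier in shown}
```

If the per-tier rates come back nearly identical, that is the flat-scrutiny pattern described above.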
Pair with code turnover
The diagnostic pairing of acceptance rate and code turnover rate reveals four states:
| Acceptance Rate | Code Turnover | Interpretation |
|---|---|---|
| Low (< 25%) | Low | Tool underutilized -- enablement opportunity |
| Healthy (25-45%) | Low | Ideal -- selective acceptance, durable code |
| High (> 45%) | Low | Acceptable -- team reviews well despite high acceptance |
| High (> 45%) | High (> 18%) | Problem -- uncritical acceptance generating rework |
The bottom-right quadrant is where most interventions should focus. Teams in this state need prompt engineering training, review process reinforcement, and potentially tool reconfiguration to reduce suggestion volume and increase suggestion quality.
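A minimal sketch of that pairing as a diagnostic function, using the thresholds from the table plus one extra branch for the healthy-acceptance, high-turnover combination the table leaves implicit; the cutoffs are starting points to tune per organization, not hard rules:

```python
def diagnose(acceptance_pct: float, turnover_30d_pct: float) -> str:
    """Map a team's acceptance rate and 30-day AI code turnover onto the states above."""
    high_turnover = turnover_30d_pct > 18
    if acceptance_pct < 25:
        return "Tool underutilized -- enablement opportunity"
    if acceptance_pct > 45:
        if high_turnover:
            return "Problem -- uncritical acceptance generating rework"
        return "Acceptable -- team reviews well despite high acceptance"
    # Healthy acceptance range (25-45%)
    if high_turnover:
        return "Mixed -- acceptance looks healthy but turnover is elevated; review quality practices"
    return "Ideal -- selective acceptance, durable code"

print(diagnose(50, 25))  # Problem -- uncritical acceptance generating rework
```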
Track trends, not snapshots
A single month's acceptance rate is a data point. The trend over three to six months tells the story. Healthy patterns include:
- Acceptance rate rising gradually as the team learns to prompt effectively and the tool adapts to the codebase.
- Acceptance rate stabilizing in the 30-40% range after an initial adjustment period.
- Post-acceptance edit rate declining as prompt quality improves.
Unhealthy patterns include:
- Acceptance rate rising steadily without a corresponding decline in code turnover.
- Post-acceptance edit rate dropping to near zero -- suggesting developers have stopped reviewing accepted suggestions.
- Acceptance rate spiking after a team deploys a new AI tool without enablement training.
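A small sketch of that divergence check over a monthly series; the window and thresholds are illustrative assumptions, and a real implementation might fit a regression slope over three to six months instead of comparing endpoints:

```python
def acceptance_turnover_diverging(acceptance_by_month: list[float],
                                  turnover_by_month: list[float]) -> bool:
    """Flag the unhealthy pattern: acceptance rate trending up while
    AI code turnover is not trending down over the same window."""
    if len(acceptance_by_month) < 3 or len(turnover_by_month) < 3:
        return False  # not enough history to call a trend
    acceptance_rising = acceptance_by_month[-1] - acceptance_by_month[0] > 5  # > +5 points
    turnover_falling = turnover_by_month[-1] - turnover_by_month[0] < -2      # < -2 points
    return acceptance_rising and not turnover_falling

print(acceptance_turnover_diverging([32, 38, 44, 51], [14, 16, 17, 19]))  # True -> investigate
```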
Data Sources
The primary data sources for tracking acceptance rate and related metrics include:
- GitHub Copilot Metrics API -- provides suggestion counts and acceptance counts at the organization and team level. The most mature and widely deployed data source for acceptance rate.[^1]
- Cursor analytics -- tracks suggestion acceptance, generation volume, and session-level telemetry. Provides acceptance rate data with additional context about suggestion type and complexity.
- Claude Code OTEL telemetry -- exports OpenTelemetry traces with token-level attribution. Useful for tracking acceptance and edit patterns in agentic coding workflows.
- IDE telemetry plugins -- custom telemetry extensions that capture accept/dismiss events independent of the AI tool vendor, enabling cross-tool comparison.
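For the Copilot Metrics API specifically, a minimal sketch of pulling org-level acceptance counts looks like the following. The endpoint and the nested field names (copilot_ide_code_completions, total_code_suggestions, total_code_acceptances) follow GitHub's published schema at the time of writing, but treat them as assumptions and verify against the current API documentation; the token and organization slug are placeholders.

```python
import os
import requests

ORG = "your-org"  # placeholder: your organization slug

resp = requests.get(
    f"https://api.github.com/orgs/{ORG}/copilot/metrics",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "X-GitHub-Api-Version": "2022-11-28",
    },
    timeout=30,
)
resp.raise_for_status()

# Each item is one day of metrics; code-completion counts are nested per
# editor, per model, per language.
shown = accepted = 0
for day in resp.json():
    completions = day.get("copilot_ide_code_completions") or {}
    for editor in completions.get("editors", []):
        for model in editor.get("models", []):
            for language in model.get("languages", []):
                shown += language.get("total_code_suggestions", 0)
                accepted += language.get("total_code_acceptances", 0)

if shown:
    print(f"Org-wide acceptance rate: {accepted / shown * 100:.1f}%")
```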
How Acceptance Rate Fits the Developer AI Impact Framework
Acceptance rate is a Pillar 1 (AI Adoption) metric in Larridin's Developer AI Impact Framework. It sits alongside Weekly Active Users and power user density as a measure of how deeply developers are engaging with AI tools.
Within the framework, acceptance rate connects directly to two other pillars:
- Pillar 2 (AI Code Share): Acceptance rate is the upstream driver of AI code share. Higher acceptance feeds more AI-generated code into the codebase. But the relationship is not linear -- a team with high acceptance of trivial completions may have modest AI code share because the accepted suggestions are small.
- Pillar 4 (Quality): Acceptance rate paired with code turnover rate is the quality diagnostic for AI tool usage. High acceptance with low turnover means the team is using AI effectively. High acceptance with high turnover means the team is accepting faster than they are evaluating.
The framework treats acceptance rate as an input metric, not an outcome metric. It tells you about developer behavior, not about engineering outcomes. The outcome metrics -- Complexity-Adjusted Throughput, code turnover, Innovation Rate -- are where the real signal lives. Acceptance rate helps explain why those outcome metrics look the way they do.
Read the full Developer AI Impact Framework -->
Frequently Asked Questions
What is a good AI code suggestion acceptance rate?
A healthy acceptance rate is 25-45%, with the industry average at approximately 30% based on GitHub Copilot enterprise data. Below 25% typically indicates the tool is not well-configured for the team's language, framework, or workflow. Above 45% may indicate uncritical acceptance -- developers taking suggestions without adequate review. The number matters less than the pairing: a 50% acceptance rate with low code turnover is better than a 30% acceptance rate with high code turnover. Always evaluate acceptance rate alongside downstream quality metrics.
Does a higher acceptance rate mean the AI tool is better?
Not necessarily. A higher acceptance rate means developers are accepting more suggestions, which could reflect suggestion quality, developer behavior, or both. A tool that generates many trivial single-line completions will have a higher acceptance rate than one that proposes complex multi-line implementations -- but the latter may deliver more productivity value per accepted suggestion. Acceptance rate measures frequency of acceptance, not value of acceptance. Compare tools on downstream metrics like post-acceptance edit rate and code turnover, not on raw acceptance percentages.
What is GitHub Copilot's average acceptance rate?
GitHub reports an industry-average acceptance rate of approximately 30% across enterprise Copilot deployments. This figure comes from GitHub's research with Accenture and broader enterprise deployment data.[^1] The rate varies by language (higher for Python and TypeScript, lower for niche languages), by task type (higher for boilerplate, lower for complex logic), and by developer experience (power users often have lower acceptance rates because they are more selective, not less productive).
How do you improve AI suggestion acceptance rate?
The most effective improvements come from prompt engineering training, tool configuration optimization, and context enrichment -- not from encouraging developers to accept more suggestions. Improving the quality of suggestions (so more are worth accepting) is better than reducing developer scrutiny (so more are accepted regardless of quality). Specific interventions include: configuring the AI tool with repository-level context, training developers on effective prompting patterns, adjusting suggestion length and frequency settings, and ensuring the tool supports the team's primary languages and frameworks well.
Should you track acceptance rate by individual developer?
Track it by individual for coaching purposes, but never use it as a performance metric. Individual acceptance rate data helps identify developers who may need enablement support (very low rates) or who may be accepting uncritically (very high rates combined with high personal code turnover). But ranking developers by acceptance rate or setting individual targets creates perverse incentives -- developers will accept more suggestions to hit the target, regardless of quality. Team-level acceptance rate paired with team-level code turnover is the appropriate unit for organizational reporting and goal-setting.
Footnotes
[^1]: GitHub, "Research: Quantifying GitHub Copilot's Impact in the Enterprise with Accenture" (2024). Enterprise acceptance rate data across large-scale Copilot deployments, reporting approximately 30% average acceptance rate.

Additional data sources and methodology:
- GitClear, "Coding on Copilot: 2023 Data Suggests Downward Pressure on Code Quality" (2024) and "AI Copilot Code Quality: 2025 Data" (2025). Longitudinal analysis of code churn and quality trends correlating with AI coding tool adoption, providing context for the relationship between acceptance rate and code durability.
- Larridin, "The Developer AI Impact Framework." Positions acceptance rate as a Pillar 1 (Adoption) metric, with diagnostic value when paired with Pillar 4 (Quality) metrics including code turnover rate. Benchmark targets of 25-45% acceptance rate are based on aggregated engineering data (Larridin internal benchmark).
Related Resources
- The Developer AI Impact Framework
- Code Turnover Rate: The AI Quality Metric
- AI Code Share: What Percentage of Your Code Is AI-Generated?
- Developer Productivity Benchmarks 2026
- AI Value Realization Score
- PR Cycle Time in the AI Era
