TL;DR
- The AI Impact Hierarchy is a five-level model for measuring AI's effect on engineering organizations. Each level builds on the one below, and most organizations stop at Level 1.
- The five levels are: Adoption, Engagement, Productivity, Quality, and Business Value. They map directly to the five pillars of The Developer AI Impact Framework.
- Measuring adoption without measuring impact is "adoption theater." Knowing that 70% of your engineers use Copilot tells you nothing about whether the investment is generating returns.
- Each level requires different data, instrumentation, and organizational capability. Climbing the hierarchy is not just a measurement challenge -- it is a maturity challenge.
The Problem the Hierarchy Solves
Most engineering organizations measure AI adoption and declare success. The dashboard shows 70% weekly active usage of Copilot -- but that tells you nothing about how much code AI produced (AI code share), whether teams are shipping more value (complexity-adjusted throughput), or whether AI-generated code is durable (code turnover rate).
Stopping at adoption confirms access, not impact. The AI Impact Hierarchy provides a structured path from access to impact in five ascending levels.
The Five Levels
Level 1: Adoption
Question: Are developers using AI tools?
Key metrics: Weekly active user rate (WAU), tool installation rate, license utilization
Adoption tells you which teams have the tool and which do not. It does not tell you anything about impact -- a developer who opens Copilot once a week and dismisses every suggestion counts as an active user.
Most organizations stop here because adoption is the easiest level to measure. It requires only license management data, produces clean dashboard numbers, and answers the question leadership asks first: "are people using the thing we bought?" The problem is that this question is the least important one.
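In code, the Level 1 metric is little more than a set intersection over license and usage data. A minimal sketch -- the `activity_log` shape and field names here are assumptions for illustration, not any vendor's actual export format:

```python
from datetime import date, timedelta

def weekly_active_user_rate(activity_log, licensed_users, as_of):
    """Share of licensed users with at least one AI-tool event in the past 7 days.

    activity_log: list of (user_id, event_date) tuples from a usage export.
    licensed_users: set of user ids holding a license.
    """
    window_start = as_of - timedelta(days=7)
    active = {user for user, day in activity_log
              if user in licensed_users and window_start < day <= as_of}
    return len(active) / len(licensed_users) if licensed_users else 0.0

# alice was active in the window; bob's last event is outside it; carol never used it.
log = [("alice", date(2026, 1, 5)), ("bob", date(2025, 12, 1))]
rate = weekly_active_user_rate(log, {"alice", "bob", "carol"}, date(2026, 1, 8))
```

Note what the number hides: alice counts as active whether she shipped half her code with AI or dismissed every suggestion.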
Level 2: Engagement
Question: How deeply are developers using AI tools?
Key metrics: AI code share at the line, commit, and PR level; acceptance rate; AI-assisted time as a percentage of coding time
Engagement surfaces the gap between shallow adoption (tool installed, occasionally used) and deep adoption (AI integral to the workflow). It does not tell you whether deep engagement produces better outcomes -- a developer with 60% AI code share could be generating excellent code or enormous volumes of disposable code.
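A sketch of the line-level AI code share computation, assuming you already have per-commit attribution counts. The dict shape is hypothetical -- the attribution signal itself (editor telemetry, commit trailers, or PR-template self-report) is the hard part, and this only aggregates it:

```python
def ai_code_share(commits):
    """Line-level AI code share: AI-attributed added lines / total added lines.

    commits: iterable of dicts like {"added_lines": 120, "ai_added_lines": 45},
    where ai_added_lines comes from whatever attribution signal you have.
    """
    total = sum(c["added_lines"] for c in commits)
    ai = sum(c["ai_added_lines"] for c in commits)
    return ai / total if total else 0.0

share = ai_code_share([
    {"added_lines": 120, "ai_added_lines": 45},
    {"added_lines": 80, "ai_added_lines": 75},
])
# (45 + 75) / (120 + 80) = 0.6
```

The same aggregation works at the commit or PR level by swapping line counts for commit or PR counts.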
Level 3: Productivity
Question: Is the team producing more valuable output?
Key metrics: Complexity-Adjusted Throughput (CAT) per engineer per week, cycle time by complexity tier, review queue depth
Productivity measurement distinguishes between raw volume and difficulty-weighted value. It does not tell you whether the additional throughput is durable -- a team can have strong CAT scores while accumulating technical debt from AI-generated code being quietly rewritten downstream.
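The difficulty weighting can be sketched in a few lines. The tiers and weights below are hypothetical placeholders -- calibrate them against your own historical effort data rather than adopting these numbers:

```python
# Hypothetical complexity weights -- calibrate to your own historical effort data.
TIER_WEIGHTS = {"trivial": 1, "routine": 3, "complex": 8}

def complexity_adjusted_throughput(merged_prs):
    """Sum of complexity weights for PRs merged in the period,
    instead of a raw PR count.

    merged_prs: iterable of complexity-tier labels, one per merged PR.
    """
    return sum(TIER_WEIGHTS[tier] for tier in merged_prs)

# Ten trivial PRs score 10; two complex PRs score 16.
# A raw count would rank the first team five times higher.
```

The point of the weighting is visible in the comment: raw volume rewards the wrong behavior once AI makes trivial PRs nearly free to produce.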
Level 4: Quality
Question: Is the code durable and maintainable?
Key metrics: Code Turnover Rate (AI-generated vs. human-written), defect rate by AI attribution, Innovation Rate
Quality is where the difference between sustainable productivity and adoption theater becomes clear. GitClear's research shows code churn rising from 3.3% to 5.7-7.1% since widespread AI adoption. Organizations measuring only Levels 1 and 2 do not see this signal.
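A sketch of the survival computation behind code turnover rate, assuming a git-blame-style pipeline has already produced per-line authorship, removal, and attribution records. The record shape is illustrative, not a real tool's output:

```python
from datetime import date, timedelta

def turnover_rate(lines, window_days, as_of):
    """Fraction of lines rewritten or deleted within window_days of authorship.

    lines: iterable of dicts like
      {"authored": date, "removed": date or None, "ai": bool}.
    Only lines old enough to have lived through the full window count.
    """
    eligible = [l for l in lines
                if l["authored"] + timedelta(days=window_days) <= as_of]
    if not eligible:
        return 0.0
    churned = [l for l in eligible
               if l["removed"] is not None
               and l["removed"] - l["authored"] <= timedelta(days=window_days)]
    return len(churned) / len(eligible)

def turnover_by_attribution(lines, window_days, as_of):
    """The Level 4 segmentation: AI-generated vs. human-written turnover."""
    ai = [l for l in lines if l["ai"]]
    human = [l for l in lines if not l["ai"]]
    return (turnover_rate(ai, window_days, as_of),
            turnover_rate(human, window_days, as_of))
```

Run it at 14-day and 30-day windows; a persistent gap between the AI and human rates is the signal Levels 1 and 2 cannot see.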
Level 5: Business Value
Question: Is the AI investment generating positive ROI?
Key metrics: Net ROI multiplier, cost per complexity-adjusted unit of output, time-to-market for new features
Business Value connects the full chain -- from tool investment to adoption to productivity to quality -- to outcomes that justify the investment. This level is only reliable if the levels below it are solid. The hierarchy exists because each level provides the foundation for the one above it.
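The Level 5 arithmetic is trivial; deciding what counts as attributable value is not. A minimal sketch with hypothetical cost categories -- the valuation of `value_delivered` (for example, complexity-adjusted output priced at loaded engineering cost) is where the real work lives:

```python
def net_roi_multiplier(value_delivered, tool_cost, enablement_cost,
                       review_overhead_cost):
    """Dollars of engineering value attributable to AI per dollar of AI cost.

    All inputs in the same currency over the same period. The cost side
    includes more than licenses: training time and the extra review load
    on AI-generated code belong in the denominator.
    """
    total_cost = tool_cost + enablement_cost + review_overhead_cost
    return value_delivered / total_cost if total_cost else float("inf")

# $390k of attributable value against $150k of licenses, enablement,
# and review overhead:
m = net_roi_multiplier(390_000, 100_000, 30_000, 20_000)
```

A multiplier computed without the quality layer beneath it inflates the numerator with output that gets rewritten -- which is why this level depends on the four below it.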
How the Hierarchy Maps to the Framework
The five levels of the AI Impact Hierarchy correspond directly to the five pillars of The Developer AI Impact Framework:
| Hierarchy Level | Framework Pillar | Primary Metric |
|---|---|---|
| Level 1: Adoption | Pillar 1: AI Adoption | Weekly Active User Rate |
| Level 2: Engagement | Pillar 2: AI Code Share | AI-Assisted Lines / Commits / PRs % |
| Level 3: Productivity | Pillar 3: Velocity | Complexity-Adjusted Throughput |
| Level 4: Quality | Pillar 4: Quality | Code Turnover Rate |
| Level 5: Business Value | Pillar 5: Cost & ROI | Net ROI Multiplier |
The Developer AI Impact Framework provides the measurement capability needed at each level. Organizations that implement all five pillars are equipped to operate at Level 5 of the hierarchy.
Where Most Organizations Sit Today
The vast majority of engineering organizations are stuck at the bottom of the hierarchy. Roughly 60% measure only adoption. Another 25% track engagement metrics like AI code share. Only about 10% measure productivity in ways that account for AI, and fewer than 5% track quality with AI attribution. The gap is not primarily about tooling -- it is a measurement maturity gap.
Moving Up the Hierarchy
- Level 1 to 2: Implement AI attribution at the PR or commit level -- start with a PR template checkbox, evolve to automated detection.
- Level 2 to 3: Implement complexity-adjusted throughput -- classify PRs by complexity and track weighted output rather than raw counts.
- Level 3 to 4: Implement code turnover rate tracking segmented by AI attribution -- track whether code survives 14 and 30 days.
- Level 4 to 5: Connect engineering metrics to business outcomes -- map throughput to feature delivery, cost-per-unit to budget, and quality to reliability.
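The first step above can start as a few lines of parsing. A sketch of the Level 1 to 2 checkbox approach, assuming a hypothetical PR-template line -- adjust the pattern to your own template's wording before relying on it:

```python
import re

# Hypothetical checkbox wording -- match it to your actual PR template.
AI_CHECKBOX = re.compile(r"- \[x\] This PR contains AI-generated code",
                         re.IGNORECASE)

def pr_is_ai_assisted(pr_body: str) -> bool:
    """Cheapest possible Level 1 -> 2 step: read the self-reported
    checkbox out of the PR description before investing in automated
    attribution."""
    return AI_CHECKBOX.search(pr_body) is not None

body = ("## Checklist\n"
        "- [x] This PR contains AI-generated code\n"
        "- [ ] Breaking change")
```

Self-report undercounts, but it establishes the attribution habit and a baseline that automated detection can later be validated against.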
Frequently Asked Questions
What is the AI Impact Hierarchy?
A five-level model for measuring AI's effect on engineering organizations, ascending from Adoption through Engagement, Productivity, and Quality to Business Value. Each level builds on the one below it.
Why do most organizations stop at Level 1 of the AI Impact Hierarchy?
Because adoption is the easiest level to measure: it requires only license management data, produces clean dashboard numbers, and answers the first question leadership asks -- "are people using the thing we bought?"
What is adoption theater?
Measuring adoption without measuring impact -- reporting usage numbers that say nothing about whether the AI investment is generating returns.
How do you move from Level 1 to Level 5?
One level at a time: add AI attribution at the PR or commit level (Level 2), complexity-adjusted throughput (Level 3), code turnover tracking segmented by AI attribution (Level 4), and finally connect engineering metrics to business outcomes (Level 5).
How does the AI Impact Hierarchy relate to the Developer AI Impact Framework?
The five levels map one-to-one onto the framework's five pillars; the framework supplies the measurement capability each level of the hierarchy requires.
Further Reading
- The Developer AI Impact Framework -- the measurement framework that operationalizes the hierarchy
- AI Code Share: What Percentage of Your Code Is AI-Generated? -- the key metric for Level 2
- Complexity-Adjusted Throughput -- the key metric for Level 3
- Code Turnover Rate -- the key metric for Level 4
- Developer Productivity Benchmarks 2026 -- benchmark data across all five levels