Most engineering organizations measure AI adoption and declare success. The dashboard shows 70% weekly active usage of Copilot -- but that tells you nothing about how much code AI produced (AI code share), whether teams are shipping more value (complexity-adjusted throughput), or whether AI-generated code is durable (code turnover rate).
Stopping at adoption confirms access, not impact. The AI Impact Hierarchy provides a structured path from access to impact in five ascending levels.
**Level 1: Adoption**

Question: Are developers using AI tools?
Key metrics: Weekly active user rate (WAU), tool installation rate, license utilization
Adoption tells you which teams have the tool and which do not. It does not tell you anything about impact -- a developer who opens Copilot once a week and dismisses every suggestion counts as an active user.
Most organizations stop here because adoption is the easiest level to measure. It requires only license management data, produces clean dashboard numbers, and answers the question leadership asks first: "are people using the thing we bought?" The problem is that this question is the least important one.
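Even at this level, precision helps. A minimal sketch of the WAU rate calculation, assuming you can export a license roster and per-user usage events; the names and dates below are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical inputs: license roster and raw usage events (user, timestamp).
licensed_users = {"ana", "ben", "chloe", "dev"}
usage_events = [
    ("ana", datetime(2024, 5, 6)),
    ("ben", datetime(2024, 5, 8)),
    ("ana", datetime(2024, 5, 9)),
]

def weekly_active_rate(events, licensed, week_start):
    """Share of licensed users with at least one event in the 7-day window."""
    week_end = week_start + timedelta(days=7)
    active = {u for u, ts in events if week_start <= ts < week_end and u in licensed}
    return len(active) / len(licensed) if licensed else 0.0

print(weekly_active_rate(usage_events, licensed_users, datetime(2024, 5, 6)))  # 0.5
```

Note what the calculation cannot see: an "event" here could be a single dismissed suggestion, which is exactly why this level confirms access rather than impact.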
**Level 2: Engagement**

Question: How deeply are developers using AI tools?
Key metrics: AI code share at the line, commit, and PR level; acceptance rate; AI-assisted time as a percentage of coding time
Engagement surfaces the gap between shallow adoption (tool installed, occasionally used) and deep adoption (AI integral to the workflow). It does not tell you whether deep engagement produces better outcomes -- a developer with 60% AI code share could be generating excellent code or enormous volumes of disposable code.
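AI code share requires attribution data that license dashboards do not produce. A minimal sketch of the line-level and commit-level calculations, assuming commits have already been annotated with AI-attributed line counts; the records below are hypothetical, and capturing attribution in the first place is covered in the transitions at the end of this piece:

```python
# Hypothetical commit records: (commit_sha, ai_lines, total_lines).
commits = [
    ("a1b2c3", 120, 200),
    ("d4e5f6", 0, 80),
    ("g7h8i9", 45, 60),
]

def ai_code_share(records):
    """Line-level AI code share: AI-attributed lines / total lines changed."""
    ai = sum(a for _, a, _ in records)
    total = sum(t for _, _, t in records)
    return ai / total if total else 0.0

def ai_commit_share(records):
    """Commit-level share: fraction of commits with any AI-attributed lines."""
    return sum(1 for _, a, _ in records if a > 0) / len(records) if records else 0.0

print(f"line-level: {ai_code_share(commits):.0%}")     # 49%
print(f"commit-level: {ai_commit_share(commits):.0%}") # 67%
```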
**Level 3: Productivity**

Question: Is the team producing more valuable output?
Key metrics: Complexity-Adjusted Throughput (CAT) per engineer per week, cycle time by complexity tier, review queue depth
Productivity measurement distinguishes between raw volume and difficulty-weighted value. It does not tell you whether the additional throughput is durable -- a team can post strong CAT scores while accumulating technical debt as its AI-generated code is quietly rewritten downstream.
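A minimal sketch of the CAT calculation, assuming merged PRs are already classified into complexity tiers; the tier names and weights below are illustrative stand-ins, not the framework's canonical values:

```python
from collections import defaultdict

# Hypothetical tier weights; tune these to your own PR taxonomy.
TIER_WEIGHTS = {"trivial": 1, "moderate": 3, "complex": 8}

# Hypothetical merged-PR records: (engineer, iso_week, complexity_tier).
merged_prs = [
    ("ana", "2024-W19", "complex"),
    ("ana", "2024-W19", "trivial"),
    ("ben", "2024-W19", "moderate"),
    ("ben", "2024-W19", "moderate"),
]

def cat_per_engineer_week(prs):
    """Complexity-Adjusted Throughput: weighted PR output, not raw PR count."""
    scores = defaultdict(float)
    for engineer, week, tier in prs:
        scores[(engineer, week)] += TIER_WEIGHTS[tier]
    return dict(scores)

print(cat_per_engineer_week(merged_prs))
# {('ana', '2024-W19'): 9.0, ('ben', '2024-W19'): 6.0}
```

Raw PR counts would score ana and ben identically at two PRs each; weighting by complexity is what separates them.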
**Level 4: Quality**

Question: Is the code durable and maintainable?
Key metrics: Code Turnover Rate (AI-generated vs. human-written), defect rate by AI attribution, Innovation Rate
Quality is where the difference between sustainable productivity and adoption theater becomes clear. GitClear's research shows code churn rising from 3.3% to 5.7-7.1% since widespread AI adoption. Organizations measuring only Levels 1 and 2 do not see this signal.
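A minimal sketch of the turnover calculation, assuming you can reconstruct per-line lifetimes (origin, written date, deletion or rewrite date) from git history with AI attribution; all records below are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical line-lifetime records: (origin, written_at, deleted_at or None).
# "origin" marks AI-generated vs. human-written, however attribution is captured.
lines = [
    ("ai",    datetime(2024, 4, 1), datetime(2024, 4, 9)),   # rewritten in 8 days
    ("ai",    datetime(2024, 4, 1), None),                   # still alive
    ("human", datetime(2024, 4, 1), None),
    ("human", datetime(2024, 4, 1), datetime(2024, 5, 20)),  # outlived the window
]

def turnover_rate(records, origin, window_days):
    """Share of lines from one origin deleted/rewritten within the window."""
    cohort = [(w, d) for o, w, d in records if o == origin]
    churned = sum(
        1 for written, deleted in cohort
        if deleted is not None and deleted - written <= timedelta(days=window_days)
    )
    return churned / len(cohort) if cohort else 0.0

for origin in ("ai", "human"):
    print(origin, f"{turnover_rate(lines, origin, 14):.0%} @14d")
# ai 50% @14d, human 0% @14d
```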
**Level 5: Business Value**

Question: Is the AI investment generating positive ROI?
Key metrics: Net ROI multiplier, cost per complexity-adjusted unit of output, time-to-market for new features
Business Value connects the full chain -- from tool investment to adoption to productivity to quality -- to outcomes that justify the investment. This level is only reliable if the levels below it are solid. The hierarchy exists because each level provides the foundation for the one above it.
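As a worked example of the arithmetic only, here is an illustrative net ROI calculation. Every figure below is made up, and the multiplier is defined here as net value per dollar invested -- an assumption, not the framework's canonical formula:

```python
# Hypothetical annual figures for a 100-engineer org; all numbers illustrative.
license_cost          = 100 * 39 * 12  # assumed $39/seat/month pricing
enablement_cost       = 60_000         # training, rollout, measurement tooling
value_from_throughput = 450_000        # extra complexity-adjusted output, priced
value_from_quality    = -75_000        # rework on churned AI code counts against

total_cost = license_cost + enablement_cost
net_value  = value_from_throughput + value_from_quality

# Net ROI multiplier: net value returned per dollar invested.
roi_multiplier = net_value / total_cost
print(f"cost ${total_cost:,}, net value ${net_value:,}, ROI {roi_multiplier:.1f}x")
# cost $106,800, net value $375,000, ROI 3.5x
```

The quality line item is the point: without Level 4 data, the rework term is invisible and the multiplier is inflated.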
The five levels of the AI Impact Hierarchy correspond directly to the five pillars of The Developer AI Impact Framework:
| Hierarchy Level | Framework Pillar | Primary Metric |
|---|---|---|
| Level 1: Adoption | Pillar 1: AI Adoption | Weekly Active User Rate |
| Level 2: Engagement | Pillar 2: AI Code Share | AI-Assisted Lines / Commits / PRs % |
| Level 3: Productivity | Pillar 3: Velocity | Complexity-Adjusted Throughput |
| Level 4: Quality | Pillar 4: Quality | Code Turnover Rate |
| Level 5: Business Value | Pillar 5: Cost & ROI | Net ROI Multiplier |
The Developer AI Impact Framework provides the measurement capability needed at each level. Organizations that implement all five pillars are equipped to operate at Level 5 of the hierarchy.
The vast majority of engineering organizations are stuck at the bottom of the hierarchy. Roughly 60% measure only adoption. Another 25% track engagement metrics like AI code share. Only about 10% measure productivity in ways that account for AI, and fewer than 5% track quality with AI attribution. The gap is not primarily about tooling -- it is a measurement maturity gap.
- **Level 1 to 2:** Implement AI attribution at the PR or commit level -- start with a PR template checkbox (a minimal parsing sketch follows this list), evolve to automated detection.
- **Level 2 to 3:** Implement complexity-adjusted throughput -- classify PRs by complexity and track weighted output rather than raw counts.
- **Level 3 to 4:** Implement code turnover rate tracking segmented by AI attribution -- track whether code survives 14 and 30 days.
- **Level 4 to 5:** Connect engineering metrics to business outcomes -- map throughput to feature delivery, cost-per-unit to budget, and quality to reliability.
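As noted in the first step, the cheapest attribution signal is a PR template checkbox. A minimal parsing sketch, assuming a conventional markdown checklist in the PR description; the template wording itself is hypothetical:

```python
import re

# Hypothetical PR description using a template checkbox for AI attribution.
pr_body = """
## Checklist
- [x] Tests added
- [x] AI-assisted: portions of this change were generated with an AI tool
"""

def pr_is_ai_assisted(body: str) -> bool:
    """Cheapest possible attribution: a checked 'AI-assisted' template box."""
    return bool(re.search(r"- \[x\] AI-assisted", body, re.IGNORECASE))

print(pr_is_ai_assisted(pr_body))  # True
```

Automated detection can later replace the regex while keeping the same boolean signal flowing into the Level 2 metrics downstream.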