TL;DR

  • The AI Impact Hierarchy is a five-level model for measuring AI's effect on engineering organizations. Each level builds on the one below, and most organizations stop at Level 1.
  • The five levels are: Adoption, Engagement, Productivity, Quality, and Business Value. They map directly to the five pillars of The Developer AI Impact Framework.
  • Measuring adoption without measuring impact is "adoption theater." Knowing that 70% of your engineers use Copilot tells you nothing about whether the investment is generating returns.
  • Each level requires different data, instrumentation, and organizational capability. Climbing the hierarchy is not just a measurement challenge -- it is a maturity challenge.

The Problem the Hierarchy Solves

Most engineering organizations measure AI adoption and declare success. The dashboard shows 70% weekly active usage of Copilot -- but that tells you nothing about how much code AI produced (AI code share), whether teams are shipping more value (complexity-adjusted throughput), or whether AI-generated code is durable (code turnover rate).

Stopping at adoption confirms access, not impact. The AI Impact Hierarchy provides a structured path from access to impact in five ascending levels.


The Five Levels

Level 1: Adoption

Question: Are developers using AI tools?

Key metrics: Weekly active user rate (WAU), tool installation rate, license utilization

Adoption tells you which teams have the tool and which do not. It does not tell you anything about impact -- a developer who opens Copilot once a week and dismisses every suggestion counts as an active user.

Most organizations stop here because adoption is the easiest level to measure. It requires only license management data, produces clean dashboard numbers, and answers the question leadership asks first: "are people using the thing we bought?" The problem is that this question is the least important one.
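As a concrete illustration of how little Level 1 demands, the WAU rate can be computed from nothing more than license records and a usage log. The event shape below (`user_id`, `event_date` tuples) is a hypothetical stand-in for whatever the tool's admin API exports:

```python
from datetime import date, timedelta

def weekly_active_user_rate(usage_events, licensed_users, as_of):
    """Share of licensed seats with at least one tool event in the past 7 days.

    usage_events: iterable of (user_id, event_date) tuples -- a hypothetical
    shape; real data would come from the vendor's usage export.
    """
    window_start = as_of - timedelta(days=7)
    active = {user for user, day in usage_events if window_start < day <= as_of}
    return len(active & set(licensed_users)) / len(licensed_users)

# Three of four licensed users were active in the week ending June 10.
events = [("a", date(2024, 6, 10)), ("b", date(2024, 6, 9)),
          ("c", date(2024, 6, 4)), ("d", date(2024, 5, 1))]
rate = weekly_active_user_rate(events, ["a", "b", "c", "d"], date(2024, 6, 10))
# rate == 0.75
```

Note that this counts the developer who "opens Copilot once a week and dismisses every suggestion" exactly the same as a heavy user, which is the limitation the higher levels exist to address.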

Level 2: Engagement

Question: How deeply are developers using AI tools?

Key metrics: AI code share at the line, commit, and PR level; acceptance rate; AI-assisted time as a percentage of coding time

Engagement surfaces the gap between shallow adoption (tool installed, occasionally used) and deep adoption (AI integral to the workflow). It does not tell you whether deep engagement produces better outcomes -- a developer with 60% AI code share could be generating excellent code or enormous volumes of disposable code.
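A minimal sketch of AI code share at the commit level follows. The `ai_lines` field is an assumption: in practice it would be populated by editor telemetry, commit trailers, or heuristic detection rather than being readily available:

```python
def ai_code_share(commits):
    """Fraction of changed lines attributed to AI across a set of commits.

    commits: list of dicts with 'lines_changed' and 'ai_lines' -- a
    hypothetical shape; real attribution requires instrumentation.
    """
    total = sum(c["lines_changed"] for c in commits)
    ai = sum(c["ai_lines"] for c in commits)
    return ai / total if total else 0.0

commits = [
    {"lines_changed": 120, "ai_lines": 80},
    {"lines_changed": 40, "ai_lines": 0},
]
share = ai_code_share(commits)  # 80 / 160 = 0.5
```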

Level 3: Productivity

Question: Is the team producing more valuable output?

Key metrics: Complexity-Adjusted Throughput (CAT) per engineer per week, cycle time by complexity tier, review queue depth

Productivity measurement distinguishes between raw volume and difficulty-weighted value. It does not tell you whether the additional throughput is durable -- a team can have strong CAT scores while accumulating technical debt from AI-generated code being quietly rewritten downstream.
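The difficulty-weighting idea can be sketched as follows. The tier names and weights here are illustrative assumptions; a real implementation would use the team's own PR classification rubric:

```python
# Illustrative complexity weights -- not a prescribed scale.
TIER_WEIGHTS = {"trivial": 1, "standard": 3, "complex": 8}

def complexity_adjusted_throughput(merged_prs, engineers, weeks):
    """Weighted PR output per engineer per week, rather than a raw PR count."""
    weighted = sum(TIER_WEIGHTS[pr["tier"]] for pr in merged_prs)
    return weighted / (engineers * weeks)

prs = ([{"tier": "trivial"}] * 10
       + [{"tier": "standard"}] * 4
       + [{"tier": "complex"}] * 2)
cat = complexity_adjusted_throughput(prs, engineers=4, weeks=1)
# (10*1 + 4*3 + 2*8) / 4 = 38 / 4 = 9.5
```

A raw count would score these 16 PRs the same whether they were all trivial or all complex; the weighting is what separates volume from value.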

Level 4: Quality

Question: Is the code durable and maintainable?

Key metrics: Code Turnover Rate (AI-generated vs. human-written), defect rate by AI attribution, Innovation Rate

Quality is where the difference between sustainable productivity and adoption theater becomes clear. GitClear's research shows code churn rising from 3.3% to 5.7-7.1% since widespread AI adoption. Organizations measuring only Levels 1 and 2 do not see this signal.
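Segmenting turnover by AI attribution can be sketched like this. The line-level record shape (`ai` flag plus `age_at_removal` in days, `None` if the line survives) is a hypothetical simplification of what git-blame-style tracking would produce:

```python
def turnover_rate(lines, window_days):
    """Share of lines rewritten or deleted within window_days of authorship."""
    churned = sum(1 for l in lines
                  if l["age_at_removal"] is not None
                  and l["age_at_removal"] <= window_days)
    return churned / len(lines)

def segmented_turnover(lines, window_days=30):
    """Turnover for AI-attributed vs. human-written lines, side by side."""
    ai = [l for l in lines if l["ai"]]
    human = [l for l in lines if not l["ai"]]
    return turnover_rate(ai, window_days), turnover_rate(human, window_days)

sample = ([{"ai": True, "age_at_removal": 5}] * 3
          + [{"ai": True, "age_at_removal": None}] * 7
          + [{"ai": False, "age_at_removal": 12}]
          + [{"ai": False, "age_at_removal": None}] * 9)
ai_rate, human_rate = segmented_turnover(sample)
# ai_rate == 0.3, human_rate == 0.1
```

The segmentation is the point: an aggregate churn number hides whether AI-generated code is the part being quietly rewritten.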

Level 5: Business Value

Question: Is the AI investment generating positive ROI?

Key metrics: Net ROI multiplier, cost per complexity-adjusted unit of output, time-to-market for new features

Business Value connects the full chain -- from tool investment to adoption to productivity to quality -- to outcomes that justify the investment. This level is only reliable if the levels below it are solid. The hierarchy exists because each level provides the foundation for the one above it.
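The net ROI multiplier itself is simple arithmetic; the hard part is that its inputs (dollar-valued throughput gains, license spend, review and rework overhead) are only trustworthy when Levels 1 through 4 are solid. The figures below are purely illustrative:

```python
def net_roi_multiplier(value_delivered, tool_cost, overhead_cost):
    """Ratio of incremental value to total AI spend.

    A multiplier above 1.0 means the investment returns more than it costs.
    All three inputs are assumptions here -- deriving real values depends on
    the measurement capability of the lower levels.
    """
    return value_delivered / (tool_cost + overhead_cost)

m = net_roi_multiplier(value_delivered=300_000,
                       tool_cost=60_000,
                       overhead_cost=40_000)
# 300_000 / 100_000 = 3.0
```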


How the Hierarchy Maps to the Framework

The five levels of the AI Impact Hierarchy correspond directly to the five pillars of The Developer AI Impact Framework:

| Hierarchy Level | Framework Pillar | Primary Metric |
| --- | --- | --- |
| Level 1: Adoption | Pillar 1: AI Adoption | Weekly Active User Rate |
| Level 2: Engagement | Pillar 2: AI Code Share | AI-Assisted Lines / Commits / PRs % |
| Level 3: Productivity | Pillar 3: Velocity | Complexity-Adjusted Throughput |
| Level 4: Quality | Pillar 4: Quality | Code Turnover Rate |
| Level 5: Business Value | Pillar 5: Cost & ROI | Net ROI Multiplier |

The Developer AI Impact Framework provides the measurement capability needed at each level. Organizations that implement all five pillars operate at Level 5 of the hierarchy.


Where Most Organizations Sit Today

The vast majority of engineering organizations are stuck at the bottom of the hierarchy. Roughly 60% measure only adoption. Another 25% track engagement metrics like AI code share. Only about 10% measure productivity in ways that account for AI, and fewer than 5% track quality with AI attribution. The gap is not primarily about tooling -- it is a measurement maturity gap.


Moving Up the Hierarchy

Level 1 to 2: Implement AI attribution at the PR or commit level -- start with a PR template checkbox, evolve to automated detection.

Level 2 to 3: Implement complexity-adjusted throughput -- classify PRs by complexity and track weighted output rather than raw counts.

Level 3 to 4: Implement code turnover rate tracking segmented by AI attribution -- track whether code survives 14 and 30 days.

Level 4 to 5: Connect engineering metrics to business outcomes -- map throughput to feature delivery, cost-per-unit to budget, and quality to reliability.
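The Level 1 to 2 starting point above (a PR template checkbox before automated detection) can be sketched in a few lines. The checkbox wording is a hypothetical convention, not a standard:

```python
import re

# Hypothetical PR-template convention: an "AI-assisted" markdown checkbox
# in the pull request description.
AI_CHECKBOX = re.compile(r"- \[x\]\s*AI-assisted", re.IGNORECASE)

def is_ai_assisted(pr_body: str) -> bool:
    """Read AI attribution from a PR template checkbox -- the low-cost
    first step before investing in automated detection."""
    return bool(AI_CHECKBOX.search(pr_body))

checked = is_ai_assisted("## Checklist\n- [x] AI-assisted (Copilot)")   # True
unchecked = is_ai_assisted("## Checklist\n- [ ] AI-assisted (Copilot)") # False
```

Self-reported checkboxes are noisy, but they establish the attribution habit and the data pipeline that automated detection later plugs into.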


Frequently Asked Questions

What is the AI Impact Hierarchy?

The AI Impact Hierarchy is a five-level model for measuring AI's effect on engineering organizations. The levels -- Adoption, Engagement, Productivity, Quality, and Business Value -- are ascending and cumulative. Each level builds on the one below it, requiring progressively more sophisticated measurement capability. The hierarchy maps directly to the five pillars of The Developer AI Impact Framework.

Why do most organizations stop at Level 1 of the AI Impact Hierarchy?

Adoption is the easiest level to measure -- it requires only license management data, not code-level instrumentation. It produces clean dashboard numbers and answers the question leadership asks first: "are people using the tool we bought?" The problem is that adoption tells you about access, not impact. Moving to higher levels requires instrumentation, attribution data, and analytical frameworks that most organizations have not yet built.

What is adoption theater?

Adoption theater is the practice of measuring AI tool adoption (who uses the tool) and treating it as evidence of impact (what the tool produced). An organization practicing adoption theater reports high WAU rates to leadership without tracking whether that usage translates into meaningful productivity, quality, or business outcomes. It is the most common failure mode in AI tool measurement.

How do you move from Level 1 to Level 5?

Each transition requires specific measurement capabilities. Level 1 to 2 requires AI attribution (tracking which code AI produced). Level 2 to 3 requires complexity-adjusted throughput measurement. Level 3 to 4 requires code durability tracking segmented by AI attribution. Level 4 to 5 requires connecting engineering metrics to business outcomes. The transitions are sequential -- you cannot reliably measure quality without first measuring productivity, and you cannot measure productivity without first measuring engagement.

How does the AI Impact Hierarchy relate to the Developer AI Impact Framework?

The five levels of the hierarchy map directly to the five pillars of [The Developer AI Impact Framework](/developer-productivity/developer-ai-impact-framework). Level 1 (Adoption) maps to Pillar 1 (AI Adoption). Level 2 (Engagement) maps to Pillar 2 (AI Code Share). Level 3 (Productivity) maps to Pillar 3 (Velocity / CAT). Level 4 (Quality) maps to Pillar 4 (Code Turnover Rate). Level 5 (Business Value) maps to Pillar 5 (Cost & ROI). The framework provides the metrics; the hierarchy provides the maturity model.

Further Reading