AI adoption KPIs are the specific, measurable indicators that tell a CIO whether AI tools are being used, how deeply they're embedded in workflows, what modalities they span, and whether usage patterns signal real productivity gains — not just activity.
The board is asking one question about AI in 2026: "Is it making us more productive?"
Not: "How many licenses did we buy?" Not: "What percentage of employees logged in?" Not: "How many tools are deployed?"
Those were acceptable answers in 2024. They are career-limiting answers in 2026.
Here's the gap: enterprises will spend an estimated $2.5 trillion on AI this year (Gartner, 2026), up 44% from 2025. Yet only 6% of CIOs report that accountability for AI governance and outcomes is clearly established.^1 Boards are waking up to this disconnect. When AI spend grows 44% and the CIO's adoption report shows "65% of employees logged in this month," the obvious follow-up is: "Logged in to do what? And did it make them more productive?"
The ten KPIs below answer that question. They move the CIO's measurement stack from tool inventory to productivity instrumentation — from counting seats to understanding whether AI is changing how work gets done.
What it answers: "What percentage of our workforce actually used an AI tool this week — across all tools, not just the primary platform?"
Why it matters: This is the floor metric. No usage means no productivity gain is possible. But the 2026 version of this KPI is fundamentally different from the 2024 version: it must capture usage across the entire AI tool ecosystem — not just your licensed ChatGPT Enterprise or Copilot instance.
Employees typically use 3–5x more AI tools than IT estimates. Organizations that measure WAU only on their primary platform are seeing a fraction of actual AI activity — and missing the productivity signals happening elsewhere.
How to measure:
| Dimension | What to Track | Threshold |
|---|---|---|
| Overall WAU | % of total workforce using any AI tool at least once per week | >70% = strong; <40% = intervention needed |
| WAU by tool | Active users per tool as % of provisioned users | <30% on a provisioned tool = waste signal |
| First-time vs. returning | Ratio of new users to returning users week-over-week | Healthy ratio shifts toward returning users over time |
Reporting cadence: Weekly.
Board-ready framing: "X% of our workforce actively used AI tools this week, up from Y% last quarter. Usage spans [N] distinct tools across [N] departments."
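As a sketch, the three WAU dimensions above can be computed from a cross-tool usage-event log. The event schema, dates, and seat counts here are hypothetical:

```python
from collections import defaultdict
from datetime import date

# Hypothetical usage-event log aggregated across ALL AI tools,
# not just the primary vendor's dashboard: (user_id, tool, event_date)
events = [
    ("u1", "copilot", date(2026, 3, 2)),
    ("u1", "chatgpt", date(2026, 3, 3)),
    ("u2", "copilot", date(2026, 3, 4)),
    ("u3", "claude",  date(2026, 3, 5)),
]
week = (date(2026, 3, 2), date(2026, 3, 8))
workforce = 5                                            # total headcount
provisioned = {"copilot": 4, "chatgpt": 3, "claude": 2}  # seats per tool

in_week = [(u, t) for u, t, d in events if week[0] <= d <= week[1]]

# Overall WAU: share of the workforce using ANY tool this week (>70% strong)
overall_wau = len({u for u, _ in in_week}) / workforce

# WAU by tool: active users as a share of provisioned seats (<30% = waste)
users_by_tool = defaultdict(set)
for u, t in in_week:
    users_by_tool[t].add(u)
wau_by_tool = {t: len(us) / provisioned[t] for t, us in users_by_tool.items()}
```

The key design point is the denominator: overall WAU divides by total headcount, while per-tool WAU divides by provisioned seats, which is what surfaces the waste signal.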
Larridin's four-layer measurement framework captures WAU across every AI tool in the ecosystem — sanctioned, tolerated, and Shadow AI — so the number you report to the board reflects actual organizational behavior, not just your primary vendor's dashboard.
What it answers: "Are employees doing real work with AI — or just dabbling?"
Why it matters: This is the KPI that separates activity from productivity. An employee who opens ChatGPT, asks one question, and closes the tab is counted as an active user. An employee who runs a multi-turn research synthesis, iterates on outputs, and integrates results into a deliverable is also counted as an active user. These are not the same thing.
Engagement depth distinguishes between shallow usage (simple queries, one-shot interactions, low-complexity tasks) and deep usage (multi-turn workflows, complex prompts, tool integration, output iteration). Deep usage correlates with productivity gains. Shallow usage does not.
How to measure:
| Signal | Shallow (Low Depth) | Deep (High Depth) |
|---|---|---|
| Session length | <2 minutes | >10 minutes |
| Interaction pattern | Single query, single response | Multi-turn, iterative refinement |
| Task complexity | Simple lookups, rewrites | Research synthesis, analysis, workflow automation |
| Output integration | Copy-paste into another tool | Direct integration into deliverables or downstream systems |
| Frequency | Sporadic, event-driven | Daily, habitual |
Scoring approach: Place each user on an engagement spectrum — dabbler, occasional user, regular user, deep user — based on behavioral signals, not self-reporting. Track the distribution shift over time.
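One way to operationalize that spectrum is a behavioral score built from the signals in the table. The weights and tier thresholds below are illustrative, not a standard:

```python
def engagement_tier(session_minutes: float, turns: int,
                    daily_habit: bool, integrates_output: bool) -> str:
    """Place a user on the dabbler-to-deep-user spectrum from behavioral
    signals. Weights mirror the table above and are illustrative."""
    score = 0
    score += 2 if session_minutes > 10 else (1 if session_minutes >= 2 else 0)
    score += 2 if turns >= 5 else (1 if turns > 1 else 0)   # multi-turn work
    score += 2 if daily_habit else 0                        # habit formation
    score += 1 if integrates_output else 0                  # downstream use
    if score >= 6:
        return "deep user"
    if score >= 4:
        return "regular user"
    if score >= 2:
        return "occasional user"
    return "dabbler"
```

Scoring from telemetry rather than surveys is the point: a user claims depth, but session length and turn counts don't.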
Reporting cadence: Weekly aggregate; monthly trend analysis.
Board-ready framing: "Of our active AI users, X% are deep users embedding AI into daily workflows — up from Y% last quarter. This cohort shows measurably higher output velocity."
Larridin scores engagement depth across behavioral signals — session patterns, interaction complexity, and habit formation — rather than relying on vendor-reported "usage" metrics that treat a single login the same as a full work session. See how engagement depth maps to Larridin's adoption spectrum.
What it answers: "Is your organization using AI only for text — or across the full spectrum of work?"
Why it matters: Most enterprises equate "AI adoption" with "ChatGPT usage." But AI in 2026 spans multiple modalities — text, code, image, audio, video — and the breadth of modality usage reveals how deeply AI has penetrated different types of work.
A team using AI only for text generation is capturing a narrow slice of productivity potential. A team using AI for text, code generation, image creation, audio transcription, and video analysis has AI embedded across the full surface area of their work.
Modality segmentation framework:
| Modality | Example Tools | Work Types Covered | Adoption Signal |
|---|---|---|---|
| Text | ChatGPT, Claude, Gemini | Research, writing, analysis, summarization, email | Table stakes — most organizations start here |
| Code | GitHub Copilot, Cursor, Claude Code | Software development, automation, scripting, data analysis | Engineering productivity accelerator |
| Image | Midjourney, DALL-E, Canva AI | Design, marketing creative, presentations, prototyping | Creative workflow transformation |
| Audio | ElevenLabs, Otter.ai, Whisper | Meeting transcription, voice synthesis, podcast production | Communication workflow integration |
| Video | Runway, Synthesia, Descript | Training content, marketing video, internal communications | Emerging — signals advanced adoption |
What to track:
Reporting cadence: Monthly.
Board-ready framing: "AI usage spans [N] modalities across the organization. Engineering uses AI for text and code. Marketing has expanded to text, image, and video. HR remains text-only — a targeted expansion opportunity."
Larridin classifies every AI tool by modality, autonomy level, and scope — giving CIOs a portfolio view of which work types are AI-enabled and where white space remains.
What it answers: "How much of your AI usage is human-driven (interactive) versus AI-driven (agentic)?"
Why it matters: This is the KPI that didn't exist 18 months ago — and in 2026, it's becoming one of the most important signals of AI maturity.
Interactive AI is what most organizations measure today: a human prompts an AI tool, the tool responds, the human evaluates and iterates. The human is in the loop for every step.
Agentic AI is fundamentally different: an AI agent receives a goal, plans the steps, executes autonomously, and delivers a result — with minimal or no human intervention during execution. AI coding agents, research agents, workflow automation agents, and multi-step task agents are all agentic.
The ratio between these two usage patterns reveals where your organization sits on the AI maturity curve:
| Ratio Profile | What It Signals |
|---|---|
| 95% interactive / 5% agentic | Early adoption — AI as assistant |
| 75% interactive / 25% agentic | Scaling — AI beginning to work independently |
| 50% interactive / 50% agentic | Advanced — AI is a co-worker, not just a tool |
| <50% interactive / >50% agentic | AI-native — autonomous workflows are the norm |
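As a sketch, the ratio profiles above can be turned into a classifier over usage telemetry. The bin boundaries are one possible reading of the table's point profiles, not a canonical scale:

```python
def maturity_profile(interactive: int, agentic: int) -> str:
    """Map the interactive/agentic usage split to the maturity tiers above.
    Bin edges are illustrative interpolations between the table's profiles."""
    share = agentic / (interactive + agentic)  # agentic share of all AI events
    if share < 0.15:
        return "early adoption"   # ~95/5: AI as assistant
    if share < 0.40:
        return "scaling"          # ~75/25: AI beginning to work independently
    if share <= 0.50:
        return "advanced"         # ~50/50: AI as co-worker
    return "AI-native"            # majority agentic: autonomous workflows
```

Counting "events" consistently is the hard part in practice: one agent run may span dozens of API calls, so normalize to completed tasks or runs, not raw requests.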
What to track:
Reporting cadence: Monthly; quarterly trend analysis.
Board-ready framing: "X% of our AI usage is now agentic — AI completing tasks autonomously without human prompting during execution. This is up from Y% last quarter, indicating AI is shifting from assistant to autonomous contributor."
80% of Fortune 500 companies now use active AI agents (Microsoft, 2026), but most CIOs lack visibility into how much work agents are actually doing. Larridin tracks the interactive-to-agentic ratio across the full tool ecosystem, giving CIOs the first clear picture of autonomous AI activity in their organization.
What it answers: "How much compute, tokens, and cost are AI agents consuming — and is that spend scaling predictably?"
Why it matters: Agentic AI fundamentally breaks the per-seat licensing model that CIOs have used to manage software spend for two decades.
With interactive AI (ChatGPT, Copilot), cost is predictable: $30/user/month, 500 users, $15,000/month. The CIO can budget this.
With agentic AI, cost is consumption-based and variable. A single agent run might consume 10,000 tokens and cost $0.15. Or it might consume 500,000 tokens across 47 API calls and cost $12.00. Multiply that by hundreds of agents running autonomously across the organization, and spend becomes a function of what agents are doing, not how many seats you have.
The new cost model:
| Cost Model | How It Works | Predictability | CIO Risk |
|---|---|---|---|
| Per-seat licensing | Fixed cost per user per month | High — budgetable | Low — capped spend |
| Consumption-based (agentic) | Variable cost per agent run — tokens, API calls, compute | Low — depends on agent behavior | High — uncapped, can spike |
| Hybrid | Per-seat for interactive + consumption for agentic | Medium | Medium — needs monitoring |
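To make the consumption model concrete, a per-run cost estimate is a token-weighted sum. The prices below are placeholders; real rates vary by vendor and model:

```python
def agent_run_cost(input_tokens: int, output_tokens: int,
                   price_in_per_m: float, price_out_per_m: float) -> float:
    """Consumption-based cost of a single agent run, in dollars.
    Prices are per million tokens; actual rates vary by model and vendor."""
    return (input_tokens * price_in_per_m
            + output_tokens * price_out_per_m) / 1_000_000

# One short run vs. one long multi-step run (placeholder $3/M in, $15/M out)
short_run = agent_run_cost(8_000, 2_000, 3.0, 15.0)       # ~$0.05
long_run = agent_run_cost(400_000, 100_000, 3.0, 15.0)    # $2.70
```

The two-order-of-magnitude spread between runs is exactly why agentic spend needs weekly monitoring rather than annual budgeting.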
What to track:
Reporting cadence: Weekly spend monitoring; monthly trend and efficiency analysis.
Board-ready framing: "Agentic AI spend is $X/month, growing at Y% month-over-month. Average cost per agent workflow is $Z. We're monitoring spend velocity against usage growth to ensure efficiency scales with adoption."
This is where most CIOs have a blind spot. Traditional SaaS management tools track licenses. Larridin tracks consumption-based agentic spend alongside per-seat costs — giving CIOs a unified view of AI economics that reflects the reality of how AI is being consumed in 2026.
What it answers: "What percentage of our AI users are power users — and is that percentage growing week-over-week, department by department?"
Why it matters: This is the single strongest leading indicator that AI adoption is translating into productivity. OpenAI's research shows a 6x productivity gap between AI power users and average employees.^4 McKinsey's data indicates AI power users complete tasks 77% faster.^3 The implication is stark: if your organization has 1,000 AI users but only 50 are power users, you're capturing a fraction of the productivity potential you're paying for.
Power user density answers the question the board is actually asking: not "are people using AI?" but "are people using AI well enough to change how fast they work?"
How to define a power user (behavioral signals, not self-reporting):
| Signal | Average User | Power User |
|---|---|---|
| Frequency | Uses AI a few times per week | Uses AI multiple times daily |
| Modality | Single modality (text only) | Multi-modal (text + code + image, etc.) |
| Session depth | Short, simple interactions | Extended, multi-turn, complex workflows |
| Tool breadth | 1 tool | 3+ tools across different categories |
| Output integration | Copy-paste into other tools | Direct workflow integration, automation |
| Agentic usage | None | Builds or uses agentic workflows |
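The table above translates directly into a classifier; a minimal sketch, with illustrative thresholds rather than a canonical power-user definition:

```python
def is_power_user(daily_sessions: float, modalities: set, tools: int,
                  uses_agents: bool, integrates: bool) -> bool:
    """Power-user test from the behavioral signals above.
    Requiring 4 of 5 signals is an illustrative threshold."""
    signals = [
        daily_sessions >= 1,   # multiple uses per day, on average
        len(modalities) >= 2,  # multi-modal (e.g. text + code)
        tools >= 3,            # breadth across tool categories
        uses_agents,           # builds or runs agentic workflows
        integrates,            # direct workflow integration, not copy-paste
    ]
    return sum(signals) >= 4

# Density: power users as a share of all active AI users (hypothetical cohort)
cohort = [
    (3.0, {"text", "code", "image"}, 4, True, True),   # power user
    (0.3, {"text"}, 1, False, False),                  # average user
]
density = sum(is_power_user(*u) for u in cohort) / len(cohort)  # 0.5
```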
What to track:
Reporting cadence: Weekly density tracking; monthly department-level analysis.
Board-ready framing: "Power users — employees using AI deeply enough to measurably change their output velocity — represent X% of our AI user base, up from Y% last quarter. Engineering power user density is at Z%; Sales is at W%. We're targeting N% organization-wide by Q[X]."
This is the KPI that transforms the board conversation from "are people using AI?" to "are people getting better at using AI?" Larridin tracks power user density across the adoption spectrum — non-user, explorer, regular user, power user, AI-native — with week-over-week growth tracking by department, giving CIOs the trend data they need to know whether their workforce is leveling up or plateauing.
What it answers: "Which departments are using AI and which aren't — and how wide is the gap?"
Why it matters: Aggregate adoption rates are misleading. Your overall 65% WAU might mask a 92% engineering rate and a 23% HR rate. The variance between departments is where the productivity story actually lives — because every department below threshold is a team getting zero AI-driven productivity lift.
In 2026, the cross-department gap is significant:
| Department | Bottom Quartile WAU | Median WAU | Top Quartile WAU |
|---|---|---|---|
| Technology & Engineering | 35–50% | 65–75% | 85–95% |
| Sales & Marketing | 25–40% | 55–70% | 80–90% |
| Customer Success & Support | 30–45% | 60–75% | 85–95% |
| Human Resources | 20–35% | 45–60% | 70–85% |
| Finance & Operations | 15–30% | 40–55% | 65–80% |
What to track:
Reporting cadence: Monthly.
Board-ready framing: "AI adoption ranges from X% in Engineering to Y% in HR — a [N]-point variance. We're targeting <20-point variance by Q[X] through targeted enablement in lagging departments. Every department below 40% represents a team not yet benefiting from AI-driven productivity gains."
Larridin segments adoption across all four measurement layers — usage, depth, breadth, and segmentation — by department, role, geography, and hierarchy level. This surfaces exactly where adoption is strong, where it's lagging, and where intervention will produce the fastest productivity lift.
What it answers: "Is AI usage growing, plateauing, or declining — and at what rate?"
Why it matters: A snapshot is not a strategy. Knowing that 65% of your workforce used AI this week tells you the current state. Knowing that it was 63% last week and 58% the week before tells you the trend. And the trend is what the board cares about: is this accelerating, stalling, or regressing?
Adoption velocity is particularly critical because AI adoption follows a predictable curve with a dangerous middle zone — the plateau trap. Organizations typically see rapid early adoption (novelty effect), followed by a plateau where usage stabilizes well below potential. Without velocity tracking, CIOs mistake the plateau for "steady state" when it's actually a stall.
What to track:
| Metric | Healthy Signal | Warning Signal |
|---|---|---|
| WoW WAU growth | 1–3% growth per week during scaling | <0.5% growth for 4+ consecutive weeks |
| New user activation rate | Steady stream of first-time users | New user count declining while total workforce grows |
| Returning user retention | >80% of last week's users return | <60% return rate — churn signal |
| Modality expansion rate | New modalities adopted per quarter | Stuck on single modality for 2+ quarters |
| Department velocity variance | Lagging departments accelerating | Lagging departments flat while leaders grow |
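The plateau-trap warning in the table above is mechanical enough to automate. A minimal sketch, assuming a weekly WAU time series per department:

```python
def plateau_alert(weekly_wau: list, min_growth: float = 0.005,
                  window: int = 4) -> bool:
    """Flag the plateau trap: week-over-week WAU growth below `min_growth`
    (0.5 percentage points) for `window` straight weeks, mirroring the
    warning threshold in the table above."""
    if len(weekly_wau) < window + 1:
        return False  # not enough history to judge
    recent = weekly_wau[-(window + 1):]
    growth = [b - a for a, b in zip(recent, recent[1:])]
    return all(g < min_growth for g in growth)
```

Run this per department, not just on the aggregate, so a stall in one team isn't masked by growth elsewhere.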
Reporting cadence: Weekly.
Board-ready framing: "AI adoption is growing at X% week-over-week. New user activation is [steady/accelerating/slowing]. We've identified [N] departments in the plateau zone and have targeted enablement plans in place."
Larridin provides real-time adoption velocity tracking with automated plateau detection — alerting CIOs when adoption velocity drops below threshold for any department, so intervention happens in weeks, not quarters.
What it answers: "Are employees using 10% of the tool or 60%?"
Why it matters: Most enterprise AI tools have deep feature sets that users barely scratch. A team "using Copilot" might only be using autocomplete suggestions while ignoring Copilot Chat, Copilot in Word, Copilot in Meetings, and Copilot Studio. They're counted as active users in every dashboard — but they're capturing a fraction of the productivity potential they're paying for.
Feature utilization rate reveals the gap between what you're buying and what people are actually using. It's the most direct signal of wasted productivity potential — not wasted spend (that's a finance problem), but wasted capability (that's a CIO problem).
Feature utilization by tool (illustrative):
| Tool | Commonly Used Features | Underutilized Features | Productivity Left on the Table |
|---|---|---|---|
| Microsoft Copilot | Email drafting, meeting summaries | Copilot Studio, Power Automate integration, data analysis in Excel | Workflow automation, custom agent creation |
| GitHub Copilot | Code autocomplete | Chat, code explanation, test generation, PR summaries | Code review acceleration, documentation automation |
| ChatGPT Enterprise | Simple Q&A, text rewriting | Custom GPTs, data analysis, canvas, multi-modal input | Repeatable workflows, team-specific assistants |
| Claude | Research, writing | Projects, artifacts, computer use, extended thinking | Complex analysis, structured output generation |
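The utilization rate itself is a simple set ratio per tool. The feature catalog below is illustrative, not an official list:

```python
def feature_utilization(used: set, available: set) -> float:
    """Share of a tool's feature surface the team actually touches."""
    return len(used & available) / len(available)

# Illustrative GitHub Copilot feature catalog vs. observed team usage
copilot = {"autocomplete", "chat", "explain", "test_gen", "pr_summary"}
observed = {"autocomplete", "chat"}
rate = feature_utilization(observed, copilot)  # 0.4: 60% left on the table
```

The intersection guards against telemetry reporting features that were since retired from the catalog.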
What to track:
Reporting cadence: Monthly.
Board-ready framing: "Our teams use an average of X% of available features in our primary AI tools. For Copilot specifically, utilization is at Y% — autocomplete is universal, but Copilot Studio and workflow automation remain underadopted. This represents significant untapped productivity potential without additional spend."
This is one of the highest-leverage KPIs for a CIO: it identifies productivity gains available without buying anything new. Larridin measures feature utilization across the full AI tool portfolio, surfacing exactly which capabilities are deployed but unused — turning "we need more AI tools" conversations into "we need to use the ones we have better."
What it answers: "What percentage of AI usage in our organization is happening outside sanctioned, governed tools?"
Why it matters for productivity (not just security): The conventional CIO framing of Shadow AI is risk — data leakage, compliance exposure, ungoverned spend. That framing is correct but incomplete.
Shadow AI is also a productivity signal. When employees adopt unsanctioned AI tools, they're telling you something: the sanctioned tools aren't meeting their workflow needs fast enough. Shadow AI is the market signal that your AI portfolio has gaps — and your employees are solving productivity problems faster than IT can provision solutions.
The dual framing:
| Lens | What Shadow AI Tells You | CIO Action |
|---|---|---|
| Risk lens | Ungoverned tools processing company data | Detect, classify risk, apply governance spectrum |
| Productivity lens | Sanctioned tools don't cover this workflow need | Evaluate the Shadow AI tool, sanction or provide alternative |
What to track:
Reporting cadence: Weekly detection; monthly analysis.
Board-ready framing: "Shadow AI accounts for X% of total AI usage — [above/below] our target of 3–4%. We identified [N] unsanctioned tools this month. [N] were fast-tracked for evaluation because they indicate workflow gaps in our current portfolio. [N] were flagged for risk review."
83% of enterprises report Shadow AI growing faster than IT can track (Larridin, 2025). Larridin's Shadow AI detection framework continuously discovers unsanctioned AI tools across the organization — and uniquely reframes Shadow AI as both a risk to manage and a productivity signal to learn from.
These ten KPIs don't live in isolation. They form a layered reporting stack, with different KPIs surfaced at different cadences and to different audiences:
| Cadence | KPIs | Audience | Purpose |
|---|---|---|---|
| Weekly | WAU, Adoption Velocity, Power User Growth, Shadow AI Detection, Agentic Spend | CIO + AI Program Team | Operational pulse — catch stalls and anomalies fast |
| Monthly | Engagement Depth, Modality Mix, Department Variance, Feature Utilization, Agentic Ratio | CIO + Department Heads | Strategic view — where is adoption deepening, where is it stuck? |
| Quarterly | All 10 KPIs + trend analysis | Board / Executive Committee | Board-ready — is AI making us more productive, and can we prove it? |
The quarterly board report should answer three questions:
These are not vanity metrics. These are the instrumentation layer that tells a CIO — and a board — whether AI investment is translating into organizational productivity or just generating activity.
CIOs should track ten adoption and usage KPIs across four dimensions: activity (WAU, adoption velocity), depth (engagement depth, feature utilization, power user density), breadth (modality mix, department variance), and new AI patterns (agentic ratio, agentic spend, Shadow AI rate). The critical shift from prior years: measuring AI adoption as a productivity proxy, not a deployment checklist. Boards are no longer satisfied with login counts — they want evidence that AI is changing how work gets done.
Two structural changes: the rise of agentic AI and the shift from per-seat to consumption-based cost models. In 2025, CIOs could track adoption through login counts on a few primary tools. In 2026, AI agents run autonomously with variable compute costs, employees use multiple modalities (text, code, image, audio), and Shadow AI makes single-vendor dashboards dangerously incomplete. The KPI stack must evolve to match.
Power user density and its week-over-week growth rate. WAU tells you activity. Engagement depth tells you quality. But power user density tells you whether AI is genuinely changing productivity — because power users show a 6x productivity gap versus average users (OpenAI). A CIO whose power user density is growing 2% per week has a different organizational trajectory than one where it's been flat for two quarters.
Frame every metric as a productivity signal, not a deployment update. Instead of "65% of employees logged into an AI tool this month," report: "65% of employees actively used AI this week. Power users — those using AI deeply enough to measurably change output velocity — grew from 8% to 12% this quarter. AI usage now spans 4 modalities, and agentic workflows handle X% of routine tasks autonomously." The board doesn't care about logins. They care about whether AI spend is making the organization faster.
Agentic AI refers to autonomous AI systems that plan, execute, and deliver results without human prompting during execution. It needs its own KPI because it breaks the traditional measurement model: agentic AI generates variable, consumption-based costs (tokens, API calls, compute) rather than fixed per-seat costs, and its work happens without a human session to measure. CIOs who don't track the agentic ratio and agentic spend velocity will have a growing category of AI activity — and AI cost — that they can't see or manage.
Track what unsanctioned tools people are choosing and for what workflows — then ask why they chose them over sanctioned alternatives. If employees adopt an unsanctioned design tool, it likely means the sanctioned design workflow has a gap. If a department builds on an unapproved AI agent platform, it means the approved platform doesn't meet their automation needs. Shadow AI detection is a risk management function. Shadow AI analysis is a productivity intelligence function.
70%+ weekly active usage rate across all AI tools is strong. Below 40% signals intervention is needed. But the rate alone is insufficient — a 70% WAU with 3% power user density and single-modality usage signals wide but shallow adoption. The benchmarks that matter are multi-dimensional: WAU, engagement depth, modality count, power user percentage, and department variance together paint the real picture.
Weekly for operational KPIs (WAU, velocity, power user growth, Shadow AI, agentic spend), monthly for strategic KPIs (engagement depth, modality mix, department variance, feature utilization), and quarterly for board reporting with full trend analysis. The most common mistake is quarterly-only measurement — by the time a plateau shows up in quarterly data, three months of potential productivity gains have been lost.
^1 Info-Tech Research Group, "CIO Priorities 2026." Based on survey of CIOs and IT leaders across enterprise organizations.
^2 Gartner, enterprise AI spending forecast, 2026.
^3 McKinsey Global Survey on AI, 2026. n=1,363 respondents across industries and regions.
^4 OpenAI, "The State of Enterprise AI," 2025. Analysis of productivity differences across user engagement levels.
^5 Microsoft Security Blog, "80% of Fortune 500 Use Active AI Agents," February 2026.
^6 Larridin, "State of Enterprise AI 2025," n=567 companies across 12 industries. Updated every 2–3 weeks.