
The Larridin Guide to ROI for Enterprise AI: How to Build the Financial Case for Multi-Year AI Investment

Written by Floyd Smith | Apr 9, 2026 10:08:56 PM

The board wants a number. The CFO wants a formula. But the companies generating billions in AI value, such as IBM, JPMorgan, and Starbucks, didn’t get there by calculating payback periods. They built compounding value curves. Here’s how to frame a financial case for AI transformation that reflects how the value actually accrues.

TL;DR

A single AI ROI number is misleading at best, dangerous at worst. AI transformation value compounds over multi-year cycles, with modest returns early, acceleration in the middle, and exponential separation at maturity. The financial case isn’t about the payback period. It’s about compounding organizational intelligence and giving the board the right metrics at the right phase.

 

Using this Guide


This Guide has ten sections:

 

#1 Introduction

This section frames the entire process: how to communicate with the board, what numbers they need, and why you should tell a story instead of just delivering a single number.


Your anticipated results may vary, depending on your industry. Figure 1 is from Larridin’s report, The State of Enterprise AI. It shows the industries where our respondents reported the highest and lowest expectations for near-term ROI from AI.

Figure 1. Industries with the highest and lowest expectations for near-term ROI from AI. (Source: The State of Enterprise AI 2026)

And note that more is not always better. If companies in your industry are early and eager AI adopters, the bar is high for your company just to keep up, and developing a leadership position will be a challenge.

If, on the other hand, your industry is slow to move to AI, there’s room for deep thinking; for task, operational, and supply chain refactoring; and for outstanding innovation. For instance, you might have noticed that the slowest-adopting industries all present obvious opportunities for combining AI with robotics…

The Single-Number Trap

Somewhere in your next board meeting, someone will ask: “What’s the ROI on our AI investment?”

The question feels reasonable. But the answer, if you give a single number, will be wrong.

Here’s the core problem: AI transformation ROI is not a static figure. It’s a compounding curve. Early investments in adoption, visibility, and governance produce modest, hard-to-quantify returns. Mid-stage investments in proficiency and workflow encoding produce measurable acceleration. Mature investments, where AI operates on your proprietary knowledge base, produce returns that are exponential and nearly impossible for competitors to replicate.

Collapsing that complex curve into one number erases the most important thing about it: the trajectory.

The data confirms this pattern. PwC’s 2026 Global CEO Survey found that 56% of CEOs report no revenue or cost benefits from AI. Deloitte’s AI ROI: The Paradox of Rising Investment and Elusive Returns (October 2025) found that only 6% of organizations achieve payback in under one year. But BCG’s The Widening AI Value Gap (September 2025) analysis of over 1,250 firms found that the leaders—the top 5% achieving value at scale—generate 1.7x revenue growth and 3.6x total shareholder return compared to laggards.

The gap between early disappointment and long-term dominance is the compounding curve. A single ROI number at month six tells you little about where you’ll be at month thirty.

The companies generating real value understand this. JPMorgan produced $2 billion in business value, built on years of proprietary data infrastructure and the work of 900 data scientists. IBM’s “Client Zero” initiative produced $4.5 billion in productivity gains after embedding AI across 70+ workflows over multiple years. Starbucks drove a 30% ROI increase through Deep Brew, built on—and leveraging—a decade of customer behavior data no competitor can replicate.

None of these numbers materialized in year one. All of them came from building AI around organizational intelligence that competitors cannot buy.

This is the Core-and-Orbit Model in financial terms: ROI grows as you move from outer orbit investments (tools, vendors, pilots) toward core investments (amplified organizational intelligence). The outer orbit produces table stakes. The core produces competitive separation.

 

The Financial Translation Layer: Three Metrics That Replace a Single ROI

The Measuring AI Impact guide introduces a Financial Translation Layer: three approaches that communicate AI value without collapsing it into a single, misleading figure.

Capacity Reallocation Value (CRV). This is the most broadly applicable metric. CRV calculates the economic value of shifting employee time from routine work to strategic work – not just “hours saved,” but the value differential of what those hours are now spent on. A marketing manager who saves five hours weekly on drafting and redirects that time to strategy creates $625 in weekly value, not $375, when you account for the higher-value work. CRV makes the invisible visible.

Cost of Delay (CoD). Quantifies the financial value of delivering outcomes sooner. If accelerating a product launch by six weeks generates $75,000 per week, the CoD value is $450,000. This connects time savings directly to revenue, without using a traditional ROI formula.

Return on AI Investment (ROAI). For CFOs who require a traditional calculation: ROAI = (Revenue Attribution + Cost Savings + Risk Mitigation Value) / Total AI Investment. Position this as the capstone metric, not the starting point. Without operational measurement infrastructure feeding real data into the formula, ROAI is fiction dressed as finance.

Different metrics serve different audiences. CRV tells the operating story. CoD tells the speed story. ROAI tells the financial summary. Together, they give the board a narrative, not a number.
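
For readers who want to sanity-check the arithmetic, here is a minimal sketch of the three translation-layer calculations in Python. The five hours, six weeks, and $75,000 per week come from the examples above; the $75 and $125 hourly values and the ROAI inputs are illustrative assumptions, not benchmarks from this guide.

    # Minimal sketch of the Financial Translation Layer metrics.
    # Hourly rates and ROAI inputs below are illustrative assumptions.

    def capacity_reallocation_value(hours_shifted, routine_rate, strategic_rate):
        # CRV values the strategic work the freed hours now fund,
        # not just the routine hours saved.
        naive_hours_saved = hours_shifted * routine_rate   # e.g. 5 * 75  = 375
        crv = hours_shifted * strategic_rate                # e.g. 5 * 125 = 625
        return crv, naive_hours_saved

    def cost_of_delay(weeks_accelerated, value_per_week):
        # Financial value of delivering an outcome sooner.
        return weeks_accelerated * value_per_week           # 6 * 75_000 = 450_000

    def roai(revenue_attribution, cost_savings, risk_mitigation, total_investment):
        # Capstone metric: only meaningful with real operational data behind it.
        return (revenue_attribution + cost_savings + risk_mitigation) / total_investment

    crv, naive = capacity_reallocation_value(5, 75, 125)
    print(f"CRV: ${crv}/week (naive hours-saved value: ${naive})")
    print(f"Cost of Delay: ${cost_of_delay(6, 75_000):,}")
    print(f"ROAI: {roai(1_200_000, 800_000, 200_000, 1_000_000):.1f}x")  # hypothetical inputs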

 

Framing ROI by Transformation Phase

The financial case changes as transformation matures. Here’s how to frame value at each stage, mapped to the transformation roadmap.

Early Phase (0-6 Months): Foundation Value

What you’re building: Visibility, governance infrastructure, baseline adoption measurement, and the knowledge audit that maps where organizational intelligence lives.

What to measure: Untracked AI spend identified and brought under visibility. Percentage of AI usage governed. Compliance risks surfaced and prevented. Baseline adoption rates by team. Coverage of the knowledge audit.

Board framing: “We’ve built the infrastructure to invest at scale – identifying $X in untracked AI spend, governing Y% of usage, and preventing Z compliance risks.” Early-phase ROI looks modest. The companies that push through this phase are the ones that reach the compounding curve.

Mid Phase (6-18 Months): Acceleration Value

What you’re building: Proficiency programs, workflows that encode organizational intelligence into AI systems, and measurement connected to business outcomes.

What to measure: CRV across functions. Proficiency-driven productivity gains. (OpenAI documents a 6x engagement gap between power users and typical employees). Cost of Delay on accelerated workflows. The Net Productivity Matrix that separates real gains from the 37% rework problem.

Board framing: “AI-augmented teams are completing workflows X% faster with Y% quality improvement, generating $Z in capacity reallocation value.” This is where the 90-day transformation cycles start compounding, with each cycle building on the knowledge encoded in the previous one.

Mature Phase (18+ Months): Separation Value

What you’re building: AI operating on your proprietary knowledge base. Competitive capabilities that cannot be replicated by buying the same tools.

What to measure: Revenue from AI-amplified capabilities. Full ROAI backed by real operational data. The widening gap between your AI’s performance and a competitor’s generic deployment.

Board framing: “Our AI transformation is a competitive moat: $X in additional revenue built on proprietary knowledge systems competitors cannot replicate.” This is the IBM, JPMorgan, and Starbucks outcome. Not a payback calculation; a compounding curve tracked by the metrics that matter.

 

Three Mistakes That Kill the Financial Case

1. Expecting payback in year one. Deloitte found only 6% of organizations achieve payback in under a year. If your board expects it, you have a framing problem, not a performance problem. Set expectations around the compounding curve from day one, or you’ll be defending a “failed” initiative that was actually on track.

2. Benchmarking against generic averages. Your ROI should not look like the industry average, because your transformation should not. The Core-and-Orbit Model means your AI strategy is built around your organizational intelligence – your value creation pattern will be unique. BCG found that 95% of organizations are not achieving value at scale. Benchmarking against the 95% is benchmarking against failure.

3. Ignoring the compounding effect. Measuring AI ROI at a single point is like measuring compound interest after the first month. Each cycle of the transformation loop builds on the previous one. The operating model you build determines whether those cycles compound or stall. Organizations that report ROI quarterly, without showing the trend line, are undermining their own case.

 

Communicating to the Board: The Narrative, Not the Number

The board doesn’t need a single ROI number. They need confidence that the investment justifies continued commitment. That confidence comes from a narrative, not a formula.

Lead with the trajectory. A 15% improvement in proficiency scores this quarter, driving a 22% throughput improvement next quarter, tells a more compelling story than any single number.

Connect to your competitive position. The board understands moats. Frame AI transformation as building a capability competitors cannot replicate by buying the same technology. Reference the AI Maturity Model: where you are, what stage you’re progressing through, and what becomes possible next.

Match the narrative to the phase. In the early phase, boards need to hear about risk mitigation and foundation-building. In the mid-phase, boards need to know about acceleration and scaling. In the mature phase, boards need competitive separation and net new revenue creation. Mismatching destroys credibility. And never lead with ROAI alone; it invites the single-number trap.

#2 How to Build an AI Transformation Roadmap

Stop writing three-year plans that are obsolete before the ink dries. Build a living roadmap on repeating 90-day cycles that deepen as your organization matures.

 

TL;DR

The AI landscape shifts every six months; a static multi-year roadmap can’t survive that pace. Build your AI transformation roadmap around the 90-day Transformation Loop (Assess, Prioritize, Execute, Measure, Adapt), with each cycle increasing in strategic depth as your organization matures. Early cycles build visibility and foundation. Middle cycles encode knowledge and scale. Mature cycles compound advantage.

 

Why Static Roadmaps Fail in AI

Most AI transformation roadmaps look like traditional IT roadmaps with “AI” bolted on: a phased three-year plan that moves from pilots to scaling to optimization. Neat. Linear. And almost always wrong by quarter three.

The reason is structural. Foundation models shift capabilities every six months. The build-vs-buy calculus reverses as open-source models mature and costs per API call drop. Agentic AI, which was barely a concept two years ago, is now reshaping what “deployment” even means. A roadmap written in January is navigating a different landscape by July.

RAND’s report, Why AI Projects Fail and How They Can Succeed, found that more than 80% of AI projects fail. PwC’s 2026 Global CEO Survey reported that 56% of CEOs have realized neither revenue nor cost benefits from AI. BCG’s The Widening AI Value Gap (September 2025) analysis of over 1,250 firms found that only 5% of organizations achieve AI value at scale. These aren’t technology failures. They’re planning failures: organizations locking into fixed plans while the ground is shifting beneath them.

The alternative isn’t “no plan.” It’s a different kind of plan. The AI Transformation Guide introduces the Core-and-Orbit Model: build your strategy around your organizational intelligence (the core), commit to execution disciplines that compound over years (the inner orbit), and keep tools, vendors, and specific bets deliberately adaptive (the outer orbit). A living roadmap is the operating rhythm that makes this model executable: holding the core and inner orbit steady while the outer orbit shifts every cycle.

 

How the Roadmap Connects to the Pillar Frameworks

A roadmap doesn’t exist in isolation. Three pillar frameworks determine what your roadmap should contain at any given moment.

Your maturity position determines what’s possible next. The AI Maturity Model defines five stages – from Visibility and Controls through Agentic Deployment. You cannot skip stages, but you can accelerate through them. Your roadmap must respect where you actually are, not where you wish you were. An organization at Stage 1 that roadmaps agentic deployment, a much later stage, is writing fiction.

Visibility is the starting point for every cycle. The AI Adoption Guide makes clear that you cannot govern, measure, or optimize what you cannot see. Every 90-day cycle begins with an honest assessment of your AI landscape: what’s deployed, what’s being used, what’s unsanctioned, and where the gaps sit. Organizations that skip this step build roadmaps on assumptions.

Each cycle needs measurement to know if you’re amplifying or just automating. If your metrics can’t tell you whether AI is encoding your organization’s unique expertise into scalable systems, or merely speeding up generic tasks, you’re measuring activity, not transformation. The Measuring AI Impact framework – effectiveness, quality, time, revenue, and cost – provides the five dimensions you evaluate against at the end of every cycle.

 

The 90-Day Transformation Loop: A Practical Framework

The loop has five phases. The rhythm stays the same. What deepens over time is the strategic ambition of each phase.

Phase 1: Assess (Week 1-2)

Map the current state. What has changed since the last cycle?

Early cycles (0-6 months): Discover the full AI tool landscape (sanctioned and unsanctioned). Identify where your most valuable organizational knowledge lives. Baseline adoption rates by team, department, and location. Audit governance gaps.

Mid cycles (6-18 months): Evaluate which knowledge domains have been encoded into AI workflows and which remain untapped. Assess proficiency progression. Review which outer-orbit bets need to shift.

Mature cycles (18+ months): Assess compounding returns. Is each AI workflow building on knowledge captured in previous cycles? Identify workflows ready for agentic deployment. Evaluate where the gap between your AI capabilities and commodity solutions has widened.

Checklist questions:

  • What do we know now that we didn’t know 90 days ago?
  • Which outer-orbit variables have shifted?
  • Where is the biggest gap between what our people know and what our AI systems leverage?

Phase 2: Prioritize (Week 3-4)

Pick the highest-value targets for a given cycle:

  • Early cycles: Prioritize visibility, governance infrastructure, and baseline measurement. Pick one or two teams for deeper adoption tracking.
  • Mid cycles: Prioritize the knowledge domains where encoding tacit expertise into AI creates the most competitive advantage. Focus proficiency development on the teams carrying your most valuable domain knowledge.
  • Mature cycles: Prioritize workflows with the highest Capacity Reallocation Value, where freeing human talent from AI-automatable work redirects it toward judgment-intensive, strategic work.

Checklist questions:

  • Which initiative would create the most durable competitive advantage?
  • Do we have the maturity prerequisites to execute this?
  • What is the primary metric we will optimize, and what are our guardrail metrics?

Phase 3: Execute (Week 5-10)

Run the targeted initiatives. Build from the core out.

  • Early cycles: Deploy discovery and governance tooling. Launch initial proficiency assessments. Establish the measurement layer – thin at first, but present from day one.
  • Mid cycles: Build workflows that systematically transfer tacit knowledge into AI systems. Run differentiated enablement programs based on proficiency data. Scale adoption from pockets of excellence to adjacent teams.
  • Mature cycles: Deploy agentic workflows in production. Design AI-first processes where humans intervene at checkpoints rather than every step. Cross-pollinate encoded knowledge between business units.

Checklist questions:

  • Are we building around our organizational intelligence, or around a vendor’s product?
  • Is this initiative encoding knowledge or just automating tasks?

Phase 4: Measure (Week 11-12)

Evaluate results against the five dimensions of AI impact.

Early cycles: Focus on cost (what are we spending?) and basic effectiveness (is anyone using this?).

Mid cycles: Add quality (is rework declining as proficiency rises?) and time (are workflows completing faster?). Begin correlating AI-augmented work with business outcomes.

Mature cycles: The full five-dimension suite is active, including revenue attribution and Capacity Reallocation Value. Measurement now directs investment, not just reports on it.

Checklist questions:

  • Did AI amplify what we intended, or just accelerate it?
  • Can we distinguish between activity and impact in our data?
  • What did we learn that changes what’s possible next cycle?

Phase 5: Adapt (Ongoing into next cycle)

Feed every lesson into the next cycle’s assessment. Reassess the outer orbit: Are the tools still the right ones? Should the build-vs-buy balance shift? Has a new capability emerged that changes priorities? Which assumptions turned out to be wrong?

If you’re running the same tool strategy you had a year ago, you’re not being adaptive; you’re being complacent.

 

Common Mistakes

Writing a Static Three-Year Plan

Leadership wants certainty. Strategy teams deliver a phased multi-year roadmap with milestones, Gantt charts, and a “done” date. By month nine, the landscape has shifted, and the plan becomes either a constraint that prevents adaptation or shelfware that nobody follows. A roadmap built on 90-day cycles provides the structure leadership wants, without the brittleness that kills execution.

Trying to Skip Maturity Stages

Organizations at Stage 1 (Visibility and Controls) roadmapping Stage 5 (Agentic Deployment) are writing aspirational fiction. Each stage produces the knowledge and capability required for the next. An organization that deploys AI agents without understanding how work flows through the organization will automate the wrong things. An AI transformation assessment that honestly locates your current position is worth more than a roadmap that starts from where you wish you were.

Roadmapping Tools Instead of Knowledge Domains

“Deploy Copilot in Q1, build a customer service chatbot in Q2, launch an internal RAG system in Q3”: this is a tool deployment schedule, not a transformation roadmap. Tools are outer-orbit variables. They change.

A real roadmap is organized around which knowledge domains you’re encoding into AI and which competitive capabilities you’re building. When your roadmap reads like a vendor deployment calendar, you’ve confused the outer orbit with the core.

 

#3 AI Transformation Operating Model: How to Structure Your Organization for Continuous AI Evolution

You don’t need a reorg. You need an operating structure that supports continuous transformation – one that keeps strategy centralized, execution disciplined, and tool decisions flexible.

 

 

TL;DR

Most organizations either centralize AI in a center of excellence (too slow, too disconnected) or distribute it to every team (no coherence, no governance). Both fail. The Core-and-Orbit Operating Model proposes a hybrid: centralized stewardship of strategy and the five execution disciplines, distributed flexibility for tool selection and experimentation. The operating model must support repeating 90-day cycles, not a one-time deployment, and it must evolve as your maturity increases.

 

The Two Models That Fail – and Why

Every enterprise lands in one of two traps when it tries to organize for AI.

Trap 1: The Centralized Center of Excellence. A dedicated AI team owns everything: tool selection, deployment, governance, measurement. In theory, this creates coherence. In practice, it creates a bottleneck. The central team becomes a service desk. And the people who carry your most valuable domain expertise, such as traders, engineers, and clinicians, are the furthest from decisions about how AI gets used.

Trap 2: Distributed free-for-all. Every team picks its own tools, builds its own workflows, defines its own metrics. This produces the shadow AI problem at scale: 200-300 tools, no governance, no shared measurement, no way to know whether AI is amplifying organizational intelligence or just accelerating busywork.

Both models fail for the same reason: they treat AI transformation as either a technology function or a team-level activity. It’s neither. It’s an organizational capability that requires centralized strategy with distributed execution. The AI Transformation Guide calls this the Core-and-Orbit Model. This section of our Guide turns that model into an operating structure, with clear ownership at every layer.

 

How the Operating Model Connects to the Pillar Frameworks

Maturity stage determines the operating model’s shape. At Stage 1 of the AI Maturity Model, you need centralized visibility. At Stage 3, proficiency development requires coordination across business units that a purely centralized model can’t deliver. By Stage 4 and beyond, workflow intelligence and agentic deployment demand deep business-unit involvement. The operating model is not static; it evolves as your maturity increases.

Governance needs clear ownership at every layer. The Shadow AI Guide demonstrates that governance is a spectrum from educate to block. Someone has to define that spectrum, and someone else has to enforce it at the point of action. Centralized policy, distributed enforcement.

Each inner orbit discipline needs an owner. The five execution disciplines (adoption visibility, proficiency development, governance, measurement, and maturity progression) compound over years. Without clear ownership, they become everyone’s concern and no one’s responsibility.

 

The Three-Layer Operating Model

Layer 1: Core Stewardship

Who: A strategic function led by the CIO, CAIO, or transformation lead, who reports directly to the CEO. This is a strategy role, not a technology role.

Mandate: Own the question at the center of the entire model: “What is our organizational intelligence, and how do we amplify it with AI?”

Core stewardship is responsible for:

  • Defining the organizational intelligence strategy. Where does your most valuable tacit knowledge live? Which knowledge domains should AI amplify first?
  • Setting the 90-day Transformation Loop rhythm. The AI Transformation Roadmap runs on repeating cycles of Assess, Prioritize, Execute, Measure, Adapt. Core stewardship owns the cadence and ensures that each cycle deepens in strategic ambition.
  • Connecting AI transformation to competitive strategy. This is the translation layer between the C-suite and the execution disciplines. If the board asks, “What is AI doing for us?,” and the answer is, “We deployed tools,” you have the wrong person in the role.
  • Evolving the operating model itself. As maturity increases, the balance between centralized control and distributed authority must shift. Core stewardship monitors progression and triggers those shifts.

What this is not: Core stewardship does not manage tools, run training programs, or evaluate vendors. It sets strategic direction and ensures that execution layers build from the core out.

Layer 2: Inner Orbit Ownership

Who: A cross-functional team or coordinated set of owners, one per discipline. This could be a single AI Execution team, or distributed leads with a formal coordination mechanism.

Mandate: Own the five execution disciplines that compound over years.

  • Adoption visibility. Complete, continuously updated visibility into what AI tools are being used, by whom, and for what. Without this, governance is guesswork and measurement is fiction.
  • Proficiency development. A continuous capability-building engine. Identify power users, design differentiated enablement, deploy champions as force multipliers. The 6x engagement gap between power users and typical employees is this owner’s mandate to close.
  • Governance. Define the governance spectrum: educate, warn, monitor, restrict, block. Ensure enforcement at the point of action.
  • Measurement. Build the infrastructure that tracks AI impact across five dimensions: effectiveness, quality, time, revenue, and cost. Without this, the organization cannot distinguish activity from amplification.
  • Maturity progression. Track where every team, department, and function sits on the five-stage maturity model. Produce the maturity heatmap that shows the real distribution, not a single blended score (see the sketch after this list). Trigger operating model evolution when the organization outgrows its current structure.
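
As a rough illustration of that heatmap, here is a minimal sketch in Python; the departments, teams, and stage assessments are hypothetical, and the point is simply that a per-team count preserves the distribution that a single blended score hides.

    from collections import Counter

    # Hypothetical per-team assessments against the five-stage maturity model.
    assessments = [
        ("Finance", "FP&A", 2), ("Finance", "Treasury", 1),
        ("Engineering", "Platform", 4), ("Engineering", "Mobile", 3),
        ("Marketing", "Content", 3), ("Marketing", "Ops", 1),
    ]

    # Heatmap cells: how many teams in each department sit at each stage.
    heatmap = Counter((dept, stage) for dept, _team, stage in assessments)
    for (dept, stage), count in sorted(heatmap.items()):
        print(f"{dept:<12} stage {stage}: {count} team(s)")

    # The single blended score flattens exactly this spread.
    blended = sum(stage for _, _, stage in assessments) / len(assessments)
    print(f"Blended maturity score: {blended:.1f}")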

The coordination mechanism matters. These five owners must operate as a connected system, not five silos. A biweekly inner orbit sync (think 30 minutes, data-driven) keeps the disciplines connected and feeds directly into the 90-day Transformation Loop.

Layer 3: Outer Orbit Flexibility

Who: Business units, functional teams, and innovation leads. The people closest to the work.

Mandate: Select tools, manage vendors, run experiments, and build AI workflows within the guardrails set by the inner orbit.

Outer orbit execution is distributed because it has to be. The marketing team knows which AI content tools serve their workflows. The engineering team knows which code generation tools fit their stack. Centralizing these decisions produces the bottleneck problem. Distributing them without guardrails produces the shadow AI problem.

The inner orbit provides the guardrails. Adoption visibility ensures that no tool goes dark. Governance defines what data can flow where. Measurement ensures that every tool deployment is evaluated against impact dimensions. Proficiency development ensures that people are building skills, not just collecting subscriptions to AI tools.

 

Common Mistakes

Building a Center of Excellence That Only Manages Tools

An organization creates a dedicated AI team and hands it responsibility for tool evaluation, deployment, and support. The team becomes a procurement function with an AI label. Nobody owns the question of organizational intelligence. The “center” is at the outer orbit, when it should be at the core.

The fix: Put ownership of the organizational intelligence question back at the core. If your AI leader spends more time evaluating vendor demos than identifying where your most valuable knowledge lives, you’ve misaligned the operating model.

Distributing Execution Without Governance Infrastructure

Leadership announces that AI transformation is “everyone’s responsibility,” gives every team a budget, and steps back. Six months later: 250 tools, little or no visibility, duplicative spend, and compliance exposure that no one can quantify.

The fix: Distribute execution only after the inner orbit disciplines are staffed and operational. Governance, visibility, and measurement must exist before distributed experimentation can be productive.

Failing to Evolve the Operating Model as Maturity Increases

The centralized structure that enabled early visibility at Stage 1 constrains the distributed workflow intelligence and change management that later stages require.

The fix: At Stage 1-2, lean centralized. At Stage 3, begin distributing proficiency development to business units. At Stage 4-5, distribute execution authority, while core stewardship maintains strategic coherence. The operating model is a living structure, not an org chart carved in stone.

 

How the Three Layers Support the 90-Day Loop

The layers map directly to the Transformation Loop:

  • Assess: Core stewardship drives. Inner orbit owners bring data: adoption trends, proficiency scores, governance metrics, maturity positions.
  • Prioritize: Core stewardship sets strategic priorities. Inner orbit translates them into discipline-specific plans.
  • Execute: Outer orbit teams run initiatives. Inner orbit provides guardrails.
  • Measure: The measurement owner evaluates results across five dimensions. Core stewardship interprets results against competitive strategy.
  • Adapt: Core stewardship decides what shifts. Outer orbit retires what isn’t working and scales what is.

Every 90 days, the loop turns. The operating model ensures someone owns every part of it – and that the parts connect. Get the structure right, and each cycle compounds. Get it wrong, and you’re either too slow or too scattered to capture the value your workforce is capable of creating.

 

#4 AI Transformation by Industry: Why Your Core Determines Your Strategy

The Core-and-Orbit Model doesn’t change by industry. But the core does. And that changes everything.

TL;DR

The tacit knowledge that matters in financial services is significantly different from the tacit knowledge that matters in healthcare, logistics, manufacturing, retail, or professional services. Your AI transformation strategy must start from what your organization uniquely knows, and that knowledge is shaped by the industry you operate in. The inner orbit disciplines stay the same. The core determines how you apply them.

 

The Core Changes. The Framework Doesn’t.

The Core-and-Orbit Model is built on an architectural insight: AI transformation should orbit around your organizational intelligence: the tacit knowledge, domain expertise, and decision patterns that live in your people, not your databases.

That model is universal. But the content of the core is not.

A financial services firm’s organizational intelligence is built on risk intuition and market pattern recognition. A hospital system’s intelligence lives in clinical expertise no model can replicate from textbooks alone. A logistics company’s advantage is encoded in supplier relationships and routing instincts that experienced operators carry in their heads.

This is the power of the framework. The inner orbit (adoption, proficiency, governance, measurement, and maturity) stays constant across every industry. The outer orbit adapts to industry-specific tools and regulations. But the core is fundamentally shaped by your domain. Organizations that miss this try to import another industry’s AI playbook. It doesn’t work. The tools might overlap, but the knowledge at the center does not.

 

Financial Services: Decision Patterns and Risk Intuition

The core here is accumulated judgment under uncertainty: risk assessment, market pattern recognition, and deal evaluation that the best traders and analysts carry as instinct, not documented procedure.

JPMorgan built its transformation around this. Over 200,000 employees use the bank’s proprietary LLM Suite, generating an estimated $2 billion in business value, built on $10 trillion in daily transaction flow and decades of financial intelligence. Goldman Sachs took a parallel approach with its GS AI Platform, a single secure gateway to route the firm’s accumulated financial intelligence through AI. Klarna went in the opposite direction: deploying AI to replace human judgment, cutting 40% of its workforce, then publicly admitting the AI lacked the nuanced discernment experienced agents carried.

What makes it distinct: Regulatory density pushes governance toward restrict-and-monitor. The tacit knowledge is temporal: market intuition built over cycles, not quarters.

 

Healthcare: Clinical Judgment and Diagnostic Expertise

The core is clinical knowledge that resists codification. Expert physicians outperform AI diagnostic models by 15.8% in accuracy, a gap driven by tacit clinical knowledge: the pattern recognition built through thousands of patient encounters; the diagnostic instinct that synthesizes symptoms, history, and context in ways no training dataset captures.

The stakes of getting the core wrong here are measured in patient outcomes. A physician’s judgment about when to deviate from a treatment protocol is precisely the kind of uncodifiable expertise that the Core-and-Orbit Model is designed to protect and amplify.

What makes it distinct: Governance tolerances for unauthorized AI usage should sit at 1-2%, compared to 5-6% in technology companies (per the “shadow AI” benchmarks). AI transformation in healthcare isn’t slower because the technology is worse. It’s slower because the cost of failure is human.

 

Logistics and Supply Chain: Operational Intuition and Relationship Knowledge

The core is accumulated operational instinct: the warehouse manager who reroutes shipments before the system flags a delay, the procurement lead who knows which suppliers deliver under pressure, the routing coordinator whose network knowledge outperforms optimization algorithms on the edge cases that matter most.

This intelligence is deeply relational and contextual. It lives in informal knowledge about which ports are actually congested versus what the data shows, in the judgment calls about when to trust a forecast and when to override it.

What makes it distinct: This core is distributed across an extended network, not concentrated in a headquarters. AI transformation must account for knowledge that lives partly outside the firm. The AI systems that create the most value augment experienced operators during disruptions: precisely when algorithms trained on normal conditions fail.

 

Manufacturing: Process Expertise and Quality Instinct

The core is deep process knowledge: the technician whose instinct catches defects no sensor detects, the engineer who adjusts a process when raw materials shift, the quality expert who spots a systemic issue from a pattern invisible in the data.

Siemens is building an Industrial Foundation Model specialized in the physics and logic of the industrial world, because general-purpose models cannot carry domain-specific operational knowledge. John Deere has spent over a decade building AI around its unmatched understanding of precision agriculture, turning farming operations expertise into a data flywheel no competitor can replicate.

What makes it distinct: The core is physical, tied to machinery and materials that behave differently in practice than in theory. The outer orbit includes industrial IoT, edge computing, and safety regulations with no equivalent in knowledge-work industries.

 

Retail: Customer Behavior Knowledge and Merchandising Expertise

The core is proprietary understanding of customer behavior: buying patterns, seasonal dynamics, and merchandising instincts that experienced buyers carry from years of observation.

Walmart built Wallaby, retail-specific LLMs trained on decades of proprietary product, customer, and operational data no general-purpose model carries. Starbucks processes 100 million weekly transactions through Deep Brew, turning customer behavior into personalized experiences at a scale competitors cannot replicate. Both made the same strategic choice: to build AI around what they uniquely know about their customers.

What makes it distinct: The feedback loop between AI and outcomes is faster than in almost any other industry. You can test AI-driven merchandising within days, not quarters, which makes the measurement discipline of the inner orbit especially powerful and especially unforgiving.

 

Professional Services: Client Relationships and Domain Consulting Expertise

The core is relational and advisory intelligence: the partner who reads a client’s unspoken concerns, the consultant whose pattern recognition makes her recommendations land differently, the deal-maker whose judgment about timing has been honed over hundreds of transactions.

This is the hardest core to encode into AI because it’s the most deeply interpersonal. The client relationship isn’t in the CRM. The deal judgment isn’t in the playbook. The organizational intelligence is inseparable from the humans who carry it.

What makes it distinct: The proficiency discipline matters more here than in any other industry. Without deep proficiency, AI in professional services becomes an expensive writing assistant. With it, AI becomes a mechanism for scaling judgment previously locked in individual heads.

 

The Constant Across All Industries

The core changes. The inner orbit does not.

Every industry needs adoption depth: visibility into which AI tools people are actually using. Every industry needs proficiency development: the mechanism by which organizational intelligence gets encoded into AI. Every industry needs governance, though tolerance levels differ sharply: healthcare at 1-2% unauthorized usage, technology at 5-6%, financial services in between (per the shadow AI benchmarks). Every industry needs measurement connecting AI to business outcomes. And every industry progresses through the same maturity stages.

The inner orbit is the universal operating system. The core is what makes your implementation of it yours.

 

Three Mistakes Organizations Make Across Industries

1. Copying another industry’s playbook. A healthcare system adopts a Silicon Valley approach: high tolerance for unauthorized tools, move-fast governance, minimal oversight. The result: compliance exposure and a governance posture mismatched to regulatory reality. Your industry’s constraints aren’t obstacles. They’re design parameters.

2. Ignoring what makes your tacit knowledge distinct. Deploying generic AI tools to generic workflows and expecting transformation. If your AI strategy doesn’t start with the question, “What does our organization know that no one else knows?,” answered with industry-specific precision, you’re building someone else’s transformation.

3. Applying uniform governance across a conglomerate. Same governance framework for the healthcare division and the technology division. One is too loose. The other is too tight. Both fail. Governance benchmarks must be calibrated to industry-specific risk, not to a corporate standard that ignores the differences.

 

#5 AI Transformation Change Management: Getting Your People to Build From the Core

Most change management programs for AI focus on getting people to use new tools. That’s the wrong target. The real change challenge is getting people to externalize what they uniquely know, so AI can amplify it.

 

TL;DR

63% of AI implementation challenges stem from human factors, not technical ones (Russell Reynolds Associates). Yet most organizations treat AI change management as a tool adoption campaign, with training sessions, launch emails, and usage dashboards. The actual change required is far harder: getting people to externalize their tacit knowledge into AI systems. Enterprises that integrate real change management into AI initiatives are 47% more likely to meet their objectives. The difference between organizations that transform and organizations that just deploy is whether their people make the shift from “AI is a tool I use” to “AI amplifies what I uniquely know.”

 

The Change Challenge Most Organizations Misdiagnose

Here is the uncomfortable truth about AI transformation: the technology is the easy part.

The hard part is human. It is getting a 20-year veteran procurement lead to articulate the supplier intuition she has never written down. It is convincing the senior engineer that encoding his debugging instinct into an AI workflow is not a threat to his value; it is the highest expression of it. It is shifting an entire organization’s mental model from “AI does tasks for me” to “AI makes what I know more powerful.”

This is not a standard change management problem. Rolling out a new ERP system requires people to learn new screens and new processes. Rolling out AI transformation requires people to externalize knowledge they may not even realize they have, and trust that doing so makes them more valuable, not less.

The Core-and-Orbit Model makes this explicit. At the center of every AI transformation sits your organizational intelligence: the tacit knowledge, domain expertise, and decision patterns that live in your people. That core is your organization’s moat. But it only becomes an AI asset when people actively encode it. A core that stays locked in people’s heads is not a strategy; it is a vulnerability.

The data confirms the scale of the problem. RAND’s Why AI Projects Fail report found that 80% of AI projects fail, and the primary drivers are organizational, not technical. PwC’s 2026 Global CEO Survey found that 56% of CEOs have seen no revenue or cost benefits from AI. MIT’s GenAI Divide report (August 2025) shows 95% of generative AI pilots never scale beyond the experimental phase. Behind every one of these statistics is the same root cause: tools were deployed, but behavior did not change.

This is what we call regret spend: the growing pile of AI investment that produces activity, but not transformation. Licenses purchased, dashboards green, adoption metrics climbing, but with nothing fundamentally changing about how the organization operates or competes. The tools are in place. The knowledge stays locked in people’s heads. The core remains untapped.

 

How This Connects to Adoption and Proficiency

Change management for AI transformation does not exist in isolation. It sits at the intersection of two disciplines that most organizations treat separately: adoption and proficiency.

Adoption gives you the vehicle. Proficiency gives you the driver. Change management gives you the destination.

Consider two contrasting approaches to AI adoption. At Zapier, CEO Wade Foster drove 97% company-wide AI adoption through happy hours, hackathons, and a culture of experimentation: a bottom-up strategy built on psychological safety and peer influence. Meta tied performance reviews directly to AI-driven impact: a top-down mandate where high performers can earn bonuses of up to 200%. Both approaches can work. Both drive adoption. But neither, on its own, produces transformation. Adoption without direction is just activity.

Now layer in proficiency. The AI Proficiency spectrum, from Level 1 Search Replacer to Level 5 AI-Native Orchestrator, maps the change journey itself.

Most employees start at Level 1, using AI as a slightly better search engine. The change management challenge is moving them up the spectrum: not just in skill, but in mindset. The leap from Level 2 to Level 3 is not a training problem; it is the moment when someone stops thinking of AI as a utility and starts thinking of it as a thinking partner. The leap from Level 4 to Level 5 is when they begin encoding their unique expertise into AI workflows that outlast any single session.

EY’s 2025 Work Reimagined Survey makes the gap vivid: 88% of employees use AI daily, but only 5% use it in advanced ways. That 83-point gap is not a skills deficit. It is a change management failure. The tools are deployed, but the behavior has not shifted.

 

A Change Framework for AI Transformation

Traditional change models, such as Kotter’s 8 steps, ADKAR, and Prosci, were built for process changes: new systems, new org structures, new workflows. AI transformation requires something different, because the change itself is different. You are not asking people to follow a new process. You are asking them to externalize what they know, and to trust that the organization will value them more for it, not less.

Here is a five-stage framework built specifically for this challenge.

1. Create Visibility Into What People Already Do With AI

You cannot manage change you cannot see. Before launching any initiative, understand the current state. Who is using AI? How deeply? What are they actually doing with it? Where are the pockets of sophistication, and where are the deserts?

This is not a survey exercise. Self-reported AI usage is unreliable; the Dunning-Kruger effect applies in full force. Use behavioral data. Map your organization’s actual AI landscape across all tools, teams, and proficiency levels. The patterns will surprise you: the quiet analyst who has built a sophisticated workflow no one knows about; the loud advocate who turns out to be a Level 1 search replacer.

2. Identify Knowledge Champions

Not all power users are created equal for change management purposes. You need people who sit at the intersection of two qualities: deep tacit expertise and high AI proficiency. The 20-year domain expert who is also a Level 4 power user is your most valuable change agent, because she can demonstrate what it looks like to encode real organizational knowledge into AI, not just use AI for generic productivity.

Find these people by gathering data, not through self-nomination. Then make them visible. Their credibility comes from the fact that they carry the institutional knowledge the organization depends on, and they have found ways to make AI amplify it.

3. Build Encoding Loops

This is the core of the framework, and where most programs fall short. An encoding loop is a structured, repeatable process for transferring tacit expertise into AI workflows. It is not a one-time knowledge dump. It is an ongoing cycle:

  • Elicit – Work with domain experts to surface the knowledge they use but have never documented. The decision heuristics. The pattern recognition. The “I just know” instincts.
  • Encode – Translate that knowledge into AI-usable formats: custom instructions, prompt libraries, workflow templates, fine-tuning datasets, structured knowledge bases.
  • Test – Run the AI workflow against real scenarios. Does it replicate the expert’s judgment? Where does it fall short?
  • Refine – Feed the gaps back to the expert. Iterate. Each loop captures more of what they know.

The encoding loop is where the Core-and-Orbit Model comes to life. Every iteration makes the core, your organizational intelligence, more accessible, more durable, and more defensible.

4. Celebrate Amplification, Not Just Activity

What you measure and celebrate signals what you value. Most organizations celebrate AI activity: the number of users, sessions, and queries. This reinforces the wrong behavior. It rewards people for using AI, not for making AI smarter with their knowledge.

Shift the recognition model. Celebrate the sales lead who encoded her client relationship intelligence into an AI system that helps the whole team. Celebrate the operations manager whose supplier knowledge now lives in a workflow that routes decisions automatically. Celebrate the engineer whose debugging instinct is now captured in an agentic workflow that others can leverage.

The metric that matters is not, “How much are people using AI?” It is, “How much organizational knowledge is now encoded in AI systems that did not exist there before?”

5. Iterate Every Cycle

This is not a one-time initiative. Align change management to the 90-day transformation cycles described in the AI Transformation Roadmap. Each cycle, reassess: What new knowledge domains should we encode? Which champions have emerged? Where is resistance persisting, and why? What has the organization learned about itself?

The change deepens over time. Early cycles focus on building trust and demonstrating value. Mid cycles focus on scaling encoding practices across teams. Mature cycles focus on building a self-sustaining culture where knowledge externalization is the default, not the exception.

 

The Mistakes That Derail AI Change Management

Focusing on Tool Training Instead of Knowledge Externalization

This is the most common mistake. Organizations invest heavily in “how to use Copilot” training (prompt engineering workshops, feature walkthroughs, and tip sheets), then call it change management. It is not. Teaching people to use a tool does not teach them to encode what they know. Tool training produces Level 1-2 users. Knowledge externalization produces the Level 4-5 users who actually drive transformation.

Mandating Adoption Without Enabling Proficiency

The Meta approach, tying performance reviews to AI usage, only works if employees have the proficiency to deliver meaningful results. Without investment in moving people up the proficiency spectrum, mandates will produce compliance, not transformation. People will log in, run basic queries, and check the box. The dashboards will look great. Nothing will have changed. This is how you manufacture regret spend at scale.

Treating Change Management as a One-Time Rollout

A launch campaign is not a change program. AI transformation is a permanent capability, and the change management that supports it must be equally permanent. The landscape shifts every quarter, with new models, new capabilities, and new possibilities. A change program that ended six months ago is already obsolete. Build change management into every 90-day cycle, not into a launch plan.

 

Where to Start

If your organization is early in this journey, focus on three things:

  1. Get honest about the current state. Run a real assessment: not a survey, but a behavioral analysis of who is doing what with AI, and how deeply. The gap between perceived and actual usage is almost always larger than leaders expect.
  2. Find your knowledge champions. Identify the people who combine deep domain expertise with genuine AI proficiency. They exist in every organization. They are rarely the ones in the spotlight.
  3. Build one encoding loop. Pick a single knowledge domain, a single expert, and build the loop. Document what works. Then scale it.

The organizations that get AI change management right will not just have higher adoption rates or better proficiency scores. They will have done something far more valuable: they will have turned the knowledge that lives in their people into an institutional asset that compounds over time, one that survives turnover, scales beyond any individual, and becomes the foundation of a durable AI advantage.

That is not tool deployment. That is transformation.

 

#6 AI Transformation Metrics: What to Measure Beyond Adoption

Most organizations track AI logins and license utilization and call it measurement. That tells you activity, not transformation. The metrics that matter measure whether AI is amplifying your organizational intelligence, not just whether people are logging into it.

TL;DR

The majority of AI measurement programs track activity (logins, sessions, license counts) when they should be tracking amplification. Transformation metrics answer a fundamentally different question: is AI making your organization’s unique knowledge more powerful? Organize your metrics around the Core-and-Orbit Model, measuring knowledge capture at the core, execution discipline in the inner orbit, and tool performance in the outer orbit. The result is a measurement system that distinguishes real transformation from expensive tool deployment.

 

The Core Problem: You’re Measuring Activity, Not Amplification

Most AI measurement dashboards show monthly active users, sessions per employee, license utilization rates, and tool-level adoption percentages. These numbers go up and to the right. They look good in a quarterly slide deck. They tell you almost nothing about transformation.

Most organizations are measuring whether people are using AI. They are not measuring whether AI is amplifying what makes the organization uniquely competitive. That distinction is everything.

Consider two employees, both logging in daily. One dismisses 47 autocomplete suggestions. The other has built a workflow encoding a decade of domain expertise into an AI-assisted decision process the entire team now relies on. Your adoption dashboard shows them as identical. Your transformation metrics should show them as worlds apart.

The AI Proficiency guide documents the scale of this gap: OpenAI’s 2025 State of Enterprise AI report found a 6x engagement difference between power users and typical employees. EY’s 2025 Work Reimagined Survey found 88% use AI daily, but only 5% in advanced ways. Meta’s Q4 2025 earnings reported 30% average output gains, but 80% among power users. Activity metrics flatten this distribution into a single “adopted” checkbox. Transformation metrics reveal it.

The question your metrics must answer is not “are people using AI?” It is “is AI making our organizational intelligence more powerful?”

 

Connecting the Frameworks

This metrics framework draws directly from three pillar guide frameworks – and your measurement system should reflect all of them.

The Productivity Roof from the Measuring AI Impact guide provides the structural architecture. Five pillars (adoption, proficiency, throughput, reliability, and governance) support a roof of measurable business outcomes. Your transformation metrics need coverage across all five pillars, not just the first one.

The proficiency spectrum from the AI Proficiency guide tells you whether people are encoding expertise or just chatting. A Level 1 “Search Replacer” and a Level 5 “AI-Native Orchestrator” generate completely different kinds of value. If your metrics can’t distinguish between these levels, you are blind to where value actually lives.

The Five Dimensions of AI Impact are effectiveness, quality, time, revenue, and cost. They provide the outcome layer. Rising proficiency means nothing if it isn’t producing better outcomes, faster throughput, fewer errors, and real financial returns.

The takeaway: measure across all five Productivity Roof pillars; measure the depth of proficiency, not just the presence of adoption; and connect everything to the five outcome dimensions. That is a transformation measurement system. Everything else is vanity metrics.

 

A Practical Framework: Metrics Organized by the Core-and-Orbit Rings

The Core-and-Orbit Model provides the organizing structure. Metrics should span all three rings; transformation that only shows up in one ring isn’t transformation at all.

Core Metrics: Knowledge Capture and Expertise Encoding

Core metrics measure whether AI is capturing and amplifying your organizational intelligence (the tacit knowledge, domain expertise, and decision patterns that make you uniquely competitive):

  • Expertise encoding rate. How many AI workflows embed domain-specific knowledge versus generic prompts? Count the workflows where someone has externalized institutional expertise into a reusable AI process.
  • Knowledge asset growth. Are you building a growing library of proprietary AI assets (custom instructions, specialized workflows, fine-tuned processes) that encode what your organization uniquely knows?
  • Cross-team knowledge leverage. When one team encodes expertise into an AI workflow, do other teams adopt and build on it? Knowledge locked in one team isn’t compounding.
  • Tacit-to-explicit conversion. Are your most experienced people transferring their intuition into AI systems others can use? This is the ultimate transformation metric: organizational intelligence moving from people’s heads into scalable AI workflows.

These are the hardest metrics to measure, and the most important. If the core metrics aren’t moving, everything in the outer rings is just tool deployment.
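
To show how even a rough count can work, here is a minimal sketch of the expertise encoding rate and cross-team knowledge leverage in Python, assuming your workflow inventory records whether each AI workflow embeds domain-specific knowledge and how many teams reuse it; the records and field names are hypothetical.

    # Hypothetical workflow inventory; field names are illustrative, not a product schema.
    workflows = [
        {"name": "supplier-risk-triage", "encodes_domain_knowledge": True,  "teams_using": 3},
        {"name": "generic-email-drafts", "encodes_domain_knowledge": False, "teams_using": 7},
        {"name": "claims-escalation",    "encodes_domain_knowledge": True,  "teams_using": 1},
    ]

    encoded = [w for w in workflows if w["encodes_domain_knowledge"]]

    # Expertise encoding rate: share of workflows that embed institutional expertise.
    encoding_rate = len(encoded) / len(workflows)

    # Cross-team knowledge leverage: encoded workflows reused beyond the team that built them.
    leveraged = sum(1 for w in encoded if w["teams_using"] > 1)

    print(f"Expertise encoding rate: {encoding_rate:.0%}")
    print(f"Encoded workflows reused by other teams: {leveraged} of {len(encoded)}")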

Inner Orbit Metrics: The Strategic Constants

The inner orbit’s five execution disciplines each need their own measurement layer:

  • Adoption depth. Go beyond “who has logged in” to “who is using AI in ways that matter.” Track the distribution across the proficiency spectrum: what percentage of your workforce is at Level 1-2 versus Level 3-5? A shift from 80/20 to 60/40 is more meaningful than moving from 85% to 95% on a login metric.
  • Proficiency distribution. Measure where your workforce sits on the spectrum, segmented by department, role, and seniority. The metric that matters is not average proficiency; it is the shape of the distribution (see the sketch after this list). A bimodal distribution (many Level 1s, a few Level 5s) signals a different problem than a normal curve centered at Level 2.
  • Governance coverage. What percentage of AI usage falls within governed, visible channels? Most CIOs estimate 60-70 tools; actual counts often reveal 200-300. If coverage is declining as adoption grows, risk is accumulating faster than value.
  • Measurement maturity. Where are you on the measurement journey? Stage 1 (activity), Stage 2 (efficiency), Stage 3 (outcome), or Stage 4 (strategic)? The sophistication of your measurement determines whether you can even see transformation happening.
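
Here is a minimal sketch of the proficiency distribution and governance coverage calculations in Python, assuming per-user proficiency levels and per-tool usage counts come from behavioral telemetry; the data below is hypothetical.

    from collections import Counter

    # Hypothetical behavioral data: proficiency level (1-5) per user, plus
    # usage event counts per tool and the set of governed, visible tools.
    user_levels = [1, 1, 2, 2, 2, 3, 1, 4, 2, 5, 1, 2]
    usage_events = {"copilot": 120, "chatgpt": 80, "notes-ai": 30, "pdf-summarizer": 10}
    governed_tools = {"copilot", "chatgpt"}

    levels = Counter(user_levels)
    total_users = len(user_levels)

    # The shape of the distribution, not the average, is the metric.
    for level in range(1, 6):
        print(f"Level {level}: {levels[level] / total_users:.0%}")
    basic = sum(levels[l] for l in (1, 2)) / total_users
    print(f"Level 1-2 vs Level 3-5 split: {basic:.0%} / {1 - basic:.0%}")

    # Governance coverage: share of AI usage flowing through governed channels.
    total_events = sum(usage_events.values())
    covered = sum(n for tool, n in usage_events.items() if tool in governed_tools)
    print(f"Governance coverage: {covered / total_events:.0%}")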

Outer Orbit Metrics: The Strategic Variables

The outer orbit covers tools, vendors, and specific initiatives. These metrics should change as the landscape shifts; review them every 90 days.

  • Tool performance by use case. Which tools produce the highest-quality outcomes for which tasks? Not adoption per tool, but measured impact per tool per use case.
  • ROI by initiative. Return on each discrete AI initiative, measured across the five dimensions. Not a blended number, but initiative-level granularity.
  • Vendor value realization. Are you getting expected value from each vendor relationship? Measure against the business case, not the contract.
  • Build vs. buy effectiveness. Which approach delivers more value per dollar? This shifts as open-source models mature and API costs drop. Revisit quarterly.

Outer orbit metrics are important, but not precious. They change as your tool landscape changes. The inner orbit and core metrics are the ones you commit to for years.

 

Common Mistakes

Measuring Logins Instead of Proficiency

A dashboard showing 90% monthly active users tells you nothing about whether those users are Level 1 search replacers or Level 4 power users encoding domain expertise. The proficiency distribution is the metric. The login count is background noise.

Collapsing Everything into a Single ROI Number

As the Measuring AI Impact guide makes clear, a single ROI percentage encourages gaming, hides functional variation, and produces a number nobody trusts. Measure across the five dimensions (effectiveness, quality, time, revenue, and cost), with function-specific primary metrics and guardrail metrics. The board needs a narrative, not a number.

Measuring Quarterly Instead of Continuously

Proficiency benchmarks need recalibration every 30 days. A quarterly cadence means you are always three months behind reality. Transformation metrics should be continuous: real-time telemetry for adoption and proficiency, monthly reviews for inner orbit metrics, 90-day reassessments for the outer orbit. The 5% achieving AI value at scale are measuring constantly. The rest are reacting to problems that started months ago.

 

Where to Start

You don’t need to build all of this overnight. Three moves:

  • First, establish your proficiency baseline. Measure where your workforce sits on the proficiency spectrum: not by survey, but by behavioral data. This single metric tells you more about transformation progress than every adoption dashboard combined.
  • Second, pick one core metric. Start with expertise encoding rate; count AI workflows that embed domain-specific knowledge versus generic usage. Even a rough count forces the right conversation.
  • Third, build measurement infrastructure that connects AI usage to business outcomes. The Productivity Roof provides the architecture. The five dimensions provide the outcome categories.

For a structured approach, see the AI Transformation Roadmap. To assess where you stand, see the AI Transformation Assessment. To connect metrics to financial returns, see AI Transformation ROI.

 

#7 AI Transformation Workforce Strategy: Building the People Who Bridge Knowledge and AI

The workforce strategy for AI transformation isn’t “train everyone on prompt engineering.” It’s about developing people who can do the hardest thing in enterprise AI: take what your organization uniquely knows, and encode it into systems that scale.

TL;DR

Your AI transformation workforce strategy should focus on one capability above all others: developing people who deeply understand your organization’s tacit knowledge and can effectively encode that knowledge into AI systems. These are your knowledge champions: people with high domain expertise and high AI proficiency. The AI Proficiency Spectrum is the career development model. The goal is to move people up it continuously, not just once.

 

The Core Argument: Knowledge Champions, Not Prompt Engineers

Most AI workforce strategies ask: “How do we train our people to use AI tools?” The better question is: “How do we develop people who can bridge what our organization knows and what AI can do?”

The first question produces a training program. The second produces a competitive advantage.

The AI Transformation Guide makes the case that your organizational intelligence (the tacit knowledge, decision patterns, and domain expertise locked inside your people) is the real AI moat. Models are commoditizing. What you feed them is the differentiator. But tacit knowledge doesn’t feed itself into AI. It requires people who can do two things simultaneously:

  1. Deeply understand the organization’s tacit knowledge. The risk analyst’s intuition, the supply chain manager’s supplier relationships, the sales lead’s read on a deal. This knowledge has never been written down.
  2. Effectively encode that knowledge into AI systems. Build workflows, configure agents, and design prompts and processes that externalize what was previously locked in someone’s head.

People who can do both are your knowledge champions. In the language of the Core-and-Orbit Model, they’re the bridge between the core (your organizational intelligence) and the inner orbit (the execution disciplines that compound over time). Without them, your organizational intelligence stays trapped in individual heads. With them, it becomes a scalable asset.

 

How This Connects to the Pillar Frameworks

The AI Proficiency Spectrum is your career development model. The AI Proficiency Guide defines five levels, from Search Replacer to AI-Native Orchestrator. Your workforce strategy is about moving people up this spectrum. The Level 5 user isn’t just someone who’s good at prompting; it’s the trader who has externalized her market intuition into an AI workflow, the engineer who has taught an agent to replicate his debugging instinct. That’s the target.

The productivity data tells you the stakes. OpenAI’s 2025 State of Enterprise AI report found a 6x engagement gap between power users and typical employees. Meta reported a 30% increase in output per engineer overall, but 80% among power users. Moving people from Level 1-2 to Level 3-4 doesn’t produce incremental improvement. It produces a step change.

The Nine Dimensions of AI Proficiency tell you what to develop. Model intuition. Interaction sophistication. Multi-model fluency. Tooling sophistication. Agentic capability. These are the specific capabilities that determine whether someone can encode organizational knowledge into AI. A workforce strategy that develops “AI skills” generically will produce generic results.

Adoption data tells you who your champions already are. The AI Adoption Guide describes four measurement layers. Layer 2 (depth and engagement) surfaces your power users and AI-native employees: people who have already moved up the proficiency spectrum on their own. Your workforce strategy starts by finding them.

 

The Workforce Framework: Identify, Amplify, Scale, Evolve

Identify: Find Your Existing Knowledge Champions

Most organizations assume they need to create knowledge champions from scratch. In reality, they already exist. You just may not know who they are.

Use adoption and proficiency data to find the people at the intersection of high domain expertise and high AI proficiency. The senior operations manager who quietly built a procurement workflow encoding 15 years of supplier knowledge. The financial analyst who configured an AI agent to replicate her risk assessment process. These people didn’t wait for a training program. They’re already bridging organizational knowledge and AI.

How to find them: Cross-reference proficiency data (who is at Levels 3-5?) with organizational knowledge maps (who carries the most valuable domain expertise?). The overlap is your champion population. Don’t rely on self-nomination; use behavioral data from your adoption measurement layer.
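
In practice, the cross-reference can be as simple as an intersection of two lists. The sketch below assumes you can export a set of high-proficiency users from behavioral data and a set of deep-expertise carriers from your knowledge map; all names are invented for illustration.

# Exported from behavioral proficiency data: employees at Levels 3-5 (illustrative).
high_proficiency = {"a.ramos", "j.chen", "m.okafor", "s.patel"}
# Exported from the knowledge map: carriers of the most valuable domain expertise (illustrative).
deep_expertise = {"j.chen", "m.okafor", "t.nguyen"}

champions = high_proficiency & deep_expertise
print(sorted(champions))  # ['j.chen', 'm.okafor'], the starting champion population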

Amplify: Give Champions Resources and Mandate

The worst thing you can do with knowledge champions is treat their AI work as a side project. If someone is building workflows that externalize decades of institutional expertise into scalable AI systems, that is their highest-value work.

Give them time. Not “20% time” squeezed between meetings. Dedicated, protected time to build AI workflows that encode their expertise. The workflows a knowledge champion builds in a focused sprint are worth more than months of generic AI training for 500 people.

Give them resources and mandate. Access to the full AI tool stack. Budget for experimentation. Engineering support. And make it explicit that this isn’t a hobby; it’s a strategic priority. When a knowledge champion encodes how your best underwriter evaluates risk into an AI workflow, that’s knowledge capture. It’s how organizational intelligence becomes durable and scalable.

Scale: Use Champions to Pull Others Up the Spectrum

Knowledge champions are your most powerful enablement mechanism, far more effective than formal training.

Zapier achieved 97% company-wide AI adoption not through classroom instruction but through happy hours, hackathons, and a culture of experimentation. Peer learning beats formal training because it’s contextual and immediate. A knowledge champion in your finance team can show another finance professional exactly how they use AI for their specific work, not a generic prompting session.

Pair champions with teams for ongoing partnerships, not one-off workshops. Build learning networks: forums where champions share workflows, internal showcases where teams present AI-encoded workflows and their results. Measure the pull-through: track whether teams paired with champions move up the proficiency spectrum faster. If the data shows they do, that’s the case for scaling the model organization-wide.
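
One way to measure that pull-through is sketched below, under the assumption that you track an average proficiency level per team at the start and end of each cycle; every figure here is illustrative.

# Illustrative team-level proficiency averages for one 90-day cycle.
teams = [
    {"team": "fp&a",        "paired": True,  "level_start": 1.8, "level_end": 2.7},
    {"team": "procurement", "paired": True,  "level_start": 2.0, "level_end": 2.9},
    {"team": "legal",       "paired": False, "level_start": 1.9, "level_end": 2.1},
    {"team": "hr",          "paired": False, "level_start": 1.7, "level_end": 1.9},
]

def average_lift(rows, paired):
    """Average proficiency movement over the cycle for paired or unpaired teams."""
    deltas = [t["level_end"] - t["level_start"] for t in rows if t["paired"] == paired]
    return sum(deltas) / len(deltas)

pull_through = average_lift(teams, paired=True) - average_lift(teams, paired=False)
print(f"Extra lift from champion pairing: {pull_through:+.2f} levels per cycle")  # +0.70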

Evolve: Treat Proficiency as a Moving Target

This is where most workforce strategies fail. They treat AI skills as a fixed competency: something you train once and check off. Proficiency is a moving target.

What qualified as “advanced” six months ago is baseline today. A power user in mid-2025 might be a task automator by 2026 standards if they haven’t adopted agentic workflows and multi-model strategies. At Larridin, we address this by recalibrating proficiency definitions every 30 days.

  • Continuous learning, not one-time training. Monthly tool briefings, updated use case libraries, proficiency challenges that evolve with the technology.
  • Dynamic benchmarks. Level 3 today includes capabilities that didn’t exist at Level 5 a year ago. Workforce targets must recalibrate alongside the technology.
  • Evolving champion roles. Yesterday it was building sophisticated prompts. Today it’s configuring agentic workflows. Tomorrow it’s orchestrating multi-agent systems. Champions who stop learning stop being champions.

 

Common Mistakes

One-Size-Fits-All Training

A Level 1 user and a Level 4 user need different interventions. The Level 1 user needs basic model intuition. The Level 4 user needs time and resources to build agentic workflows that encode domain expertise. Generic training wastes time for advanced users and overwhelms beginners. Segment your enablement based on proficiency data.

Focusing on Tool Skills Instead of Knowledge Encoding

Most AI training programs teach people how to use tools: how to write prompts, how to navigate features. This is necessary, but not sufficient. The real capability gap is the ability to look at tacit organizational knowledge and figure out how to encode it into an AI system. That requires domain expertise, systems thinking, and AI proficiency simultaneously. If your workforce strategy only develops tool skills, you’ll produce skilled users who have nothing unique to feed the tools.

Ignoring Your Existing Power Users

Every organization already has employees who have moved up the proficiency spectrum on their own. EY’s 2025 Work Reimagined Survey found that 88% of employees use AI daily, but only 5% use it in advanced ways. That 5% is your existing champion population. Ignoring them while building a top-down training program from scratch is like hiring external consultants while your best internal experts sit idle. Find them first. Build around them.

 

#8 AI Transformation Assessment: How to Audit Your Organizational Intelligence

Most AI assessments ask, “Are we ready for AI?” The better question: “Do we even know what we’re building AI around?” Before you transform anything, you need to map where your most valuable knowledge lives, how mature your execution disciplines are, and what your real AI landscape looks like.

TL;DR

The typical AI readiness assessment measures technology infrastructure, data pipelines, and skills gaps. A Core-and-Orbit assessment starts differently: it maps where your unique organizational intelligence sits, how much has been encoded into AI, and where the biggest amplification opportunities are. This section gives you a three-part diagnostic (the Knowledge Audit, the Maturity Check, and the Landscape Scan) to run every 90 days as the first step in each transformation cycle.

 

Why Most AI Assessments Miss the Point

Every AI readiness assessment you have seen asks some variation of the same questions: Do you have the data infrastructure? The technical talent? The executive sponsorship? The governance framework?

Valid questions. Insufficient ones.

They measure whether your organization is ready to deploy AI tools. They do not measure whether your organization knows what to build AI around. That distinction is the difference between a transformation that creates competitive advantage and one that creates expensive parity.

The Core-and-Orbit Model starts from the center: your organizational intelligence, the tacit knowledge and decision patterns that make you uniquely competitive. It then builds outward through five execution disciplines.

The assessment needs to match that architecture. You don’t start by asking “are we ready for AI?” You start with three different questions:

  • Where does our most valuable organizational knowledge live? (The Knowledge Audit)
  • How mature are we across the five inner orbit disciplines? (The Maturity Check)
  • What does our actual AI landscape look like? (The Landscape Scan)

This three-part assessment runs at the beginning of every 90-day transformation cycle: a recurring diagnostic, not a one-time readiness check.

 

Part 1: The Knowledge Audit

This is the step most organizations skip entirely, and it’s the most important one.

You need to answer a deceptively simple question: What does your organization know that no one else knows? Where does that knowledge live, who carries it, and how much of it has been made accessible to AI?

The AI Transformation Guide identifies four categories of organizational intelligence: decision patterns, operational intuition, relationship knowledge, and domain expertise. The Knowledge Audit maps these across your teams, roles, and processes.

Diagnostic questions:

  • Where do your most experienced people sit? If your top five most tenured people in a department left tomorrow, what would walk out the door with them?
  • What do they know that isn’t written down? The risk analyst who spots a bad deal before the numbers confirm it. The operations lead who reroutes before the system flags a delay. This tacit knowledge is your most defensible asset.
  • Which knowledge domains have been externalized into AI, and which haven’t? Has anyone built workflows or agent instructions that encode domain expertise? Or is AI being used generically, as a drafting tool that could belong to any organization?
  • Where is the highest-value intersection of tacit expertise and AI capability? Which knowledge domains, if amplified by AI, would create the most competitive advantage this cycle?
  • Who are your knowledge encoders? The Level 4 and 5 users on the proficiency spectrum who transfer their expertise into AI workflows. Identifying them is identifying your most strategic AI asset.

The output: a map of which teams carry the most valuable tacit knowledge, how much has been encoded, and where the biggest amplification opportunities sit.
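
One possible shape for that output is sketched below. The structure and values are illustrative assumptions, not a prescribed schema; the point is that each entry ties a team to its tacit knowledge, its current encoding coverage, and the next amplification opportunity.

# Illustrative Knowledge Audit output; field names and values are assumptions.
knowledge_map = [
    {
        "team": "credit risk",
        "tacit_knowledge": "deal-quality intuition of senior analysts",
        "encoded_workflows": 2,
        "encoding_coverage": 0.15,  # rough share of the domain externalized into AI so far
        "amplification_opportunity": "encode early-warning heuristics into a triage workflow",
    },
    {
        "team": "supply chain",
        "tacit_knowledge": "supplier reliability and rerouting judgment",
        "encoded_workflows": 0,
        "encoding_coverage": 0.0,
        "amplification_opportunity": "capture rerouting decision patterns as agent instructions",
    },
]

# Prioritize the next cycle: valuable knowledge with the least of it encoded goes first.
priorities = sorted(knowledge_map, key=lambda row: row["encoding_coverage"])
print(priorities[0]["team"])  # supply chain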

 

Part 2: The Maturity Check

The Knowledge Audit tells you what to build AI around. The Maturity Check tells you how ready you are to build.

The AI Maturity Model defines five stages: Visibility and Controls, Adoption Measurement, Proficiency Development, Workflow Intelligence, and Agentic Deployment. The Model includes continuous impact measurement as the foundation underneath all five. The Maturity Check assesses where you sit across each stage and identifies which inner orbit disciplines are in place and which are gaps.

Diagnostic questions:

  • Stage 1: Visibility and Controls. Can you produce a comprehensive inventory of every AI tool in use right now, sanctioned and unsanctioned? Do you know where sensitive data is flowing into AI tools?
  • Stage 2: Adoption Measurement. Do you measure adoption beyond login counts? Can you segment usage by team, department, and location? Do you understand what people use AI for, not just whether they use it?
  • Stage 3: Proficiency Development. Can you identify power users versus basic users based on behavioral data, not self-reporting? Are proficiency scores rising and rework rates declining?
  • Stage 4: Workflow Intelligence. Have you mapped how work actually moves through your organization: the real workflows, not the process documentation? Can you identify the top twenty workflows where AI would create the most value?
  • Stage 5: Agentic Deployment. Do you have AI agents operating autonomously for extended periods, taking real actions in production workflows?
  • The Foundation: Impact Measurement. Do you track AI impact continuously across effectiveness, quality, time, revenue, and cost? Or are you relying on quarterly surveys?

The critical insight: stages are sequential. You cannot build proficiency programs (Stage 3) without adoption data (Stage 2). You cannot deploy agents (Stage 5) without understanding workflows (Stage 4). The assessment reveals not just where you are, but what you need to build before you can move forward.

Different parts of your organization will be at different stages. The variance is the signal. Measure maturity at the team level, not just the enterprise level, and the gaps become actionable.
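
Because the stages are sequential, a team’s maturity can be scored as the last stage reached before the first gap. The sketch below assumes each diagnostic question above resolves to a simple pass or fail per team; it illustrates the gating logic, not a formal scoring method.

STAGES = [
    "Visibility and Controls",
    "Adoption Measurement",
    "Proficiency Development",
    "Workflow Intelligence",
    "Agentic Deployment",
]

def maturity_stage(stage_passed):
    """Return the last stage reached before the first gap; stages are sequential."""
    reached = "Not started"
    for name, passed in zip(STAGES, stage_passed):
        if not passed:
            break
        reached = name
    return reached

# A team measuring adoption without visibility has not really reached Stage 2:
print(maturity_stage([False, True, False, False, False]))  # Not started
print(maturity_stage([True, True, True, False, False]))    # Proficiency Development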

 

Part 3: The Landscape Scan

The Knowledge Audit maps what you know. The Maturity Check maps how ready you are. The Landscape Scan maps what is actually happening with AI right now.

This is where the “200-tool surprise” from the Shadow AI Guide becomes directly relevant. Most CIOs estimate 60-70 AI tools in use. Actual monitoring typically reveals 200-300, with organizations spending 3-5x more than they think on AI. You cannot assess what you cannot see.

Diagnostic questions:

  • What AI tools are actually in use? Not the tools you deployed; the tools your people are actually using. Every sanctioned platform, every unsanctioned experiment, every personal account used for work.
  • How deeply are they being used? The four layers of AI adoption (AI-first products, AI-augmented features, vertical solutions, and homegrown systems) provide the taxonomy. Are people using AI as a search replacement or building multi-step workflows?
  • By whom? Adoption segmented by team, role, seniority, and location reveals where AI is creating value and where it is being ignored.
  • What is your shadow AI exposure? Is unsanctioned usage within the 3-4% healthy benchmark, or has it drifted into territory that creates data exposure and compliance risk?
  • Where is the spend? Total AI expenditure, including individual subscriptions expensed through departments, versus what leadership believes the organization spends. A roll-up sketch follows at the end of this subsection.
  • What has changed since the last scan? The Landscape Scan is not a one-time inventory. It is a recurring snapshot that shows movement.

Combined with the Knowledge Audit and the Maturity Check, the Landscape Scan completes the picture, and that picture drives every prioritization decision in the transformation cycle.
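
A Landscape Scan ultimately rolls up into a few headline numbers. The sketch below computes two of them, the shadow AI share of usage and the gap between believed and actual spend, from a hypothetical tool inventory; every figure is illustrative.

# Illustrative tool inventory discovered through monitoring.
tools = [
    {"name": "sanctioned-assistant",   "sanctioned": True,  "monthly_spend": 42_000, "active_users": 1_800},
    {"name": "team-llm-wrapper",       "sanctioned": False, "monthly_spend": 3_500,  "active_users": 120},
    {"name": "expensed-subscriptions", "sanctioned": False, "monthly_spend": 9_200,  "active_users": 310},
]
believed_monthly_spend = 45_000  # what leadership thinks the organization spends (assumed)

total_users = sum(t["active_users"] for t in tools)
shadow_share = sum(t["active_users"] for t in tools if not t["sanctioned"]) / total_users
actual_spend = sum(t["monthly_spend"] for t in tools)

print(f"Shadow AI share of usage: {shadow_share:.1%}")                  # compare to the 3-4% benchmark
print(f"Actual vs. believed spend: {actual_spend / believed_monthly_spend:.1f}x")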

 

Three Mistakes That Undermine the Assessment

1. Assessing technology readiness without knowledge mapping. If you skip the Knowledge Audit, your assessment tells you whether you can deploy AI. It does not tell you whether you should, or where, or around what. You end up with a transformation strategy indistinguishable from your competitor’s.

2. Treating the assessment as a one-time exercise. The landscape shifts every six months. A static assessment produces a static strategy, and a static strategy is obsolete before it is executed. Run the three-part assessment at the start of every 90-day transformation cycle. Each cycle, the questions are the same; the answers are different.

3. Relying on self-reported data. Leaders consistently overestimate their AI maturity. Managers rate governance as “mostly complete” when employees cannot find the AI policy. The most accurate assessment comes from behavioral data: actual usage patterns, actual proficiency indicators, actual tool inventories discovered through monitoring. Self-assessment is a starting point, not the answer.

 

#9 AI ROI KPIs: The Definitive Guide to Measuring What Actually Matters

Transformation is an ambitious goal, but it takes one more step to produce real impact: mapping transformation to ROI. How you do this varies by organization and by AI maturity level. KPIs are a valuable tool for translating measurable change into financial returns.

Why Most AI KPIs Are Wrong

Your AI dashboard probably tracks logins. Maybe active users. Maybe “hours saved” pulled from a self-reported survey nobody filled out honestly.

Gartner identifies establishing ROI as the single biggest barrier to further AI adoption. S&P Global found that only 21% of companies measure AI impact at all. BCG reports that just 5% of organizations generate meaningful value from AI at scale. The measurement gap is not a reporting inconvenience; it is the reason most AI programs stall.

The root problem: most organizations confuse activity metrics with impact metrics. Logins, session counts, licenses activated. These tell you whether people showed up, not whether AI created value.

Workday’s January 2026 study found that 37% of time saved through AI is consumed by rework. Only 14% of employees achieve net-positive outcomes. Tracking “hours saved” without measuring hours lost to rework is celebrating a number that does not exist.

The fix is better KPIs, organized into tiers that reflect the causal chain from usage to business outcome. As the Measuring AI Impact guide argues, you need a measurement system, not a single number.

 

Tier 1: Adoption KPIs

The question: Are people actually using AI?

Adoption is the foundation; without it, nothing else matters. But adoption alone tells you almost nothing about value.

Active user rate: Daily, weekly, and monthly active users as a percentage of eligible employees. Segment by function, team, and seniority. A 60% rate that is actually 90% in engineering and 15% in finance tells a completely different story than the blended number.

Feature utilization depth: Which features within each tool employees actually engage with. An employee using Copilot exclusively for email summaries is not the same as one integrating it into drafting, analysis, and meeting preparation.

Activation rate: The gap between licenses purchased and licenses used. This reveals whether your rollout creates value or funds shelfware.

Adoption trend velocity: Not just the current rate, but the slope. A plateau at 40% tells you something very different from a plateau at 85%.

These belong in every AI adoption dashboard. But if your executive dashboard stops here, you are measuring inputs and reporting them as outcomes.
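
For teams instrumenting this tier, the sketch below computes an activation rate and a segmented active user rate from hypothetical license and usage records; the data and field names are illustrative.

# Illustrative license and usage records; real data comes from your telemetry.
licenses = {"purchased": 2_000, "used_last_30_days": 1_350}

active_users_by_function = [
    {"function": "engineering", "eligible": 400, "weekly_active": 360},
    {"function": "finance",     "eligible": 200, "weekly_active": 30},
    {"function": "sales",       "eligible": 600, "weekly_active": 330},
]

activation_rate = licenses["used_last_30_days"] / licenses["purchased"]
print(f"Activation rate: {activation_rate:.1%}")  # 67.5%; the remainder is shelfware

for row in active_users_by_function:
    rate = row["weekly_active"] / row["eligible"]
    print(f"{row['function']:<12} weekly active rate: {rate:.0%}")
# The blended rate hides the 90% vs. 15% spread between engineering and finance.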

 

Tier 2: Proficiency KPIs

The question: Are people using AI well?

This is the tier most organizations skip, and where the 37% rework problem lives. High adoption with low proficiency means your organization generates output that creates downstream cost. The AI Proficiency guide details why the usage-skill gap is the hidden variable in every ROI calculation.

Task completion quality: The percentage of AI-assisted work that flows through without significant revision. If 40% of drafts require substantial rework, your net productivity gain is dramatically lower than gross time saved implies.

Workflow integration score: Whether AI is embedded in how people work or bolted on as a separate step. Integrated usage compounds; bolted-on usage plateaus.

Time-to-competency: How long a new user takes to reach proficient, net-positive output. McKinsey’s research shows structured enablement achieves proficiency 40-60% faster than self-directed learning.

Net productivity score: Genuine time saved divided by total time on AI-assisted work, including rework and prompt iteration. This accounts for the 37% AI Tax. If gross time saved is 10 hours but rework consumes 4, the net is 6 hours, and that is the number your ROI should use.
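
A minimal sketch of that arithmetic, consistent with the example above; the total-hours figure used for the score is an assumed illustration, not a benchmark.

gross_hours_saved = 10.0           # hours saved before accounting for the AI Tax
rework_hours = 4.0                 # hours given back to revisions and prompt iteration
hours_on_ai_assisted_work = 25.0   # assumed total time spent on AI-assisted tasks (illustrative)

net_hours_saved = gross_hours_saved - rework_hours                    # 6.0, the number ROI should use
net_productivity_score = net_hours_saved / hours_on_ai_assisted_work  # per the definition above

print(f"Net hours saved: {net_hours_saved:.0f}")
print(f"Net productivity score: {net_productivity_score:.0%}")  # 24% with these assumed figures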

 

Tier 3: Impact KPIs

The question: Is AI changing business outcomes?

Impact KPIs connect AI usage to business outcomes. They require correlating AI telemetry with business system data, which is why most organizations never get here. The ones that do are BCG’s 5%.

Revenue influence: Compare cohorts of teams with high AI proficiency against those with low, controlling for territory and experience. Deloitte’s 2026 State of AI in the Enterprise report shows AI ROI leaders define critical wins as revenue growth, not efficiency. Track pipeline conversion, deal velocity, and revenue per employee.

Cost reduction (net, not gross): Total verified savings minus AI tool licenses, infrastructure, training, governance, and rework costs. An organization spending $2 million on AI that generates $1.5 million in gross savings but $600,000 in rework is losing money, not saving it.

Time savings converted to output: Time saved only becomes value when recaptured for higher-value work. Capacity Reallocation Value calculates the difference: five hours saved on drafting at $75/hour, redirected to strategy at $200/hour, produces $625/week, not $375.
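
The cost and capacity calculations above reduce to a few lines. The sketch below reproduces the figures from the text; treat the exact formulas as one reasonable reading of the definitions, not a standard.

# Net cost reduction: gross verified savings minus every AI-related cost, including rework.
annual_ai_spend = 2_000_000
gross_savings = 1_500_000
rework_cost = 600_000
net_cost_reduction = gross_savings - annual_ai_spend - rework_cost
print(f"Net annual cost reduction: {net_cost_reduction:,} USD")  # -1,100,000: losing money, not saving it

# Capacity Reallocation Value: saved hours are worth the rate of the work they move into,
# net of the rate of the work they came from.
hours_saved_per_week = 5
drafting_rate, strategy_rate = 75, 200
crv_per_week = hours_saved_per_week * (strategy_rate - drafting_rate)
print(f"Capacity Reallocation Value: ${crv_per_week}/week")  # $625, not the $375 a flat rate implies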

Customer satisfaction delta: First-contact resolution rates, CSAT, NPS, and escalation rates for AI-assisted versus non-assisted interactions.

 

Tier 4: Strategic KPIs

The question: Is AI changing our competitive position?

Strategic KPIs capture whether AI is transforming capabilities, not just optimizing processes. Gartner frames this as Return on Investment versus Return on the Future.

Speed of innovation: Time-to-market for new products, speed of competitive response, iteration velocity. AI’s strategic value shows up as compressed organizational cycle times, not just faster individual tasks.

Competitive position indicators: Market share, win rates against specific competitors, talent acquisition advantage. Not purely AI metrics, but if AI is transforming your organization, they should reflect it over time.

Organizational learning rate: How fast teams improve at using AI. Is the gap between best and average users narrowing? A rising learning rate compounds advantage. A flat one means standing still while competitors accelerate.

Capability elevation: Whether AI enables work that was previously impossible. A three-person team handling complexity that previously required fifteen. This is the KPI that justifies AI investment beyond efficiency.

 

Building the KPI Dashboard, with ROI

The executive dashboard should contain five to eight metrics, maximum. Include Tier 3 and Tier 4 KPIs in financial language (Capacity Reallocation Value, Cost of Delay, revenue influence, ROAI), plus one or two Tier 1/Tier 2 leading indicators as early warnings.

The operational dashboard is where Tier 1 and Tier 2 live in full detail. Your AI program manager and functional heads use this weekly to manage adoption, spot proficiency gaps, and direct enablement investment. Segment by function, team, and tool; enterprise-wide averages hide every actionable insight.

The two must connect. When the board asks “why did revenue influence increase 12%?” the operational dashboard should have the answer: sales adoption rose from 45% to 72%, proficiency improved, deal velocity accelerated. See How to Build a Unified AI Adoption Dashboard for the architecture.

 

The Right KPIs by Maturity Stage

Applying Tier 4 metrics to a nascent AI program is like measuring a startup’s market share; it’s technically possible, but practically meaningless. The AI Maturity Model defines five stages; here is what to measure at each.

Exploring. Focus on Tier 1: activation rate, active users, and Shadow AI prevalence. Establish baselines, not ROI.

Expanding. Shift to Tier 2: task completion quality, time-to-competency, net productivity score. Identify who generates value and who generates rework.

Integrating. Tier 3 becomes meaningful. You have enough data to correlate AI usage with business outcomes. Build the executive dashboard here, not before.

Optimizing. Layer Tier 4 alongside Tier 3. Track innovation speed and competitive position. Use the Copilot ROI Framework to benchmark tool-level returns.

Transforming. All four tiers active. The emphasis shifts to capability elevation and competitive advantage. Fewer than 5% of organizations operate here.

 

Common KPI Mistakes

Tracking logins and calling that a KPI. Login frequency is a system administration metric, not a business metric. An employee who logs in daily and generates output requiring complete rewriting is not a success story.

Relying on self-reported surveys. People overestimate AI proficiency and underestimate rework time. Surveys capture sentiment; they are not a substitute for behavioral telemetry showing what people actually do.

Ignoring quality entirely. Speed without quality is the 37% AI Tax in action; your dashboard shows improvement while actual productivity declines. Every time-based KPI needs a quality guardrail.

Vanity metrics disguised as KPIs. “Prompts per user per day” is not a KPI. If you cannot draw a direct line from a metric to revenue, cost, quality, or speed, it does not belong on the dashboard.

Measuring engineering and extrapolating. Projecting GitHub Copilot acceptance rates across finance, HR, marketing, and sales is not measurement; it is fiction.

Static measurement. AI evolves monthly. Review your metric set quarterly. Retire vanity metrics aggressively.