The definitive guide to understanding, measuring, and accelerating AI adoption across your organization — beyond Copilot dashboards and login counts.
In February 2026, Meta became the first major technology company to formally tie employee performance reviews to AI usage, according to Bloomberg. Under the new policy, “AI-driven impact” is now a core expectation for every employee — from engineers to marketers.
Managers at Meta evaluate workers on how effectively they leverage AI to accelerate development cycles, improve code quality, and deliver business results. High performers can earn bonuses of up to 200%. The message from Janelle Gale, Meta’s Head of People, was direct: “As we move toward an AI-native future, we want to recognize people who are helping us get there faster.”
Meta isn’t alone. At NVIDIA, CEO Jensen Huang responded to reports that some managers were telling employees to use less AI with a single word: “Insane.” In an all-hands meeting following record earnings, as reported by Fortune, Huang told employees he wants “every task that is possible to be automated with artificial intelligence to be automated with artificial intelligence.” He noted that 100% of NVIDIA’s software engineers and chip designers use Cursor, the AI coding assistant, and that employees should persist with AI tools even when they fall short — “use it until it does work, and jump in and help make it better.”
At Zapier, CEO Wade Foster took a different approach to the same destination. Rather than top-down mandates, Foster drove 97% company-wide AI adoption through hackathons, show-and-tells, and a culture of experimentation — proving that creative, bottom-up strategies can be just as effective as executive directives.
These aren’t isolated examples. Microsoft has told employees that AI is “no longer optional,” per an internal memo reported by Business Insider. Google CEO Sundar Pichai told employees at an all-hands meeting that they need to use AI for Google to lead the AI race. Amazon employees have actively requested access to AI coding tools like Cursor.
In many, perhaps most, companies, employees are “bringing their own” AI accounts to work. This cuts both ways: it shows employees are upskilling themselves, but personal AI accounts can send employee prompts and company data back to the vendor for model training, and AI tools that handle data carelessly can create even more serious exposure. Managers need to identify this “shadow AI” use and steer employees toward official, sanctioned accounts.
The pattern is unmistakable: the world’s most valuable companies have concluded that AI adoption is a strategic imperative — not a nice-to-have technology initiative, but a core driver of competitive advantage, productivity, and organizational performance. The companies that adopt AI effectively will outperform those that don’t. And the gap between the two is widening.
The question is no longer whether your organization should adopt AI. It’s how deeply AI is adopted today, where the gaps are, and how you know.
This is where AI adoption measurement comes in, and it’s more complex than most organizations realize.
When most people hear “AI adoption,” they think of a single metric: how many employees are using ChatGPT or Microsoft Copilot. This is a dangerously incomplete view.
Enterprise AI adoption in 2026 is not about one tool, or even one tool per user. It’s about an entire ecosystem that spans foundation models, standalone AI products, AI-enhanced features inside existing software, homegrown systems, and increasingly, autonomous AI agents.
Consider what a typical enterprise AI landscape actually looks like today:
This complexity is the fundamental challenge. AI adoption isn’t a single number. It’s a multi-dimensional phenomenon that spans tools, teams, use cases, and levels of maturity. Measuring it requires understanding not just who is using AI, but what they’re using, how deeply they’re using it, and where across the organization adoption is taking hold.
Note: With Gemini built into Google Search, employees are using AI without even trying. Search summaries are produced with Gemini. When employees then click the “Dive deeper in AI Mode” button, they slide into interactive work, often involving company data, without ever making a conscious choice to use an LLM for work purposes. If they are searching from a personal Google account, the entire interaction may be exfiltrated back to Google’s servers and used for model training, just as when other LLMs are used from a personal account.
One barrier to AI adoption is the lack of a shared understanding of the current state of AI usage. In the Larridin report, The State of Enterprise AI 2026, respondents were asked whether they had visibility into AI use in their organization.
Confidence in AI visibility varied by reporting level, as shown in the figure:
The closer to the action managers are, the less confidence they have in AI visibility within their organization. It’s not just that management disagrees as to what’s happening; they even disagree as to whether they know what’s happening.
To make sense of this complexity, it helps to classify AI tools along multiple dimensions. At Larridin, we categorize every AI tool in an enterprise along three axes: autonomy level, modality, and scope.
The autonomy level describes how independently a tool’s AI can operate, from assistive features embedded in existing software up to agents that carry out multi-step work on their own. The most impressive results tend to involve at least some use of the most autonomous tools. The levels, from the top down:
For AI tools, modality refers to the type of information processed, including:
As with other software, AI tools can be described by how extensively they can be used in your organization:
Classification by autonomy level, modality, and scope matters because it changes how you think about AI adoption. An organization where 80% of employees use ChatGPT, but nothing else, has a very different AI adoption profile than one where 60% of employees use a diverse portfolio of AI-first, AI-augmented, and vertical tools across their daily workflows. The second organization is almost certainly extracting more value — even though its adoption rate for any single tool is lower.
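To make the classification concrete, here is a minimal sketch, in Python, of how a tool inventory might encode the three axes. The specific level, modality, and scope labels below are illustrative placeholders, not a canonical taxonomy.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical axis labels -- substitute your organization's own taxonomy.
class Autonomy(Enum):
    AGENTIC = "agentic"            # plans and executes multi-step work on its own
    AI_FIRST = "ai_first"          # the product is built around a model (e.g., a chat assistant)
    AI_AUGMENTED = "ai_augmented"  # AI features embedded inside existing software

class Modality(Enum):
    TEXT = "text"
    CODE = "code"
    IMAGE = "image"
    AUDIO = "audio"

class Scope(Enum):
    HORIZONTAL = "horizontal"      # usable by any function
    VERTICAL = "vertical"          # specialized for one function or industry

@dataclass
class AITool:
    name: str
    autonomy: Autonomy
    modalities: list[Modality]
    scope: Scope
    sanctioned: bool = False       # has IT/legal vetted this tool?

inventory = [
    AITool("ChatGPT Enterprise", Autonomy.AI_FIRST, [Modality.TEXT, Modality.IMAGE], Scope.HORIZONTAL, sanctioned=True),
    AITool("Cursor", Autonomy.AI_FIRST, [Modality.CODE], Scope.VERTICAL, sanctioned=True),
    AITool("Notion AI", Autonomy.AI_AUGMENTED, [Modality.TEXT], Scope.HORIZONTAL, sanctioned=True),
]
```

A record like this makes it straightforward to report adoption per autonomy level or per scope rather than per vendor.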
The case for measuring AI adoption has never been stronger, nor more urgent. Three converging forces are making adoption the defining metric for enterprise AI strategy.
Leaders across industries have reached the same conclusion: AI adoption drives productivity, and productivity drives competitive advantage.
Jensen Huang’s vision for NVIDIA is a company of 50,000 human employees (about 50% more than the company has today) working alongside 100 million AI assistants. Meta’s performance review policy is built on the premise that employees who effectively leverage AI will deliver meaningfully better results. Zapier’s 97% AI adoption rate has allowed a relatively small company to operate with the output of a much larger one.
The data supports this conviction. Organizations that redesign work processes with AI are twice as likely to exceed revenue goals, according to Gartner’s 2025 survey of 1,973 managers. The question isn’t whether AI-proficient organizations outperform — it’s by how much, and how quickly the gap widens.
Enterprises are pouring unprecedented resources into AI. Global generative AI spending is projected to reach $2.5 billion in 2026, according to Gartner (a 4x increase over 2025). Most large enterprises have deployed multiple AI tools, funded AI training programs, and stood up AI centers of excellence.
Yet establishing ROI has become the top barrier holding back further AI adoption, according to Gartner. A staggering 95% of generative AI pilots fail to move beyond the experimental phase, according to MIT’s GenAI Divide report. And 56% of CEOs in PwC’s 2026 Global CEO Survey report getting “nothing” from their AI adoption efforts.
Boards and CFOs are starting to ask hard questions: Where is the return on our AI investment? Who is actually using these tools? Is this working? Without adoption data, CIOs have no credible answer.
Adoption isn’t just about productivity — it’s about visibility and governance. Shadow AI — the use of unauthorized AI tools by employees — is a growing concern for enterprises. Employees are signing up for AI tools with personal emails, pasting proprietary data into public models, and using AI services that haven’t been vetted by IT or legal.
The reasons for not measuring vary across companies. In Larridin’s State of Enterprise AI 2026 report, respondents were asked about barriers to AI measurement, as shown in the figure:
You cannot govern what you cannot see. And you cannot see what you don’t measure. AI adoption measurement is the foundation of AI governance: it tells you which tools are in use, who is using them, and whether they’ve been sanctioned by the organization.
Effectively measuring AI adoption requires moving beyond simple login counts. True adoption measurement operates across four layers, each providing a progressively deeper view of how AI is becoming embedded in your organization.
The foundational layer of adoption measurement is straightforward activity tracking:
Usage metrics answer the most basic question: Are people using AI at all? But they tell you almost nothing about whether that usage is meaningful.
This is where adoption measurement gets interesting. Usage tells you that someone logged in. Depth tells you whether AI is becoming part of how they work.
The adoption spectrum is critical. Not all usage is equal. An employee who asks ChatGPT one question per week is fundamentally different from one who uses AI across multiple workflows every day. The goal is to understand the distribution of your organization across this spectrum:
Understanding this distribution gives leaders actionable intelligence. If 70% of your organization is stuck in the “explorer” phase, you have a habit formation problem, not a deployment problem. If you have a cluster of power users in one department but non-users in another, you have a targeted enablement opportunity.
This layer also surfaces your champions, the power users and AI-native employees who can serve as internal advocates, mentors, and proof points. And it identifies employees who are falling behind: not to punish them, but to understand what barriers are preventing adoption and how to remove them.
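One way to make the spectrum measurable is to bucket employees by how often they interact with any AI tool in a week. The sketch below is illustrative; the bucket names and thresholds are assumptions you would calibrate to your own baseline.

```python
from collections import Counter

def depth_bucket(weekly_interactions: int) -> str:
    """Map an employee's weekly AI interactions to an adoption-depth bucket.
    Thresholds are illustrative, not a standard."""
    if weekly_interactions == 0:
        return "non-user"
    if weekly_interactions < 5:
        return "explorer"      # occasional, exploratory use
    if weekly_interactions < 25:
        return "regular"       # AI is part of some routine workflows
    return "power user"        # AI woven through daily work

def spectrum_distribution(interactions_by_employee: dict[str, int]) -> dict[str, float]:
    """Return the share of the workforce in each bucket."""
    counts = Counter(depth_bucket(n) for n in interactions_by_employee.values())
    total = len(interactions_by_employee)
    return {b: counts[b] / total for b in ("non-user", "explorer", "regular", "power user")}

# Example: a five-person team with very different usage patterns.
print(spectrum_distribution({"ana": 0, "ben": 3, "chen": 12, "dara": 40, "eli": 2}))
# {'non-user': 0.2, 'explorer': 0.4, 'regular': 0.2, 'power user': 0.2}
```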
As employees mature in their AI usage, their tool portfolio naturally expands. A beginner might use only ChatGPT. A power user might use ChatGPT for brainstorming, Claude for analysis, Cursor for coding or Lovable for vibe coding, Midjourney for visuals, and Notion AI for documentation — all in a single week, or even a single day.
Breadth metrics capture this expansion:
Breadth matters because it signals depth of integration. An organization where employees use 5-7 AI tools across different categories has embedded AI much more deeply into its workflows than one where everyone uses a single chatbot. Breadth is a leading indicator of organizational AI maturity.
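A simple way to quantify breadth is to count the distinct tools and tool categories each employee touched in a given period. The sketch below assumes you already have per-employee interaction events; the field names are hypothetical.

```python
def breadth_by_employee(events: list[dict]) -> dict[str, dict[str, int]]:
    """events: one record per AI interaction, e.g.
    {"employee": "ana", "tool": "ChatGPT", "category": "chat assistant"}.
    Returns the number of distinct tools and tool categories each employee used."""
    seen: dict[str, dict[str, set]] = {}
    for e in events:
        entry = seen.setdefault(e["employee"], {"tools": set(), "categories": set()})
        entry["tools"].add(e["tool"])
        entry["categories"].add(e["category"])
    return {
        emp: {"distinct_tools": len(v["tools"]), "distinct_categories": len(v["categories"])}
        for emp, v in seen.items()
    }
```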
The most actionable adoption data isn’t a company-wide average — it’s the breakdown across organizational dimensions:
Segmentation transforms adoption data from a dashboard metric into a management tool. Instead of knowing that “65% of our company uses AI,” you know that “Engineering in London is AI-native, Sales in New York is experimenting, Marketing is lagging, and middle management across the board is the bottleneck.” That’s actionable. You can allocate training resources, adjust incentives, and target enablement programs where they’ll have the most impact.
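Once usage data is joined to HR data, segmentation is largely a grouping exercise. A minimal sketch using pandas, with hypothetical column names and an assumed per-employee activity flag:

```python
import pandas as pd

# One row per employee; the column names are illustrative.
df = pd.DataFrame({
    "department":        ["Engineering", "Engineering", "Sales", "Sales", "Marketing"],
    "location":          ["London", "London", "New York", "New York", "Austin"],
    "is_active_ai_user": [True, True, True, False, False],
})

# Adoption rate by department and location -- the cut that makes the data actionable.
adoption = (
    df.groupby(["department", "location"])["is_active_ai_user"]
      .mean()
      .rename("adoption_rate")
      .reset_index()
      .sort_values("adoption_rate", ascending=False)
)
print(adoption)
```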
If measuring AI adoption sounds straightforward in theory, it’s extraordinarily difficult in practice. The CIOs we speak with consistently describe the same set of challenges: fragmented, tool-specific dashboards and the lack of a single pane of glass; difficult board-level reporting; an expanding tool and usage landscape; and inconsistent definitions.
Every AI tool comes with its own analytics. Microsoft Copilot has a usage dashboard, as does Google Gemini. ChatGPT Enterprise has usage reports. Cursor shows activity data. And that’s just the beginning; Notion AI, Slack AI, Adobe Firefly, and dozens of other tools each surface their own metrics in their own formats with their own definitions.
The result is a CIO with 15 different dashboards, each telling a partial story in a different language. Copilot might report “monthly active users.” ChatGPT might report “messages sent.” Notion might report “AI features used.” These metrics aren’t comparable, can’t be aggregated, and don’t tell a coherent story.
In most companies, there is no native way to answer the question: “What does AI adoption look like across our entire enterprise?”
No tool vendor has an incentive to show you this. Microsoft wants you to see Copilot adoption. Google wants you to see Gemini adoption. OpenAI wants you to see ChatGPT adoption. Each vendor shows you a flattering view of their own product, not the full picture.
This means CIOs are left stitching together screenshots, exporting CSVs, and building makeshift spreadsheets to try to construct a company-wide view. It’s manual, error-prone, and always out of date by the time it’s all pulled together.
When the board asks, “How is our AI transformation progressing?,” few CIOs have a credible, data-backed answer.
They might have anecdotes: “The engineering team loves Copilot.” They might have vendor-provided statistics: “We have 5,000 Copilot licenses active.” But this doesn’t answer the broad question about AI transformation. These CIOs can’t produce a unified view of adoption across all AI tools, segmented by department, trending over time, benchmarked against industry peers.
This is a governance failure that has real consequences. Without clear adoption data, boards can’t make informed decisions about AI investment, CIOs can’t justify budget renewals, and organizations can’t course-correct when adoption stalls.
AI isn’t contained in a single application anymore. It’s in the browser (ChatGPT, Claude, Perplexity), on the desktop (Cursor, Windsurf), embedded in enterprise software (Microsoft 365, Google Workspace, Salesforce), in specialized tools (Harvey, ElevenLabs), and in homegrown systems built on open-source models.
New AI tools and features launch weekly. Employees discover and adopt them on their own. The landscape is a moving target — which means any static inventory of “our AI tools” is out of date almost immediately.
What counts as “active use” in one tool is completely different from another. Is opening Copilot and dismissing a suggestion “active use”? Is asking ChatGPT one question per month enough to count as an “active user”? Is using Notion AI’s auto-summary feature — which may fire automatically — a signal of adoption?
Without consistent, cross-tool definitions of what constitutes meaningful engagement, adoption metrics are unreliable and incomparable. You can’t benchmark, you can’t detect trends, and you can’t make apples-to-apples comparisons across your AI portfolio.
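One practical remedy is to translate every vendor’s native metric into a single shared definition before aggregating, for example: an active user had at least N meaningful interactions this week. The sketch below is illustrative; the export shapes and the threshold are assumptions, not real vendor schemas.

```python
# Hypothetical raw exports, each in its vendor's own vocabulary (invented shapes).
copilot_export = [{"user": "ana", "suggestions_accepted": 42}]
chatgpt_export = [{"member": "ben", "messages_sent": 3}]
notion_export  = [{"person": "ana", "ai_features_used": 0}]

WEEKLY_ACTIVE_THRESHOLD = 5  # shared definition of "meaningful engagement" -- an assumption to calibrate

def normalize(records: list[dict], user_key: str, interaction_key: str, tool: str) -> list[dict]:
    """Map a vendor-specific export onto one common record shape."""
    unified = []
    for r in records:
        interactions = int(r.get(interaction_key, 0) or 0)
        unified.append({
            "user": r[user_key],
            "tool": tool,
            "interactions": interactions,
            "active": interactions >= WEEKLY_ACTIVE_THRESHOLD,
        })
    return unified

all_tools = (
    normalize(copilot_export, "user", "suggestions_accepted", "Copilot")
    + normalize(chatgpt_export, "member", "messages_sent", "ChatGPT")
    + normalize(notion_export, "person", "ai_features_used", "Notion AI")
)
```

With every record in the same shape, company-wide aggregation and cross-tool benchmarking become straightforward.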
The Larridin report, The State of Enterprise AI 2026, asked respondents about the barriers they faced in AI adoption. Their answers are shown in the figure:
The issues range from informational (not knowing the adoption rate or the extent of impact) to governance-oriented (inconsistent governance, lack of established measurement metrics).
We’ve discussed AI adoption challenges. Where is adoption happening today?
Larridin’s AI Hiring Pulse – February 2026 tracked 428 companies across 43,422 job postings to measure which functions are hiring for AI. The gradient is steep:
| Function | Companies Hiring for AI | % of Tracked Companies |
|---|---|---|
| Product | 81 | 18.9% |
| Customer Success | 61 | 14.3% |
| Engineering & IT | 54 | 12.6% |
| Data & Analytics | 49 | 11.4% |
| HR & People | 47 | 11.0% |
| Marketing | 31 | 7.2% |
| Sales | 31 | 7.2% |
| Operations | 30 | 7.0% |
| Legal & Compliance | 24 | 5.6% |
| Finance | 20 | 4.7% |
The gap between top and bottom is 4x. Same organizations, same leadership, same budgets – radically different adoption intensity.
Three forces explain the ordering:
McKinsey’s 2025 State of AI report confirms the pattern: 88% of organizations use AI in at least one function, but fewer than 40% have scaled beyond the pilot stage. The gradient is not a deployment problem. It is a depth problem.
Given these challenges, how should an enterprise approach AI adoption measurement? Here’s a practical framework.
Before you can measure adoption, you need to know what’s out there. This means creating a comprehensive inventory of every AI tool in use across the organization — sanctioned and unsanctioned.
This is harder than it sounds. Shadow AI means that employees are using tools IT doesn’t know about. Browser-based AI tools don’t show up in traditional software asset management systems. And AI features embedded in existing software may fly under the radar entirely.
Effective discovery requires a combination of approaches:
The goal is a living, continuously updated map of your organization’s AI ecosystem.
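One common discovery technique is to scan network or secure web gateway logs for traffic to known AI domains and compare the result against the sanctioned tool list. A minimal sketch, assuming you can export log records containing a user and a destination domain (the domain map and field names are illustrative):

```python
# Illustrative map of known AI service domains to tool names; extend it continuously.
KNOWN_AI_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "www.perplexity.ai": "Perplexity",
    "cursor.com": "Cursor",
}

SANCTIONED_TOOLS = {"ChatGPT", "Gemini"}  # whatever IT and legal have approved

def shadow_ai_report(log_records: list[dict]) -> dict[str, set[str]]:
    """log_records: e.g. {"user": "ana@example.com", "domain": "claude.ai"}.
    Returns unsanctioned tools observed in traffic, with the users seen using them."""
    findings: dict[str, set[str]] = {}
    for rec in log_records:
        tool = KNOWN_AI_DOMAINS.get(rec["domain"])
        if tool and tool not in SANCTIONED_TOOLS:
            findings.setdefault(tool, set()).add(rec["user"])
    return findings
```

Logs alone miss embedded AI features and desktop tools, which is why surveys and vendor integrations belong in the discovery mix as well.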
Once you have visibility into the landscape, classify each tool using a consistent framework (autonomy level, modality, scope). Not all AI tools warrant the same level of measurement attention. Focus your deepest measurement efforts on:
Define a consistent set of metrics that apply across all AI tools, regardless of vendor. This creates the common language needed for cross-tool comparison and company-wide aggregation:
Deploy the instrumentation needed to collect adoption data at the required granularity. This may include:
With data flowing, build the reporting layers that different stakeholders need:
The key principle: adoption data is only valuable if it drives action. Every metric should connect to a decision — where to invest, where to train, where to intervene, when to celebrate.
Organizations don’t go from zero to AI-native overnight. AI adoption follows a maturity curve, and understanding where your organization sits, and what the next stage looks like, is essential for setting realistic goals and allocating resources effectively.
Stage 1: AI Curious. A small number of employees are experimenting with AI, mostly on their own. There’s no formal AI strategy or tool deployment. Usage is sporadic and untracked. Shadow AI risk is high, because there’s no visibility or governance.
Stage 2: AI Exploring. The organization has deployed one or two enterprise AI tools, typically Copilot or ChatGPT Enterprise. Usage is growing, but unevenly distributed, often concentrated in technical teams. Basic adoption metrics are tracked via vendor dashboards. Leadership is interested in, but not yet committed to, a measurement program.
Stage 3: AI Scaling. Multiple AI tools are deployed across the organization. Adoption is expanding beyond technical teams into business functions, and a formal AI adoption measurement program is likely to be in place. Champions and power users are identified and leveraged; training and enablement programs are active. Board-level reporting on AI adoption exists, but is still maturing.
Stage 4: AI Embedded. AI tools are part of daily workflows for the majority of employees. Adoption is measured across all four layers: usage, depth, breadth, and segmentation. AI governance policies are enforced based on adoption data, and spend optimization is driven by usage analytics. The organization benchmarks its adoption against industry peers.
Stage 5: AI-Native. AI is the default way of working; employees reach for AI tools instinctively. The organization operates with a diverse portfolio of agentic, AI-first, and AI-augmented tools. Adoption data informs strategic decisions about workforce planning, technology investment, and competitive positioning, and AI proficiency is part of performance evaluation and career development. The organization is a talent magnet for AI-skilled professionals.
Most enterprises in 2026 are somewhere between Stage 2 and Stage 3. The organizations that build strong adoption measurement foundations now will be the ones that reach Stages 4 and 5 fastest — and capture the competitive advantage that comes with those higher levels.
As enterprises build their adoption measurement programs, several common pitfalls emerge.
The most common mistake is equating AI adoption with Copilot adoption (or ChatGPT adoption, or any other single tool). This gives you a vendor-specific view, not an enterprise view. Your employees are using more AI than any single dashboard shows.
Many organizations track how many AI licenses they’ve purchased, not how many are actively used. A 10,000-seat Copilot deployment with 15% weekly active usage is not an adoption success story; it’s a spend optimization problem.
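The spend math behind that statement is worth making explicit. A quick sketch, using an illustrative per-seat price rather than any quoted vendor figure:

```python
seats = 10_000
weekly_active_rate = 0.15
price_per_seat_per_month = 30   # illustrative assumption, not a quoted vendor price

idle_seats = seats * (1 - weekly_active_rate)
idle_spend_per_year = idle_seats * price_per_seat_per_month * 12
print(f"{idle_seats:,.0f} idle seats cost about ${idle_spend_per_year:,.0f} per year in unused licenses")
# 8,500 idle seats cost about $3,060,000 per year in unused licenses
```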
Knowing that 5,000 employees “used AI this month” tells you very little. Did they ask one question, or did they integrate AI into daily workflows? Did they use a basic feature once, or are they power users? Without depth metrics, usage numbers are misleading.
Adoption is a dynamic, evolving metric. Measuring it one time, or even quarterly, misses the trajectory of adoption. Weekly and monthly trends reveal whether adoption is accelerating, plateauing, or declining, allowing for timely intervention.
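Operationally, this means tracking active users as a time series and flagging when growth stalls. A minimal sketch with an illustrative plateau threshold:

```python
def adoption_trend(weekly_active_users: list[int], plateau_threshold: float = 0.02) -> str:
    """Classify the most recent week-over-week change in weekly active users.
    The 2% plateau_threshold is an illustrative cutoff, not a standard."""
    if len(weekly_active_users) < 2:
        return "insufficient data"
    prev, curr = weekly_active_users[-2], weekly_active_users[-1]
    change = (curr - prev) / prev if prev else 0.0
    if change > plateau_threshold:
        return "accelerating"
    if change < -plateau_threshold:
        return "declining"
    return "plateauing"

print(adoption_trend([1200, 1350, 1520, 1530]))  # "plateauing" -- growth slowed to under 1%
```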
Company-wide averages hide enormous variance. If your average adoption rate is 60%, that might mean engineering is at 95% and finance is at 20%. Without segmentation, you can’t identify where enablement efforts are needed most.
AI adoption measurement is not a reporting exercise. It’s strategic infrastructure — the data layer that connects AI investment to business outcomes.
As organizations like Meta tie AI usage to performance reviews, as Jensen Huang demands 100% AI automation of every possible task, and as boards scrutinize the return on hundreds of billions of dollars in AI investment, the ability to accurately measure, benchmark, and optimize AI adoption becomes a core enterprise capability.
The organizations that build this capability now — that understand their AI landscape, measure adoption across all four layers, segment with precision, and act on insights — will be the ones that realize the full promise of AI transformation.
The ones that don’t will be guessing. And in a market moving this fast, guessing isn’t a strategy.
Larridin is the AI execution intelligence platform that gives enterprise organizations complete visibility into AI adoption, opportunities, and impact across every team, function, and location.
If you’d like to learn more about how to accelerate your AI transformation with Larridin, sign up for our newsletter or book a demo.