AI adoption tells you who's using AI tools. AI proficiency — what Larridin measures — tells you who's using them well enough to generate ROI. That distinction is the difference between a dashboard that says "80% adopted" and one that explains why you're still not seeing results.
We recently had a conversation with a 175-person hospitality company that captured this perfectly. Their CTO said: "Half the company uses AI, but we can't tell if it's working." They'd rolled out Copilot broadly, watched the adoption numbers climb, and then sat in a quarterly review unable to connect any of it to business outcomes. The adoption metric was green. Everything else was a question mark.
They're not alone. Over 95% of US firms report using generative AI, per Thomson Reuters' 2025 adoption survey, yet only about 26% have achieved tangible value from those initiatives. That gap has a name. We call it the adoption ceiling.
Adoption metrics answer a binary question: did the employee use the tool? Yes or no. That's it. They don't tell you whether someone asked ChatGPT to rewrite a single email subject line or used it to restructure an entire quarterly analysis. Both count as "adopted."
This is why you can hit 80% adoption and still see zero measurable ROI. The number itself is a vanity metric dressed up in an executive dashboard.
McKinsey's 2025 State of AI report found that only 39% of organizations see enterprise-wide EBIT impact despite widespread use-case adoption. Deloitte's survey puts the timeline even more starkly — most organizations report needing two to four years to achieve satisfactory AI ROI, and only 6% got there within the first year.
The adoption ceiling hits because adoption measurement stops at the first layer. You know people showed up. You don't know what they did when they got there.
Proficiency isn't a single score. It's two dimensions measured simultaneously: use-case diversity and interaction quality.
Use-case diversity captures how many distinct workflows someone applies AI to. An employee who only uses AI for grammar checking is adopted. An employee who uses it for data analysis, customer communication drafts, competitive research, and process documentation is proficient. The second person discovered that AI is a general-purpose amplifier, not a single-trick tool.
Interaction quality measures how effectively someone works with AI. This includes prompt specificity, iterative refinement (do they follow up or accept the first output?), and output integration — whether the AI-generated work actually makes it into deliverables or gets abandoned.
Neither dimension alone tells the story. Someone who uses AI across ten workflows but always accepts the first mediocre output has breadth without depth. Someone who writes brilliant prompts for exactly one use case has depth without breadth. Proficiency requires both.
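To make the dual-axis idea concrete, here is a minimal sketch of how the two dimensions might combine into one score. The field names, normalization ceilings, and geometric-mean combination are our illustrative assumptions, not Larridin's production scoring model.

```python
from dataclasses import dataclass


@dataclass
class ProficiencySignals:
    distinct_workflows: int       # use-case diversity: how many workflows AI touches
    avg_turns_per_session: float  # iterative refinement: follow-ups vs. accepting the first output
    integration_rate: float       # share of AI output that lands in real deliverables, 0 to 1


def proficiency_score(signals: ProficiencySignals) -> float:
    """Combine both dimensions into one score; neither axis can carry it alone."""
    # Normalize each axis to the 0-1 range against illustrative ceilings (assumed, not calibrated).
    diversity = min(signals.distinct_workflows / 6, 1.0)  # 6+ workflows reads as Advanced territory
    quality = min(signals.avg_turns_per_session / 4, 1.0) * signals.integration_rate
    # Geometric mean: breadth without depth, or depth without breadth, both score low.
    return (diversity * quality) ** 0.5
```

The geometric mean is the point of the sketch: a near-zero on either axis sinks the whole score, so breadth can't paper over shallow usage and a single brilliant use case can't stand in for range.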
We built Larridin's measurement framework around this dual-axis model because surveys can't capture either dimension accurately. People overestimate their own sophistication. Every time.
When we analyze proficiency data across organizations, users cluster into four distinct bands. These aren't arbitrary labels — they emerge from observable behavior patterns.
Beginner — Uses AI for 1-2 simple tasks (grammar, basic search). Accepts first output. Prompts are vague, one-sentence requests. Typically 30-40% of an organization post-adoption rollout.
Intermediate — Applies AI across 3-5 workflows. Starts iterating on outputs and providing context in prompts. Shows signs of developing personal patterns. Usually 35-45% of users.
Advanced — Consistent use across 6+ workflows. Multi-turn conversations. Provides structured context, examples, and constraints. Integrates AI output into real deliverables routinely. Roughly 15-20% of users.
Power User — Has fundamentally restructured how they work. Creates reusable prompt templates, chains multiple AI tools together, trains colleagues informally. These are your 3-5% — and they're disproportionately your highest performers.
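Read as a classification rule, the four bands might be sketched like this. The thresholds mirror the band definitions above; the function and its input flags are hypothetical stand-ins for real behavioral signals, not a documented Larridin API.

```python
def classify_band(workflows: int, multi_turn: bool, structured_prompts: bool,
                  builds_templates: bool, chains_tools: bool) -> str:
    """Map observed behavior to a proficiency band using the thresholds described above."""
    if builds_templates and chains_tools:
        return "Power User"    # restructured workflows: reusable templates, tool chaining
    if workflows >= 6 and multi_turn and structured_prompts:
        return "Advanced"      # 6+ workflows with structured, multi-turn interactions
    if workflows >= 3 and multi_turn:
        return "Intermediate"  # 3-5 workflows, emerging iteration habits
    return "Beginner"          # 1-2 simple tasks, first output accepted
```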
The distribution matters because it's persistent. Without intervention, the curve barely shifts over six months. People find their comfort zone and park there.
Here's what makes proficiency measurement urgent rather than academic: the value difference between bands is not incremental. It's exponential.
A BCG study of 758 consultants using GPT-4 found that bottom-half skilled workers improved 43% with AI, while top-half improved 17%. That's the floor of the gap — and it's measuring AI as an equalizer, not accounting for advanced users who've rebuilt entire workflows around it.
Nielsen Norman Group's research across three studies showed AI increased throughput by 66% on average for cognitively demanding tasks. But that average obscures a wild range. The gap between someone using AI to fix a typo and someone using it to generate, validate, and iterate on a complete analysis is easily 10x in time saved. Across a quarter, across a team, the multiplier compounds.
We've seen this internally and across our customer base. A Power User in marketing generates the equivalent output of 3-4 Beginners — not because they work longer hours, but because they've eliminated entire workflow steps. When you factor in quality differences and rework reduction, the effective value multiplier between a Beginner and a Power User ranges from 10x to 50x depending on the role and task complexity.
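A back-of-the-envelope calculation shows how that multiplier compounds across a quarter. Every number below is an illustrative assumption, not measured customer data.

```python
# Illustrative hours saved per week by band; assumptions for the arithmetic, not measured values.
hours_saved_per_week = {"Beginner": 0.5, "Intermediate": 2.0, "Advanced": 6.0, "Power User": 12.0}

WEEKS_PER_QUARTER = 13

for band, weekly in hours_saved_per_week.items():
    quarterly = weekly * WEEKS_PER_QUARTER
    multiplier = weekly / hours_saved_per_week["Beginner"]
    print(f"{band:12} {quarterly:6.1f} h/quarter ({multiplier:.0f}x a Beginner)")
```

Even with these rough figures, a Power User banks 24x a Beginner's quarterly time savings before quality and rework effects are counted, which is how a 10-50x effective range becomes plausible.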
That means your proficiency distribution is your ROI equation. Two companies with identical 80% adoption rates can have wildly different returns based solely on where their users cluster across these four bands.
The first objection we hear is always the same: "We can't read people's prompts." Good. You shouldn't.
Invasive monitoring — keylogging, screen recording, prompt capture — destroys trust and triggers legal landmines. California's CPRA, Maryland's 2025 Online Data Privacy Act, and a growing list of state laws now require algorithmic impact assessments for AI-based performance tools. Beyond compliance, 61% of workers oppose AI-based movement tracking according to FM Magazine's 2025 survey. You can't build a proficiency program on a foundation of surveillance.
What you can measure without reading content: use-case diversity across workflows, session frequency and duration, multi-turn depth (a structural proxy for iterative refinement), tool-switching patterns, and whether AI output lands in downstream deliverables.
Larridin's approach uses ephemeral telemetry — we capture behavioral metadata without storing prompt content. Individual-level data stays in controlled exports. The measurement is structural, not surveillance.
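As a concrete sketch of what metadata-without-content can look like, here is a hypothetical event shape. The field names are ours, not Larridin's actual telemetry schema; the point is that nothing in the record retains prompt or response text.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class InteractionEvent:
    """One AI interaction reduced to structural metadata. No prompt or output text is retained."""
    user_id: str            # pseudonymous identifier, surfaced only in controlled exports
    tool: str               # e.g. "copilot" or "chatgpt"
    workflow_category: str  # coarse label such as "drafting" or "analysis", never the content
    turn_count: int         # multi-turn depth, a structural signal of iterative refinement
    session_seconds: int
    output_integrated: bool # did the result land in a downstream deliverable?
    occurred_at: datetime
```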
Aggregate proficiency data is interesting. Individual proficiency data is actionable.
Once you know someone is stuck in the Beginner band, you don't send them to a generic "Intro to AI" workshop. You look at what they're using AI for (email rewrites, probably) and show them two adjacent use cases relevant to their role. A finance analyst stuck at Beginner doesn't need prompt engineering theory — they need someone to show them how to use AI for variance analysis and report summarization.
This is where the proficiency bands become a coaching framework:
Beginner → Intermediate: Focus on use-case expansion. Share role-specific playbooks. Pair with an Intermediate user on their team. The goal is getting from 1-2 use cases to 4-5.
Intermediate → Advanced: Focus on interaction quality. Introduce iterative prompting, context-setting techniques, and output evaluation habits. This is where prompt engineering training actually matters — but only after someone already has broad enough usage to apply it.
Advanced → Power User: This transition is mostly cultural, not technical. Advanced users need permission and time to experiment. They need their workflow innovations recognized and shared. Create internal showcases. Give them 10% time for AI experimentation.
The biggest mistake we see is treating AI training as a single event. A company-wide "AI Day" pushes Beginners to Intermediate at best and does nothing for anyone above that. Proficiency coaching has to be segmented by band, personalized by role, and sustained over months.
McKinsey's research confirms this: organizations that treat AI skill-building as a continuous program rather than a one-time initiative are twice as likely to see revenue impact from their AI investments.
The hospitality company we mentioned earlier? Their real problem wasn't adoption. It was distribution. When we mapped their users across proficiency bands, 68% were Beginners, 24% were Intermediate, and the remaining 8% were Advanced or above. Their "80% adoption" stat was masking the fact that most employees had barely scratched the surface.
Adoption got them the tools. Proficiency determines the return. If your AI measurement strategy stops at "how many people logged in this month," you're flying blind on the metric that actually predicts business impact.
The companies pulling ahead aren't the ones with the highest adoption rates. They're the ones systematically moving users up through proficiency bands — and they're measuring every step of the climb.
What is the difference between AI adoption and AI proficiency?
Adoption measures whether someone uses an AI tool at all. Proficiency measures how effectively they use it — across how many workflows, with what level of interaction sophistication, and with what quality of output integration. You can have near-universal adoption with almost no proficiency, which is exactly why many organizations see high usage numbers but no ROI.
Can AI proficiency be measured without reading employees' prompts?
Yes. Proficiency signals come from behavioral metadata — tool-switching patterns, session characteristics, use-case diversity, and workflow integration. Larridin uses ephemeral telemetry that captures these structural patterns without storing prompt content, keeping measurement privacy-preserving and compliant with evolving state regulations like California's CPRA.
What are the four AI proficiency bands?
The bands are Beginner (1-2 simple use cases, no iteration), Intermediate (3-5 workflows, emerging patterns), Advanced (6+ workflows, structured multi-turn interactions), and Power User (fundamentally restructured workflows, tool chaining, informal training of others). Most organizations cluster 65-75% of users in the first two bands without targeted intervention.
Why does high AI adoption often produce no ROI?
Because adoption is binary — it counts everyone from the person who asked ChatGPT one question to the power user who's automated half their role. Deloitte's research shows only 6% of organizations achieve AI ROI within the first year. The missing variable is usually proficiency: how well people use AI determines whether those adoption numbers translate into time savings, quality improvements, and measurable business outcomes.
How do you move employees up the proficiency bands?
Band-specific coaching, not generic training. Beginners need role-specific use-case expansion (move from 1-2 to 4-5 applications). Intermediates need interaction quality coaching — iterative prompting and output evaluation. Advanced users need cultural permission to experiment and share innovations. Each transition requires different interventions sustained over months, not a single AI training day.
How big is the productivity gap between proficiency bands?
Research from BCG and Nielsen Norman Group shows AI productivity gains range from 17% for already-skilled workers to 66% for cognitively demanding tasks. In practice, the compound effect of use-case diversity, interaction quality, and workflow restructuring creates a 10-50x effective value multiplier between Beginner and Power User bands, depending on role complexity and task type.
Stop guessing where to deploy AI next.
Larridin's AI Opportunity Discovery finds high-impact automation opportunities hiding in your workflows — in minutes, not months.
Discover AI Opportunities →