
When AI usage spikes at your company, is your team getting better at its job - or just getting good at logging into AI?

Key Takeaway

In its 2025 Year in Review, legal AI firm Harvey reports more than 1,000 customers and 92% monthly usage. These impressive engagement metrics raise a critical question: Are these the right numbers for measuring AI's business impact?


Key Terms

  • AI Adoption: The rate at which employees and teams begin using artificial intelligence tools and technologies.
  • AI Impact: The measurable business value and productivity improvements that result from AI implementation.
  • Engagement Metrics: Activity indicators, such as logins, file processing, and usage frequency, that show AI tool usage.
  • Outcome Metrics: Business performance indicators for time savings, quality improvements, and ROI that reveal whether AI is delivering value.
  • Governance Metrics: Administrative visibility into access controls, security compliance, and license utilization across AI tools.
  • Utilization × Proficiency × Value: Larridin's framework that measures AI effectiveness: who uses AI, how well they use it, and what value it drives.

The Numbers Behind the Headlines

According to Harvey's 2025 Year in Review, the legal AI platform achieved remarkable growth, including:

  • Over 50% adoption among the AmLaw 100 (the top US law firms)
  • More than 1,000 customers across 59 countries
  • A 92% monthly adoption rate
  • 213.7M files analyzed
  • 262 platform updates

These numbers paint a picture of widespread AI adoption. But they don't tell you whether any of it actually worked.

Across the enterprise AI landscape, vendors highlight similar engagement metrics: daily active users, files processed, prompts generated. These prove people are showing up. They don't prove whether AI is creating business value.

Three Layers of AI Metrics

  1. Governance Metrics: Harvey administrators see access, provisioning, and basic usage patterns. These control metrics support security and compliance, but don't measure impact.
  2. Engagement Metrics: Harvey reported 92% adoption, 213.7M files analyzed, 830K+ prompts, and 19K+ workflows - all activity indicators. According to their Year in Review, they achieved an 81% increase in the ratio of daily to monthly active users (DAU/MAU) since launch; see the sketch after this list for how that metric is computed. But a 92% adoption rate tells you people are logging in, not whether they're getting more effective.
  3. Outcome Metrics: This is the measurement gap. What's the productivity gain? Time saved? Quality improvement? Business value? The answer isn't in Harvey's metrics - it's in yours.
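
To make the layers concrete, here is a minimal Python sketch of how an engagement metric like the DAU/MAU ratio falls out of a login log, and why it stops short of outcomes. Everything here (users, dates, numbers) is invented for illustration; it is not Harvey's data or API.

```python
from datetime import date

# Hypothetical login log: (user, day) pairs. Invented for illustration;
# not Harvey's data or API.
events = [
    ("ana",  date(2025, 6, 2)),
    ("ana",  date(2025, 6, 3)),
    ("ben",  date(2025, 6, 3)),
    ("cara", date(2025, 6, 17)),
]

def dau(events, day):
    """Daily active users: distinct users who logged in on `day`."""
    return len({user for user, d in events if d == day})

def mau(events, year, month):
    """Monthly active users: distinct users active during the month."""
    return len({user for user, d in events if (d.year, d.month) == (year, month)})

# DAU/MAU "stickiness": the share of monthly users active on a given day.
ratio = dau(events, date(2025, 6, 3)) / mau(events, 2025, 6)
print(f"DAU/MAU on 2025-06-03: {ratio:.2f}")  # 0.67

# Note what this cannot tell you: nothing in a login log says whether
# those sessions produced faster reviews or better work. That evidence
# lives in your business systems - the outcome layer.
```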

The Measurement Framework That Matters

While platforms like Harvey provide the tools, measurement platforms track what matters: impact. At Larridin, we express impact as Utilization × Proficiency × Value, a structure sketched in code after the list below.

Larridin Scout discovers your complete AI landscape - both sanctioned tools and unmonitored shadow AI - then measures utilization in real time.

  • Utilization: Who's using what AI tools, how often, and for which tasks? This goes beyond login tracking to understand actual interaction patterns.
  • Proficiency: How well are people using AI? Who are your power users versus beginners? This missing layer between "they're using it" and "it's working" reveals who needs training and who has advanced capabilities.
  • Value: What business impact is AI driving? Time saved, quality improvements, cost reductions. These outcome metrics justify investments and guide decisions about where to scale.
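
Here is a minimal sketch of that multiplicative structure. The teams, component scores, and normalization are invented for illustration; this is not Larridin's actual scoring model.

```python
from dataclasses import dataclass

@dataclass
class TeamAIScore:
    """Illustrative Utilization x Proficiency x Value score.
    Components are normalized to [0, 1]; the numbers below are
    invented for the example, not Larridin's scoring model."""
    utilization: float  # share of relevant tasks where AI is used
    proficiency: float  # how effectively it is used (e.g. rubric-scored)
    value: float        # normalized business outcome (time saved, quality)

    @property
    def impact(self) -> float:
        # Multiplicative on purpose: a near-zero in any one dimension
        # drags the whole score down. High utilization cannot paper
        # over low proficiency or missing value.
        return self.utilization * self.proficiency * self.value

litigation = TeamAIScore(utilization=0.92, proficiency=0.40, value=0.30)
corporate = TeamAIScore(utilization=0.55, proficiency=0.85, value=0.80)

print(f"litigation impact: {litigation.impact:.2f}")  # 0.11
print(f"corporate impact:  {corporate.impact:.2f}")   # 0.37
```

The multiplication is the design choice: unlike an average, it refuses to let a headline adoption number compensate for weak proficiency or absent value, which is exactly the failure mode engagement metrics hide.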

Three Questions Your Vendor Can't Answer

  1. Who are my most proficient AI users? Your vendor shows who logs in most. They can't identify who gets the best outcomes or develops the most effective prompts and workflows (the sketch after this list shows how login rankings and outcome rankings can diverge).
  2. What's the actual productivity gain? Your vendor counts prompts. They can't quantify whether those interactions translated into faster document review or better quality work.
  3. Where should I invest next? Your vendor shows where engagement is highest. They can't tell you where proficiency is lowest or which teams are ready for advanced capabilities.
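
The first question is easy to illustrate. In the hypothetical sketch below (all names and numbers invented), ranking users by login count and ranking them by measured outcome produce opposite orderings, which is why engagement data alone cannot identify your most proficient users.

```python
import statistics

# Vendor-style usage export: logins per user last quarter (invented).
logins = {"ana": 120, "ben": 45, "cara": 98}

# Outcome data only you can supply: hours saved per matter after
# adopting AI, from your own time-tracking records (also invented).
hours_saved = {
    "ana":  [0.2, 0.1, 0.3],   # logs in most, saves the least
    "ben":  [2.5, 3.1, 2.8],   # logs in least, saves the most
    "cara": [1.0, 0.8, 1.2],
}

def rank(scores):
    """User IDs sorted from highest score to lowest."""
    return sorted(scores, key=scores.get, reverse=True)

by_engagement = rank(logins)
by_outcome = rank({u: statistics.mean(v) for u, v in hours_saved.items()})

print("by engagement:", by_engagement)  # ['ana', 'cara', 'ben']
print("by outcome:   ", by_outcome)     # ['ben', 'cara', 'ana']
```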

These aren't gotcha questions. They're legitimate measurement needs that vendors can't answer. Harvey provides tools and tracks usage. Measuring business impact requires a different approach.

From Engagement to Impact

Harvey's 1,000+ customers and 92% adoption represent significant progress. The platform became a daily companion for thousands of legal users. But adoption is the starting line, not the finish line.

Consider that 81% increase in DAU/MAU. Without measuring proficiency, you don't know which users dramatically improved their productivity and which logged in out of habit or to satisfy a management mandate.

Evolving from engagement to impact requires three shifts:

  1. Adoption → optimization (improve how people use AI)
  2. Activity → capability (measure skill development)
  3. Usage → outcome (quantify results)

This isn't a criticism of Harvey's metrics. Their numbers indicate healthy adoption. But engagement metrics don't answer the strategic questions leaders need to address.

What Measurement-Driven Strategy Looks Like

Organizations that implement true AI measurement discover insights that engagement metrics can’t provide. They identify power users by proficiency and impact, not by login frequency. They pinpoint adoption barriers before they become failures. They make investment decisions based on outcome data rather than vendor promises.

This measurement foundation turns potential AI chaos into competitive advantage. It transforms scattered experiments into coordinated strategies backed by evidence.

You Can't Manage What You Don't Measure

Harvey built essential infrastructure for legal AI and achieved adoption at scale. Platforms will keep pushing the boundaries of what AI can do. That's exactly what vendors should do.

But measuring whether capabilities translate into business value? That's not a vendor's job. It's yours.

Early adopters that start comprehensive AI measurement now - tracking utilization, proficiency, and value across their AI landscape - will have the evidence and insights to optimize AI’s impact while competitors are still celebrating login counts.

Engagement metrics tell you people are using AI. Outcome metrics tell you whether AI is working. Many organizations measure the first. Few measure the second.

The question isn't whether your team uses AI. The question is whether your team is getting stronger, faster, and more effective because of AI. That requires measurement beyond what your vendor provides.

Get to Impact Measurement Faster

It’s not that hard to measure AI usage. You can build measurement yourself, or adopt a platform such as Larridin to get results faster. What Larridin provides that’s much harder to achieve any other way is impact measurement: the change that AI usage makes - or, in some cases, fails to make - to your bottom line. If you haven’t decided how to handle AI monitoring and measurement, or if you’re unhappy with the approach you’re already using, you’re likely to find Larridin a breath of fresh air. To find out more, reach out for a demo.

Are you ready to measure what actually matters?

Schedule a Demo

Larridin
Feb 16, 2026