
Your employees are using more AI tools than you think. That’s not necessarily a bad thing—but without visibility and governance, it’s a risk you can’t afford to ignore.

The 200-Tool Surprise

When we ask CIOs how many AI tools their employees are using, the answer is usually somewhere between 60 and 70. They’ve deployed a handful of enterprise AI platforms and approved a few dozen specialized tools. They assume that covers the landscape.

Then they turn on monitoring that detects AI tools automatically. And the real number is 200 AI tools in use. Sometimes 300.

The reaction is always the same: a mix of surprise and concern, followed by a grudging acknowledgment that it makes sense. Because the reality of enterprise AI in 2026 is that the landscape is in a state of unprecedented experimentation.

And that experimentation is warranted.

We don’t yet know which LLM will be the definitive enterprise standard. We don’t know the best AI coding tool, the best AI writing tool, the best AI design tool, the best AI-powered accounting or legal or HR platform. In every category, new tools with new capabilities launch weekly. The landscape hasn’t settled, and it won’t for some time to come.

It’s natural for employees to experiment. It’s natural for teams to try different tools, compare them, and find what works best for their specific workflows. The last three years of enterprise AI have been driven by bottom-up experimentation, and that experimentation has been the single most important driver of AI adoption and proficiency.

But experimentation without visibility creates real problems. And those problems are growing.

 


Why Ungoverned (“Shadow”) AI Tool Usage Is a Growing Enterprise Risk

The risks of widespread, unmonitored AI tool usage fall into several interconnected categories. None of them are theoretical; they’re happening in enterprises today.

Data Exposure and Training Data Leakage

This is the risk that keeps CIOs up at night, and it’s more subtle than most people realize.

Consider the difference between enterprise ChatGPT and personal ChatGPT. With an enterprise account, OpenAI contractually guarantees that your data is not used to train their models. With a personal account, there’s no such guarantee, and OpenAI improves its model by training on data from personal accounts.

The problem is that from the employee’s perspective, the interface looks identical. The experience is the same. The URL might even be the same. Employees may even prefer the personal account: the LLM may adapt to their work habits over time, and the personal account retains that accumulated context. A personal account may also include features that the enterprise account doesn’t.

An employee using an LLM to edit a spreadsheet might upload a company Excel file containing every customer record in their territory. If they’re using the enterprise account, the data is protected. If they’re using their personal account, the data in the Excel file will potentially be used for the next training run of the model.

The use case isn’t malicious. It’s mundane. An employee reformatting a file, summarizing meeting notes, drafting a client email. But the data inside those files, such as customer records, financial projections, strategic plans, or proprietary code, is the organization’s most valuable asset. It’s important that this data not be used to train publicly available models.

In the age of AI, your data is your primary moat. It needs to be protected accordingly.

Varying Risk Profiles Across Tools

Not all AI tools carry the same risk. A designer using Midjourney for image generation has a fundamentally different data exposure profile than an analyst pasting financial data into an unvetted LLM.

Risk varies across several dimensions.

Data sovereignty. Where is the data processed and stored? Tools built on models hosted in different countries have different legal and regulatory implications. For industries subject to data residency requirements, such as financial services, healthcare, government, and defense, those requirements aren’t optional, yet an employee using a personal LLM account may well not be meeting them. Some industries with strict data handling requirements may be slower to adopt AI for that reason, as shown in Figure 1.

Figure 1. Concerns about governance may be slowing the adoption of AI in healthcare.
(Source: The State of Enterprise AI 2026)

Training data policies. Every AI tool has different policies around whether and how user data is used for model training. These policies vary widely, change frequently, and are often buried in terms of service that no employee reads. Even among major providers, such as OpenAI, Anthropic, Google, and Meta, the policies differ in meaningful ways, and they are subject to change over time unless you have a contract with the provider that fixes the policies that apply to your company’s data.

Data handling and retention. How long does the tool retain your data? Who has access to it? Is it encrypted in transit and at rest? Can it be deleted on request, as required in Europe? The answers vary enormously across the 200-300 tools your employees might be using.

Enterprise vs. consumer tiers. Many AI tools offer consumer versions with fewer data protections and enterprise versions with stronger guarantees. Employees often default to the consumer version because it’s easier to sign up for, they already have a personal account that they’re used to using, they don’t know the enterprise version exists, or they don’t want to risk that their employer will monitor their AI usage.

Without a purpose-built tool, enterprises can’t see AI usage that runs through a personal license rather than a company one. And employees are often reluctant to volunteer that they are using personal accounts, worried that they have done something wrong or will be told to stop.

Compliance and Regulatory Exposure

For regulated industries, such as financial services, healthcare, legal, and government, ungoverned AI tool usage creates serious compliance risk. And companies that supply one or more of these industries may be subject to some or all of the compliance restrictions that bind their customers.

HIPAA, for example, requires that protected health information (PHI) be processed only by tools that meet specific security and privacy standards. An employee at a hospital pasting patient notes into an unapproved AI tool has just created a HIPAA violation, regardless of their intent.

The EU’s GDPR requires clear documentation of how personal data is processed, by whom, and under what legal basis. If employees are using AI tools that the organization hasn’t vetted, assessed, or documented, the organization cannot demonstrate compliance.

SOC 2, ISO 27001, and other security frameworks require organizations to maintain an inventory of systems that process sensitive data. If AI tools are being used outside IT’s visibility, that inventory is incomplete, and the organization’s compliance posture is compromised.

When auditors ask questions such as “What AI tools does your organization use?” and “How is data handled?,” most CIOs today cannot give a complete, accurate answer.

Tool Sprawl and Wasted Spend

When AI experimentation happens without visibility, the financial waste is significant.

Different teams might be paying for different AI writing tools, each with individual subscriptions expensed through different departments. At the same time, the organization might be considering the purchase of, or already have, an enterprise license for a single platform that would serve all of them. Procurement can’t negotiate volume discounts or enterprise agreements because they don’t know what’s being used.

We see organizations where the total spend on AI tools, when you add up every individual subscription, every team-level purchase, and every expensed SaaS fee, is three to five times what they had estimated it would be. The problem isn’t that the money is being spent. It’s that it’s being spent inefficiently, without negotiation leverage, and without any connection to a coherent AI strategy.

No Audit Trail

When something goes wrong, such as a data breach, a compliance violation, or an IP dispute, organizations need to understand what happened. What tool was used? What data was shared? When? By whom?

If AI tools are being used outside IT’s visibility, there is no audit trail. The organization cannot investigate, remediate, or learn from incidents, because without that observability the basic facts are unknowable.

Missed Proficiency Opportunities

Unsanctioned AI usage is also an opportunity. Organizations that can see such usage can ask users what a given tool is doing for them and compare those findings against recommended practice. It’s sensible, for instance, for an organization to want to standardize on a small number of AI tools. But expert recommendations suggest that marketers or engineers use different LLMs for different aspects of their work—and that they keep experimenting with various tools, since features change regularly and new tools appear all the time.

When a user has found a new, productive application for an AI tool, the organization can fold that finding into licensing decisions and company training materials, creating and promoting new best practices. Organizations that don’t have this visibility, or that pursue a one-size-fits-all, “shut it down” approach to unsanctioned AI usage, miss out on these opportunities. Organizations that fail to learn and grow in their AI usage are likely to fall behind competitors.

Figure 2 shows how training programs are associated with better results for AI implementation, as described in the Larridin report, The State of Enterprise AI 2026. The best way forward is not to use automatic monitoring instead of training; it’s to use them together, each supporting the effectiveness of the other.

 

Figure 2. Formal AI training programs are associated with AI-powered success.
(Source: The State of Enterprise AI 2026)

 


Rethinking “Shadow AI”: From Fear to Governance

The term “shadow AI” carries connotations of secrecy and threat. It implies that employees are doing something wrong, that they are sneaking around IT policies to use unauthorized tools. The term itself makes it more likely that employees will conduct their AI usage via personal accounts, and not talk about it.

That framing is counterproductive. In most cases, employees using unapproved AI tools aren’t being malicious or even negligent. They’re being resourceful. They found a tool that helps them do their job better, and they started using it. The problem isn’t the employee’s actions; it’s the organization’s lack of visibility and governance.

Rather than “shadow AI,” a more productive framing is responsible AI governance, creating the visibility, policies, and guardrails that allow experimentation to continue while managing risk appropriately.

The goal is not to lock everything down. Organizations that respond to AI tool proliferation by blocking everything will kill the experimentation culture that drives adoption and proficiency. The best organizations channel unsanctioned AI experimentation into responsibly governed AI exploration, making it easy to try new tools within a framework of visibility and protection.

The Governance Spectrum

Effective AI governance isn’t binary, limited to “approved” or “blocked.” When an organization can detect AI usage that isn’t covered by a corporate license, it can choose from a spectrum of responses tailored to the tool, the risk level, the industry, and the use case.

Educate. When an employee uses an AI tool that isn’t sanctioned, surface a message explaining the organization’s AI usage policies, why they exist, and what approved alternatives are available. Don’t block the action; inform the user. This is the lightest touch and is appropriate for low-risk tools where the primary concern is awareness, not enforcement.

Warn. For medium-risk scenarios, display an active warning that requires the user to acknowledge the risk before proceeding. “This tool has not been vetted by IT. Data shared here may not be protected under our enterprise agreements. Here is the recommended alternative.” The user can continue, but they’ve been made aware.

Monitor. Allow usage but log it for review. This creates the audit trail needed for compliance and risk management without blocking productive work. Monitoring data feeds into adoption analytics, giving IT and security teams visibility into what’s being used, by whom, and how often.

Restrict. For specific data types or high-risk scenarios, prevent certain actions, such as uploading files containing sensitive data to unapproved tools. Allow other usage to continue. This is targeted restriction, not blanket blocking.

Block. For tools that present unacceptable risk—based on data sovereignty, compliance requirements, or security assessments—prevent access entirely. This should be reserved for genuinely high-risk scenarios and used sparingly.

The right position on this spectrum varies by industry, by tool, and by the nature of the data involved. A financial institution might operate closer to the “restrict/block” end for tools that handle customer financial data, while operating closer to “educate/warn” for tools used in marketing or internal communications. A technology company might lean heavily toward “educate/monitor” to preserve experimentation velocity.

The key insight is that governance isn’t about saying “no.” It’s about saying “yes, and here’s how to proceed safely.”
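To make the spectrum concrete, here is a minimal Python sketch of how the five responses might be encoded and selected. The tier labels and the choose_response helper are hypothetical illustrations under assumed inputs, not a description of any particular product.

```python
from enum import Enum

class Response(Enum):
    """The five positions on the governance spectrum."""
    EDUCATE = 1
    WARN = 2
    MONITOR = 3
    RESTRICT = 4
    BLOCK = 5

def choose_response(tool_risk: str, data_sensitivity: str) -> Response:
    """Pick a response from the spectrum based on tool risk tier and data sensitivity.

    Both arguments are illustrative labels ("low"/"medium"/"high"); a real policy
    engine would draw them from tool classification and data protection layers.
    """
    if tool_risk == "high":
        return Response.BLOCK if data_sensitivity == "high" else Response.RESTRICT
    if tool_risk == "medium":
        return Response.WARN if data_sensitivity != "low" else Response.MONITOR
    # Low-risk tools: inform the user, don't get in the way.
    return Response.EDUCATE

# Example: an unvetted tool receiving highly sensitive data should be blocked.
print(choose_response("high", "high"))  # Response.BLOCK
print(choose_response("low", "low"))    # Response.EDUCATE
```

A financial institution might tune this mapping toward the restrict/block end; a technology company might rarely return anything stricter than MONITOR.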

 


Protecting Data at the Point of Action

Beyond knowing which tools employees are using, effective AI governance requires preventing sensitive data from leaving the organization in the first place.

This is a fundamentally different capability than tool-level blocking. Even with an approved AI tool, such as an enterprise ChatGPT account, employees can inadvertently share data that shouldn’t leave the organization. A policy that says “don’t paste customer data into AI tools” is only as effective as every employee’s ability to remember and follow it in the moment.

Policy-based data protection at the browser level solves this problem by enforcing data policies in real time. Rather than relying on employees to make the right judgment call every time, the system evaluates what’s being shared and prevents data that violates organizational policies from ever leaving the employee’s computer.

This creates a safety net that works alongside governance policies:

  • Sensitive data categories. Define which types of data (customer PII, employee PII, financial records, source code, medical records, strategic plans) should never be shared with external AI tools.
  • Real-time enforcement. When an employee attempts to share protected data with an AI tool via the browser, the system intervenes before the data is transmitted.
  • Policy-based flexibility. Different policies can be created and applied for different tools, data types, and user groups. An engineer might be allowed to share code with an approved AI coding tool, but not with a general-purpose chatbot. A salesperson might be allowed to use AI for email drafting, but without using files that contain customer financial data.

This approach acknowledges that human judgment is imperfect, especially in the fast-paced, task-focused context of daily work, where an employee is thinking about the deadline, not the data policy. The system enforces governance at the point of action, not solely through training sessions whose lessons employees might forget.
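As a simplified illustration of that point-of-action check, the following Python sketch scans outbound text against a few sensitive-data patterns before allowing it to be sent. The patterns and the allow/block decision are assumptions for illustration; a production browser-level control would use far richer detection (classifiers, file inspection, per-tool policies).

```python
import re

# Illustrative patterns for sensitive data; real detection would be much broader.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_outbound(text: str, tool_approved: bool):
    """Return (allowed, matched_categories) for text about to leave the browser.

    Approved enterprise tools get broader permissions; unapproved tools are
    blocked as soon as any sensitive category is detected.
    """
    matches = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]
    if matches and not tool_approved:
        return False, matches   # block and report what was found
    return True, matches        # allow (optionally log matches for review)

allowed, found = check_outbound("Customer SSN is 123-45-6789", tool_approved=False)
print(allowed, found)  # False ['ssn']
```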

 


The 3-4% Benchmark

For organizations building their AI governance programs, a practical question emerges: what level of unauthorized AI tool usage is acceptable?

The answer isn’t zero. Zero unauthorized usage means you’ve locked everything down so tightly that experimentation is dead. Employees aren’t trying new tools, discovering new capabilities, or pushing the boundaries of what AI can do for your organization. That’s a Pyrrhic victory.

A healthy benchmark is approximately 3-4% of total AI usage occurring in unauthorized tools. This number represents a governance posture that:

  • Allows meaningful experimentation and discovery
  • Keeps the vast majority of AI usage within sanctioned, protected channels
  • Is small enough to monitor closely, with intervention when necessary
  • Provides a leading indicator; if the percentage rises, it signals that sanctioned tools aren’t meeting employee needs, and it’s time to evaluate and approve additional options

This benchmark should be monitored continuously and calibrated by industry. A healthcare organization subject to HIPAA might target 1-2%. A technology company that prizes innovation speed might be comfortable at 5-6%. The key is having a number, tracking it, and acting when it deviates from target.

The benchmark also serves as a feedback loop for your AI strategy. If unauthorized usage spikes in a particular tool category, such as a wave of employees adopting a new AI design tool that the organization hasn’t sanctioned, that’s a signal to evaluate that tool for enterprise deployment, not to punish the employees who discovered it.
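A minimal sketch of tracking this benchmark, assuming usage events are already tagged as sanctioned or unsanctioned. The event counts and per-industry target bands below are illustrative assumptions, not prescriptions.

```python
def unauthorized_usage_rate(sanctioned_events: int, unsanctioned_events: int) -> float:
    """Share of total AI usage occurring in unauthorized tools, as a percentage."""
    total = sanctioned_events + unsanctioned_events
    return 100.0 * unsanctioned_events / total if total else 0.0

# Illustrative per-industry target bands (percent), per the discussion above.
TARGET_BANDS = {
    "healthcare": (1.0, 2.0),
    "general_enterprise": (3.0, 4.0),
    "technology": (5.0, 6.0),
}

rate = unauthorized_usage_rate(sanctioned_events=9_650, unsanctioned_events=350)
low, high = TARGET_BANDS["general_enterprise"]
if rate > high:
    print(f"{rate:.1f}% is above target: sanctioned tools may not be meeting employee needs")
elif rate < low:
    print(f"{rate:.1f}% is below target: experimentation may be over-constrained")
else:
    print(f"{rate:.1f}% is within the target band")
```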

 


Building a Responsible AI Governance Program

Effective AI governance balances three objectives: enabling productive AI usage, protecting sensitive data, and maintaining compliance. Here’s a practical framework for building a program that achieves all three.

Figure 3 highlights the lack of AI governance as a major barrier to successful AI implementation, as described in the Larridin report, The State of Enterprise AI 2026. Governance doesn’t appear to scare people away from using AI; instead, it appears to be a critical factor in making AI usage successful.

Figure 3. The lack of AI governance is a major barrier to AI adoption.
(Source: The State of Enterprise AI 2026)

 

Step 1: Establish Visibility

You can’t govern what you can’t see. The first step is comprehensive discovery of every AI tool in use across the organization: sanctioned and unsanctioned, enterprise and consumer, web-based and desktop.

This requires monitoring at the level where AI is actually used: the browser and the desktop. Network-level monitoring is insufficient, because it can’t distinguish between enterprise and personal accounts of the same tool. It can block ChatGPT entirely, but it can’t allow enterprise ChatGPT while flagging personal ChatGPT usage. The governance gap exists at the application and account level, not the network level.

Browser-based and desktop-based monitoring closes this gap by providing visibility into:

  • Which specific AI tools and versions are being used
  • Whether users are on enterprise or personal accounts
  • What types of data are being shared with each tool
  • Usage patterns by individual, team, department, and geography
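To make that visibility concrete, here is a hypothetical sketch of the kind of usage record a browser- or desktop-level monitor might emit for each observed interaction. The field names are assumptions for illustration, not a specification of any particular product.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class AIUsageEvent:
    """One observed interaction with an AI tool, captured at the browser/desktop level."""
    timestamp: datetime
    tool_name: str               # e.g. "ChatGPT", "Midjourney"
    tool_version: Optional[str]  # if detectable
    account_type: str            # "enterprise" or "personal"
    data_categories: list        # e.g. ["customer_pii", "source_code"]
    user_id: str
    team: str
    department: str
    geography: str

event = AIUsageEvent(
    timestamp=datetime.now(),
    tool_name="ChatGPT",
    tool_version=None,
    account_type="personal",
    data_categories=["customer_pii"],
    user_id="u-1042",
    team="emea-sales",
    department="sales",
    geography="DE",
)
print(event.tool_name, event.account_type)  # ChatGPT personal
```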

Step 2: Classify and Assess Risk

With full visibility, classify every discovered AI tool based on its risk profile:

  • Data handling policies. Does the tool use customer data for training? What are its data retention policies?
  • Security posture. Encryption, access controls, compliance certifications (SOC 2, ISO 27001, HIPAA)
  • Data sovereignty. Where is data processed and stored? What jurisdictions apply?
  • Enterprise availability. Is there an enterprise tier with stronger protections?
  • Tool category and modality. What kind of data does this tool typically handle? Text, code, images, audio, financial data?

This classification drives governance decisions. Tools with strong enterprise agreements and favorable data policies can be fast-tracked to “approved” status. Tools with unclear or unfavorable data practices get flagged for deeper review.
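One way to operationalize this classification is a simple scoring rubric over the dimensions above. The following Python sketch is an illustrative rubric under assumed weights and thresholds, not a recommended standard.

```python
def classify_tool(tool: dict) -> str:
    """Assign a coarse risk tier from the assessment dimensions described above."""
    score = 0
    if tool.get("trains_on_customer_data"):
        score += 3
    if not tool.get("certifications"):                   # e.g. SOC 2, ISO 27001, HIPAA
        score += 2
    if tool.get("data_residency") not in ("eu", "us"):   # placeholder jurisdictions
        score += 2
    if not tool.get("enterprise_tier_available"):
        score += 1
    if tool.get("handles_sensitive_modalities"):         # financial data, code, PHI, etc.
        score += 1

    if score >= 5:
        return "high"     # flag for deep review; restrict or block by default
    if score >= 2:
        return "medium"   # warn/monitor until vetted
    return "low"          # candidate for fast-track approval

print(classify_tool({
    "trains_on_customer_data": True,
    "certifications": [],
    "data_residency": "unknown",
    "enterprise_tier_available": False,
    "handles_sensitive_modalities": True,
}))  # high
```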

Step 3: Define Policies

Based on risk classifications, establish clear policies that operate on the governance spectrum (educate → warn → monitor → restrict → block):

  • By tool risk tier. High-risk tools get stricter governance; low-risk tools are handled with a lighter touch
  • By data type. Sensitive data categories trigger different policies than general usage
  • By user group. Engineers handling source code may be subject to different policies than marketers working with public-facing content
  • By industry regulation. Overlay compliance requirements from HIPAA, GDPR, SOC 2, and PCI-DSS, or industry-specific frameworks

Document these policies clearly, communicate them broadly, and make them enforceable through technology, not just training.
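A hypothetical sketch of what such policies might look like when expressed as data rather than prose. The rule fields map to the dimensions above (tool risk tier, data type, user group), the actions map to the governance spectrum, and every name here is an assumption for illustration.

```python
# Policies expressed as data: the first matching rule wins, with a safe default.
POLICIES = [
    {"tool_tier": "high",   "data_type": "any",          "user_group": "any",         "action": "block"},
    {"tool_tier": "medium", "data_type": "customer_pii", "user_group": "any",         "action": "restrict"},
    {"tool_tier": "medium", "data_type": "any",          "user_group": "engineering", "action": "warn"},
    {"tool_tier": "any",    "data_type": "phi",          "user_group": "any",         "action": "block"},  # HIPAA overlay
    {"tool_tier": "low",    "data_type": "public",       "user_group": "marketing",   "action": "educate"},
]

def resolve_action(tool_tier: str, data_type: str, user_group: str) -> str:
    """Return the first matching policy action; default to 'monitor' if nothing matches."""
    for rule in POLICIES:
        if all(rule[key] in (value, "any") for key, value in
               [("tool_tier", tool_tier), ("data_type", data_type), ("user_group", user_group)]):
            return rule["action"]
    return "monitor"

print(resolve_action("medium", "customer_pii", "sales"))  # restrict
print(resolve_action("low", "phi", "clinical"))           # block
```

Expressing policies declaratively, whatever the actual format, is what makes them enforceable through technology rather than training alone.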

Step 4: Deploy Data Protection

Implement policy-based data protection at the browser and desktop level:

  • Define sensitive data categories that should never be shared with external AI tools
  • Configure real-time enforcement that prevents protected data from being transmitted
  • Set up alerts for attempted policy violations, and use alerts to identify training opportunities
  • Allow policy flexibility by tool tier; approved enterprise tools may have broader data permissions than unapproved tools

Step 5: Educate and Enable

Governance without education creates friction and resentment. Pair every governance control with clear communication:

  • Why the policy exists (protect customer data, maintain compliance, manage risk, improve effectiveness)
  • What the approved alternatives are (and what makes them better than unsanctioned options)
  • How to request access to new tools through a streamlined evaluation process
  • When to escalate questions or concerns

The best AI governance programs make the right thing to do the easy thing to do. If the approved enterprise AI tools are harder to use or less capable than the consumer alternatives, employees will find workarounds. Governance is most effective when the sanctioned path is also the best path.

Step 6: Monitor, Benchmark, and Iterate

Establish ongoing monitoring with clear KPIs:

  • Unauthorized usage rate. Track against the 3-4% benchmark for unsanctioned tool usage (or your industry-appropriate target)
  • Tool inventory. Maintain a continuously updated catalogue of all AI tools in use
  • Policy violation trends. Track attempted data policy violations by type, team, and tool
  • Governance coverage. Track the percentage of total AI usage that falls under active governance
  • New tool discovery rate. Track how frequently new, previously unknown AI tools appear in usage data

Review these metrics regularly and use them to improve policies. Rising unauthorized usage in a tool category signals unmet needs. Frequent policy violations in a department signal a training gap. A declining new tool discovery rate might signal that employees have stopped experimenting, which is a problem of its own.
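A minimal sketch of computing a few of these KPIs from a log of usage events, assuming each event records the tool name and whether the usage was sanctioned. The sample data and field layout are illustrative assumptions.

```python
from collections import Counter

# Each event: (tool_name, sanctioned); illustrative sample data.
events = [
    ("ChatGPT Enterprise", True), ("ChatGPT Enterprise", True),
    ("Claude", True), ("Midjourney", False), ("NewDesignAI", False),
]
known_tools = {"ChatGPT Enterprise", "Claude", "Midjourney"}  # last inventory snapshot

tool_counts = Counter(name for name, _ in events)
# Sanctioned share of usage as a simple proxy for governance coverage.
governance_coverage = 100.0 * sum(sanctioned for _, sanctioned in events) / len(events)
newly_discovered = set(tool_counts) - known_tools

print(f"Tool inventory: {sorted(tool_counts)}")
print(f"Governance coverage: {governance_coverage:.0f}%")                 # 60%
print(f"Newly discovered tools this period: {sorted(newly_discovered)}")  # ['NewDesignAI']
```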

 


The Balance: Innovation and Protection

The tension at the heart of AI governance is real. Too little governance and you’re exposed to data leakage, compliance violations, and uncontrolled spend. Too much governance and you kill the experimentation culture that drives AI adoption and proficiency.

The organizations that get this balance right share several characteristics:

They treat governance as enablement, not restriction. Their governance programs are designed to make it easy to use AI safely, not to make it hard to use AI at all. Every “no” comes with a “yes, and here’s how.”

They move fast on tool evaluation. When employees discover a promising new AI tool, the organization has a streamlined process to evaluate, classify, and—if appropriate—approve it within days, not months. If the evaluation takes six months, employees won’t wait.

They invest in their sanctioned tool stack. The best defense against unauthorized tool usage is approved tools that are genuinely excellent. If enterprise ChatGPT, Claude, and Copilot are readily available, well-configured, and easy to use, the incentive to go outside the sanctioned toolset drops dramatically.

They measure governance, not just adoption. They track unauthorized usage rates, data policy compliance, and governance coverage alongside adoption and proficiency metrics. Governance is a first-class metric, not an afterthought.

They communicate transparently. Employees understand why governance exists, what the risks are, and how to work within the system. There are no surprise blocks, no unexplained restrictions, and no sense that IT is trying to prevent people from using AI.

The organizations that achieve this balance—enabling broad, enthusiastic AI experimentation within a framework of responsible governance—are the ones that will capture the full value of AI while protecting themselves from the risks that come with it.

 


Larridin measures AI proficiency across nine dimensions, recalibrated every 30 days, giving enterprises a real-time view of how effectively their workforce is using AI—and exactly where to invest to move the needle.

Learn how Larridin measures AI proficiency


 
