
AI usage is up 485% year over year, with 90%+ of it happening in shadow AI accounts. Do you have visibility into the unauthorized AI tools in use at your organization?

 

Key Takeaway

Shadow AI is a critical blind spot for enterprise AI governance. While boards discuss top-down AI strategy, AI usage grew 485% year-over-year, with over 90% happening in unsanctioned AI tools and unauthorized AI accounts. This creates significant risks including data leakage, security vulnerabilities, and compliance issues. Organizations need real-time visibility into shadow AI tools to manage AI risks, prevent data breaches, implement governance frameworks, and transform unauthorized AI use into measurable competitive advantage.

 

Key Terms

  • Shadow AI: Unauthorized AI tools and generative AI applications that employees use outside official IT governance, creating security risks, data leakage concerns, and compliance issues through unmanaged AI usage.

  • AI Governance: Frameworks and AI policies for managing AI adoption, ensuring data protection, implementing guardrails for responsible AI use, and preventing the risks of shadow AI through security-team oversight.

  • Unauthorized AI: Unsanctioned AI systems, apps, and tools, including ChatGPT, open-source LLMs, and other generative AI tools used without IT approval, often exposing sensitive company data.

  • AI Security Risks: Security vulnerabilities from shadow AI, including data breaches and leaks, unauthorized access to sensitive information, customer data exposure, and non-compliance with GDPR and other data privacy regulations.

Over the last few years, AI has moved from an interesting boardroom topic to an absolute imperative. In early 2023, generative AI was often covered only after financials, department updates, and strategic planning had been reviewed. Early discussions weighed artificial intelligence's potential as a game changer with measured enthusiasm, colored by the fading buzz around RPA and blockchain.

By 2024, however, the focus had shifted decisively to how AI models and capabilities could be incorporated into products and workflows to drive productivity through automation. AI is now a major part of the financial conversation, especially around productivity gains. Business leaders want to understand how different teams are using AI tools, and AI has become part of almost every corporate strategy, at least in the tech world.

 

The Critical Gap: Strategy vs. Shadow AI Reality

But here's the critical gap between AI strategy and reality: while boards discuss top-down AI initiatives, a massive, unmanaged ecosystem of shadow AI is growing from the bottom up. Most of the AI use in an organization involves unauthorized AI tools brought in by employees without oversight from security teams or IT teams.

A recent Cyberhaven study tracking 3 million workers found that overall AI usage grew by a staggering 485% year-over-year. More than 90% of this growth occurred not within enterprise-sanctioned AI systems, but in personal “shadow AI” accounts—unsanctioned AI tools like ChatGPT, open-source large language models, and other generative AI tools operating outside AI governance frameworks.

 

The Dangerous Shadow AI Blind Spot

This creates a dangerous blind spot with significant risks. Most AI reporting focuses on approved apps and SaaS applications, while much of the real AI usage goes unreported because it takes place on the web through unauthorized AI tools. The use of AI through shadow IT creates multiple security risks:

  • Data leakage of sensitive information through unapproved AI interactions

  • Security vulnerabilities from unauthorized access to company data and customer data

  • Data breaches and data leaks when employees input source code or confidential information into chatbots

  • Compliance issues including non-compliance with GDPR, data privacy regulations, and data protection requirements

  • Cybersecurity threats from shadow AI tools operating without proper guardrails or permissions

  • Risk management failures when outputs from AI-powered systems are acted on without oversight

I've been working with many of the leaders tasked with the difficult job of building AI governance and risk management programs, and the challenge is immense. A tremendous amount of innovation happens in these web-based GenAI interactions, but they are also a significant source of AI and security risks.

You can't govern what you can't see, and you certainly can't improve what you can't measure.
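
A minimal sketch of what a first measurement pass could look like, assuming web proxy or secure gateway logs are available: match outbound requests against a list of known generative AI domains. The domain list, log format, and column names below are illustrative assumptions, not a description of any specific product.

    import csv
    from collections import Counter
    from urllib.parse import urlparse

    # Illustrative starting list; a real catalog of AI domains is far larger
    # and changes constantly.
    AI_DOMAINS = {
        "chat.openai.com", "chatgpt.com", "claude.ai",
        "gemini.google.com", "perplexity.ai",
    }

    def shadow_ai_hits(proxy_log_path):
        """Count requests to known AI domains from a CSV proxy log
        with assumed columns: timestamp, user, url."""
        hits = Counter()
        with open(proxy_log_path, newline="") as f:
            for row in csv.DictReader(f):
                host = urlparse(row["url"]).hostname or ""
                if host in AI_DOMAINS or any(host.endswith("." + d) for d in AI_DOMAINS):
                    hits[(row["user"], host)] += 1
        return hits

    # Example: top users and tools by request volume
    for (user, host), count in shadow_ai_hits("proxy.csv").most_common(10):
        print(user, host, count)

Even a crude pass like this shows which tools and which people dominate usage; the harder and more valuable work is tying that activity to use cases and outcomes.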

 

The Risks of Shadow AI: Beyond Security Concerns

The risks of shadow AI extend far beyond immediate cybersecurity threats. When employees use unauthorized AI tools without understanding AI security implications:

  • Sensitive information gets processed by external LLM providers without data security controls

  • API integrations and functionality bypass IT governance and permissions frameworks

  • Data loss occurs when shadow AI tools store company data on external servers

  • Responsible AI principles and AI policies go unenforced across unsanctioned AI use

  • Data Loss Prevention (DLP) systems fail to monitor AI usage in real-time

Posts on LinkedIn and in professional forums frequently showcase use cases where employees leverage powerful AI capabilities through shadow AI tools, often unaware they're creating security vulnerabilities or exposing sensitive data to unauthorized access.
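
To illustrate the DLP gap called out above, here is a hedged sketch of the kind of prompt-level check that traditional, file-centric DLP rarely applies: scan text for sensitive patterns before it is sent to an external LLM. The regexes and the callable used to forward the prompt are illustrative placeholders, not a real integration.

    import re

    # Illustrative patterns only; real DLP policies cover far more data types.
    SENSITIVE_PATTERNS = {
        "api_key": re.compile(r"\b(?:sk-[A-Za-z0-9]{20,}|AKIA[A-Z0-9]{16})\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def flag_sensitive(prompt):
        """Return the names of sensitive patterns found in a prompt."""
        return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

    def guarded_send(prompt, send):
        """send: a callable that forwards the prompt to an approved LLM endpoint."""
        findings = flag_sensitive(prompt)
        if findings:
            # Block, warn the user, or route for review instead of sending.
            raise ValueError("Prompt blocked: possible " + ", ".join(findings))
        send(prompt)

The point is not that a few regexes solve the problem; it is that the check has to happen at the prompt, in the browser or at the gateway, where shadow AI actually lives.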

 

Illuminating the Shadow AI Blind Spot

At Larridin, we illuminate this blind spot by showing companies what their employee AI usage actually looks like across all AI tools—sanctioned and shadow AI tools alike. Visibility into shadow AI is the foundation of AI strategy and AI governance.

CFOs cannot assess the utilization of their AI investments without measuring both approved and unauthorized AI use, and measurement requires real-time visibility into AI adoption patterns. Because AI usage is heavily browser-based, there are unique challenges in discerning which AI tools people are using, how much, and for what use cases.
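
As a hedged sketch of what that measurement could feed, the snippet below rolls raw usage events up into per-team adoption numbers a CFO could act on: how many people are using AI at all, and what share of that activity happens in sanctioned tools. The event fields and the sanctioned-tool list are assumptions for illustration.

    from collections import defaultdict

    SANCTIONED_TOOLS = {"copilot-enterprise", "internal-llm"}  # illustrative names

    def adoption_by_team(events):
        """events: iterable of dicts with assumed keys 'team', 'user', and 'tool'."""
        teams = defaultdict(lambda: {"users": set(), "sanctioned": 0, "shadow": 0})
        for e in events:
            t = teams[e["team"]]
            t["users"].add(e["user"])
            key = "sanctioned" if e["tool"] in SANCTIONED_TOOLS else "shadow"
            t[key] += 1
        return {
            team: {
                "active_users": len(t["users"]),
                "sanctioned_share": t["sanctioned"] / max(1, t["sanctioned"] + t["shadow"]),
            }
            for team, t in teams.items()
        }

Numbers like these turn "are we getting value from our AI spend?" from a guess into a reviewable metric.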

With comprehensive visibility into shadow AI, leaders can:

  • Assess what AI usage is driving productivity versus just creating noise and AI risks

  • Quickly identify pockets of innovation using AI-powered workflows and automation

  • Discover where teams get the most value from AI applications while managing security risks

  • Make data-driven decisions about AI investments while right-sizing spend

  • Implement governance frameworks and AI policies to prevent data leakage and data breaches

  • Deploy guardrails for responsible AI use across all generative AI tools

  • Scale successful AI adoption patterns while maintaining data protection and compliance

 

From Shadow AI Chaos to Competitive Advantage

This is how leading companies are transforming the chaos of shadow AI into a measurable competitive advantage. Rather than viewing unauthorized AI as purely a risk to be eliminated, forward-thinking organizations recognize that shadow AI tools often represent where employees find the most value and functionality.

VerifyWise puts it clearly: "Disconnected AI tools mean blind spots in your risk exposure."

The goal isn't to shut down all unsanctioned AI use; it's to gain visibility, implement appropriate AI governance, manage significant risks with security-team oversight, and channel innovation into secure, compliant AI systems. This requires the following (a minimal policy sketch appears after the list):

  • Real-time monitoring of AI usage across all platforms, apps, and shadow AI tools

  • Understanding specific use cases and workflows where employees leverage AI capabilities

  • Implementing data security controls without blocking innovation and automation

  • Creating AI policies that acknowledge the reality of shadow AI while managing AI security

  • Providing approved alternatives with similar functionality to unauthorized AI tools

  • Training IT teams and security teams on identifying and managing risks of shadow AI
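
One hedged way to make the policy and approved-alternatives items above concrete is to encode the policy as data that monitoring, enforcement, and training can all share. The tool names, data categories, and mappings below are illustrative assumptions, not a recommended policy.

    # Illustrative AI usage policy expressed as data.
    AI_POLICY = {
        "approved_tools": {
            "copilot-enterprise": {"allowed_data": {"public", "internal"}},
            "internal-llm": {"allowed_data": {"public", "internal", "confidential"}},
        },
        # Map unsanctioned tools to the approved alternative to steer users toward.
        "alternatives": {
            "chatgpt-personal": "copilot-enterprise",
            "open-source-llm-notebook": "internal-llm",
        },
    }

    def evaluate(tool, data_class):
        """Return (decision, suggested_alternative) for a tool and data classification."""
        approved = AI_POLICY["approved_tools"].get(tool)
        if approved is None:
            return "block", AI_POLICY["alternatives"].get(tool)
        if data_class not in approved["allowed_data"]:
            return "block", None
        return "allow", None

Keeping the policy as data rather than buried in prose makes it auditable, and it lets the same rules drive dashboards, gateway enforcement, and employee guidance.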

Organizations that master shadow AI governance don't just reduce security vulnerabilities and compliance issues—they also gain strategic insights into how artificial intelligence can drive business value. They understand where GenAI delivers ROI, which AI models employees prefer, and what use cases generate the most significant productivity gains.

 

The Path Forward: Visibility Enables Strategy

The boardroom-usage gap isn't just about security risks from shadow IT and unauthorized AI tools. It's about the fundamental disconnect between AI strategy discussions at the executive level and the reality of AI adoption on the ground. Boards can't make informed decisions about AI investments, AI policies, and governance frameworks without understanding the full picture of AI usage—including the 90%+ happening in shadow AI accounts.

Visibility into shadow AI transforms risk management from reactive to proactive. It enables leaders to identify data leakage before it becomes data breaches, to spot compliance issues before they trigger non-compliance penalties, and to recognize security vulnerabilities before unauthorized access occurs.

Most importantly, visibility into shadow AI usage—including ChatGPT, open-source LLMs, AI-powered chatbots, and other generative AI tools—enables organizations to channel employee innovation into secure, governed AI systems that drive measurable competitive advantage while protecting sensitive data, customer data, and company data.

The question isn't whether your organization has a shadow AI problem. The question is whether you have visibility into it—and what you're doing about it.

 

Ready to illuminate your shadow AI blind spot?

Schedule a Demo


Justin Smith
Aug 14, 2025
Head of Client Success