With incredible speed, AI has emerged as a daily business necessity. But companies face tough choices: encouraging experimentation brings security risks, while telling employees to wait might mean falling behind the competition, forfeiting efficiencies, and losing energetic, motivated staff.

Larridin CEO Ameya Kanitkar joined a discussion on the InformationWeek Podcast to tackle these challenges head-on. The focus: a compliance-led crackdown on the use of AI on "bring your own device" (BYOD) and company-issued hardware.

Key Takeaways from the Discussion

1. The Security-First vs. Experimentation-First Divide

NetSPI's Eddie Taliaferro advocated a governance-first approach: employees should make a business case and get approval before using AI with company data. Ameya countered by highlighting the risks of not experimenting; organizations that lock down AI entirely risk missing innovation opportunities, losing competitive advantage, and driving employees to use shadow AI tools with even less oversight.

2. Shadow AI Is Already Happening

One of the key tensions the panel explored: even with strict policies, employees are already using AI. According to the 2026 State of Enterprise AI Report, nearly 50% of AI usage in enterprises occurs outside sanctioned channels. The question isn't whether to allow AI; it's whether you have visibility into what's already happening.

3. Compliance Doesn't Have to Mean "No"

Ameya emphasized that effective AI governance isn't about blocking tools; it's about measuring, monitoring, and managing AI usage in real time. Organizations can protect sensitive data while still enabling experimentation by deploying browser-level monitoring and real-time policy enforcement, rather than blanket bans.

4. The BYOD Challenge Amplifies AI Risk

When employees use personal devices for work, the AI compliance challenge compounds. Personal ChatGPT accounts, unsanctioned Copilot usage, and consumer-grade AI tools with permissive data-sharing EULAs create blind spots that traditional IT security can't see. The panel discussed practical approaches to gaining visibility without becoming surveillance-heavy.

5. A Framework for Working Through AI Security Decisions

InformationWeek Senior Editor Joao-Pierre S. Ruth suggested a fictional scenario: goblins and gremlins at "QuestionableIdeas" want to adopt new technology, while kobolds enforce security standards. This lighthearted framing led to a substantive discussion about risk-based AI policies that balance innovation with protection.

Why This Matters for Enterprise Leaders

The AI compliance landscape is evolving rapidly. With regulations such as the EU AI Act taking effect and board-level scrutiny of AI governance increasing, CIOs and CISOs need a framework that goes beyond "allow or deny." The approach Ameya outlined, visibility first and then governance, gives organizations a path to responsible AI adoption without stifling innovation.


Frequently Asked Questions

Should companies restrict AI use on personal devices?

Rather than blanket restrictions, the most effective approach is visibility and monitoring. Organizations should know which AI tools employees are using, which data flows through them, and whether that usage complies with data protection policies, regardless of device. Browser-level monitoring solutions such as Larridin provide this visibility without requiring full device management.

How do you balance AI innovation with compliance requirements?

Start with measurement, not mandates. Deploy AI usage monitoring to understand your current landscape, then build risk-tiered policies: low-risk experimentation can be open, while high-risk use cases involving sensitive data require approval workflows. This approach, as discussed in the podcast, lets teams innovate freely within guardrails.
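The risk-tiering idea above can be sketched in code. This is a minimal, hypothetical illustration: the tier names, data categories, and policy actions are assumptions for the sake of the example, not part of any product or framework discussed in the podcast.

```python
# Hypothetical sketch of a risk-tiered AI usage policy.
# Tiers, categories, and actions below are illustrative assumptions.

from dataclasses import dataclass, field

# Map each risk tier to a policy action, least to most restrictive.
TIER_ACTIONS = {
    "low": "allow",             # open experimentation
    "medium": "allow_and_log",  # monitored but not blocked
    "high": "require_approval", # sensitive data -> approval workflow
}

# Data categories treated as sensitive in this sketch.
SENSITIVE_CATEGORIES = {"pii", "financial", "source_code"}

@dataclass
class UsageEvent:
    tool: str                       # e.g. a hypothetical "chatgpt-personal"
    sanctioned: bool                # is the tool IT-approved?
    data_categories: set = field(default_factory=set)

def classify(event: UsageEvent) -> str:
    """Assign a risk tier from data sensitivity and tool status."""
    if event.data_categories & SENSITIVE_CATEGORIES:
        return "high"      # sensitive data always escalates
    if not event.sanctioned:
        return "medium"    # shadow AI gets logged, not banned
    return "low"

def policy_action(event: UsageEvent) -> str:
    """Look up the action for an event's risk tier."""
    return TIER_ACTIONS[classify(event)]
```

Under these assumptions, pasting PII into a personal AI account triggers an approval workflow, an unsanctioned tool without sensitive data is merely logged, and a sanctioned tool handling non-sensitive data is left open, which matches the "innovate freely within guardrails" framing.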

What is shadow AI and why is it a compliance risk?

Shadow AI refers to artificial intelligence tools used by employees without IT knowledge or approval: personal ChatGPT accounts, browser-based AI extensions, and consumer AI apps used for work tasks. It's a compliance risk because company data may flow through tools with permissive data-sharing policies, creating potential breaches of data protection, privacy regulations, and contractual obligations.

Larridin gives enterprises complete visibility into their AI tool landscape — sanctioned and unsanctioned — with browser-level monitoring, real-time data protection, and governance analytics segmented by team, department, and risk level.

Learn how Larridin measures and governs AI adoption →