First Came ChatGPT. Then Came Sprawl. Now Comes Accountability.
First came ChatGPT - and with it, a collective gasp. Then came the sprawl: AI tools multiplying faster than anyone could track, employees experimenting in every department, executives asking "are we an AI company yet?" without anyone quite knowing what that meant.
Now, we're entering something different. Call it the accountability era - the shift from experimentation to evidence.
The sprawl era had its charms. "AI is making us more productive" was a fine thing to say in 2024. But in 2026, auditors, regulators, and board members will want more than good vibes. They'll want measurement. And measurement requires something most organizations still don't have: a clear view of what AI is actually being used for, by whom, and why.
The Governance Conversation Is Missing a Foundation
ISO 42001 is having a moment. The world's first AI management system standard is quickly becoming the benchmark for responsible AI, with 76% of organizations planning to pursue it or similar frameworks soon. The EU AI Act's August 2026 deadline for high-risk AI systems is focusing minds. Boards are asking questions. Compliance teams are scrambling.
But here's what's getting lost in the rush toward certification: most organizations are skipping the foundational step that makes accountability possible in the first place.
Consider the compliance officer preparing for her organization's first ISO 42001 readiness assessment. She's been asked to produce an inventory of AI systems in scope. She starts with the obvious ones - the enterprise ChatGPT license, the Copilot rollout in engineering. Then she sends a survey to department heads asking what else is in use. The responses trickle in, inconsistent and incomplete. One manager mentions a tool she's never heard of. Another says "just the usual stuff" without specifying. She realizes, halfway through the exercise, that she's not documenting what exists - she's documenting what people happen to remember and feel like reporting. The gap between those two things is where audit findings live.
This is the pattern playing out across enterprises right now. Organizations are writing AI policies before they know what AI is actually in use. They're designing governance frameworks for tools they haven't inventoried. They're preparing for audits without the continuous measurement those audits will require.
You can't be accountable for what you can't see. Put another way, you can't measure what you haven't inventoried.
The Inventory Problem
ISO 42001 is explicit about this. Clause 4 and Annex A require organizations to identify and maintain an inventory of AI systems in scope. The EU AI Act demands the same - you can't classify AI systems by risk level if you don't know what systems exist.
This sounds straightforward until you realize what's actually happening inside most enterprises.
Employees aren't waiting for governance frameworks. Nearly 60% are already using AI tools that haven't been formally approved - not because they're circumventing policy, but because most organizations simply haven't established clear policies yet. People are using ChatGPT, Claude, Copilot, and dozens of other tools because they make work faster. The organization just hasn't caught up.
The result is a growing gap between what leadership thinks is happening with AI and what's actually happening. Research finds that 63% of organizations haven't established any governance program for AI usage at all. Meanwhile, 90% of enterprises express concern about unmonitored AI from a privacy and security standpoint. These two stats highlight the same organizational paralysis from different angles.
This isn't a policy failure. It's a visibility failure. And without visibility, accountability is impossible.
What Accountability Actually Requires
Here's where the compliance conversation often goes wrong. Many organizations assume they can approach ISO 42001 the way they approached ISO 27001 - document policies, implement controls, prepare for an annual audit, repeat next year.
But AI accountability demands something different: continuous measurement.
ISO 42001 Clause 9 requires ongoing monitoring, measurement, and evaluation of AI systems - not point-in-time assessments. Auditors will want to see that you know what AI tools are in use across your workforce, that you're tracking this continuously rather than through periodic surveys, that you can demonstrate trends over time, and that you can identify when new AI tools enter the environment. A spreadsheet updated quarterly won't cut it. Neither will employee self-reporting. The standard expects verifiable, ongoing visibility into AI usage patterns.
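To make the contrast with quarterly spreadsheets concrete, here's a minimal sketch in Python - with entirely hypothetical tool names, field names, and data, since how you collect usage signals will vary - of the kind of check continuous measurement implies: comparing observed AI usage against the known inventory and flagging anything new.

```python
from datetime import date

# Hypothetical illustration: an approved AI inventory and a sample of observed
# usage events (tool, user, team, date), however the organization collects them.
known_inventory = {"ChatGPT Enterprise", "GitHub Copilot"}

usage_events = [
    {"tool": "GitHub Copilot", "user": "dev-042", "team": "engineering", "date": date(2026, 1, 20)},
    {"tool": "ChatGPT Enterprise", "user": "fin-007", "team": "finance", "date": date(2026, 1, 20)},
    {"tool": "SummarizeBot", "user": "mkt-013", "team": "marketing", "date": date(2026, 1, 21)},
]

def find_unknown_tools(events, inventory):
    """Return tools that appear in usage data but not in the approved inventory."""
    return {event["tool"] for event in events} - inventory

new_tools = find_unknown_tools(usage_events, known_inventory)
if new_tools:
    # In practice this would feed a review queue or risk assessment, not a print statement.
    print(f"AI tools observed but not yet inventoried: {sorted(new_tools)}")
```

The specifics will differ from one organization to the next; the point is that a check like this has to run continuously against real usage data, not annually against a survey.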
This represents a fundamental shift from how most compliance programs operate. The CISO who's spent years building mature security controls may discover that his existing tooling tells him almost nothing about AI adoption patterns. The GRC team with well-oiled ISO 27001 processes may find that their annual audit rhythm doesn't map onto what AI accountability actually requires. The infrastructure that worked for the previous generation of compliance challenges simply wasn't designed for this one.
Accountability in the AI era isn't a document you produce once a year. It's a capability you maintain every day. AI has to grow up - and growing up means being measured.
The Living Inventory
Before you can govern AI, measure AI productivity, or prove ROI on AI investments, you need to answer a deceptively simple question: what's actually happening? Not what tools are officially approved. Not what policies say should be happening. What are people actually using, how often, and for what?
This requires what we'd call a living AI inventory - a continuously updated view of AI usage across the organization that provides apples-to-apples activity comparison across tools, users, and teams.
The word "living" matters here. A spreadsheet updated quarterly captures a snapshot that's already outdated by the time anyone reviews it. A living inventory surfaces unapproved AI usage before it becomes an audit finding or a data incident. It reveals adoption patterns that should inform investment decisions - like discovering that 40% of your engineering team uses Copilot daily while only 5% of finance uses any AI tools at all. That's not a compliance data point. That's strategic intelligence about where enablement efforts will actually land.
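As a rough illustration of that kind of comparison - toy numbers and hypothetical names throughout, since everything depends on having real usage records and team rosters to begin with - adoption per team is just active users divided by headcount:

```python
from collections import defaultdict

# Hypothetical, toy-sized data: team headcounts plus which users were active
# with any AI tool during the reporting window, drawn from usage telemetry.
team_size = {"engineering": 5, "finance": 20}

active_users = [
    ("engineering", "dev-001", "GitHub Copilot"),
    ("engineering", "dev-002", "GitHub Copilot"),
    ("finance", "fin-007", "ChatGPT Enterprise"),
]

def adoption_by_team(events, headcounts):
    """Fraction of each team that actively used at least one AI tool in the window."""
    users = defaultdict(set)
    for team, user, _tool in events:
        users[team].add(user)
    return {team: len(users[team]) / size for team, size in headcounts.items()}

print(adoption_by_team(active_users, team_size))
# {'engineering': 0.4, 'finance': 0.05}
```

The arithmetic is trivial. What's hard - and what most organizations lack - is the continuous stream of usage data that feeds it.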
And this is where accountability cuts both ways. Yes, you're accountable for governing AI responsibly - for assessing risk on tools before they create incidents, for ensuring that sensitive data isn't flowing into systems that shouldn't have it. But you're also accountable for making sure your AI investments actually get adopted. The same visibility that satisfies auditors also tells you where training will have impact, where tools are gathering dust, and where the productivity gains everyone's chasing might actually materialize. The ROI conversation that every executive wants to have is impossible without this measurement layer underneath it.
Understanding who's actively using AI - and who isn't - transforms abstract governance requirements into actionable intelligence. You stop asking "are we compliant?" in a vacuum and start asking questions that actually drive decisions: Which teams need enablement support? Which unapproved tools represent real risk? Where should we double down on investments that are working?
The End of the Sprawl Era
ChatGPT was the toddler years - wide-eyed wonder at what was suddenly possible. The sprawl was teenage rebellion: AI tools adopted in every corner of the organization, no one asking for permission, everyone figuring it out as they went. Now AI has to, well, grow up.
ISO 42001 and the EU AI Act are signs of this inevitability. As AI becomes part of our work fabric, we need actual infrastructure for visibility, measurement, and accountability. But the frameworks themselves don't solve the foundational problem. They assume you already know what AI is in use. They assume you have mechanisms for continuous monitoring. They assume you can produce evidence that your controls are operating.
For most organizations, those assumptions don't hold. The sprawl era left enterprises with AI tools scattered across every department, adopted bottom-up, invisible to the systems that are supposed to govern them.
The accountability era demands something different. Not a crackdown on AI usage - that ship has sailed, and it would be counterproductive anyway. What's needed is the measurement infrastructure that enables accountability. The vibes era is over. What comes next is visibility.
The first step toward AI accountability isn't writing a policy or forming a committee. It's building an "always-on" visibility layer that makes governance possible. Start with the inventory. Keep it current. Everything else follows from there.
See what the Living AI Inventory looks like.
Sources & Further Reading
ISO 42001 Standard & Adoption
- Cloud Security Alliance: ISO 42001: Lessons Learned from Auditing and Implementing the Framework
- ISACA: ISO 42001: Balancing AI Speed & Safety
EU AI Act Timeline & Requirements
- Future of Life Institute: EU AI Act Implementation Timeline
Unmonitored AI & Enterprise Visibility