Most AI transformations fail because they start from the outside in — with tools, vendors, and pilots. This guide introduces the Core-and-Orbit Model: a framework for building AI transformation around your organization’s unique intelligence, not someone else’s technology.
Why pursue AI transformation? “AI transformation” is shorthand for “organizational transformation powered by AI.” You’re not reactively transforming your organization because of AI; you’re using AI to execute a transformation that solves longstanding problems and yields long-term competitive advantage.
Currently, AI transformation is failing at scale. 80% of AI projects fail (RAND). Only 1% of organizations describe their AI rollouts as mature (McKinsey). 56% of CEOs report no revenue or cost benefits (PwC). The problem isn’t the technology — it’s that most organizations build their AI strategy around tools, instead of around what makes them uniquely competitive.
Four takeaways from this guide:
1. “AI transformation” means organizational transformation powered by AI, driven both reactively (keeping up with competitors) and proactively (seizing new opportunities).
2. Build your AI strategy around your organizational intelligence, not around tools.
3. Commit to the inner orbit of durable execution disciplines: adoption, proficiency, governance, measurement, and maturity.
4. Treat AI transformation as a permanent capability, run in repeating 90-day cycles, not a project with an end date.
“AI transformation” is useful shorthand, but it can be misleading. It implies that your organization was fine, but then AI came along, so you decided to transform it.
The picture is actually more complex. “AI transformation” encompasses two different, complementary processes: a reactive one, driven by the need to keep up as competitors adopt AI, and a proactive one, inspired by the opportunity to merge past lessons learned and organizational advantages with new capabilities. Keep both drivers in mind as you pursue AI transformation.
To sum up, an AI transformation should leave your organization “lean and mean.” “Mean” has a somewhat bad reputation, but here we use it in a good sense: aggressive and effective.
In September 2025, CNBC published a detailed look at JPMorgan Chase’s AI strategy. The picture was striking: over 200,000 employees using the bank’s proprietary LLM Suite platform, an estimated 15 million hours of work saved annually, and more than $2 billion in business value generated. The bank had built its AI strategy around something no competitor could replicate: its proprietary data asset spanning over $10 trillion in daily transaction flow, decades of financial intelligence, and the accumulated expertise of 900 data scientists, 600 ML engineers, and a 200-person AI research team. CEO Jamie Dimon called AI “a living, breathing part of how we do business.”
That same year, a very different AI transformation story was playing out at Klarna. The Swedish fintech had aggressively deployed OpenAI-powered chatbots to replace customer service staff, cutting headcount from 5,527 to approximately 3,400 — a 40% reduction.
Klarna’s initial results looked promising: an AI chatbot handled two-thirds of customer inquiries. But by early 2025, internal reviews revealed the AI lacked the nuance and empathy that experienced service agents carried. Customer satisfaction suffered. Quality dropped. CEO Sebastian Siemiatkowski made a public admission that few tech leaders are willing to make: “We went too far.” Klarna began quietly rehiring human staff, as reported by Bloomberg.
Both companies are in financial services. Both went all-in on AI. Both had ample resources.
The difference wasn’t the technology. JPMorgan and Klarna had access to the same foundation models, the same APIs, the same tools. The difference was what they built around. Figure 1 shows major barriers that organizations face to effective AI transformation.
Figure 1. Many companies face barriers to effective AI transformation.
(Source: The State of Enterprise AI 2026)
JPMorgan built its AI transformation around its organizational intelligence: the proprietary data, domain expertise, and institutional knowledge that no competitor could replicate by buying the same software. Klarna built its transformation around the tools themselves, deploying AI to replace human judgment rather than amplify it.
At the World Economic Forum in Davos in January 2026, Microsoft CEO Satya Nadella put a name to this distinction. He argued that “firm sovereignty,” a company’s ability to embed its tacit knowledge in models it controls, matters more than geographic data location. His framing was direct: “The future belongs to companies that treat models as components, and treat orchestration, context, and proprietary knowledge as their true differentiators.”
The data supports this emphatically. BCG’s 2025 global analysis of over 1,250 firms found that only 5% of organizations are achieving AI value at scale. McKinsey’s 2025 State of AI report found that, while 88% of organizations use AI in at least one function, only 1% describe their rollouts as mature. RAND found that more than 80% of AI projects fail, twice the rate of non-AI IT projects. PwC’s 2026 Global CEO Survey of 4,454 CEOs across 95 countries found that 56% have realized neither revenue nor cost benefits from AI.
The pattern is unmistakable: the vast majority of AI transformations are failing. And they’re failing not because the technology doesn’t work, but because they’re building from the outside in, starting with tools and hoping value follows, instead of building from the core out.
This guide introduces a different approach.
When most executives hear “AI transformation,” they picture a deployment initiative. Pick tools. Run pilots. Scale the winners. Hire an AI team. Stand up a center of excellence. Report progress to the board.
This is how most organizations approach AI transformation. It’s also why most AI transformations stall.
AI transformation is not a technology deployment. It’s the multi-year process of making your organization’s unique intelligence (the tacit knowledge, domain expertise, decision patterns, and operational wisdom that live in your people and processes) the foundation of how AI creates value for you.
Three distinctions matter:
It’s not a project; it’s a permanent capability. Projects have end dates. AI transformation doesn’t. The landscape shifts every six months: new models, new capabilities, new competitive dynamics. The organizations that win aren’t the ones that “completed” their transformation; they’re the ones that built the muscle to continuously adapt. Bain’s 2025 Technology Report found that most AI prototypes fail to reach production at scale. The difference isn’t a better prototype; it’s a better operating capability.
It’s not AI replacing work; it’s AI amplifying what you uniquely know. Klarna tried replacement. JPMorgan built amplification. Like JPMorgan, Walmart built Wallaby, a family of retail-specific LLMs trained on decades of proprietary product, customer, and operational data: knowledge no general-purpose model carries. John Deere has spent over a decade building AI around its unmatched understanding of precision agriculture. Starbucks processes 100 million weekly transactions through Deep Brew, its proprietary AI platform, turning customer behavior into personalized experiences no competitor can replicate. The pattern is consistent: the companies succeeding at AI transformation are amplifying what makes them unique, not automating what makes them generic.
It’s not tool selection; it’s strategic self-knowledge. Before you pick a single tool, you need to answer three questions: What does our organization know that no one else knows? Where does that knowledge live? And how do we make AI serve it? If you can’t answer these questions, your AI strategy is indistinguishable from your competitor’s. And an indistinguishable AI strategy produces no competitive advantage.
The rest of this guide gives you a framework for answering those questions and a playbook for executing on the answers.
Every AI transformation framework we’ve studied starts from the outside: which tools to deploy, which processes to automate, which vendors to evaluate. BCG offers the 10-20-70 rule. McKinsey prescribes Steer-Scale-Institutionalize. Andrew Ng recommends starting with pilot projects. Databricks frames it as Process-People-Platform.
All useful. All incomplete. Because none of them answer the fundamental question: What should your AI transformation orbit around?
The Core-and-Orbit Model starts from the center.
At the center of the model is something that doesn’t change when a new foundation model drops: your organizational intelligence. This is the tacit knowledge, domain expertise, institutional judgment, and operational wisdom that make your organization uniquely competitive. It’s the knowledge that lives in your people and processes, not the data in your databases.
Around the core sit five execution disciplines: adoption depth, proficiency development, governance and visibility, measurement infrastructure, and maturity progression. These are multi-year commitments that remain true regardless of which models, vendors, or tools dominate, and they are the things that compound over time.
At the periphery sit the things that change—and should be designed to change. Which models you use. Which vendors you bet on. Where the highest ROI sits this quarter. Build vs. buy decisions. Regulatory specifics. The pace of agentic AI. You invest here, but you hold your decisions loosely.
Build from the core out. Never from the outside in.
Most organizations get this backwards. They start by selecting tools (outer orbit), skip the execution disciplines (inner orbit), and never identify what makes their organization uniquely valuable (core). When the outer orbit shifts, as it will, they lose their footing because they never built from the center.
AI models are commoditizing. Satya Nadella said it directly at Davos: models are becoming components. Access to GPT, Claude, Gemini, or Llama is not a differentiator; your competitors have the same access you do. Sonya Huang, a partner at Sequoia Capital, framed it sharply: “The companies that survive have something OpenAI can’t or won’t do. They’re not better chatbots. They’re solving incredibly hard problems that require domain expertise, proprietary data, and years of customer workflow integration.”
The differentiator is what you feed those models. And “what you feed them” is not just the data in your systems. It’s the organizational intelligence that has never been written down:
Decision patterns. How your best people actually make calls under uncertainty. The risk analyst who senses a bad deal before the numbers confirm it. The portfolio manager whose market intuition has been built over decades. Goldman Sachs built its proprietary GS AI Platform, designating it as the single secure gateway for all generative AI, specifically to channel the firm’s accumulated financial intelligence through AI rather than around it.
Operational intuition. Accumulated experience that lives in habits, not handbooks. The warehouse manager who reroutes shipments before the system flags a delay. The manufacturing technician whose instinct for quality catches defects no sensor detects. Siemens is building an Industrial Foundation Model, a massive AI system specialized in the language, physics, and logic of the industrial world. They’re building this internally because general-purpose models cannot carry this kind of domain-specific operational knowledge.
Relationship knowledge. Who to call, who delivers, who doesn’t. The procurement lead who knows which suppliers perform under pressure. The account manager whose client relationships carry institutional memory that no CRM captures. This knowledge is relational, contextual, and largely invisible to systems.
Domain expertise. Deep understanding of your specific industry that no general-purpose model carries. John Deere’s understanding of farming operations. Bloomberg’s financial data corpus. The clinical judgment of an experienced physician: research published in npj Digital Medicine shows that expert physicians outperform AI diagnostic models by 15.8% in accuracy, a gap driven almost entirely by tacit clinical knowledge.
A December 2025 analysis called “The Uncodifiable Advantage” argued that LLMs can structurally only learn from knowledge that has been externalized into text. Tacit knowledge, by definition, hasn’t been. This makes your organization’s tacit knowledge the one asset your competitors cannot replicate by buying the same tools that you buy.
The first question of AI transformation isn’t “which AI tools should we deploy?” It’s “where does our most valuable organizational knowledge live, and how do we build AI around it?”
This is where AI proficiency becomes strategic, not just operational. In our AI Proficiency Guide, we define proficiency across nine dimensions, from model intuition to agentic capability. But here’s the insight most organizations miss: proficiency isn’t just about how well your people use AI. It’s about how effectively they encode their unique expertise into AI.
The Level 5 “AI-Native Orchestrator” on the proficiency spectrum isn’t simply someone who’s good at prompting. It’s the trader who’s learned to externalize her market intuition into an AI workflow. It’s the engineer who’s taught an agent to replicate his debugging instinct. It’s the logistics coordinator who’s built a system that embeds her supplier knowledge into automated routing decisions.
Your most proficient people are the ones who are best at transferring organizational intelligence into AI systems. That’s why proficiency development, found in Stage 3 of the AI Maturity Model, isn’t a training initiative. It’s a knowledge capture strategy. Every time a power user builds a sophisticated AI workflow, they’re externalizing tacit knowledge that was previously locked in their head.
This reframes the entire proficiency investment. You’re not just making people faster. You’re building an institutional knowledge asset that compounds over time — and survives even when individuals leave.
MIT Sloan and BCG found that only 10% of organizations achieve significant financial benefits from AI. The critical differentiator: those 10% intentionally invest in organizational learning, not just machine learning. They build systems where human expertise flows into AI and AI insights flow back into human practice. The strategic focus isn’t deploying tools. It’s creating a learning loop between your people and your AI.
The outer orbit will shift every quarter. New models will drop, vendors will pivot, regulations will evolve. But the inner orbit holds steady. These are the disciplines that compound over years, and they’re what separate the 5% achieving AI value at scale from the 60% reporting minimal gains.
1. Adoption Depth
You’ll always need to know what AI your people are actually using, across the full landscape of AI-first products, AI-augmented features, vertical solutions, homegrown systems, and autonomous agents. The specific tools will change. The need for complete visibility across every tool, team, and employee won’t. Without this, you’re governing in the dark and measuring nothing.
→ See: The Complete Guide to AI Adoption and The AI Adoption Workbook
2. Proficiency Development
This is the mechanism by which organizational intelligence gets encoded into AI systems. As discussed above: proficiency isn’t a training program; it’s a knowledge capture strategy. The models will get better every quarter. The need for people who can push them to do what only your organization needs them to do? That need is permanent. OpenAI’s 2025 State of Enterprise AI report found a 6x engagement gap between power users and typical employees. That gap is your proficiency opportunity—and your competitive leverage.
→ See: The Complete Guide to AI Proficiency
3. Governance and Visibility
Regulations only tighten. Tool sprawl only accelerates. Most CIOs estimate their organization uses 60-70 AI tools; actual monitoring typically reveals 200-300 tools in use. Organizations spend 3-5x what they think on AI, and they have unknown exposures to unsanctioned, “shadow” AI. (Which can be an opportunity, but only if you know about it and can manage it.) And as agentic AI matures, governance expands to cover autonomous actions: not just data inputs, but what AI does independently on your behalf. The principle of governance is constant, even as the specific rules evolve: you can’t govern what you can’t see.
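The visibility principle can be made concrete. As a minimal sketch (the tool names, user counts, and inventory format below are hypothetical, and a real observed set would come from network or SaaS monitoring), a governance system might compare a sanctioned-tool register against the tools actually in use to surface shadow AI:

```python
# Minimal sketch of a shadow-AI visibility check.
# Sanctioned list and observed inventory are illustrative examples only.

def shadow_ai_report(sanctioned: set[str], observed: dict[str, int]) -> dict:
    """Split observed AI tools into sanctioned usage and shadow usage."""
    shadow = {tool: users for tool, users in observed.items()
              if tool not in sanctioned}
    return {
        "sanctioned_in_use": sorted(set(observed) & sanctioned),
        "shadow_tools": shadow,
        # fraction of in-use tools that were never approved
        "shadow_ratio": len(shadow) / len(observed) if observed else 0.0,
    }

sanctioned = {"copilot", "internal-llm-gateway"}
observed = {"copilot": 1200, "internal-llm-gateway": 400,
            "consumer-chatbot": 310, "code-assistant-x": 45}

report = shadow_ai_report(sanctioned, observed)
print(report["shadow_tools"])   # tools in use but never approved
print(report["shadow_ratio"])
```

The point of the sketch is the discipline, not the code: until the observed inventory exists, the shadow set (and the 3-5x spending gap that hides inside it) is invisible by construction.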
→ See: The Complete Guide to AI Governance
4. Measurement Infrastructure
You can’t manage what you can’t measure. The specific metrics will evolve; leading indicators will shift as AI capabilities change, and the financial translation layer will mature as organizations get better at connecting AI usage to business outcomes. But the discipline of connecting AI activity to measurable impact is permanent. Without it, you’re spending on faith. The 56% of CEOs who report no benefits from AI? They’re not measuring. The 12% who report both cost and revenue benefits? They are.
→ See: The Complete Guide to AI Impact
5. Maturity Progression
Maturity progression is the five-stage journey from visibility and controls through adoption measurement, proficiency development, workflow optimization, and agentic deployment. Where you are on the maturity curve determines what’s possible next, and what’s premature. You can’t skip stages, but you can accelerate them. Your position on this curve is the single best predictor of whether the next AI initiative will succeed or become another failed pilot.
→ See: The Complete Guide to AI Maturity
These aren’t five separate initiatives. They’re one integrated operating system for AI transformation. The Core-and-Orbit Model connects them: adoption tells you what’s happening, proficiency tells you how well, governance keeps it safe, measurement proves the value, and maturity tracks the progression. Together, they form the inner orbit: the stable, compounding disciplines that make your AI transformation durable, regardless of what happens in the outer orbit.
The inner orbit is where you commit. The outer orbit is where you deliberately stay flexible. The mistake most organizations make is treating variables as if they were constants: locking into a vendor for three years, betting the strategy on one model’s capabilities, or writing a transformation roadmap that assumes that today’s landscape is permanent.
The landscape is moving too fast for that. Here’s what to invest in, but hold loosely:
Which models and tools dominate. Today’s frontier model is tomorrow’s commodity. Six months from now, the model landscape will look different. Design your architecture for model-agnosticism. JPMorgan and Walmart both built proprietary platforms that can swap underlying models without rebuilding workflows. IBM’s “Client Zero” initiative embeds AI across 70+ workflows using an internal platform that isn’t locked to any single model provider. The question isn’t “which model is best?” It’s “can we switch when something better arrives?”
The pace of agentic AI. Agentic AI could reach production maturity in 18 months; it could take five years. Gartner predicts over 40% of agentic AI projects will be canceled by the end of 2027. Build governance frameworks that can accommodate autonomous agents, but don’t restructure your organization around capabilities that aren’t production-ready yet. Stage 5 of the maturity model is a direction, not a deadline.
Regulatory specifics. GDPR, the EU AI Act, sector-specific rules for healthcare, financial services, and critical infrastructure: these are all still forming. The principle of governance is a constant (inner orbit). The specific regulations are variables. Invest in governance infrastructure that can absorb new requirements without starting over every time a new rule drops.
Where the highest ROI sits. It shifts as capabilities evolve. Six months ago, the biggest gains came from code generation. Today, customer-facing agents are showing outsized returns. Tomorrow, autonomous workflows may deliver the next step change. Reassess quarterly. Use your measurement infrastructure, the inner orbit discipline, to follow the value, not your assumptions.
Build vs. buy. This changes as open source models mature and API costs drop. In 2024, 47% of AI solutions were built internally; by 2025, 76% were purchased. The calculus will keep shifting. Revisit annually. The only constant: whatever you build, build it around your core organizational intelligence.
Review the outer orbit every 90 days. If you’re still running the same tool strategy you had a year ago, you’re not being adaptive; you’re being complacent.
AI transformation is not a project with a completion date. It’s a capability you build — and rebuild — continuously. The landscape moves too fast for static multi-year roadmaps. By the time you’ve executed year two of a three-year plan written in 2025, the foundation model landscape, the regulatory environment, and the competitive dynamics will have shifted underneath you.
Instead: run repeating 90-day transformation cycles that deepen over time. Each cycle follows the same five-step loop. What changes over time isn’t the rhythm — it’s the depth.
1. Assess. Where does your most valuable organizational knowledge sit right now? What’s changed in the landscape since last cycle? Which outer orbit variables have shifted? Where are the biggest gaps between what your people know and what your AI systems can leverage?
2. Prioritize. Which knowledge domains should AI amplify this cycle? Where’s the highest-value intersection of your unique tacit expertise and current AI capability? Not everything can move at once. Pick the domains where encoding organizational intelligence into AI will create the most competitive advantage.
3. Execute. Run targeted initiatives that encode organizational intelligence into AI workflows. Develop proficiency where it matters most this cycle. Build or adjust the tools and workflows, but build them around the core, not around the vendor.
4. Measure. Did AI amplify what you intended? Are you capturing knowledge, or just automating tasks? Use the five dimensions of AI impact (effectiveness, quality, time, revenue, and cost) to evaluate. If the measurement shows activity without amplification, the initiative is tool deployment, not transformation.
5. Adapt. Reassess the outer orbit. Are the tools still the right ones? Has a new capability emerged that changes your priorities? Should the build-vs-buy balance shift? Feed every lesson into the next cycle’s assessment.
Early cycles (0-6 months): Establishing visibility across your AI landscape. Running the knowledge audit: mapping where organizational intelligence lives. Getting the inner orbit disciplines in place: baseline adoption measurement, initial proficiency assessment, governance infrastructure, first measurement frameworks.
Mid cycles (6-18 months): Moving from mapping to encoding. Building workflows that systematically transfer tacit knowledge into AI systems. Scaling proficiency across the teams that carry the most valuable domain expertise. Connecting measurement infrastructure to business outcomes. The 90-day loop starts producing compounding returns.
Mature cycles (18 months+): The compounding advantage becomes visible. Each AI workflow builds on the knowledge encoded in previous cycles. Agentic capabilities begin operating on your proprietary knowledge base. The gap between your AI’s capabilities and the capabilities of a competitor—a company that has access to the same models, but not your organizational intelligence—widens. IBM reported $4.5 billion in productivity gains through its “Client Zero” transformation, with 3.9 million employee hours saved in 2024 alone. That kind of value doesn’t come from a pilot. It comes from years of disciplined, compounding investment in an AI implementation effort that’s built around what the organization uniquely knows.
The failure data is overwhelming: 80% project failure rates (RAND), 95% of pilots that never scale (MIT), 42% of companies abandoning most initiatives (S&P Global). Behind these numbers, the same mistakes appear again and again.
Picking tools before understanding where your organizational intelligence lives. You end up with impressive technology that amplifies nothing unique. This is the single most common failure pattern, and it’s how you produce an AI transformation that looks identical to your competitor’s. If it’s identical, it creates no competitive advantage. You’ve spent millions on table stakes.
Assigning it a budget, a timeline, a project manager, and a “done” date. AI transformation is a permanent capability, not a deliverable. The companies winning, such as JPMorgan, Walmart, John Deere, Siemens, and IBM, are years into their transformations, but still iterating every quarter. Insurance group Ping An started its technology transformation in 2013. Netflix has been building its data flywheel for more than two decades. There is no finish line.
“Everyone gets Copilot” is not a strategy. If your AI transformation looks identical to every other enterprise that bought the same tool bundle, it isn’t creating competitive advantage; it’s institutionalizing parity. The value comes from AI that amplifies what you uniquely know, not AI that does what everyone else’s AI also does.
Building AI around the data in your systems while ignoring the expertise in your people’s heads. The structured data is the easy part, and also the less defensible part. The hard part, and the real moat, is the unstructured, uncodified organizational intelligence that powers your best decisions. As Nadella argued at Davos 2026: firms that can embed their tacit knowledge into models they control will define the next era of competition.
Signing three-year vendor contracts. Building everything on one model’s API. Writing a transformation roadmap that assumes today’s landscape is permanent. The outer orbit will shift, bringing new models, new capabilities, new cost structures, and new regulations. Design for it. Model-agnosticism isn’t a technical preference; it’s a strategic imperative.
Deploying tools without investing in the capability to use them deeply. This produces the pattern EY’s 2025 Work Reimagined Survey documented: 88% daily AI usage, but only 5% using AI in advanced ways. Deloitte’s State of AI in the Enterprise 2026 survey found that fewer than 60% of employees with approved AI tools use them regularly. Activity without proficiency is noise, not transformation. And without proficiency, tacit knowledge never gets encoded; the core remains untapped.
Tracking statistics such as logins, session counts, and license utilization, instead of asking the question that really matters: “Is AI making our unique organizational knowledge more powerful?” If your metrics can’t distinguish between an employee who dismissed 47 autocomplete suggestions and one who built a sophisticated AI workflow that encodes decades of domain expertise, then your measurement system is not telling you anything useful.
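To illustrate the distinction (the event names and weights below are hypothetical, a sketch of the idea rather than a recommended scoring model), a measurement system can weight usage events by depth instead of counting raw activity:

```python
# Sketch: score AI usage by depth of engagement, not volume of activity.
# Event taxonomy and weights are illustrative assumptions, not a standard.
DEPTH_WEIGHTS = {
    "login": 0.0,                  # activity, not value
    "autocomplete_dismissed": 0.0,
    "autocomplete_accepted": 0.2,
    "prompt_session": 1.0,
    "workflow_built": 10.0,        # tacit knowledge encoded as a reusable asset
}

def depth_score(events: dict[str, int]) -> float:
    """Depth-weighted usage score for one employee's event counts."""
    return sum(DEPTH_WEIGHTS.get(name, 0.0) * count
               for name, count in events.items())

dismisser = {"login": 60, "autocomplete_dismissed": 47}
builder = {"login": 20, "prompt_session": 30, "workflow_built": 2}

print(depth_score(dismisser))  # 0.0: plenty of activity, nothing encoded
print(depth_score(builder))    # 50.0: fewer logins, far more leverage
```

Under a login-count metric, the dismisser looks three times more engaged than the builder; under a depth-weighted one, the ordering reverses, which is exactly the signal a transformation measure needs to capture.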
The next three years will determine which organizations build a durable AI advantage and which spend billions achieving parity. The technology is available to everyone. The foundation models will keep improving and keep commoditizing. The tools will keep proliferating.
What won’t commoditize is what your organization uniquely knows. The accumulated judgment, the operational instinct, the domain expertise, the relationship intelligence that your people carry: those are the assets that no competitor can replicate by buying the same software.
The organizations that recognize this, that build from the core out, commit to the inner orbit disciplines, and keep the outer orbit deliberately adaptive, will be the elite that achieve AI value at scale. The rest will keep chasing tools, running pilots, and wondering why the board keeps asking the same question every quarter.
AI transformation isn’t a technology initiative. It’s a strategic capability. Build it from the core.
Larridin is the AI execution intelligence platform that gives enterprises complete visibility into how AI is being adopted, how proficiently it’s being used, and whether it’s delivering real business impact. If you’re building an AI transformation strategy that starts from the core (your organizational intelligence), Larridin provides the measurement infrastructure to track progress across every discipline in the inner orbit: adoption, proficiency, governance, and impact.
Learn how Larridin enables AI transformation