A retailer we know deployed AI across three departments last year. Inventory management reduced stockouts by 15%. Customer service cut response times by 40%. Marketing lifted email open rates by 25%. Each team celebrated. The executive summary looked strong.
Overall customer satisfaction scores remained flat.
The individual wins were real. The system-level outcome -- the one the board actually cares about -- did not move. This is not a technology failure. It is a structural one. Three AI implementations deployed in three silos produced three local optima and no aggregate value. The AI did exactly what the org chart told it to do.
BCG surveyed 1,000 CxOs and senior executives across 59 countries in 2024. Seventy-four percent of companies have yet to generate tangible value from AI. Only 26% have made it work at scale. MIT's 2025 research on generative AI pilots puts the failure rate at 95% -- roughly 19 out of 20 AI initiatives fail to deliver measurable business impact. These numbers have been attributed to technical immaturity, change-management failures, and data quality problems. All of those contribute. None of them is the primary variable.
The primary variable is where AI lives in the org chart -- and what that placement tells the organization to optimize for.
The Three Placement Traps
Most companies place AI in one of three places. Each produces a predictable, bounded outcome.
Under IT: the infrastructure ceiling
When AI lives under IT, IT does what IT does: manages infrastructure, controls access, enforces security policy, and optimizes for stability and compliance. These are not bad goals. They are the wrong goals for an AI function whose primary job is to change how the business operates.
The IT-owned AI team becomes a provider of services rather than a driver of transformation. Business units submit requests. The AI team evaluates feasibility, builds integrations, maintains systems. The feedback loop runs from business to IT, not from AI to strategy. Every initiative is scoped to what IT can support -- which means every initiative is filtered through infrastructure constraints before it ever reaches business impact analysis.
The practical outcome: technically sound AI deployments that don't move business metrics. The model works. The integration is stable. The compliance posture is clean. The outcome is a well-maintained pilot that never scales.
Under Marketing: the volume ceiling
Marketing-owned AI teams optimize for marketing goals: content production, personalization, campaign performance, lead generation. These are measurable and frequently impressive. A marketing team with AI capabilities can produce more content, faster, with better targeting. The metrics go up.
The problem is that marketing AI generates no leverage on product development, no improvement to operational efficiency, no connection to customer success or engineering. The improvements are real but bounded. You have made one function faster without changing the underlying system. And when marketing pushes AI-generated personalization that doesn't connect to inventory -- as in the retailer example above -- you get locally optimized outputs that cancel each other at the system level.
Marketing leadership is also structurally incentivized to protect its AI function from cross-functional complexity. Sharing data, aligning with engineering timelines, waiting for compliance review: all of these slow marketing down. The AI team learns to work around dependencies rather than through them.
Under Operations: the efficiency ceiling
Operations-owned AI is the most common placement in industrial and mid-market companies. The logic is sound: operations has the process data, the workflow context, and clear efficiency metrics. AI applied to operations produces cost reductions that appear in real financial statements.
The ceiling is that operational AI is backward-looking by design. It optimizes existing processes rather than redesigning them. It makes the current workflow faster, not different. For companies whose competitive position depends on workflow superiority -- which is most B2B companies -- this means AI acceleration of processes that competitors are also accelerating. The relative position doesn't change. Cost goes down, but so does everyone else's cost.
McKinsey's 2025 research found that fewer than 30% of companies have CEO-level sponsorship of their AI agenda. The implication is that more than 70% are running AI as a departmental initiative -- which means they are running it through one of these three placement traps, with the predictable ceiling attached.
We Have Seen This Before
In the 2010s, companies faced an equivalent challenge with digital transformation. The response was to create Chief Digital Officers -- executives whose mandate was to modernize the business through technology. It did not work out the way the org charts intended.
IMD Business School tracked the CDO failure pattern and identified a consistent arc. The CDO arrives with excitement and a broad mandate. The role becomes the "Chief Dazzling Officer." Then reality sets in. Efforts to consolidate digital projects are interpreted by business unit leaders as interference. Lines of business that initially supported the digital agenda start to withdraw. The CDO becomes "Chief Disconnected Officer." The team shrinks, gets partially absorbed into IT, and the ambitions are quietly scaled back.
The average CDO tenure was 31 months -- shorter than that of any other C-suite role. PwC found that one-third of CDOs left their positions in 2018 alone. The exit rate was not explained by incompetence. IMD's conclusion was direct: CDOs do not fail because they are unqualified. They fail because they are set up to fail.
The structural problem was that the CDO role sat between the CEO mandate and the actual business -- without authority over budget, headcount, or the processes that needed to change. The CDO could recommend. Business units could ignore. And since business unit leaders controlled the P&L, the CDO's leverage was always advisory.
The same pattern is now playing out with AI. Chief AI Officers are being appointed at roughly the same rate as CDOs were in 2015. IBM's 2025 survey found that 26% of organizations now have a CAIO -- up from 11% in 2023. The role is accelerating in adoption. The structural problem is identical.
In many organizations, the Chief AI Officer is doing for AI what the Chief Digital Officer did for digital: creating a dedicated function that can be ignored by the business units that matter, and that exits within three years when the mandate proves toothless. The role is not the problem. The placement is.
What IBM's Data Actually Shows
IBM surveyed more than 600 CAIOs across 22 geographies and 21 industries in 2025. The results contain a data point that clarifies the structural argument: 57% of CAIOs report directly to either the CEO or the board. The other 43% report somewhere else -- typically to the CIO, CDO, or CTO.
Organizations with a CAIO see 10% greater ROI on AI spend and are 24% more likely to outperform peers on innovation. The performance differential is real. But it is not uniformly distributed across CAIOs. It concentrates in the ones with budget authority, cross-functional mandate, and a direct line to the CEO. The CAIOs who report into IT or existing technology functions are operating with the same structural constraints as a departmental AI team -- they just have a more senior title.
IBM's research identifies three focus areas that distinguish high-impact CAIOs: measurement, teamwork, and authority. All three are organizational, not technical. The CAIO who lacks authority over budgets, who cannot convene cross-functional teams, who has no direct reporting relationship with business unit heads -- that CAIO is producing reports, not outcomes.
Foundry's 2025 State of the CIO survey found that 40% of CAIOs report to the CEO and 24% report to the CIO. The CAIOs in that 24% are not running an AI function. They are running an AI feature inside someone else's function. The reporting line is the strategy.
The Silo Tax
HBR research in 2025 documented the mechanism by which organizational silos defeat AI. The term they use is "AI reinforcing silos" -- the observation that AI deployed inside a function tends to make that function more efficient and more isolated simultaneously. The function's AI learns its own data, optimizes its own metrics, and becomes better at producing outputs the rest of the organization cannot consume.
Western Pacific Bank (anonymized in the research) deployed AI in risk management and marketing simultaneously. Risk management's AI flagged certain customers as high-credit-risk. Marketing's AI identified the same customers as high-lifetime-value prospects and targeted them for growth campaigns. Neither team's AI was wrong by its local metric. The organization was actively working against itself.
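The mechanism can be made concrete with a toy sketch. All customer data, scores, and thresholds below are hypothetical, invented for illustration -- not drawn from the research. Two departmental models each apply a purely local objective to the same customers; each decision is defensible by its own metric, and the contradiction only appears when someone owns the intersection:

```python
# Toy illustration of AI-reinforced silos: two departmental models score
# the same customers against purely local objectives. All names, values,
# and thresholds are hypothetical.

customers = [
    {"id": "C1", "credit_score": 540, "projected_ltv": 9200},
    {"id": "C2", "credit_score": 720, "projected_ltv": 8100},
    {"id": "C3", "credit_score": 500, "projected_ltv": 12400},
]

def risk_flags(customers, min_credit=600):
    """Risk management's model: flag anyone below a credit threshold."""
    return {c["id"] for c in customers if c["credit_score"] < min_credit}

def growth_targets(customers, min_ltv=9000):
    """Marketing's model: target anyone above a lifetime-value threshold."""
    return {c["id"] for c in customers if c["projected_ltv"] > min_ltv}

flagged = risk_flags(customers)       # restricted by risk management
targeted = growth_targets(customers)  # courted by marketing

# The system-level contradiction that neither local metric can see:
conflicts = flagged & targeted
print(sorted(conflicts))  # -> ['C1', 'C3']
```

Both models are "right" in isolation; the `conflicts` set only exists for whoever computes the cross-functional view -- which, in a department-by-department rollout, is no one.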
This is not an edge case. It is the default outcome when AI is deployed department by department without a coordinating function that owns the cross-functional view. Each department's AI becomes a locally optimal solution to a locally defined problem. The system-level outcome -- which is the only one that matters for company performance -- is not owned by anyone.
MIT's analysis of why AI pilots fail to scale identified unfocused rollouts as a primary culprit: "success often comes when organizations tackle one pain point at a time, instead of pursuing broad, unfocused rollouts." The key word is unfocused. Broad rollouts across multiple departments without a coordinating function are not broad -- they are scattered. They produce departmental wins like those in the retailer example above: real, measurable, and collectively worthless.
The New Organizational Primitive
The companies generating disproportionate value from AI are not doing it through better technology, larger models, or more aggressive mandates. They are doing it through a specific organizational structure that most companies have not yet built.
Call it the AI integration function. It is not an AI team in the traditional sense -- not a center of excellence that owns model development, not an IT function that manages infrastructure, not a consulting team that produces slide decks. It is a cross-functional coordinating layer that sits between every domain and translates business problems into AI interventions and AI outputs into business decisions.
The characteristics that distinguish this function from the three placement traps:
| Dimension | Departmental AI Team | AI Integration Function |
|---|---|---|
| Reporting line | IT, Marketing, or Operations head | CEO or President; dotted lines to every business unit |
| Primary output | AI systems and tools within the function | Business outcomes that require AI across functions |
| Success metric | Departmental KPIs (efficiency, cost, volume) | Cross-functional outcomes (revenue, NPS, margin) |
| Budget authority | Line item within department budget | Independent AI budget; co-owns business unit budgets for AI initiatives |
| Relationship to business units | Service provider or peer | Embedded partner with authority to convene and coordinate |
| Failure mode | Local optimization; does not scale | Coordination overhead; requires strong leadership support |
Shopify is the clearest public example of what this looks like in practice. Farhan Thawar, Shopify's VP of Engineering, reached out directly to GitHub's CEO for early Copilot access in late 2021 -- before most companies had a defined AI strategy at all. The framing to legal was not "is this allowed?" but "we are doing this; how do we do it safely?" That framing is not a communication tactic. It reflects a structural reality: Shopify's AI function had enough authority and executive proximity to move without waiting for permission.
The result: 80% engineering adoption before Lütke's April 2025 mandate. An internal LLM proxy. MCP servers connected to internal data sources. 1,500 Cursor licenses deployed within weeks. When the memo arrived, it codified an adoption curve that had already happened. The mandate was the announcement after the strategy worked, not the strategy itself.
Duolingo's version of the same story is instructive precisely because it followed the same script without the structural foundation. The memo landed, the backlash was immediate, and von Ahn spent weeks publicly clarifying and announcing the workshops, advisory councils, and experimentation time that should have preceded it. The words were identical. The org chart behind the words was different.
McKinsey's 2025 agentic AI research is explicit about what the transition requires: moving "from siloed AI teams to cross-functional transformation squads" and "from scattered initiatives to strategic programs." The language is clear. The organizational model it describes -- cross-functional, strategically anchored, workflow-redesigning -- is not a center of excellence. It is not a departmental team. It is a new kind of function.
Why This Is Hard
The AI integration function model is structurally threatening to existing leadership. It requires budget authority that currently lives in IT, Marketing, or Operations. It requires the ability to convene and direct work across functions that currently have their own leadership chains. It requires a reporting relationship to the CEO that competes with existing C-suite access.
The CDO parallel is instructive here too. CDOs failed not because they were wrong about digital transformation, but because the business units they needed to change controlled the very levers required to change them. The CDO could diagnose. They could not act. AI integration functions face the same structural resistance.
BCG's research on the 26% of companies generating real AI value identifies a pattern: leaders invest 70% of resources in people and processes, 20% in technology and data, and 10% in algorithms. This is almost the inverse of the typical departmental AI team budget, which centers on model procurement, data infrastructure, and tooling. The winners spend seven times more on people and processes than on algorithms -- and organizational change requires authority, not expertise.
The implication for leadership teams: the decision about where AI lives in the org chart is not a technical decision. It is a political one. Someone powerful will lose budget authority, headcount, and strategic relevance when AI is elevated from a departmental function to a cross-functional one. That political reality has to be decided at the CEO level before the technical work begins. Most companies never have that conversation. They add an AI team to an existing function, announce a mandate, and wonder why outcomes don't match expectations.
The Diagnostic
Before any discussion of AI tools, models, or roadmaps, the org chart question has to be answered. Where your AI initiative currently lives predicts your ceiling more accurately than any other single variable.