McKinsey's 2025 State of AI report covers nearly 2,000 organizations across 105 countries. The headline number is 88%: that is the share of organizations using AI in at least one business function. It is a number that gets cited to show how rapidly AI has penetrated the enterprise. The more important number -- the one that rarely makes the headline -- is 6%.
Six percent of those organizations qualify as AI high performers, defined as achieving 5% or greater EBIT impact attributable to AI investment. The other 94% are not failing to use AI. They are failing to capture its value. The tools are deployed. The subscriptions are active. The announcements have been made. The returns are not materializing at meaningful scale.
This is not a technology adoption problem. The 94% have adopted the technology. It is a value capture problem -- and the research is clear on what separates the organizations that solve it from those that do not.
The Productivity Paradox
Atlassian's 2025 State of Developer Experience report surveyed 3,500 developers. Sixty-eight percent report saving 10 or more hours per week with AI tools. This is the number that gets put on slides. The number that does not get put on slides: 50% of those same developers report losing 10 or more hours per week to organizational friction -- unclear requirements, excessive meetings, poor internal tooling, and context switching across too many competing priorities.
The math for half the developer population: net productivity gain of approximately zero. The tools are working. The organizations have not adapted to capture what the tools are producing.
This is the productivity paradox at the core of the 94%. AI tools have accelerated individual tasks across the organization. Code is generated faster. Content is produced faster. Analysis is completed faster. But the downstream processes that consume these outputs -- review queues, approval workflows, coordination meetings, integration pipelines -- have not changed. Faster inputs into unchanged workflows produce the same outputs, faster. That is not a productivity improvement. It is a bottleneck shift.
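The bottleneck shift can be made concrete with a toy pipeline model. The numbers below are illustrative, not from the research; the point is only that end-to-end throughput is capped by the slowest stage, so accelerating generation alone changes nothing downstream.

```python
# Toy model of a two-stage delivery pipeline: generation feeds review.
# All rates are hypothetical (items per week); the structural point is
# that throughput is bounded by the slowest stage, not the fastest.

def pipeline_throughput(stage_rates):
    """End-to-end throughput is capped by the slowest stage."""
    return min(stage_rates)

before = {"generation": 10, "review": 10}   # balanced pipeline
after  = {"generation": 25, "review": 10}   # AI accelerates generation only

print(pipeline_throughput(before.values()))  # 10
print(pipeline_throughput(after.values()))   # 10 -- still review-bound
```

Until review capacity rises, the 2.5x generation speedup in this sketch buys nothing but a longer queue in front of the reviewers.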
The organizations in the 6% solved the bottleneck problem. They did not just deploy AI. They redesigned workflows around it. That is the distinction McKinsey's data shows clearly: high performers are "redesigning workflows and processes" rather than layering AI on top of existing ones. The redesign is the work. The AI is the enabler of the redesign. Most companies got that backwards.
The 16% Problem
GitHub's analysis of developer time allocation finds that developers spend approximately 16% of their time writing code. This is the activity that AI code generation tools have gotten very good at accelerating -- by 20-55% in controlled studies, with strong positive effects on junior and mid-level developers in particular. The problem is that code writing is 16% of the job. AI tools have gotten very good at accelerating that 16%. The remaining 84% -- understanding requirements, navigating meetings, reviewing code, debugging, documentation, context switching, coordinating across teams -- does not change unless the organization deliberately redesigns it.
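The ceiling this imposes follows Amdahl's-law-style arithmetic. Using the figures above (16% of time on code writing, 20-55% acceleration of that activity), the overall gain is small; the calculation itself is illustrative, not from the cited studies.

```python
# Amdahl's-law-style estimate: if a fraction p of total work is
# accelerated by factor s, overall speedup = 1 / ((1 - p) + p / s).

def overall_speedup(p, s):
    return 1 / ((1 - p) + p / s)

p = 0.16                    # share of developer time writing code (GitHub)
for pct in (0.20, 0.55):    # 20-55% faster code writing (controlled studies)
    s = 1 + pct
    gain = overall_speedup(p, s) - 1
    print(f"{pct:.0%} faster coding -> {gain:.1%} overall gain")
```

Even at the top of the range, a 55% acceleration of the 16% yields roughly a 6% overall gain. That is the hard limit of attacking the 16% alone.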
Amazon's internal AI program is the clearest documented case of what deliberate redesign looks like. The program saved 450,000 hours annually -- not by giving developers better autocomplete but by connecting AI to internal knowledge bases and redesigning how documentation was created and consumed. The bottleneck Amazon solved was not code generation. It was knowledge access: developers spending time searching for information that existed somewhere in the organization but was not surfaced where they needed it. The AI addressed that specific bottleneck. The 450,000 hours came from fixing the 84%, not the 16%.
Most AI programs go after the 16% because it is visible and measurable. Lines of code generated per hour is a number you can report. Time saved on code writing is a number you can put in an executive update. Time saved on the ambiguous work of finding and synthesizing internal knowledge is harder to measure and harder to attribute. So most programs report on what they can measure and leave the 84% untouched. The 6% found ways to address both.
What the 6% Do Differently
Three organizational behaviors consistently distinguish high performers from the rest. They are not technology choices. They are not model selection decisions. They are organizational decisions, which is why they are harder to implement and why most companies have not made them.
1. A named owner for AI adoption
McKinsey found that 57% of high-performing organizations have an explicitly named owner for AI adoption -- one person whose primary accountability is closing the gap between tool availability and business impact. Among low performers, 20% have this role defined. The 37-point gap is the largest single differentiator in the study -- larger than differences in tooling budget, in executive sponsorship, or in technical capability. One named owner with clear accountability predicts outcome better than any other single variable.
This is not a Chief AI Officer role, though it can be. It is the role that answers the question "who is responsible for whether our AI investments are producing business results?" In companies that are winning, that question has a specific name attached to it. In companies that are not winning, the answer is "the team" or "everyone" or "our AI initiative" -- which means nobody. Accountability diffused across a team or committee is accountability that does not exist in practice.
The named owner is not primarily a technical role. The job is organizational: identifying where AI can change how work happens, driving the workflow redesign that captures that change, measuring outcomes rather than activities, and eliminating the coordination friction that makes adoption stall. The most effective people in this role are not the most technically sophisticated AI users. They are the ones with the organizational credibility to change processes and the persistence to measure whether the changes are working.
2. Structured enablement tailored to specific work
Generic AI training produces generic results. A two-hour workshop on "how to use ChatGPT" does not change how developers work in their specific codebase on their specific problems. The 6% invest in enablement that is specific: workshops tied to their actual tools, their actual codebase, their actual workflows. One-on-one coaching for engineers who are blocked. Peer advocate programs that deploy power users as embedded resources for their teams rather than centralizing expertise in a training function nobody attends.
The peer advocate model is worth dwelling on. GitHub's internal research and the AI adoption programs that reference it consistently find that peer advocates outperform formal training in adoption velocity and depth. A developer who is genuinely skilled with AI tools in your specific stack, answering questions in the context of the work you are actually doing, is more valuable than any amount of formal training on the general principles of prompt engineering. The question "how do I get Claude to write better tests for this service?" answered by a colleague who knows the service is worth more than any generic answer. High performers have figured out how to scale that relationship.
Atlassian's research is specific about what structured enablement addresses: it is not just skill gaps. It is confidence gaps. A significant share of developers who have access to AI tools are not using them effectively because they are not confident they are using them correctly -- not because they lack the capability to learn. The enablement that closes the confidence gap is different from the enablement that closes the skill gap. High performers address both.
3. Workflow redesign -- specifically the review bottleneck
The review bottleneck is the silent killer that most AI programs miss. AI accelerates code generation. If review capacity stays flat, you have built a pipeline that is fast at one end and jammed at the other. The net throughput improvement is bounded by the review bottleneck, not by the generation speed. This is not a hypothetical concern -- it is the outcome that Atlassian's data documents directly: developers generating more output and losing the gains to organizational friction at the next step.
High performers address the review bottleneck explicitly. They restructure review processes to account for AI-generated code: clearer acceptance criteria, better test coverage that reduces review burden, automated checks that catch the common errors so human reviewers can focus on the decisions that require judgment. Some have introduced AI-assisted code review as a pre-step before human review, filtering out the mechanical issues and surfacing the structural questions.
Beyond code review, high performers redesign knowledge access. AI tools connected to internal documentation, architecture decisions, and codebase context perform substantially better than the same tools operating without that context. Amazon's 450,000-hour saving came from this. It is not primarily about code generation -- it is about making internal knowledge accessible to the tools that developers are using. The workflow redesign that delivers this is not trivial: it requires structuring internal documentation differently, building integrations between knowledge systems and AI tools, and maintaining those integrations as both the knowledge and the tools evolve.
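At its core, the knowledge-access redesign means retrieving relevant internal context and putting it in front of the model alongside the question. The sketch below is a minimal, entirely hypothetical version: the keyword retriever, the in-memory doc store, and the prompt format all stand in for whatever integration a team actually builds.

```python
# Minimal sketch of grounding an AI assistant in internal docs.
# Hypothetical throughout: real systems use proper search or embeddings,
# not keyword counting over an in-memory list.

def retrieve(query, docs, k=2):
    """Naive keyword retriever: rank docs by query-word overlap."""
    scored = [(sum(w in d.lower() for w in query.lower().split()), d)
              for d in docs]
    return [d for score, d in sorted(scored, reverse=True)[:k] if score > 0]

def build_prompt(query, docs):
    """Assemble retrieved context plus the question into one prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context from internal docs:\n{context}\n\nQuestion: {query}"

docs = [
    "Payments service: retries are handled by the outbox worker.",
    "Auth service: tokens expire after 15 minutes.",
]
print(build_prompt("payments retries", docs))
```

The hard part Amazon's case illustrates is not this assembly step; it is structuring and maintaining the doc store so that retrieval surfaces the right context in the first place.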
The Compounding Problem
The 6% are 12-18 months ahead on adoption. Their developers are accumulating AI-native skills -- not just how to use the tools, but how to think about problems in ways that are amenable to AI assistance. Their organizations are developing institutional knowledge about which AI interventions work for their specific context and which do not. Their codebases are accruing AI-native infrastructure: better documentation, more structured APIs, cleaner interfaces that AI tools can work with more effectively than the legacy interfaces they replaced.
The gap is not closing on its own. The 94% are adopting tools at roughly the same rate as the 6% -- the 88% usage statistic confirms that. But the 94% are not seeing the organizational learning, the workflow redesign, or the infrastructure investment that makes the tools valuable. They are accumulating subscriptions. The 6% are accumulating capability.
The 94% are not failing to use AI. They are failing to change anything else about how they work. AI without workflow redesign is a faster version of the same process with the same bottlenecks. The ceiling on that improvement is low and arrives quickly.
The compounding dynamic works the other direction too. Teams that are 12-18 months ahead have already made the expensive mistakes: the workflow redesigns that did not work, the enablement programs that did not land, the AI tools that turned out to be wrong for their context. They have also made the expensive good decisions: the skills libraries that are now mature, the evaluation infrastructure that is now integrated, the peer advocate networks that are now trusted. Both the expensive mistakes and the expensive successes compound. The organizations that are behind are going to make the same expensive mistakes -- they just do not know which ones yet.
| Dimension | The 94% | The 6% |
|---|---|---|
| AI ownership | Shared or undefined; "the AI team" or "everyone" | Single named owner with cross-functional authority and outcome accountability |
| Enablement approach | Generic training; one-size-fits-all workshops | Codebase-specific coaching; peer advocate programs; confidence gap addressed explicitly |
| Workflow approach | AI layered on top of existing processes | Workflows redesigned around AI, including review and knowledge access bottlenecks |
| Measurement | Activity metrics (usage, licenses deployed, prompts sent) | Outcome metrics (EBIT impact, end-to-end throughput, quality rates) |
| Budget allocation | 70%+ on tools and infrastructure | 70% on people and process; 20% on technology; 10% on algorithms (BCG) |
What This Requires
BCG's related research on budget allocation makes the organizational priority explicit: high performers spend 70% of their AI program resources on people and process, 20% on technology and data, and 10% on algorithms and models. This is almost the inverse of the typical AI program budget, which centers on tool procurement, infrastructure, and model access. The companies that win spend seven times more on people and process than on algorithms and models -- and organizational change requires authority, named ownership, and sustained commitment, not tool subscriptions.
The implication for leadership teams is uncomfortable. The decision to move from the 94% to the 6% is not a technology decision. It is a decision to invest organizational authority and sustained management attention in changing how people work -- not just what tools they have access to. That investment is harder to budget for, harder to measure in the short term, and harder to maintain when the initial enthusiasm fades. It is also the only investment with a demonstrated track record of producing the returns that 94% of organizations are currently missing.
The 6% figure is not a ceiling. It is where the curve is right now, 12-18 months into widespread enterprise AI adoption. The organizations that make the organizational investment today will not be fighting for the 6% in two years. They will be in a different competitive category from the organizations that did not.