In April 2025, two CEOs sent memos that went viral in the same week. Shopify's Tobi Lutke told staff that before requesting headcount, they would need to prove AI could not do the job. Duolingo's Luis von Ahn sent a memo that was nearly identical in language and structure. Both memos were framed as bold leadership -- clear signals that AI was no longer optional. The tech press treated them as two instances of the same story: the AI-first mandate arrives at a major company.
The outcomes were very different. One memo landed as the culmination of three years of intentional groundwork and accelerated an adoption curve that was already at 80%. The other landed into an organization without the foundation to receive it, generated immediate public backlash, and required weeks of clarification interviews, explanatory videos, and reactive program announcements.
The words were the same. The organizational context was not. And the companies currently copying the Shopify memo template without copying the Shopify organizational investment are setting themselves up for the Duolingo outcome.
Shopify's Actual Timeline
The April 2025 Lutke memo was not the beginning of Shopify's AI adoption. It was the public announcement of something that had already happened. Understanding what preceded it explains why it worked.
Duolingo's Actual Timeline
Duolingo's memo arrived without the foundation. The language was nearly identical to Shopify's. The organizational context was not.
Von Ahn's memo hit Duolingo's engineering team as a surprise -- not a surprise in the sense that nobody knew AI was important, but a surprise in the sense that the team had not been given clear policies about which tools they could use on which data, had not had codebase-specific enablement, and had not had the peer advocate structure that would have let them answer "how do I actually use this for my specific work?" The memo told them AI was now required without telling them how to use it safely, specifically, or effectively for what they were actually building.
The backlash was immediate and public. Within weeks, von Ahn was doing damage control: clarification interviews, a video explaining the intent of the memo, a Financial Times interview, a New York Times interview. Then -- weeks after the mandate -- the announcement of the workshops, advisory councils, and experimentation time that should have preceded the mandate, not followed it. The enablement that makes a mandate land as an accelerant was being built in response to the failure of the mandate to land as an accelerant.
Duolingo's sequence was: mandate, backlash, enablement. Shopify's sequence was: enablement, infrastructure, adoption, mandate. The memos were nearly identical. The sequences were opposite. The outcomes were predictable from the sequence.
The Compliance Theater Problem
When a mandate arrives without the organizational foundation to support it, the failure mode is not visible disagreement. Visible disagreement generates the Duolingo backlash: public, documentable, correctable. The more dangerous failure mode is silent compliance theater -- developers demonstrating AI usage for performance reviews without changing how they actually work.
Compliance theater looks like success on the metrics that mandates generate: AI tool adoption rates go up, usage statistics look healthy, the dashboard shows engagement. But the underlying work -- the actual decisions, the actual code, the actual outputs that reach customers -- does not change. Developers have learned which AI-generated artifacts to show and which decisions to make the same way they always made them. The mandate has produced the appearance of transformation without any of the substance.
This is the most common AI program failure mode at companies that have cleared the "use AI at all" threshold. It is also the hardest to detect, because the metrics that mandates generate are not the metrics that reveal compliance theater. Usage rates and adoption statistics look the same whether developers are genuinely changing how they work or performing AI usage for accountability purposes. The only signal that distinguishes genuine adoption from compliance theater is business outcomes -- and those take months to measure.
By the time the compliance theater pattern is visible in business outcome data, the mandate has been in place long enough that the organization believes it has addressed the AI adoption challenge. The problem is not that AI adoption failed -- the problem is that the metrics confirmed success while the outcomes told a different story. Unwinding that false confidence is harder than building the foundation before the mandate.
What Has to Exist Before a Mandate
GitHub's internal AI adoption playbook -- published by their program director and developed from running AI adoption programs across their own engineering org and with major customers -- describes the actual intervention required before a mandate can land productively. Three elements are non-negotiable, in order.
1. Policy clarity
What tools can developers use? On what data? Under what conditions? In what environments? Without policy clarity, developers default to the most conservative interpretation they can defend -- which is often "use nothing that might be a compliance risk." Or, more dangerously, they make individual decisions about what seems reasonable, with no shared framework. The result is inconsistent AI usage across the organization with inconsistent compliance exposure, and no way to learn systematically about what is working.
Policy clarity is not about restriction. It is about removing the compliance anxiety that blocks adoption. The developers who are not using AI tools are not, primarily, resistant to them. They are risk-averse in the absence of clear organizational guidance on what "safe" looks like. Policy answers that question. It does not need to be comprehensive to be useful -- a clear, narrow policy that says "here is what you are allowed to do and here is who owns the policy" eliminates the anxiety that causes most non-adoption.
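A narrow policy can be small enough to live in a single checked-in file. The sketch below is hypothetical -- the tool names, data classes, and owning team are placeholders, not recommendations -- but it shows the shape of a policy that answers "what can I use, on what data, and who decides":

```yaml
# ai-usage-policy.yml -- hypothetical example; tools, data classes,
# and owners are placeholders
owner: platform-engineering      # who answers questions and updates this file
review_cadence: quarterly

approved_tools:
  - name: GitHub Copilot
    allowed_data: [source_code, internal_docs]
    forbidden_data: [customer_pii, credentials, unreleased_financials]
  - name: Claude Code
    allowed_data: [source_code, internal_docs]
    forbidden_data: [customer_pii, credentials, unreleased_financials]

defaults:
  unlisted_tools: ask-owner      # route the question instead of creating shadow usage
  unlisted_data: treat-as-forbidden
```

The specific entries matter less than the property the file creates: a developer can answer "am I allowed to do this?" in thirty seconds, without guessing and without defaulting to the most conservative interpretation they can defend.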
2. Codebase-specific enablement
Not generic AI training. Not a lunch-and-learn about prompt engineering. Enablement that is specific to the tools your team is using, your codebase, your engineering standards, your internal APIs, and your specific workflow. The developer who knows how to get Claude to write good tests for a generic Python service does not automatically know how to get it to write good tests for your specific service with your specific test framework and your specific internal libraries.
Peer advocates are the highest-leverage enablement mechanism available. A developer who is a power user of AI tools in your specific stack, answering questions from colleagues in the context of the actual work they are doing, is worth more than any amount of formal training. The questions that block adoption are specific: "how do I handle this error case?" and "why is this suggestion wrong for our codebase?" Those questions have specific answers that only someone who knows your codebase and your tools can give. Peer advocates provide that. Formal training does not.
3. Configured tooling
Out-of-the-box Copilot or Cursor or Claude Code works on generic code. It is useful. It is not as useful as the same tool configured to understand your architecture, your coding standards, your internal APIs, and your organizational context. Shopify's MCP servers connected to internal data are the clearest public example of what this looks like: the AI tools are not operating on generic knowledge -- they are operating on Shopify-specific knowledge, which makes them substantially more effective for Shopify's specific problems.
Configured tooling is not a one-time investment. It requires maintenance as the codebase evolves, as internal documentation changes, and as the tools themselves evolve. But the initial configuration investment -- setting up context about your system, your standards, your common patterns and anti-patterns -- pays back immediately in adoption quality. Developers who try AI tools configured for their specific context stay with them. Developers who try generic tools and find the suggestions frequently wrong for their context abandon them.
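In practice, the lightest form of this configuration is a checked-in instructions file that the tool reads at the start of every session -- Claude Code reads a `CLAUDE.md` at the repo root, and Copilot reads `.github/copilot-instructions.md`. The contents below are illustrative placeholders, not a real project's standards:

```markdown
# CLAUDE.md -- project context for AI coding tools (illustrative contents)

## Architecture
- Monolith in `app/`, extracted services in `services/`. New endpoints go
  through the gateway in `services/gateway`, never directly into the monolith.

## Standards
- Tests use pytest with the fixtures in `tests/conftest.py`. Do not create
  ad-hoc database mocks -- use the `db_session` fixture.
- All external HTTP calls go through `lib/http_client.py` (retries, tracing).

## Anti-patterns
- Do not call `requests` directly; it bypasses our tracing.
- Do not write raw SQL; use the query builders in `lib/queries/`.
```

A file like this is the per-repo analogue of Shopify's MCP integrations: it stops the tool from suggesting generically correct code that is locally wrong, which is precisely the failure that drives developers to abandon unconfigured tools.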
The 90-Day Foundation
GitHub's adoption playbook describes an 8-pillar, 90-day intervention that precedes any mandate. The pillars address the three prerequisites above in sequence: policy clarity in the first 30 days, enablement infrastructure in days 30-60, configured tooling and peer advocate deployment in days 60-90. A mandate applied after 90 days of this foundation lands as an accelerant. Applied before it, the mandate lands as a source of anxiety that generates compliance theater in the quiet case and public backlash in the visible one.
| Phase | What Gets Built | Output |
|---|---|---|
| Days 1-30 | AI usage policy with legal, security, and IT; baseline adoption audit; executive sponsor who uses AI publicly | Compliance anxiety removed; leadership signal established; current state documented |
| Days 30-60 | Peer advocate identification and activation; centralized resource hub; codebase-specific onboarding materials | Expert network established; specific enablement available; organic adoption begins |
| Days 60-90 | Tool configuration for organizational context; MCP/context integrations; peer advocate program fully active | Tools working for specific codebase; adoption velocity increasing; culture shift underway |
| Day 90+ | Mandate (if needed); measurement infrastructure; outcome tracking | Mandate accelerates existing adoption; outcomes measurable; iteration loop established |
The Lutke memo was not the strategy. It was the announcement that the strategy had worked. The companies copying the memo without copying the three-year foundation are not implementing the Shopify strategy. They are implementing the memo. That is a different thing, with a different outcome.
The right question for any leadership team considering an AI mandate is not "how do we write the memo?" It is "what would have to be true about our organization for the memo to be an accelerant rather than an anxiety source?" Answering that question -- honestly, specifically, with reference to policy clarity, enablement infrastructure, and configured tooling -- determines whether the mandate will produce the Shopify outcome or the Duolingo outcome. The memo is the last step, not the first.