OpenAI grew from 1,000 to 3,000 people in a single year. The operating model never centralized.
Calvin French-Owen spent a year inside that growth curve, then left and published an honest breakdown of how the most valuable AI company on the planet actually works day to day.
Read it the way you’d read a leaked board deck. Because that’s what it is.
Email is dead. Slack is the company.
Calvin received about 10 emails the entire year. Everything else was Slack.
Dozens of workspaces. Layered permissions. Hundreds of channels. If you don’t curate notifications aggressively, you drown by week two.
For a company that tripled headcount in twelve months, this is a deliberate choice. Email is too slow. Slack gives speed, and at OpenAI speed beats order.
That’s not a tooling preference. It’s an architectural decision about how decisions get made.
One monorepo. No style guide.
Most of the code is Python - FastAPI plus Pydantic. There’s Rust where latency matters and a little Go.
There is no enforced style guide. In one file, a clean engineering library written by a Google veteran. In the file next to it, a one-off Jupyter notebook from a PhD who joined last month. Things break more often than they should.
Tests that touch GPUs can take 30 minutes even when parallelized. The backend monolith has become a landfill, the place everything gets dumped because it’s the path of least resistance.
If you’ve worked at a hypergrowth product company, none of this surprises you. The only difference at OpenAI is the dial is turned to maximum.
Duplication is a strategy, not a bug.
This is the part that breaks every MBA framework.
Calvin counted at least 6 internal libraries doing the same job - task queues, agent loops, the basics. Codex existed in 3 to 4 parallel versions from different teams before launch. ChatGPT Connectors followed the same pattern.
Why?
Because there is no central architecture team. Researchers operate as mini-CEOs: they come up with an idea, build a prototype quickly, and ask almost no one for permission. If the prototype works, a team coalesces around it. If it doesn’t, the project quietly dies.
The math is brutal but logical. The cost of three teams writing the same queue library is small. The cost of three teams coordinating on whose queue library to use is enormous, measured in months, not engineer-weeks.
OpenAI buys speed by paying for duplication. Most companies do the opposite and wonder why nothing ships.
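The trade-off above can be made concrete with a back-of-envelope calculation. The numbers below are entirely hypothetical, not figures from the article or from OpenAI; they only illustrate why duplication can be the cheaper option.

```python
# Hypothetical cost comparison: three teams each writing their own small
# queue library vs. three teams coordinating on one shared library.
# All figures are assumed for illustration, measured in engineer-weeks.

TEAMS = 3
BUILD_WEEKS_PER_TEAM = 2    # assumed: a small task-queue library is a ~2-week job
COORD_WEEKS_PER_TEAM = 12   # assumed: aligning on a shared library eats ~a quarter

duplication_cost = TEAMS * BUILD_WEEKS_PER_TEAM    # 3 teams build in parallel
coordination_cost = TEAMS * COORD_WEEKS_PER_TEAM   # 3 teams stuck in alignment

print(f"duplication:  {duplication_cost} engineer-weeks")
print(f"coordination: {coordination_cost} engineer-weeks")
```

Under these assumptions coordination costs six times what duplication does, and that is before counting the calendar delay, which the article argues is the real price.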
Knowledge lives in three places.
Slack. Code. People.
When Calvin asked about the quarterly roadmap, the answer was: “that doesn’t exist.”
Not “we’re working on one.” Not “ask your manager.” It doesn’t exist.
A huge amount of the company’s actual operating context sits in employees’ heads and Slack threads. That’s why the best research managers there aren’t planners, they’re synthesizers. People who can connect twelve disconnected experiments into a coherent product story.
When Codex needed two senior engineers from the ChatGPT team, it got them the next day. No staffing committee. No quarterly resource planning. No reorg memo.
That’s not casual. That’s the system working as designed.
The MBA inversion.
Take every operating principle business school teaches; OpenAI does the opposite. On purpose.
No single source of truth. Knowledge is distributed by design.
No centralized documentation. Code and Slack are the documentation.
Teams duplicate code instead of coordinating endlessly. Cheaper.
Slack replaces half the processes the rest of the world calls “operations.”
This works only while speed beats order.

That's the load-bearing assumption, and it's worth saying out loud, because most companies copying this playbook mistake the playbook for the reason it works. It isn't. The playbook only holds while the marginal value of a shipped product feature exceeds the marginal cost of organizational debt.
For OpenAI in 2026, that math is wildly in favor of speed. New product surface. New customer segment. A research model that keeps getting better. Every quarter, the cost of waiting is larger than the cost of duplication.
That’s not true for most companies. It’s barely true for most labs.
The 10,000-person question.
OpenAI is not Google. It’s not a classic enterprise software company. It’s a research lab that accidentally shipped the most viral consumer product in history and is now selling, in parallel, to enterprises and to governments.
Meta hit this exact operating wall around 10,000 people. The duplication tax stopped being affordable. Coordination overhead became the bottleneck, and the company restructured to absorb it.
OpenAI will hit the same wall. The only question is when.
If you’re a founder reading this and copying the playbook because OpenAI is winning - be careful. The playbook is the consequence of a specific stage and a specific kind of business. It’s not the cause of the winning.
For now, the model holds. Slack is the org chart. Researchers are mini-CEOs. Duplication is policy.
Most enterprise wisdom about "scaling" may turn out to be expensive folklore. Eventually the headcount will force someone to find out.
