Consulting Firms Spent $10 Billion on AI. Their Business Model Didn’t Change.
PwC, McKinsey, Deloitte - they’re all in. But beneath the press releases, the billable‑hour machine hums on. Here’s what that means if you’re the one who actually has to get AI into production.
$10B+
AI investment by Big Four & McKinsey since 2023
75%
of McKinsey fees still billed by the hour
30%
research time saved by McKinsey’s AI chatbot Lilli
6-30%
drop in graduate recruiting at Big Four (2024-25)
The numbers are staggering.
PwC dropped $1 billion over three years and became OpenAI’s largest enterprise customer.
KPMG locked in a $2 billion Microsoft alliance.
Deloitte launched a $2 billion “Industry Advantage” program.
EY invested $1.4 billion and built its own proprietary LLM platform, EYQ.
McKinsey deployed an internal AI chatbot called Lilli to 72% of its 45,000 employees by 2025.
In total, the Big Four and McKinsey have poured over $10 billion into AI since 2023.
And yet almost nothing about how consulting actually works has changed.
A recent deep dive from Future of Consulting calls this out in brutal detail. I want to unpack it from the perspective of someone who actually implements AI in enterprises.
The Productivity Paradox
Here’s the number worth noting: McKinsey’s Lilli saves consultants 30% of their research time.
30%. That’s enormous. In an implementation project, a 30% efficiency gain changes your entire delivery timeline and cost structure.
But here’s what McKinsey did with that 30%: almost nothing visible to clients. The savings stay inside the firm. The billing rates don’t change. The project timelines don’t shrink. The efficiency gain is pure margin, captured by the firm, invisible to the buyer.
This isn’t a McKinsey problem. It’s a structural one. When your revenue model is built on billable hours, any tool that makes your people faster is a threat to your top line, so the rational move is to quietly absorb the gains as margin rather than pass them on as better value for buyers.
We see this from the other side at Customertimes. When we deploy AI that makes a pharma company’s processes 30% more efficient, the client sees it immediately. They measure it. They expect it. Because we’re building solutions, not selling time.
The consulting model incentivizes hiding efficiency. The implementation model incentivizes delivering it.
Only 25% of McKinsey’s Fees Are Tied to Outcomes
The most prestigious consulting firm in the world, the one that advises Fortune 500 companies on “digital transformation,” still collects roughly 75% of its fees based on time spent, not results delivered.
Yes, about a quarter of fees are now outcome‑based, and that’s real progress, but the firm’s core economics still run on hours.
Everyone in enterprise AI knows the industry needs to move toward outcome‑based pricing. Every conference panel says it. Every thought‑leadership piece argues for it.
But the transition is stalled. And it’s stalled for a reason that anyone who’s worked inside large organizations will recognize: the people who would need to approve the change are the same people whose compensation depends on the current model.
If you’re a partner billing $500/hour and AI makes your team twice as fast, outcome‑based pricing means you now need to deliver twice the value to maintain your revenue. Or accept that the same work is worth less. Neither option is appealing when you’re two years from retirement.
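The partner’s dilemma is just arithmetic. Here’s a minimal sketch using the $500/hour rate above and an assumed, purely hypothetical 1,000‑hour engagement:

```python
# Illustrative only: the engagement size and speedup are assumptions,
# not figures from any firm's actual books.
RATE = 500        # partner billing rate, $/hour (from the example above)
HOURS = 1_000     # hypothetical engagement size
SPEEDUP = 2       # AI makes the team twice as fast

billable_before = RATE * HOURS            # revenue at the old pace
billable_after = RATE * HOURS / SPEEDUP   # same deliverable, half the hours

# Under hourly billing, the speedup cuts revenue in half. To keep
# revenue flat on outcome-based pricing, the firm must sell twice
# the outcomes -- or keep billing hours and pocket the efficiency.
outcomes_needed = billable_before / billable_after
```

Half the hours at the same rate means half the invoice; the only ways to hold revenue are doubling delivered value or keeping the hours on the bill. That’s the incentive problem in one line of division.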
The Junior Layer Is Disappearing And Nobody Has a Plan
This is the part that worries me most for the long term.
Graduate recruiting across the Big Four has been sliding, with double‑digit drops in 2024–2025 at several firms. Firms are cutting entry‑level positions because AI now handles the work juniors used to do: data gathering, initial analysis, deck formatting, research summaries.
On the surface, this sounds efficient. Why pay a first‑year analyst $85,000 to do work that GPT‑4 can do in seconds?
But consulting has always been an apprenticeship business. Juniors learned by doing the “grunt work.” They sat in client meetings. They built models that got torn apart by managers. They learned pattern recognition through repetition.
When AI drafts the first pass of every slide deck, junior staff lose the reps of structuring arguments, anticipating objections, and seeing which ideas survive partner review. That’s where the client’s judgment used to be formed.
Remove that layer, and you have a training crisis. In five years, who becomes the senior consultant? Who has the client instincts? Who can read a room and adjust a recommendation on the fly?
We face a similar challenge in enterprise AI implementation. When we automate validation workflows in pharma or quality checks in manufacturing, we need to deliberately design new learning paths for junior team members. The work that used to train them is gone. If you don’t create something to replace it, you end up with a bimodal workforce - senior experts and AI tools, with nothing in between.
In pharma implementations, for example, the junior who used to manually walk through validation logs now needs a different path to learn how deviations actually show up in the data and why QA pushes back.
PowerPoints Don’t Deploy Themselves
Here’s my biggest frustration with the current state of consulting AI: firms are using AI to produce recommendations faster, not to deliver solutions.
A consulting engagement in 2026 still ends the same way it did in 2016: a slide deck. Maybe a nicer one. Maybe it was drafted 30% faster. But the client still gets a PDF, a “roadmap,” and a wave goodbye.
Meanwhile, the client is left to actually build the thing. They hire implementation partners (like us). They discover that half the recommendations don’t account for their legacy systems, their regulatory constraints, or their organizational politics. They spend months translating strategy into working software.
The gap between “we made a deck” and “we shipped a system” is where most AI value now lives.
What This Means If You’re a Buyer
If you’re a healthcare executive, a pharma CTO, or a manufacturing leader evaluating whether to engage a Big Four firm for your AI initiative, here’s what I’d ask:
1. What are you actually buying? Are you buying a strategy deck or a working solution? If it’s a strategy, can your internal team execute it, or will you need another partner?
2. How is the engagement priced? If it’s time‑and‑materials with no outcome guarantees, you’re paying for both the consultants’ learning curve and the AI‑driven efficiency gains they’re keeping.
3. Where’s the implementation plan? Not just a “roadmap,” but an architecture, integration points, and a timeline that reflects your real systems and constraints.
4. What happens after the engagement ends? The most expensive consulting engagement is the one that produces a strategy nobody can implement. Ask who will own, monitor, and evolve the AI systems once the consultants leave, and what budget and skills that requires on your side.
The Real Opportunity
The article from Future of Consulting calls these firms “hollow cathedrals” - impressive from the outside, empty at the core. That’s a provocative phrase, and I think it’s partially right.
But here’s the opportunity: the gap between consulting recommendations and real‑world AI implementation is massive. And it’s growing.
Enterprises are increasingly building internal AI teams. They’re questioning why they’re paying consulting rates for AI‑augmented work. They’re looking for partners who deliver working systems, not slide decks.
This is exactly the shift we’ve been building toward at Customertimes for years: AI solutions that run in production, survive audits, and actually move the metrics that matter in pharma and manufacturing.
The $10 billion that consulting firms invested in AI? Most of it went toward making consultants more productive. Very little went toward making clients more successful.
That’s not an AI revolution. That’s an AI optimization of the same old model.
What are you seeing on your end?
Are consulting firms delivering real AI value in your organization (running systems, measurable lift), or just better‑produced versions of the same advice?
If you’re comfortable sharing details, I’m especially interested in where a consulting AI “strategy” died in implementation, or where an implementation partner actually rescued a stalled initiative. Reply or drop a comment below.
