About the irreversible risk of the Big Four
Until recently, AI at the Big Four lived in a sterile zone.
Innovation labs. Pilot projects. Beautiful slide decks for partners’ offsite retreats. The technology was near the business, adjacent to it, occasionally touching it in controlled environments. But it wasn’t inside the machinery that generates billions in revenue and touches millions of compliance decisions.
In 2025, that glass wall shattered.
Deloitte, EY, KPMG, and PwC didn’t just experiment with agentic AI - they embedded it directly into their core operational processes. The places where mistakes mean real money, reputation damage, and legal liability. Where a hallucination isn’t amusing, it’s a $290,000 refund to the Australian government.
Yes, that actually happened. More on that in a moment.
This is a story about what happens when institutions worth hundreds of billions collectively decide the risk of not adopting AI is greater than the risk of getting it wrong.
When Theory Became Practice
When Deloitte rolled out Zora AI to hundreds of thousands of employees globally, it stopped being an experiment. When EY announced plans to scale from 1,000 AI agents in development to 100,000 by 2028 - with 150 agents already supporting 80,000 tax professionals - that’s not a pilot. That’s infrastructure.
KPMG built TaxBot with a 100-page prompt crafted over months, consolidating partner-written tax advice that had been “stored all over the place” - often on individual laptops. The system now generates 25-page cross-border M&A tax advice drafts in one day. Work that previously required two weeks.
Not two weeks faster. One day total, down from fourteen.
PwC launched agent OS in March 2025 and deployed 25,000 intelligent agents across client operations by year’s end. Not chatbots. Not assistants. Agents that execute transactions, coordinate workflows, make decisions within defined parameters.
This is the Big Four accepting that AI agents are now part of their operating rhythm. Not someday. Now.
The $290,000 Reality Check
In October 2025, Deloitte Australia was caught with AI-generated errors in a government report they’d been paid $290,000 to produce. The 237-page document included references to non-existent academic research papers and a fabricated quote from a federal court judgment.
A Sydney University researcher named Chris Rudge caught it when he noticed a citation attributing to a professor a book that sounded “preposterous” for her area of expertise. He knew instantly it was either hallucinated or the world’s best-kept secret.
Deloitte reviewed the report, confirmed the errors, quietly published a revised version, and agreed to refund the final payment installment. The updated version now includes explicit disclosure that Azure OpenAI was used in its creation.
Here’s the part that matters: this was production work, client-facing deliverables, the kind of document that influences policy decisions.
And this is exactly what accepting the risk looks like in practice.
Australian Senator Barbara Pocock said what everyone was thinking: “The kinds of things that a first-year university student would be in deep trouble for.”
Deloitte didn’t stop using AI after this incident. Neither did any of the other firms. Because the alternative of watching competitors automate while you manually review every footnote is commercially untenable in 2025.
The Pyramid That Can’t Hold
I’ve written before about the death of the utilization pyramid - the economic model where Big Four firms built revenue on thousands of junior employees doing repeatable work at 75% billable utilization.
That pyramid didn’t collapse in 2025. But it definitely cracked.
When KPMG’s TaxBot compresses two weeks of tax analysis into 24 hours, what happens to the junior tax associates who would have spent those two weeks building Excel models and reviewing case law? When EY’s 150 tax agents handle data collection, document review, and compliance work for 80,000 professionals, where do the entry-level hires go?
The work is moving to agents. Not all of it, not yet, but enough that graduate hiring is slowing. Enough that the strongest specialists are leaving for places where AI gives them more leverage and velocity.
Why spend three years doing routine compliance work when you could be at a firm, a startup, or in-house, where you’re immediately designing and managing the agent systems that do the routine work?
The talent pipeline is fundamentally changing shape: the pyramid is becoming a column, with a much wider top and a much narrower base.
The Economics Are Shifting, Quietly
Hourly billing is still alive. Technically. But underneath it, the economics are transforming.
Services are becoming platforms. People plus software plus data plus agents. The unit of delivery isn’t a body working forty hours a week anymore, it’s an outcome delivered by a hybrid team where some members are silicon and some are carbon-based.
PwC’s global commercial technology and innovation officer Matt Wood told Business Insider something revealing: in 2025, organizations fitted AI around existing workflows. In 2026, the work they’re doing is about “helping clients flip that model” - designing processes with AI in mind from the outset.
EY’s global managing partner for growth and innovation, Raj Sharma, said the power of AI agents is forcing his firm to reconsider its commercial model. Instead of charging based on hours and resources spent, they’re exploring “service-as-a-software” approaches where clients pay based on outcomes.
This is exactly what I described in the utilization pyramid series. When configuration, integration, testing, and documentation go from weeks to hours, time stops being a fair proxy for value. Pricing realigns to outcomes: revenue lift, cycle-time reduction, accuracy, SLAs.
The shift hasn’t fully arrived yet. But in 2025, the Big Four positioned themselves for it. They built the infrastructure. They trained the talent. They accepted the commercial risk of operating in this transitional period where they’re charging for hours while agents do more of the work.
This can’t last. The math doesn’t work. Within 18-24 months, we’ll see the commercial models start bending toward outcomes and platform economics, or we’ll see clients demanding significant hourly rate reductions to account for agent productivity.
Our Corner of the Universe
In our business - Salesforce implementation at Customertimes - a significant portion of work has always been fixed-price. We’ve always built accelerators: reusable components, configuration templates, deployment automation.
Now those accelerators include AI Factory capabilities for project setup, agent systems for documentation and testing, intelligent tools for data migration and validation.
For large clients, these tools still face mental resistance. Compliance concerns. Security reviews. IT architecture committees that want seventeen layers of approval before anything touches production data.
But that friction is decreasing. In late 2024, if you mentioned using AI for test data generation or documentation, you’d get weeks of security review. In late 2025, the question is more often “which tool are you using and how do we govern it?”
The infrastructure of trust is being built, project by project, contract by contract.
What Nobody’s Saying Clearly
AI didn’t kill the Big Four’s business model in 2025. It didn’t break smaller integrators either.
But it knocked the whole industry off balance.
The hourly billing model still works today because we’re in a transitional period where:
Clients haven’t yet figured out how to price outcomes properly
Firms can still justify rates based on expertise and brand while agents do more execution
The talent market hasn’t fully repriced to reflect that junior work is automating
Insurance and liability frameworks haven’t caught up to agent-driven work
All four of these things are unstable. They’re resolving slowly, but they’re resolving in one direction: toward fewer bodies, more agents, outcome-based pricing, and new risk frameworks.
The firms that accepted this risk in 2025 did so because they understand the alternative is worse. If you wait until the commercial models are clear and the liability frameworks are settled, you’ve already lost. Your competitors are two years ahead in learning how to manage hybrid human-agent teams at scale.
The 2026 Race
Here’s what changed: the race is no longer about who implements AI fastest.
It’s about who learns to manage hybrid reality most effectively.
Who builds the trust infrastructure - the governance frameworks, the oversight mechanisms, the quality controls - that let clients feel confident in agent-driven work?
Who figures out the commercial models that don’t bankrupt their own firms while fairly pricing the value agents deliver?
Who develops the talent pipelines that produce professionals who can design, manage, and audit agent systems instead of doing the work agents now handle?
And critically: who navigates the inevitable mistakes - the hallucinations, the edge cases, the moments when agents confidently deliver wrong answers - without losing client trust or regulatory standing?
The Big Four crossed the Rubicon in 2025. They embedded agents into their core operations. They accepted the operational risk, the commercial uncertainty, and the reputational exposure.
They did this because standing on the shore watching was no longer an option.
The question for 2026 is whether the firms that committed to this transformation first will be the ones who figure out how to make it work reliably, profitably, and at scale.
Or whether they’ll spend the next two years debugging their bold commitments while clients get increasingly sophisticated about what they’ll pay for and what guarantees they expect.
The glass wall is broken. There’s no rebuilding it. The only direction is forward, into a hybrid reality that nobody has fully figured out yet.