<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Max Votek]]></title><description><![CDATA[Co-founder of Customertimes, where we help enterprises implement software and AI in pharma, healthcare and manufacturing. Writing about business, tech, and lessons from those projects. Florida-based, passionate about sports and innovation.]]></description><link>https://maxvotek.com</link><image><url>https://substackcdn.com/image/fetch/$s_!6f6z!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1bdace3-ad0f-46f2-9d08-04ba4f79b858_800x800.jpeg</url><title>Max Votek</title><link>https://maxvotek.com</link></image><generator>Substack</generator><lastBuildDate>Thu, 16 Apr 2026 00:56:29 GMT</lastBuildDate><atom:link href="https://maxvotek.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Max Votek]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[maxvotek@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[maxvotek@substack.com]]></itunes:email><itunes:name><![CDATA[Max Votek]]></itunes:name></itunes:owner><itunes:author><![CDATA[Max Votek]]></itunes:author><googleplay:owner><![CDATA[maxvotek@substack.com]]></googleplay:owner><googleplay:email><![CDATA[maxvotek@substack.com]]></googleplay:email><googleplay:author><![CDATA[Max Votek]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Six-Dollar Secret: Why the Next Trillion-Dollar Company Will Look Nothing Like a Software Company]]></title><description><![CDATA[A deeper dive into Sequoia&#8217;s thesis and what it means for builders right 
now]]></description><link>https://maxvotek.com/p/the-six-dollar-secret-why-the-next</link><guid isPermaLink="false">https://maxvotek.com/p/the-six-dollar-secret-why-the-next</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Wed, 08 Apr 2026 14:23:10 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8c774bd3-3160-45d0-8201-274196bc78fc_1154x1349.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There&#8217;s a ratio sitting at the center of everything happening in AI right now, and most people building software are either ignoring it or haven&#8217;t fully reckoned with its consequences.</p><p><strong>For every dollar a company spends on software, six go to services.</strong></p><p>That single ratio explains where the real AI opportunity lives and why the companies that figure it out early will redefine what it means to be a software business.</p><p>Julien Bek at Sequoia Capital just published a piece that crystallizes the logic better than anything I&#8217;ve read this year. His thesis: <strong>the next trillion-dollar company will be a software company masquerading as a services firm.</strong> Let me unpack why I think he&#8217;s right, what it means for how we build, and what we see at Customertimes every day that confirms it.</p><h2><strong>The Founder&#8217;s Dilemma</strong></h2><p>Every founder building an AI tool is haunted by the same question: <em>what happens when the next model version makes my product a feature?</em></p><p>It&#8217;s a fair fear. If you sell the tool, you&#8217;re in a race against the model. GPT-5, Claude 4, Gemini Ultra: each release is a potential existential threat to your differentiation. 
But here&#8217;s the flip side that changes everything: <strong>if you sell the work instead of the tool, every improvement in the model makes your service faster, cheaper, and harder to compete with.</strong></p><p>A company might spend $10,000 a year on QuickBooks and $120,000 on an accountant to close the books. The next great company won&#8217;t sell better accounting software. It will just close the books. The software stack becomes infrastructure. The value delivered is the outcome.</p><p>It&#8217;s the difference between being a vendor and being a business partner. It&#8217;s the difference between the tool budget and the work budget, and the work budget is six times larger.</p><h2><strong>Intelligence vs. Judgement</strong></h2><p>To understand where AI is actually going, you need one conceptual framework: the distinction between intelligence and judgement.</p><p><strong>Intelligence</strong> is rules-based work. Translating a spec into code. Testing. Debugging. Medical coding. Filling insurance forms. Screening resumes. Matching invoices. The rules can be breathtakingly complex, but they are rules. Given enough data and compute, these tasks are fundamentally learnable.</p><p><strong>Judgement</strong> is different in kind, not degree. It&#8217;s the decision about what to build next. Whether to take on tech debt. When to ship before it&#8217;s ready. Which strategic bet to make. Judgement requires experience, taste, and instinct accumulated over years. It&#8217;s what you&#8217;re actually paying for when you hire a great CFO or a senior partner at a consulting firm.</p><p>Here&#8217;s the critical insight: <strong>AI has already crossed the intelligence threshold in software engineering, and every other profession is next.</strong></p><p>A year ago, most Cursor users treated AI as fancy autocomplete. Today, more tasks are being started by agents than by humans. Software engineering accounts for over half of all AI tool usage across professions. 
Every other category is still in single digits. The reason software got there first is structural: it&#8217;s primarily intelligence work. Code either compiles or it doesn&#8217;t. Tests pass or they fail. The feedback loops are tight and the outputs are verifiable.</p><p><strong>Today&#8217;s judgement becomes tomorrow&#8217;s intelligence.</strong></p><p>As AI systems accumulate proprietary data about what good decisions look like in a domain, the frontier shifts. What required a seasoned professional last year becomes automatable this year. The line doesn&#8217;t hold still.</p><h2><strong>Copilots and Autopilots: Two Very Different Bets</strong></h2><p>Bek draws a clean distinction that I think will define how we look back on this era.</p><p>A <strong>copilot</strong> sells the tool. Harvey sells to law firms. Rogo sells to investment banks. The professional is the customer, the AI makes them more productive, and the human stays responsible for the output. This is the right model when AI is still developing and you&#8217;re augmenting judgement, not replacing intelligence.</p><p>An <strong>autopilot</strong> sells the work. Crosby sells to the company that needs an NDA drafted. WithCoverage sells to the CFO who needs insurance, not to the broker. The customer is buying the outcome directly. No professional intermediary. The AI handles the full task.</p><p>The higher the intelligence ratio in a field, the sooner autopilots will win. And once an autopilot establishes itself in the intelligence-heavy outsourced work, it starts accumulating the data and the client trust needed to push toward the judgement work over time. The copilot-to-autopilot transition is already happening. 
The starting position determines the trajectory.</p><h2><strong>Start Where Work Is Already Outsourced</strong></h2><p>This is the strategic insight that I think deserves the most attention from anyone building in this space right now.</p><p>If a task is already outsourced, three things are true simultaneously:</p><ol><li><p>The company has already accepted that this work can be done externally</p></li><li><p>There&#8217;s an existing budget line that can be substituted cleanly</p></li><li><p>The buyer is already purchasing an outcome; they&#8217;re pre-trained to pay for results</p></li></ol><p><strong>Replacing an outsourcing contract with an AI-native services provider is a vendor swap. Replacing headcount is a reorg.</strong></p><p>One of those is a procurement conversation. The other is an organizational transformation that requires board approval, union negotiation, and change management. Start with the vendor swap.</p><h2><strong>The Opportunity Map: Where the Money Is</strong></h2><p>Let me walk through the numbers that stuck with me from Bek&#8217;s piece, because the scale is genuinely staggering:</p><p><strong>Accounting and Audit: $50-80B outsourced in the US</strong> The talent crisis here is acute and structural. Roughly 340,000 accountants have left the profession over five years while demand grew. 75% of CPAs are nearing retirement. The licensing pathway is long, and starting salaries lag tech and finance. Nobody is filling this gap except AI. Companies like Rillet are building the AI-native ERP that will close the books; Basis started as a copilot and is moving toward autopilot.</p><p><strong>Supply Chain and Procurement: $200B+</strong> Most enterprises only actively manage their top 20% of suppliers. The long tail, the other 80%, gets zero attention because it&#8217;s not economical to have humans do the work. Contract leakage runs 2&#8211;5% of total procurement spend. The autopilot doesn&#8217;t need to displace anyone here. 
It&#8217;s capturing work nobody was doing. That&#8217;s found money with no incumbent to fight.</p><p><strong>Insurance Brokerage: $140-200B</strong> Standard commercial lines are highly standardized. The broker&#8217;s core value-add is shopping across carriers and filling forms: pure intelligence work. Distribution is fragmented across tens of thousands of small brokers running identical processes. No single incumbent controls the customer relationship. The conditions for disruption are perfect.</p><p><strong>Recruiting and Staffing: $200B+</strong> The largest services market on the list. The top of the hiring funnel (screening, matching, outreach) is pure intelligence. Closing a candidate and assessing culture fit is judgement. The wedge is the intelligence-heavy, high-volume work where matching is standardized. Juicebox, Mercor, and others are building across the spectrum.</p><p><strong>Management Consulting: $300-400B</strong> This is the hardest nut to crack. Almost all judgement, or so it appears. The interesting question is whether AI can disaggregate consulting into intelligence components (data gathering, benchmarking, research synthesis) and judgement components (strategic recommendations, stakeholder navigation). Automate the former, elevate the latter. The pioneers here are still TBD, but the prize is enormous.</p><p><strong>IT Managed Services: $100B+</strong> Every SMB outsources its IT. Patching, monitoring, user provisioning, alert triage: intelligence work running on repeat across thousands of identical environments. The existing software layer sells tools to the MSP. Nobody has yet sold &#8220;your IT just runs&#8221; directly to the SMB as a guaranteed outcome. That gap is the opportunity.</p><h2><strong>Services-Led Growth</strong></h2><p>At Customertimes, we live this thesis. Our products work because we don&#8217;t parachute in with a tool and hope for adoption. We sit with clients as integrators. 
We learn the process before we touch the technology.</p><p>That sequencing matters enormously. You can&#8217;t build a great autopilot for accounting from the outside. You need to understand the edge cases that break the rules. The handoffs that happen between teams. The exceptions that actually define what &#8220;closing the books&#8221; means in practice for this particular company.</p><p>Services-led growth isn&#8217;t a business model compromise. It&#8217;s a data acquisition strategy. Every engagement teaches you what good judgement looks like in that domain. That proprietary understanding is the moat that makes the eventual product defensible against the next model release.</p><h2><strong>The Window Is Open, But Not Forever</strong></h2><p>The copilot companies built in 2024 and 2025 are sitting on a paradox. They have the customer relationships and the domain knowledge. They also face the innovator&#8217;s dilemma: moving to autopilot means cutting their own customers out of work those customers are currently paid to do.</p><p>That tension is real, and it&#8217;s slow. Which means the window for pure-play autopilots built from scratch is genuinely open right now.</p><p>If you&#8217;re building from scratch in any of these verticals, you don&#8217;t inherit the copilot&#8217;s constraint. You can sell the outcome from day one. You can structure your pricing around results, not seats. You can build your data moat around what good output looks like, not what a professional found helpful.</p><p>The ratio that started this piece - six dollars in services for every one dollar in software - is the size of the untouched market. 
Most of it hasn&#8217;t seen a serious AI autopilot yet.</p><p>The builders who understand that are going to build very large companies.</p><p><em>This post was informed by Julien Bek&#8217;s essay<a href="https://sequoiacap.com/article/services-the-new-software/"> &#8220;Services: The New Software&#8221;</a> published by Sequoia Capital on March 5, 2026. Highly recommended reading for anyone building in this space.</em></p>]]></content:encoded></item><item><title><![CDATA[The infrastructure layer nobody’s talking about. Why Cloudflare might be the default deploy target for AI-built software.]]></title><description><![CDATA[I&#8217;ve been deploying apps on Cloudflare for months.]]></description><link>https://maxvotek.com/p/the-infrastructure-layer-nobodys</link><guid isPermaLink="false">https://maxvotek.com/p/the-infrastructure-layer-nobodys</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Fri, 03 Apr 2026 20:37:19 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/77652c0b-a005-4295-8a71-9216076473f6_1280x1279.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;ve been deploying apps on Cloudflare for months. I still can&#8217;t find the catch.</p><p>Spin up a site in minutes. Deploy to a public URL. Connect a custom domain. Pay nothing.</p><p>But that&#8217;s not the interesting part.</p><h2><strong>The world changed. Most infrastructure didn&#8217;t.</strong></h2><p>Cloudflare fits a world where agents, not people, write code.</p><p>Tools like Claude Code and OpenAI Codex need environments where execution is immediate and global. The bottleneck in an AI-assisted workflow is rarely the code generation; it&#8217;s everything that comes after. The deploy. The environment setup. The debugging of infrastructure that has nothing to do with your actual product.</p><p>Traditional cloud providers were designed for a different era. 
An era where a DevOps engineer would spend a week setting up the right IAM roles, VPC configurations, and load balancers before a single line of application code touched production. That workflow made sense when humans were the slowest part of the system.</p><p>Now the human isn&#8217;t the bottleneck. The infrastructure is.</p><p>Cloudflare was built differently, not as a data center you rent, but as a network you deploy to. That distinction matters more than it sounds.</p><h2><strong>What&#8217;s already on the free tier</strong></h2><p>Before getting into why this matters for AI workflows, it&#8217;s worth being concrete about what you&#8217;re actually getting:</p><ul><li><p><strong>Edge infrastructure + CDN</strong> - your app is global by default. Not &#8220;global with extra configuration.&#8221; Global on deploy.</p></li><li><p><strong>D1</strong> - serverless SQL database. Runs at the edge. No connection pooling headaches. Familiar SQL interface.</p></li><li><p><strong>R2</strong> - object storage that&#8217;s S3-compatible with zero egress fees. That last part is quietly significant. Egress fees are how AWS extracts margin from companies that scaled without noticing. R2 removes that trap entirely.</p></li><li><p><strong>Vectorize</strong> - a vector database built for AI applications. Semantic search, RAG pipelines, embeddings, the infrastructure is already there.</p></li><li><p><strong>Workers</strong> - serverless compute that runs in V8 isolates, not containers. Cold start is measured in milliseconds, not seconds.</p></li><li><p><strong>Security + DDoS protection</strong> - built in, not bolted on. Not a separate product you configure. It&#8217;s part of what Cloudflare is.</p></li></ul><p>For experiments, prototypes, and early production this is more than enough. 
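To make the Workers-plus-D1 combination concrete, here is a minimal sketch of a Worker that serves rows from a D1 database. The handler shape (fetch(request, env)) and the query chain (env.DB.prepare(...).all()) follow Cloudflare's documented APIs, but the binding name DB and the posts table are illustrative assumptions, not part of any real project.

```typescript
// Sketch of a Cloudflare Worker reading from D1 (serverless SQL at the edge).
// Assumptions for illustration: a D1 binding named `DB` and a `posts` table.
declare const Response: any; // global in the Workers runtime (and Node 18+)

interface D1Like {
  prepare(sql: string): { all(): Promise<{ results: unknown[] }> };
}

const worker = {
  async fetch(_request: unknown, env: { DB: D1Like }) {
    // One query, no driver, no connection pool: D1 is exposed as a binding.
    const { results } = await env.DB.prepare(
      "SELECT slug, title FROM posts"
    ).all();
    return new Response(JSON.stringify(results), {
      headers: { "content-type": "application/json" },
    });
  },
};

export default worker;
```

That is the entire backend: no VPC, no IAM roles, no load balancer, which is exactly why an agent can generate and deploy something like this end to end.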
Most startups that raised a seed round are running on less.</p><h2><strong>How the agent workflow actually changes</strong></h2><p>Here&#8217;s what an AI-assisted build cycle looks like without the right infrastructure:</p><p>You prompt Claude Code to build a feature. It writes the code. Now you need to test it somewhere real. So you either run it locally (which doesn&#8217;t reflect production) or you go through a deploy process that involves environment variables, build steps, maybe a Docker container, maybe a staging environment that&#8217;s drifted from prod. By the time you&#8217;ve verified the thing works, you&#8217;ve lost the thread.</p><p>Here&#8217;s what it looks like with Cloudflare:</p><p>You prompt the agent. It writes the code. It deploys. You have a URL. You&#8217;re testing in 45 seconds.</p><p>The feedback loop compression is the product. Not Cloudflare specifically, but Cloudflare happens to be the platform that makes this possible at zero cost and near-zero configuration.</p><p>This is why I keep saying agents operate best in environments built for immediacy. Slower infrastructure doesn&#8217;t just slow down deployment; it breaks the cognitive flow of working with an agent. You context-switch. You lose momentum. The session ends.</p><p>Fast infrastructure keeps the loop tight. And tight loops produce better software faster.</p><h2><strong>The comparison</strong></h2><p>Let&#8217;s be direct about the alternatives.</p><p><strong>Vercel</strong> is excellent and the DX is genuinely good. But it&#8217;s optimized for frontend and Next.js specifically. Once you need a database, object storage, or anything backend-heavy, you&#8217;re reaching outside Vercel&#8217;s ecosystem. The free tier is also more constrained for teams doing serious volume.</p><p><strong>AWS</strong> is the right answer at scale. It&#8217;s not the right answer for the first 90% of a project&#8217;s life. 
The configuration overhead is real, the learning curve is steep, and the billing surprises are legendary. You don&#8217;t give an AI agent access to AWS and expect it to figure it out cleanly.</p><p><strong>Railway and Render</strong> are solid for containerized apps. Good developer experience, reasonable pricing. But they&#8217;re not edge-native, and they don&#8217;t come with the integrated storage layer Cloudflare provides.</p><p>Cloudflare&#8217;s position is specific: it wins on the combination of global edge compute + integrated storage + free tier generosity + deploy speed. No single competitor beats it on all four simultaneously right now.</p><p>That might change. It probably will. But right now there&#8217;s a window.</p><h2><strong>The CMS replacement angle</strong></h2><p>There&#8217;s a direction here that most people are sleeping on.</p><p>Cloudflare is quietly becoming a backend for content-driven websites, not just apps. The traditional stack for a marketing site or content platform was: WordPress or a headless CMS, a hosting layer, a CDN on top, maybe a caching plugin.</p><p>That stack has a lot of moving parts. Each one is a potential failure point, a vendor relationship, a bill.</p><p>Workers + D1 + R2 can replace most of it. You get secure isolation of components (each Worker runs in its own V8 isolate; they can&#8217;t interfere with each other). You get a content delivery network that&#8217;s not separate from your compute; they&#8217;re the same thing. You get a database that lives at the edge, close to users.</p><p>We moved our own site there. The result wasn&#8217;t just cheaper. It was structurally simpler. Fewer vendors. Fewer abstractions. Faster.</p><p>The direction this points toward: Cloudflare as the default backend for AI-generated websites. 
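For a sense of how little configuration that stack needs, here is what a minimal wrangler.toml wiring Workers, D1, and R2 together might look like. The table names follow Wrangler's documented config format, but the project name, bindings, and IDs are placeholders, so check Cloudflare's current Wrangler documentation for the exact keys:

```toml
name = "content-site"              # placeholder project name
main = "src/index.ts"              # the Worker entry point
compatibility_date = "2026-01-01"

# D1: serverless SQL at the edge, exposed to the Worker as env.DB
[[d1_databases]]
binding = "DB"
database_name = "content-db"
database_id = "<your-database-id>"

# R2: S3-compatible object storage with zero egress fees, exposed as env.MEDIA
[[r2_buckets]]
binding = "MEDIA"
bucket_name = "content-media"
```

One file, one platform: compute, database, and object storage declared together, which is what makes the single-vendor replacement of the old CMS stack plausible.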
An agent builds your site, deploys it to Cloudflare, and the entire thing (compute, storage, CDN, security) is handled by one platform with one login and one (free) bill.</p><p>That&#8217;s a meaningfully different world than what we had two years ago.</p><h2><strong>The honest part</strong></h2><p>I don&#8217;t know when Cloudflare starts monetizing this aggressively. They probably will. The free tier is clearly a land-grab: get developers dependent on the platform before flipping the pricing lever.</p><p>But even if they do, the economics still likely work. R2&#8217;s zero egress fees alone make it competitive with S3 at any scale. Workers pricing is consumption-based and remains cheap at moderate volume. The free tier might shrink. The underlying value proposition probably holds.</p><p>Right now though, there&#8217;s an asymmetry - what you get for free is genuinely disproportionate to what it costs.</p><p>That asymmetry is time-limited. Use it.</p><h2><strong>The actual takeaway</strong></h2><p>Most infrastructure conversations in the AI-agent world focus on the models. Which model, which context window, which tool-use capabilities.</p><p>The infrastructure layer is underrated. Agents need somewhere to run. The environment you give them shapes what they can build and how fast they can build it.</p><p>Cloudflare is becoming the execution layer where AI-built applications go live instantly. It&#8217;s not the only option. But for the combination of speed, integration, and cost - nothing else is quite there yet.</p><p>If you&#8217;re building with AI agents and not using Cloudflare as your execution layer, you&#8217;re probably overcomplicating your stack.</p><p>The simpler the infrastructure, the faster the agent works. That&#8217;s the whole insight.</p><p>Have you moved anything to Cloudflare? 
Curious what you hit, both good and bad.</p>]]></content:encoded></item><item><title><![CDATA[OpenClaw as a Foundation for Vertical AI Agents.]]></title><description><![CDATA[And the regulated industries that adopt it first will have a serious competitive moat.]]></description><link>https://maxvotek.com/p/openclaw-as-a-foundation-for-vertical</link><guid isPermaLink="false">https://maxvotek.com/p/openclaw-as-a-foundation-for-vertical</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Wed, 01 Apr 2026 16:24:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ae14bda7-4f12-4e4b-bc04-e0511fa30249_1280x714.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most people see OpenClaw and think: personal AI assistant. Smarter search. A better way to write emails faster.</p><p>That&#8217;s not wrong. It&#8217;s just not interesting.</p><p>What&#8217;s interesting is what happens when you stop thinking about OpenClaw as a productivity layer and start thinking about it as an agent runtime - one that connects to tools, runs automations, controls browsers, triggers workflows, and stays always-on across the channels your teams already live in.</p><p>That reframe changes everything.</p><h2><strong>The runtime question</strong></h2><p>The conversation about AI in the enterprise has been stuck in the same loop for two years: &#8220;How do we get employees to use it?&#8221;</p><p>&#8220;How do we make sure it doesn&#8217;t hallucinate?&#8221;</p><p>&#8220;How do we prove ROI to the board?&#8221;</p><p>These are real questions. But they&#8217;re downstream of a more fundamental one:</p><p><em>What does it actually mean to deploy an AI agent inside a regulated industry?</em></p><p>Not a chatbot sitting on top of your ERP. Not a summarization tool bolted onto your CRM. 
An agent that can see data, make decisions, trigger actions, and operate autonomously within defined boundaries, inside industries where the cost of getting it wrong isn&#8217;t a bad quarter. It&#8217;s a regulatory event.</p><p>That&#8217;s the question NVIDIA was trying to answer when they built NemoClaw.</p><h2><strong>What NemoClaw actually does</strong></h2><p>NemoClaw is an open-source security layer built on top of OpenClaw. The architecture is worth understanding because it changes what&#8217;s possible.</p><p>Kernel-level sandboxing. Privacy routing. Default-deny networking.</p><p>The last one is the important one. The agent can&#8217;t do anything (connect to a system, access data, trigger an action) unless it&#8217;s been explicitly allowed. You&#8217;re not trying to enumerate what&#8217;s forbidden. You&#8217;re defining exactly what&#8217;s permitted and locking out everything else.</p><p>That&#8217;s a compliance architecture.</p><p>For a pharma company, that means an agent that can monitor adverse drug reaction reports across global submissions can&#8217;t accidentally reach outside its defined data boundary.</p><p>For a manufacturer, an agent monitoring quality deviations on the line can&#8217;t exfiltrate production IP through an unconstrained API call.</p><p>You stop asking &#8220;how do we prevent the bad thing from happening&#8221; and start asking &#8220;what do we want to explicitly allow.&#8221;</p><p>That&#8217;s a much better question.</p><h2><strong>What this looks like in production</strong></h2><p>At Customertimes, we deploy NemoClaw for enterprise clients in manufacturing and pharma. Not pilots. Not proof of concept. Production environments, with agents connected directly to SAP, Salesforce, Databricks, and Snowflake.</p><p>Here&#8217;s what the use cases actually look like, the real version:</p><h3><strong>Manufacturing</strong></h3><p>Quality control agents that check production output against your standards in real time. 
During the batch, not after. Deviations get flagged before they become recalls. The cost differential between catching a problem at inspection versus catching it post-shipment is enormous. An agent that runs that check continuously, across every line, without fatigue, is not a nice-to-have.</p><p>Predictive maintenance agents connected to your equipment data. The agent is not just reading sensor outputs; it&#8217;s comparing against historical failure patterns, cross-referencing maintenance logs, and scheduling interventions before downtime occurs. Every unplanned hour of downtime in a manufacturing environment has a real dollar figure. Usually a large one.</p><p>Supply chain visibility agents that pull across systems most companies have siloed. Instead of five dashboards and a weekly ops meeting, one agent that surfaces what&#8217;s actually moving and what&#8217;s at risk.</p><h3><strong>CPG &amp; Pharma</strong></h3><p>Pharmacovigilance agents monitoring adverse drug reaction signals across incoming reports. The volume of data in a global pharmacovigilance operation is beyond what human teams can process at the speed regulations require. An agent that reads across that corpus, identifies emerging signal patterns, and surfaces the ones that need human review is what makes the pharmacovigilance team able to actually do its job.</p><p>Promotional materials review agents. Marketing content in pharma has to be reviewed against approved claims before it goes out. Every piece. This is a significant operational bottleneck at most companies. An agent that runs that review, flagging non-compliant language before it reaches the medical, legal, regulatory review cycle, compresses timelines and reduces rework.</p><p>CRM and territory intelligence agents for field sales. Reps don&#8217;t need more data. They need the right data, surfaced at the right time. 
An agent that pulls from CRM, identifies territory gaps, and surfaces them in a rep&#8217;s existing workflow is more useful than any dashboard.</p><h2><strong>Which industry moves first</strong></h2><p>My read: manufacturing gets there before pharma, but pharma is where the value is higher.</p><p>Manufacturing has a shorter feedback loop. The ROI on preventing one unplanned downtime event or catching one quality deviation before a recall is immediate and measurable. The compliance environment, while real, is less complex than pharma. Procurement cycles are faster.</p><p>Pharma is harder. The regulatory environment is more demanding, the data is more sensitive, the approval process for any new system is longer. But the value of catching an adverse event signal early, compressing a promotional review cycle, and improving pharmacovigilance coverage is substantial. The companies that figure out the compliance architecture will have a durable advantage.</p><p>Healthcare is a different conversation. The interoperability problem is still severe enough that the agent runtime questions are secondary to the data infrastructure questions.</p><h2><strong>The real opportunity</strong></h2><p>OpenClaw by itself is a powerful runtime. There are going to be a lot of interesting things built on it for general productivity, consumer applications, horizontal tooling.</p><p>But the sustainable business value, the kind that creates real switching costs and defensible moats, is going to be built vertically. 
Industry-specific agent configurations, wired into industry-specific systems, operating within industry-specific compliance architectures.</p><p>NemoClaw is what makes that possible in regulated environments.</p><p>The companies that move now, building those vertical configurations, developing the implementation expertise, and earning the compliance credibility, are going to be very difficult to displace in three years.</p><p>That&#8217;s the actual opportunity.</p>]]></content:encoded></item><item><title><![CDATA[How I replaced $2,000/month in SaaS with a $200 Claude Code subscription.]]></title><description><![CDATA[I built a personal CRM from scratch.]]></description><link>https://maxvotek.com/p/how-i-replaced-2000month-in-saas</link><guid isPermaLink="false">https://maxvotek.com/p/how-i-replaced-2000month-in-saas</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Mon, 23 Mar 2026 18:03:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4f488ecc-3884-4a75-bb0a-25fc2453042f_1280x1174.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I built a <a href="https://openclaw-report-ep6.pages.dev/">personal CRM</a> from scratch.</p><p>Not a prototype. Not a demo. A production system I use every day.</p><p>47 features. 13 AI tools. A vector knowledge base. LinkedIn profile enrichment. A Telegram bot as the primary interface. Automated follow-up logic. Voice memo integration. Email parsing. Trip-based contact suggestions.</p><p>It replaced my entire stack: HubSpot, Monday.com, Notion, and Zapier.</p><p>Cost: $200/month for Claude Code.</p><p>I asked Claude Code to estimate what this would cost a traditional development team.</p><p>The answer: 516 hours. $66,400. Four months. 
Five people: PM, PO, two developers, QA.</p><p>I did it in roughly 20 hours across several sessions.</p><p>That&#8217;s a massive cost reduction.</p><p>But the savings aren&#8217;t even the point.</p><p>My CRM is better than anything I could buy.</p><p>It&#8217;s proactive. It reminds me about contacts I haven&#8217;t spoken to in 90 days. It suggests who to reach out to before I fly to a conference. It automatically pulls LinkedIn profiles and structures them into enriched records - role, company, mutual connections, last interaction, topics we discussed.</p><p>It&#8217;s integrated with my voice recordings from meetings, my email, and my knowledge base. It knows my professional context because everything lives in one system, not scattered across four SaaS products that don&#8217;t talk to each other.</p><p>I didn&#8217;t configure workflows in someone else&#8217;s UI. I described what I needed, and Claude Code built it.</p><p>That&#8217;s the shift.</p><p>No SaaS product does this. And none will, because they&#8217;re built for everyone. Which means they&#8217;re optimized for no one.</p><p>Here&#8217;s what most people miss about this change.</p><p>The threat to SaaS isn&#8217;t AI features inside existing products. Salesforce added Einstein. HubSpot added AI assistants. Monday.com added automations.</p><p>None of that matters.</p><p>The threat is that the customer no longer needs the product at all.</p><p>A lawyer will replace Clio plus Notion with a custom AI assistant that knows case history, drafts motions in their voice, and tracks deadlines without a dashboard they never open.</p><p>A small agency will throw out Monday.com and build a project system that mirrors how their team actually thinks, not how a product team decides agencies should work.</p><p>A consultant will build a CRM and a second brain in one tool: contacts, meeting notes, deliverables, follow-ups - all connected, all searchable, all context-aware.</p><p>These aren&#8217;t hypotheticals. 
I just did it in 20 hours.</p><p>The math for SMBs is brutal.</p><p>Small businesses don&#8217;t have IT departments. They don&#8217;t have procurement processes. They don&#8217;t have integration budgets.</p><p>They have a credit card and a problem.</p><p>A typical SMB SaaS stack includes CRM, project management, knowledge base, automation, and email tools. That&#8217;s easily $200 to $400 per month minimum, and often more.</p><p>And you still spend hours every week on manual data entry, copy-pasting between apps, and fighting integrations that break every time one vendor ships an update.</p><p>$200/month for Claude Code replaces all of it.</p><p>And the replacement is smarter.</p><p>Not &#8220;good enough.&#8221; Smarter.</p><p>The SaaS moat was built on three things. Distribution. Switching costs. And &#8220;good enough.&#8221;</p><p>Distribution: SaaS companies spent heavily on sales and marketing because they had to make adoption easy. Free trials, freemium tiers, one-click onboarding. The product got in front of you before you asked whether you really needed it.</p><p>Switching costs: once your data is in HubSpot, leaving means exporting CSVs, rebuilding workflows, and retraining your team. The product doesn&#8217;t have to be great. It just has to be painful to leave.</p><p>&#8220;Good enough&#8221;: nobody loves their CRM, but nobody switches either. The status quo wins by default.</p><p>A custom system built with Claude Code attacks all three.</p><p>One subscription instead of a procurement process. Your code instead of someone else&#8217;s proprietary database. A system shaped around your workflow instead of &#8220;it&#8217;ll do.&#8221;</p><p>Let me be clear about what I&#8217;m not saying.</p><p>Salesforce won&#8217;t disappear tomorrow. 
Neither will HubSpot.</p><p>Enterprise contracts, compliance requirements, and organizational inertia still protect the top of the market.</p><p>But the bottom?</p><p>The SMB SaaS market just got a new competitor.</p><p><strong>And that competitor is the customer.</strong></p><p>The same person who used to compare pricing pages and watch demo videos now opens Claude Code and says: &#8220;Build me a CRM that works like this.&#8221;</p><p>Twenty hours later, they have something better than what they were paying thousands per year for.</p><p>This is the multiplier applied to software itself.</p><p>Not just using AI to write code faster. Using AI to eliminate the need for someone else&#8217;s code entirely.</p><p>Every month, the models get better. The context windows get longer. The tools get more capable. The 20 hours I spent today will be 10 hours in six months and 5 hours in a year.</p><p>The SaaS industry built a massive market on the assumption that building software is hard.</p><p>That assumption just expired.</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;01657530-00ee-49bc-a775-7ed105e714cb&quot;,&quot;duration&quot;:null}"></div><p></p>]]></content:encoded></item><item><title><![CDATA[How to Actually Choose an AI Agent Platform]]></title><description><![CDATA[The decision framework, interoperability protocols, open-source alternatives, and industry-specific guidance for pharma, healthcare, and manufacturing.]]></description><link>https://maxvotek.com/p/how-to-actually-choose-an-ai-agent</link><guid isPermaLink="false">https://maxvotek.com/p/how-to-actually-choose-an-ai-agent</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Thu, 19 Mar 2026 14:36:35 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/060e0c70-9b27-4623-aae2-fa2a5e98d517_1280x1280.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3><strong>Previously in Part 
1</strong></h3><p>I broke down the five major platforms fighting for enterprise AI agents: Databricks Custom Agents, Salesforce Agentforce, Microsoft Copilot Studio, AWS Bedrock AgentCore, and Google Vertex AI. Each has clear strengths and clear lock-in risks.</p><p><strong>If you missed it:</strong> <strong><a href="https://maxvotek.com/p/the-5-platforms-fighting-for-enterprise?r=2m4r3n">Part 1</a></strong></p><p><em>Part 1 told you what each platform does. This part tells you which one to pick and, more importantly, what matters beyond the platform itself.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3qt2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe42a9628-689c-48fc-a551-84b97febe11a_1286x840.png"><img src="https://substackcdn.com/image/fetch/$s_!3qt2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe42a9628-689c-48fc-a551-84b97febe11a_1286x840.png" width="1286" height="840" alt=""></a></figure></div><h2><strong>The 3-Question Decision Framework</strong></h2><p>After deploying AI agents across multiple enterprise environments, I&#8217;ve learned that platform selection comes down to three questions, not fifteen.</p><p>Feature comparison matrices are comforting. They make you feel like you&#8217;re being thorough. But they optimize for the wrong thing.
They compare what platforms <em>can</em> do, not what they <em>should</em> do for your specific situation.</p><p>Here&#8217;s what actually drives the decision:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wyR7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F396db4e2-edb1-4226-9389-36cf6e9314a3_1278x896.png"><img src="https://substackcdn.com/image/fetch/$s_!wyR7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F396db4e2-edb1-4226-9389-36cf6e9314a3_1278x896.png" width="1278" height="896" alt=""></a></figure></div><p>This isn&#8217;t laziness, it&#8217;s physics. An AI agent&#8217;s value is directly proportional to its proximity to the data it needs. Every extra hop between agent and data adds latency, integration complexity, and failure points.
In production, these add up fast.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6Tfj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90849abf-b0af-4768-bb53-d43f4afefd79_1280x690.png"><img src="https://substackcdn.com/image/fetch/$s_!6Tfj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90849abf-b0af-4768-bb53-d43f4afefd79_1280x690.png" width="1280" height="690" alt=""></a></figure></div><p>A common mistake: choosing a developer-first platform and then expecting business users to build agents. Or choosing a no-code platform and then being frustrated when your ML team can&#8217;t customize the reasoning layer.
Match the platform to the people, not the other way around.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dQNa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F457ba8b7-c836-473c-a99f-ac4bbf253596_1272x640.png"><img src="https://substackcdn.com/image/fetch/$s_!dQNa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F457ba8b7-c836-473c-a99f-ac4bbf253596_1272x640.png" width="1272" height="640" alt=""></a></figure></div><p>If your answer is &#8220;not yet, but probably within 18 months,&#8221; treat it as a yes. Protocol support is much easier to build in from the start than to retrofit.</p><h2><strong>The Interoperability Question: MCP vs. A2A</strong></h2><p>This is the section most platform comparisons skip. It might be the most important part for the long term.</p><p>Two protocols are emerging as standards for AI agent communication:</p><h4><strong>MCP (Model Context Protocol)</strong></h4><p>Created by Anthropic. Handles <strong>agent-to-tool</strong> communication. Think of it as the USB standard for AI agents - a universal way for agents to connect to tools and data sources. Already has 10,000+ active servers and 97 million monthly SDK downloads. This is not theoretical. It&#8217;s infrastructure.</p><h4><strong>A2A (Agent-to-Agent Protocol)</strong></h4><p>Created by Google. Handles <strong>agent-to-agent</strong> collaboration.
Agents discover each other, negotiate capabilities, and coordinate tasks across organizational boundaries.</p><p>Both protocols are now governed by the Linux Foundation&#8217;s Agentic AI Foundation, co-founded by OpenAI, Anthropic, Google, Microsoft, AWS, and Block. These aren&#8217;t proprietary standards anymore. They&#8217;re becoming industry infrastructure.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0WUY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5fb80a87-736f-4076-aa2a-ad2b7e49239e_1286x1434.png"><img src="https://substackcdn.com/image/fetch/$s_!0WUY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5fb80a87-736f-4076-aa2a-ad2b7e49239e_1286x1434.png" width="1286" height="1434" alt=""></a></figure></div><h2><strong>Open-Source Frameworks That Actually Work</strong></h2><p>Before you commit to a platform, know that some of the most production-ready agent frameworks are open source.
They won&#8217;t replace a platform, but they handle the agent logic layer and they give you portability.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1ogd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F748daaf4-12a0-401d-8f4e-a9c5a12e9d42_1296x908.png"><img src="https://substackcdn.com/image/fetch/$s_!1ogd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F748daaf4-12a0-401d-8f4e-a9c5a12e9d42_1296x908.png" width="1296" height="908" alt=""></a></figure></div><h4><strong>The critical distinction</strong></h4><p>Open-source frameworks give you <strong>agent logic</strong>. They don&#8217;t give you production infrastructure - deployment, monitoring, security, governance, memory. That&#8217;s what enterprise platforms provide. The pragmatic approach: <strong>use open-source frameworks for agent logic, deploy on an enterprise platform for everything else.</strong> Databricks Custom Agents explicitly supports this pattern - build with LangChain locally, deploy to Databricks Apps without rewriting code.</p><h2><strong>What This Means for Pharma, Healthcare, and Manufacturing</strong></h2><p>Generic platform comparisons miss the constraints that regulated industries face. Let me get specific about the industries I work in.</p><h3><strong>Pharma</strong></h3><p><strong>Top priority: Governance and validation</strong></p><p>Any AI agent that touches GxP-regulated processes needs audit trails, version control, and validated infrastructure.
<strong>AWS</strong> (GxP compliance tooling) and <strong>Databricks</strong> (Unity Catalog lineage) are strongest here. <strong>Salesforce Agentforce</strong> is ideal for commercial/sales ops but doesn&#8217;t cover manufacturing or quality.</p><p><strong>The emerging use case:</strong> Multi-agent systems where a drug safety agent (monitoring adverse events) coordinates with a regulatory submission agent (preparing FDA filings) and a commercial agent (adjusting physician outreach based on safety signals). No single platform handles all three today. That&#8217;s why protocol interoperability matters.</p><h3><strong>Healthcare</strong></h3><p><strong>Top priority: HIPAA compliance + clinical data integration</strong></p><p><strong>Microsoft</strong> has the strongest healthcare story (Teams for Health, Nuance/DAX integration, Azure Health Data Services). <strong>AWS</strong> is a close second. <strong>Google&#8217;s</strong> multimodal capabilities - an agent that processes medical imaging alongside clinical notes - are uniquely valuable for clinical AI.</p><p><strong>The cautionary note:</strong> We just saw what happens when AI prescribes medications without adequate safeguards (the Doctronic debacle in Utah). Any healthcare AI agent must have robust human-in-the-loop capabilities. Google&#8217;s mid-workflow pause and Salesforce&#8217;s Atlas hybrid reasoning (LLM + business rules) address this directly.</p><h3><strong>Manufacturing</strong></h3><p><strong>Top priority: OT/IT integration + real-time processing</strong></p><p>Manufacturing agents need to interact with PLCs, SCADA systems, MES platforms, and IoT sensors. <strong>None of the five platforms handle this natively</strong> - you&#8217;ll always need integration middleware. 
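</p><p>To make that middleware layer concrete, here is a minimal sketch - plain Python, every name hypothetical - of an adapter that hides the OT protocol behind a typed method and exposes a plant-floor reading as the kind of JSON-serializable payload an agent platform expects from a tool call. A real adapter would speak OPC UA or Modbus to a PLC or historian; this one is stubbed with static data.</p>

```python
from dataclasses import dataclass


@dataclass
class SensorReading:
    """Normalized reading passed from the OT layer to the agent layer."""
    tag: str    # e.g. a PLC tag name
    value: float
    unit: str


class HistorianAdapter:
    """Hypothetical middleware: hides the OT protocol behind a plain method.

    Stubbed with an in-memory dict for illustration; a production adapter
    would query a historian or PLC over an industrial protocol.
    """

    def __init__(self, data):
        self._data = data  # tag -> (value, unit)

    def read(self, tag: str) -> SensorReading:
        value, unit = self._data[tag]
        return SensorReading(tag=tag, value=value, unit=unit)


def sensor_tool(adapter: HistorianAdapter, tag: str) -> dict:
    """Shape an OT reading as a JSON-serializable 'tool call' result."""
    r = adapter.read(tag)
    return {"tag": r.tag, "value": r.value, "unit": r.unit}


adapter = HistorianAdapter({"TT-101": (72.5, "degC")})
print(sensor_tool(adapter, "TT-101"))  # {'tag': 'TT-101', 'value': 72.5, 'unit': 'degC'}
```

<p>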
<strong>AWS</strong> (IoT Core + AgentCore) and <strong>Google</strong> (multimodal + BigQuery for sensor data) are closest.</p><p><strong>The Siemens + NVIDIA angle:</strong> Their partnership to build AI-driven manufacturing sites using digital twins is creating a parallel ecosystem. Manufacturing AI agents may ultimately run on industrial platforms (Siemens Xcelerator, Rockwell Plex) rather than cloud-native agent frameworks. PepsiCo is already seeing 20% throughput gains with this approach. Watch this space closely.</p><p><strong>Most organizations won&#8217;t use just one platform.</strong></p><p>A pharma company might use Agentforce for commercial operations, Databricks for manufacturing analytics agents, and AWS for GxP-validated quality agents. A healthcare system might run Copilot Studio for administrative workflows, AWS for clinical AI, and Google for medical imaging agents.</p><p>Multi-platform is the reality. Which is why interoperability protocols (MCP, A2A) will matter more than any single platform&#8217;s feature list within 18 months.</p><p>And here&#8217;s the truth that no vendor will tell you:</p><p><em>The platform is 20% of the effort. The other 80% is data readiness, integration architecture, governance design, and change management. Get the 80% right, and almost any platform will work. Get it wrong, and the best platform in the world won&#8217;t save you.</em></p><p>We&#8217;ve seen this pattern at Customertimes across every industry we serve. The teams that succeed don&#8217;t start with &#8220;which platform should we use?&#8221; They start with &#8220;what does our agent need to do, and is our data ready for it?&#8221;</p><p>That question sounds simple. 
Answering it honestly is the hardest part of any AI agent project.</p><h2><strong>The Checklist: Before You Choose a Platform</strong></h2><p>Run through these before you make a decision:</p><ol><li><p><strong>Map your data landscape.</strong> Where does the data your agents need actually live? This determines 60% of the platform decision.</p></li><li><p><strong>Define who builds and maintains.</strong> Technical team? Business users? Both? Match the platform&#8217;s abstraction level to your people.</p></li><li><p><strong>Assess cross-boundary needs.</strong> Will agents need to communicate with systems outside your organization? If yes (or &#8220;probably within 18 months&#8221;), prioritize MCP/A2A support.</p></li><li><p><strong>Check regulatory requirements.</strong> GxP? HIPAA? OT security? Not all platforms have equal compliance tooling. This is non-negotiable in regulated industries.</p></li><li><p><strong>Plan for multi-platform.</strong> Don&#8217;t try to force everything into one platform. 
Identify which platform serves which use case, and design the interoperability layer from Day 1.</p></li><li><p><strong>Invest in the 80%.</strong> Before you spend a dollar on platform licensing, make sure your data is clean, your integrations are mapped, and your governance framework exists.</p></li></ol>]]></content:encoded></item><item><title><![CDATA[The 5 Platforms Fighting for Enterprise AI Agents]]></title><description><![CDATA[Databricks, Salesforce, Microsoft, AWS, and Google all launched enterprise agent frameworks in rapid succession.]]></description><link>https://maxvotek.com/p/the-5-platforms-fighting-for-enterprise</link><guid isPermaLink="false">https://maxvotek.com/p/the-5-platforms-fighting-for-enterprise</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Thu, 12 Mar 2026 18:37:56 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/82b1a3aa-d910-426f-af37-a3c4b94aad98_1280x1280.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Databricks, Salesforce, Microsoft, AWS, and Google all launched enterprise agent frameworks in rapid succession. Here&#8217;s what each one actually does, from someone who deploys them in regulated environments.<br>Every major cloud and enterprise platform now has an AI agent framework. 
They all claim to be &#8220;production&#8209;ready.&#8221; Having deployed AI agents across pharma, healthcare, and manufacturing, I can tell you most of them aren&#8217;t there yet for <strong>regulated</strong> settings.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0uil!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1b2f8b4-8a2e-46b8-994b-b7936241c466_1282x1036.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0uil!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1b2f8b4-8a2e-46b8-994b-b7936241c466_1282x1036.png 424w, https://substackcdn.com/image/fetch/$s_!0uil!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1b2f8b4-8a2e-46b8-994b-b7936241c466_1282x1036.png 848w, https://substackcdn.com/image/fetch/$s_!0uil!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1b2f8b4-8a2e-46b8-994b-b7936241c466_1282x1036.png 1272w, https://substackcdn.com/image/fetch/$s_!0uil!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1b2f8b4-8a2e-46b8-994b-b7936241c466_1282x1036.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0uil!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1b2f8b4-8a2e-46b8-994b-b7936241c466_1282x1036.png" width="1282" height="1036" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d1b2f8b4-8a2e-46b8-994b-b7936241c466_1282x1036.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1036,&quot;width&quot;:1282,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0uil!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1b2f8b4-8a2e-46b8-994b-b7936241c466_1282x1036.png 424w, https://substackcdn.com/image/fetch/$s_!0uil!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1b2f8b4-8a2e-46b8-994b-b7936241c466_1282x1036.png 848w, https://substackcdn.com/image/fetch/$s_!0uil!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1b2f8b4-8a2e-46b8-994b-b7936241c466_1282x1036.png 1272w, https://substackcdn.com/image/fetch/$s_!0uil!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1b2f8b4-8a2e-46b8-994b-b7936241c466_1282x1036.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><h2><strong>The Agent Arms Race Is Real</strong></h2><p>Something shifted in early 2026.<br>For two years, &#8220;AI agents&#8221; were mostly a buzzword.<br>Then, over the span of a few months, everything dropped at once:</p><ul><li><p>Databricks launched Custom Agents and the broader Agent Bricks stack.</p></li><li><p>Salesforce announced Agentforce 360 with autonomous multi&#8209;step workflows.</p></li><li><p>Microsoft rolled out multi&#8209;agent orchestration and enhanced governance in Copilot Studio and Azure AI Foundry.</p></li><li><p>AWS expanded Amazon Bedrock into AgentCore with a more modular architecture.</p></li><li><p>Google enhanced Vertex AI Agent Builder and its Agent Builder SDK with MCP and emerging agent&#8209;to&#8209;agent protocols.</p></li></ul><p>Every major platform now offers a production agent framework. The messaging is nearly identical: &#8220;Build, deploy, and govern AI agents at scale.&#8221;</p><p>But the implementations are very different. 
And for enterprise teams trying to ship AI agents in regulated industries, where &#8220;move fast and break things&#8221; gets you a consent decree, those differences matter enormously.</p><p>Let me break down each platform, not from a feature&#8209;list perspective, but from the perspective of someone who has to make these things actually work.</p><h2><strong>Databricks Custom Agents</strong></h2><p>The data&#8209;first agent platform</p><p>Databricks Custom Agents let developers build, test, and deploy AI agents as fully managed Databricks Apps. It&#8217;s the centerpiece of the broader Agent Bricks suite.</p><p><strong>Key capabilities</strong></p><ul><li><p>Framework&#8209;agnostic: Build with LangChain, CrewAI, or raw Python, and deploy via CI/CD without rewriting code. Most platforms force you into proprietary tooling; Databricks doesn&#8217;t.</p></li><li><p>Lakehouse&#8209;native memory: Agent state and conversation history persist across sessions directly in the Lakehouse, reducing the need for a separate memory database layer.</p></li><li><p>MCP catalog and marketplace: Early Model Context Protocol (MCP) integration so agents can discover and use tools from a curated catalog and marketplace.</p></li><li><p>Agent Bricks (no&#8209;code): Natural language agent creation with templates for common enterprise tasks: the business&#8209;user layer.</p></li><li><p>Unity Catalog governance: Every agent, tool, and data access point governed through the same catalog that manages the rest of your data estate.</p></li></ul><p><strong>My take from the field</strong></p><p>The strongest play is the data story. If your data already lives in the Databricks Lakehouse, Custom Agents give you one of the shortest paths from data to agent. No heavy ETL, no large&#8209;scale duplication. The agent sits directly on top of the data it needs.</p><p>In pharma and manufacturing, this matters. 
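</p><p>What &#8220;framework-agnostic&#8221; buys you is that the agent logic itself stays portable. A minimal sketch of that idea - plain Python with no framework imports, all names and the stubbed Lakehouse lookup hypothetical - looks like this; the same class could be wrapped by LangChain locally and handed to a managed runtime unchanged:</p>

```python
from typing import Callable, Dict

# A tool is just a callable from a query string to a result string.
Tool = Callable[[str], str]


class PortableAgent:
    """Agent logic with no framework dependency, so it can move between
    local development and a managed deployment target without a rewrite."""

    def __init__(self, tools: Dict[str, Tool]):
        self.tools = tools

    def act(self, tool_name: str, query: str) -> str:
        # In production an LLM would pick the tool; it is passed in
        # directly here to keep the sketch model-free and runnable.
        if tool_name not in self.tools:
            return f"unknown tool: {tool_name}"
        return self.tools[tool_name](query)


def lookup_batch_record(batch_id: str) -> str:
    # Stub for a governed Lakehouse query (hypothetical).
    return f"batch {batch_id}: released"


agent = PortableAgent({"batch_records": lookup_batch_record})
print(agent.act("batch_records", "B-2024-117"))  # batch B-2024-117: released
```

<p>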
When a quality management agent needs to access batch records, deviation reports, and supplier data in real time, the fewer hops between data and agent, the fewer things break.</p><p>The framework&#8209;agnostic approach also means your ML team uses what they already know. That alone can cut time&#8209;to&#8209;production by weeks.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Fa-f!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fbb1d32-7f51-480c-8029-50202e50db75_1216x152.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Fa-f!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fbb1d32-7f51-480c-8029-50202e50db75_1216x152.png 424w, https://substackcdn.com/image/fetch/$s_!Fa-f!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fbb1d32-7f51-480c-8029-50202e50db75_1216x152.png 848w, https://substackcdn.com/image/fetch/$s_!Fa-f!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fbb1d32-7f51-480c-8029-50202e50db75_1216x152.png 1272w, https://substackcdn.com/image/fetch/$s_!Fa-f!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fbb1d32-7f51-480c-8029-50202e50db75_1216x152.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Fa-f!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fbb1d32-7f51-480c-8029-50202e50db75_1216x152.png" width="1216" height="152" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3fbb1d32-7f51-480c-8029-50202e50db75_1216x152.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:152,&quot;width&quot;:1216,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Fa-f!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fbb1d32-7f51-480c-8029-50202e50db75_1216x152.png 424w, https://substackcdn.com/image/fetch/$s_!Fa-f!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fbb1d32-7f51-480c-8029-50202e50db75_1216x152.png 848w, https://substackcdn.com/image/fetch/$s_!Fa-f!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fbb1d32-7f51-480c-8029-50202e50db75_1216x152.png 1272w, https://substackcdn.com/image/fetch/$s_!Fa-f!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fbb1d32-7f51-480c-8029-50202e50db75_1216x152.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h2><strong>Salesforce Agentforce</strong></h2><p>The CRM&#8209;native agent ecosystem</p><p>Agentforce is no longer just a feature. Salesforce has been rebuilding its architecture around agents. 
It is already the commercial leader in CRM&#8209;native agents, with Agentforce contributing meaningful, fast&#8209;growing ARR.</p><p><strong>Key capabilities</strong></p><ul><li><p>Atlas Reasoning Engine: Hybrid reasoning that balances LLM creativity with structured business rules. The agent doesn&#8217;t just generate text; it follows your business logic.</p></li><li><p>Agent Script: JSON&#8209;based scripting for conditionals, hand&#8209;offs, and guardrails, making implicit process logic explicit.</p></li><li><p>Data Cloud grounding: Agents operate on structured, unified CRM data. They know your customers, pipeline, and cases, not just what&#8217;s in the latest prompt.</p></li><li><p>Agentforce Voice: Real&#8209;time voice agents integrated with Amazon Connect, Five9, and Genesys.</p></li><li><p>Flexible pricing: Usage&#8209;based and per&#8209;user models, typically combining per&#8209;conversation charges, per&#8209;action &#8220;Flex Credits,&#8221; and an optional per&#8209;user/month tier for heavier or unlimited internal usage. (Exact numbers change frequently; check the current Agentforce pricing page or recent SaaS analyses rather than relying on static figures.)</p></li></ul><p><strong>My take from the field</strong></p><p>For sales, service, and marketing, Agentforce is the most mature option right now. The CRM data grounding gives agents context that other platforms can&#8217;t match without massive integration work.</p><p>For pharma commercial teams, it&#8217;s compelling. An agent that pulls a physician&#8217;s prescribing history, checks compliance restrictions, drafts personalized outreach, and schedules follow&#8209;up, all within Salesforce, is a real, shippable workflow.</p><p>The limitation is real, though. The moment your agent needs to do something outside Salesforce - query a manufacturing execution system, pull data from a LIMS, check OT network status - you&#8217;re in integration territory (MuleSoft or equivalent). 
That&#8217;s additional cost, complexity, and risk.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zuQP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af018e1-eb05-4cb0-8dd9-a0046701915f_1178x154.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zuQP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af018e1-eb05-4cb0-8dd9-a0046701915f_1178x154.png 424w, https://substackcdn.com/image/fetch/$s_!zuQP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af018e1-eb05-4cb0-8dd9-a0046701915f_1178x154.png 848w, https://substackcdn.com/image/fetch/$s_!zuQP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af018e1-eb05-4cb0-8dd9-a0046701915f_1178x154.png 1272w, https://substackcdn.com/image/fetch/$s_!zuQP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af018e1-eb05-4cb0-8dd9-a0046701915f_1178x154.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zuQP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af018e1-eb05-4cb0-8dd9-a0046701915f_1178x154.png" width="1178" height="154" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8af018e1-eb05-4cb0-8dd9-a0046701915f_1178x154.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:154,&quot;width&quot;:1178,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zuQP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af018e1-eb05-4cb0-8dd9-a0046701915f_1178x154.png 424w, https://substackcdn.com/image/fetch/$s_!zuQP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af018e1-eb05-4cb0-8dd9-a0046701915f_1178x154.png 848w, https://substackcdn.com/image/fetch/$s_!zuQP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af018e1-eb05-4cb0-8dd9-a0046701915f_1178x154.png 1272w, https://substackcdn.com/image/fetch/$s_!zuQP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af018e1-eb05-4cb0-8dd9-a0046701915f_1178x154.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h2><strong>Microsoft Copilot Studio + Azure AI</strong></h2><p>The productivity suite agent layer</p><p>Microsoft&#8217;s approach is two&#8209;pronged: Copilot Studio for low&#8209;code agent building, and Azure AI Foundry for developers. 
The Microsoft 365 integration is the moat.</p><p><strong>Key capabilities</strong></p><ul><li><p>Multi&#8209;agent orchestration: Agents can call other agents as tools. A &#8220;project manager&#8221; agent delegates to a &#8220;data analyst&#8221; agent and a &#8220;report writer&#8221; agent, coordinating a multi&#8209;step workflow.</p></li><li><p>Centralized governance: Entra ID&#8209;based identities, policies, and monitoring for agents across Microsoft 365 and Copilot Studio.</p></li><li><p>Natural language creation: Describe what you want; the platform scaffolds an agent and workflow, with no coding required for common patterns.</p></li><li><p>Model flexibility: Support for Claude, a wide range of models through Azure AI Foundry, and bring&#8209;your&#8209;own&#8209;model options.</p></li></ul><p><strong>My take from the field</strong></p><p>If your organization lives in Microsoft 365, and most enterprises do, Copilot Studio is often the path of least resistance. The agent is already where your people work: Teams, Outlook, SharePoint, Excel.</p><p>For healthcare organizations on Microsoft infrastructure, an agent that pulls patient scheduling from Dynamics, checks formulary info in SharePoint, and drafts a summary in Word, all without leaving the ecosystem, cuts integration complexity significantly.</p><p>The challenge is cross&#8209;application autonomy. 
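</p><p>The agents-as-tools pattern from the capabilities list above is worth seeing in miniature. This is a generic sketch in plain Python - not the Copilot Studio API, and all names are illustrative - of a manager agent that delegates to specialist agents through the same callable interface it would use for ordinary tools:</p>

```python
class Agent:
    """A specialist agent: to its caller it looks like any other tool."""

    def __init__(self, name, handler):
        self.name = name
        self.handler = handler

    def run(self, task: str) -> str:
        return self.handler(task)


class ManagerAgent(Agent):
    """Orchestrator that delegates to sub-agents registered by name."""

    def __init__(self, name, subagents):
        self.name = name
        self.subagents = {a.name: a for a in subagents}

    def run(self, task: str) -> str:
        # Fixed two-step plan for the sketch; a real orchestrator would
        # let an LLM decide which sub-agent to invoke next.
        analysis = self.subagents["data_analyst"].run(task)
        return self.subagents["report_writer"].run(analysis)


analyst = Agent("data_analyst", lambda t: f"analysis of {t}")
writer = Agent("report_writer", lambda t: f"report: {t}")
manager = ManagerAgent("project_manager", [analyst, writer])

print(manager.run("Q3 deviations"))  # report: analysis of Q3 deviations
```

<p>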
The moment your agent needs to act outside the Microsoft stack (and in manufacturing or pharma ops, it almost always will), you&#8217;re back to custom integrations.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CsnQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4823512-ca51-4752-9a51-71f2b65f5921_726x226.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CsnQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4823512-ca51-4752-9a51-71f2b65f5921_726x226.png 424w, https://substackcdn.com/image/fetch/$s_!CsnQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4823512-ca51-4752-9a51-71f2b65f5921_726x226.png 848w, https://substackcdn.com/image/fetch/$s_!CsnQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4823512-ca51-4752-9a51-71f2b65f5921_726x226.png 1272w, https://substackcdn.com/image/fetch/$s_!CsnQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4823512-ca51-4752-9a51-71f2b65f5921_726x226.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CsnQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4823512-ca51-4752-9a51-71f2b65f5921_726x226.png" width="508" height="158.13774104683196" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e4823512-ca51-4752-9a51-71f2b65f5921_726x226.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:226,&quot;width&quot;:726,&quot;resizeWidth&quot;:508,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CsnQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4823512-ca51-4752-9a51-71f2b65f5921_726x226.png 424w, https://substackcdn.com/image/fetch/$s_!CsnQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4823512-ca51-4752-9a51-71f2b65f5921_726x226.png 848w, https://substackcdn.com/image/fetch/$s_!CsnQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4823512-ca51-4752-9a51-71f2b65f5921_726x226.png 1272w, https://substackcdn.com/image/fetch/$s_!CsnQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4823512-ca51-4752-9a51-71f2b65f5921_726x226.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h2><strong>AWS Bedrock AgentCore</strong></h2><p>The infrastructure&#8209;grade agent runtime</p><p>AWS has expanded Amazon Bedrock into AgentCore - a modular service architecture designed for production at scale. 
This is the infrastructure&#8209;first play.</p><p><strong>Key capabilities</strong></p><ul><li><p>Modular services: Distinct components for runtime (serverless deployment), gateway (unified tool and model access), memory (context retention), identity (auth), policy (Cedar&#8209;based access control), and observability (OpenTelemetry monitoring).</p></li><li><p>Pay&#8209;per&#8209;use economics: No per&#8209;seat licensing; you pay for actual consumption: models, calls, and underlying infrastructure.</p></li><li><p>Cedar policies: You express access control in a human&#8209;readable way, and the system compiles that into formal Cedar policies already used in other AWS security services.</p></li><li><p>Broad model selection: Multiple foundation models through Bedrock, plus integrations with partner and open&#8209;weight models.</p></li></ul><p><strong>My take from the field</strong></p><p>AWS is the most &#8220;production&#8209;grade primitives&#8221; option. If you need enterprise security, monitoring, and compliance at scale, AgentCore gives you granular control over how agents run, what they can touch, and how they&#8217;re audited.</p><p>For pharma companies that already run validated workloads on AWS, keeping agents inside the same security and compliance perimeter is a big advantage. Cedar is particularly interesting for regulated industries that need explainable, formalized access control.</p><p>The trade&#8209;off: this is the most &#8220;build it yourself&#8221; option. 
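</p><p>To make &#8220;build it yourself&#8221; concrete, here is a sketch - plain Python, every component name hypothetical rather than an actual AgentCore API - of what assembling separate memory, policy, and tool-gateway primitives into one agent loop looks like. The Policy class stands in for a Cedar-style allow/deny decision:</p>

```python
class Memory:
    """Stand-in for a managed memory service: retains past turns."""

    def __init__(self):
        self.turns = []

    def remember(self, item):
        self.turns.append(item)


class Policy:
    """Stand-in for a Cedar-style access-control decision point."""

    def __init__(self, allowed_tools):
        self.allowed = set(allowed_tools)

    def permits(self, tool: str) -> bool:
        return tool in self.allowed


class Gateway:
    """Stand-in for a unified tool gateway: one calling convention."""

    def __init__(self, tools):
        self.tools = tools

    def call(self, tool: str, arg: str) -> str:
        return self.tools[tool](arg)


class AssembledAgent:
    """The part you build: wiring the primitives into a checked loop."""

    def __init__(self, memory, policy, gateway):
        self.memory, self.policy, self.gateway = memory, policy, gateway

    def invoke(self, tool: str, arg: str) -> str:
        if not self.policy.permits(tool):
            return "denied"
        result = self.gateway.call(tool, arg)
        self.memory.remember((tool, arg, result))  # audit trail
        return result


agent = AssembledAgent(
    Memory(),
    Policy(allowed_tools=["lookup"]),
    Gateway({"lookup": lambda q: f"result for {q}"}),
)
print(agent.invoke("lookup", "deviation 42"))  # result for deviation 42
print(agent.invoke("delete", "everything"))    # denied
```

<p>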
You get powerful building blocks, but assembling them into a working agent system takes more engineering than Salesforce or Microsoft.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jA5J!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fb99698-1849-4381-a313-471ad446a507_762x230.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jA5J!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fb99698-1849-4381-a313-471ad446a507_762x230.png 424w, https://substackcdn.com/image/fetch/$s_!jA5J!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fb99698-1849-4381-a313-471ad446a507_762x230.png 848w, https://substackcdn.com/image/fetch/$s_!jA5J!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fb99698-1849-4381-a313-471ad446a507_762x230.png 1272w, https://substackcdn.com/image/fetch/$s_!jA5J!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fb99698-1849-4381-a313-471ad446a507_762x230.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jA5J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fb99698-1849-4381-a313-471ad446a507_762x230.png" width="472" height="142.46719160104988" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0fb99698-1849-4381-a313-471ad446a507_762x230.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:230,&quot;width&quot;:762,&quot;resizeWidth&quot;:472,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!jA5J!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fb99698-1849-4381-a313-471ad446a507_762x230.png 424w, https://substackcdn.com/image/fetch/$s_!jA5J!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fb99698-1849-4381-a313-471ad446a507_762x230.png 848w, https://substackcdn.com/image/fetch/$s_!jA5J!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fb99698-1849-4381-a313-471ad446a507_762x230.png 1272w, https://substackcdn.com/image/fetch/$s_!jA5J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fb99698-1849-4381-a313-471ad446a507_762x230.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h2><strong>Google Vertex AI Agent Builder + SDK</strong></h2><p>The multimodal + interoperability leader</p><p>Google centers its agent platform on Vertex AI Agent Builder and its associated SDK. 
Two things stand out above everything else: multimodal capabilities and a strong bet on interoperability.</p><p><strong>Key capabilities</strong></p><ul><li><p>Native MCP and A2A&#8209;oriented design: Among the big clouds, Google is the most aggressive on interoperability, with first&#8209;class Model Context Protocol (MCP) support and early patterns for agent&#8209;to&#8209;agent communication.</p></li><li><p>Multimodal agents: Gemini 3 Pro&#8209;class models process text, audio, video, and images natively, with very large context windows. Agents can &#8220;see&#8221; and &#8220;hear,&#8221; not just read and write.</p></li><li><p>Agent Engine for memory: Short&#8209;term and long&#8209;term memory with topic&#8209;based organization so agents remember what actually matters across sessions.</p></li><li><p>Human&#8209;in&#8209;the&#8209;loop: Agents can pause mid&#8209;workflow, request human input, and then resume with full state preserved.</p></li></ul><p><strong>My take from the field</strong></p><p>Google&#8217;s interoperability story is the most forward&#8209;leaning. When your pharma manufacturing agent needs to communicate with your supply&#8209;chain vendor&#8217;s agent, protocol standards matter. Google is betting on being the Switzerland of agent interoperability.</p><p>The multimodal capabilities are uniquely valuable in manufacturing. An agent that processes real&#8209;time video from a production line, detects visual defects, cross&#8209;references quality specs, and triggers an alert requires native multimodal processing. 
No other platform does this as cleanly right now.</p><p>The downside is the usual one with Google Cloud: if you&#8217;re not already on GCP, the adoption friction is real.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_8Nj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd89da4e-ebca-44b3-aeb9-788c090dfe93_776x230.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_8Nj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd89da4e-ebca-44b3-aeb9-788c090dfe93_776x230.png 424w, https://substackcdn.com/image/fetch/$s_!_8Nj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd89da4e-ebca-44b3-aeb9-788c090dfe93_776x230.png 848w, https://substackcdn.com/image/fetch/$s_!_8Nj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd89da4e-ebca-44b3-aeb9-788c090dfe93_776x230.png 1272w, https://substackcdn.com/image/fetch/$s_!_8Nj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd89da4e-ebca-44b3-aeb9-788c090dfe93_776x230.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_8Nj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd89da4e-ebca-44b3-aeb9-788c090dfe93_776x230.png" width="456" height="135.15463917525773" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cd89da4e-ebca-44b3-aeb9-788c090dfe93_776x230.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:230,&quot;width&quot;:776,&quot;resizeWidth&quot;:456,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_8Nj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd89da4e-ebca-44b3-aeb9-788c090dfe93_776x230.png 424w, https://substackcdn.com/image/fetch/$s_!_8Nj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd89da4e-ebca-44b3-aeb9-788c090dfe93_776x230.png 848w, https://substackcdn.com/image/fetch/$s_!_8Nj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd89da4e-ebca-44b3-aeb9-788c090dfe93_776x230.png 1272w, https://substackcdn.com/image/fetch/$s_!_8Nj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd89da4e-ebca-44b3-aeb9-788c090dfe93_776x230.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h2><strong>The Comparison at a Glance</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!W5AE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c3f274-8d0e-4086-8911-e9ca5c6a05ad_2056x1458.png" 
data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!W5AE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c3f274-8d0e-4086-8911-e9ca5c6a05ad_2056x1458.png 424w, https://substackcdn.com/image/fetch/$s_!W5AE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c3f274-8d0e-4086-8911-e9ca5c6a05ad_2056x1458.png 848w, https://substackcdn.com/image/fetch/$s_!W5AE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c3f274-8d0e-4086-8911-e9ca5c6a05ad_2056x1458.png 1272w, https://substackcdn.com/image/fetch/$s_!W5AE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c3f274-8d0e-4086-8911-e9ca5c6a05ad_2056x1458.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!W5AE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c3f274-8d0e-4086-8911-e9ca5c6a05ad_2056x1458.png" width="1456" height="1033" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/79c3f274-8d0e-4086-8911-e9ca5c6a05ad_2056x1458.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1033,&quot;width&quot;:1456,&quot;resizeWidth&quot;:0,&quot;bytes&quot;:482122,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://maxvotek.com/i/190746073?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c3f274-8d0e-4086-8911-e9ca5c6a05ad_2056x1458.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!W5AE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c3f274-8d0e-4086-8911-e9ca5c6a05ad_2056x1458.png 424w, https://substackcdn.com/image/fetch/$s_!W5AE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c3f274-8d0e-4086-8911-e9ca5c6a05ad_2056x1458.png 848w, https://substackcdn.com/image/fetch/$s_!W5AE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c3f274-8d0e-4086-8911-e9ca5c6a05ad_2056x1458.png 1272w, https://substackcdn.com/image/fetch/$s_!W5AE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c3f274-8d0e-4086-8911-e9ca5c6a05ad_2056x1458.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h2><strong>The Pattern You Can&#8217;t Ignore</strong></h2><p>Every single platform has lock&#8209;in risk. That isn&#8217;t a bug; it&#8217;s the business model.</p><p>The real question isn&#8217;t whether you&#8217;ll be locked in, but which lock&#8209;in you can live with given where your data and workflows already reside.</p><p>More on that in Part 2.</p><h2><strong>What I&#8217;m Not Telling You Yet</strong></h2><p>This comparison tells you what each platform does. 
It doesn&#8217;t tell you which one to pick, because that depends on factors the feature lists don&#8217;t cover:</p><ul><li><p>Where does your data actually live (Delta on Databricks, Salesforce Data Cloud, S3, BigQuery, on&#8209;prem)?</p></li><li><p>Who builds and maintains the agents in your org (central ML team, line&#8209;of&#8209;business admins, external SI)?</p></li><li><p>Do your agents need to talk to agents outside your company (CDMOs, logistics providers, payers, partners)?</p></li><li><p>What does &#8220;production&#8221; mean in your regulatory context (GxP validation, HIPAA, FDA scrutiny, internal audit)?</p></li></ul><p>Those are the questions that matter. And that&#8217;s exactly what Part 2 covers.</p><h4><strong>Sources</strong></h4><ol><li><p><a href="https://www.databricks.com/blog/custom-agents-now-available-databricks">Custom Agents now available on Databricks</a></p></li><li><p><a href="https://docs.databricks.com/aws/en/generative-ai/agent-bricks/">Agent Bricks | Databricks Documentation</a></p></li><li><p><a href="https://www.salesforce.com/agentforce/what-is-new/">Agentforce 360 Announcements - Salesforce</a></p></li><li><p><a href="https://www.saastr.com/salesforce-now-has-3-pricing-models-for-agentforce-and-maybe-right-now-thats-the-way-to-do-it/">Salesforce Agentforce Pricing (SaaStr)</a></p></li><li><p><a href="https://www.microsoft.com/en-us/microsoft-copilot/blog/copilot-studio/6-core-capabilities-to-scale-agent-adoption-in-2026/">6 Core Capabilities to Scale Agent Adoption - Microsoft</a></p></li><li><p><a href="https://aws.amazon.com/bedrock/agentcore/">Amazon Bedrock AgentCore</a></p></li><li><p><a href="https://cloud.google.com/blog/products/ai-machine-learning/new-enhanced-tool-governance-in-vertex-ai-agent-builder">Vertex AI Agent Builder - Google Cloud</a></p></li><li><p><a href="https://sema4.ai/blog/best-ai-platforms-of-2026/">Enterprise AI Platform Guide 2026 
(Sema4.ai)</a></p></li></ol>]]></content:encoded></item><item><title><![CDATA[The Real NVIDIA Moat Has Nothing to Do With GPUs]]></title><description><![CDATA[I spent a month going deep on NVIDIA as a practitioner who builds AI workloads and as a shareholder. Here&#8217;s what I found.]]></description><link>https://maxvotek.com/p/the-real-nvidia-moat-has-nothing</link><guid isPermaLink="false">https://maxvotek.com/p/the-real-nvidia-moat-has-nothing</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Tue, 10 Mar 2026 16:14:03 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/71101be9-739a-4014-a38b-2a15b9aa910a_1200x800.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>These numbers sound like a typo.<br>$68.1 billion in a single quarter.</p><p>Roughly 73% year&#8209;over&#8209;year revenue growth.</p><p>About $62.3 billion from the data center alone.</p><p>Guidance for next quarter: around $78&#8211;80 billion.</p><p>But behind the financials is something more interesting: an organizational story most analysts miss entirely. I&#8217;ve spent the last month studying NVIDIA from two angles: as someone who ships AI workloads in production, and as a shareholder trying to understand whether this is a cycle or a structural advantage.</p><p>My conclusion: the real moat isn&#8217;t the hardware. Let me break down why.</p><h2><strong>1. Jensen Huang Runs a 40,000&#8209;Person Company Like It&#8217;s Still a Startup</strong></h2><p>This is the part that doesn&#8217;t get enough attention.</p><p>Jensen Huang has been reported to have roughly 60 direct reports. Not six, sixty. He largely avoids traditional one&#8209;on&#8209;ones; instead, he prefers leadership sessions where everyone hears the same feedback at the same time. 
His reasoning: the more direct reports a CEO has, the fewer layers in the company and the more fluid the information flow.</p><p>Every couple of weeks, he personally reads &#8220;the five most important things&#8221; from people across the company, not just senior leaders. He&#8217;s known for reading emails at 5am and for staying personally involved in major acquisitions and exceptional hiring decisions.</p><p>He demands what he calls &#8220;speed of light&#8221; execution - benchmarked against the theoretical limits of the hardware, not just &#8220;fast&#8221; or &#8220;best in class.&#8221;</p><p>The culture follows: minimal silos, fluid teams, and people moving to whatever is most critical rather than clinging to permanent org boxes.</p><p>Operationally, that&#8217;s unusual at this scale. Most companies with NVIDIA&#8217;s footprint ($215.9 billion in annual revenue and 40,000+ employees) calcify. They add management layers, create review committees, and slow down. NVIDIA, by contrast, ships aggressive roadmaps like Blackwell on time while competitors struggle to land basic product timelines. That&#8217;s not an accident; it&#8217;s a direct consequence of this operating model.</p><p>Most CEOs at this scale choose comfort: fewer direct reports, more delegation, calendar buffer, layers between themselves and the work. </p><p>Jensen chose the opposite. The results speak for themselves. And that structural speed advantage flows directly into the next layer of NVIDIA&#8217;s moat: the way the hardware and software stack are actually used.</p><h2><strong>2. The Blackwell Number Everyone Misquotes</strong></h2><p>You&#8217;ll hear &#8220;Blackwell is 4x faster than H100.&#8221; That&#8217;s true only under specific, highly optimized conditions.</p><p>If you naively port an existing model from Hopper to Blackwell, you might only see something like a modest double&#8209;digit uplift. 
You&#8217;re running new silicon with old assumptions, and the chip can&#8217;t show you what it&#8217;s really capable of.</p><p>Once you deeply optimize the stack (kernel&#8209;level tuning, careful use of the memory hierarchy, advanced scheduling, and full use of lower&#8209;precision modes like FP8 and FP4 where appropriate), the picture changes. </p><p>On some large&#8209;model training and inference benchmarks, you can see roughly 3&#8211;4x speedups at the system level versus previous&#8209;generation setups. At rack scale, with systems like the GB200 NVL72, you can see order&#8209;of&#8209;magnitude gains on certain inference workloads, not just from the GPUs themselves, but from the way the interconnect, networking, and software stack are co&#8209;designed.</p><p>The exact numbers are workload&#8209;dependent, but the pattern is consistent: the gap between a &#8220;drop&#8209;in&#8221; port and a fully optimized deployment is huge.</p><p>That gap between naive and optimized is the moat.</p><p>NVIDIA does something here that&#8217;s hard to match at scale: they send engineers to work directly with key customers and hand&#8209;optimize kernels and end&#8209;to&#8209;end pipelines for specific workloads. When a hyperscaler like Microsoft or Meta wants to squeeze every last token per second from a Blackwell cluster, NVIDIA doesn&#8217;t just ship hardware and wave goodbye. They embed, they tune, and they co&#8209;design the full stack.</p><p>The takeaway is simple: budgeting for hardware without budgeting for optimization is like buying a Formula 1 car and filling it with regular gasoline. The chip is only as good as the stack running on it, and increasingly that stack is where NVIDIA&#8217;s deepest advantage lives.</p><h2><strong>3. 
The Real Moat: CUDA and 20 Years of Software Infrastructure</strong></h2><p>Think of CUDA the way you&#8217;d think about Windows in its dominant era: if all the tools, libraries, and frameworks work best on your platform, switching becomes not just expensive but operationally risky.</p><p>CUDA isn&#8217;t a product. It&#8217;s an ecosystem. Thousands of libraries, highly optimized kernels, and frameworks so deeply integrated that the switching costs are enormous. In practice, the major ML frameworks - PyTorch, TensorFlow, JAX - tend to run best on CUDA paths today. The inference stacks that power real&#8209;world deployments - TensorRT&#8209;LLM, vLLM, SGLang, and others - are deeply integrated with NVIDIA&#8217;s platform.</p><p>NVIDIA keeps feeding this flywheel. Open&#8209;source families like Nemotron are released to the community, keeping developers anchored in its ecosystem. Thousands of engineers work on nothing but keeping CUDA, cuDNN, NCCL, TensorRT, and domain&#8209;specific SDKs ahead of each new hardware generation. When Blackwell ships, the software stack is already tuned for it; you don&#8217;t wait years for the ecosystem to catch up.</p><p>Is anyone chipping away at this? Yes, and it matters.</p><ul><li><p>AMD has poured resources into ROCm, and framework support plus MLPerf participation shows the gap is narrowing, especially for buyers willing to invest in engineering.</p></li><li><p>Compiler stacks inspired by Triton&#8217;s hardware&#8209;agnostic philosophy are explicitly designed to make it easier to run the same kernels on AMD, Intel, and others without wholesale rewrites.</p></li><li><p>Cerebras, pursuing a public listing, is pushing wafer&#8209;scale systems that, on their own benchmarks, deliver over 20x higher inference throughput and around 32% lower cost per token than a DGX B200 Blackwell setup, while using roughly one&#8209;third less power for those workloads. 
That&#8217;s genuinely interesting.</p></li></ul><p>These are real developments. The era of NVIDIA&#8217;s near&#8209;monopoly is shifting into a more competitive landscape.</p><p>But narrowing the gap on hardware is not the same as narrowing the gap on the ecosystem. You can design a chip that matches NVIDIA&#8217;s specs in a few years. You cannot, in two or three years, recreate the developer tooling, optimized libraries, framework integrations, and thousands of battle&#8209;tested production deployments that live on CUDA. That takes a decade, and by the time you&#8217;ve closed that gap, NVIDIA has typically moved the goalposts again.</p><h2><strong>4. The Risk That&#8217;s Real and Shared</strong></h2><p>NVIDIA now sits at the very front of TSMC&#8217;s priority queue, alongside Apple and a handful of the world&#8217;s largest chip buyers. That tells you everything about NVIDIA&#8217;s strategic importance and its single biggest exposure.</p><p>Taiwan concentration is a genuine geopolitical risk. If anything meaningfully disrupts TSMC&#8217;s operations, NVIDIA&#8217;s supply chain takes a hit.</p><p>But this risk is shared by every major AI and mobile silicon player. AMD, Apple, Qualcomm, and many others depend on TSMC&#8217;s leading&#8209;edge nodes. If TSMC goes down, the entire advanced&#8209;node industry is in trouble, not just NVIDIA. That means this risk is effectively priced across the sector, not unique to one ticker.</p><p>For shareholders, the more relevant question is: &#8220;In a world where TSMC keeps operating, who has the strongest structural position to capture AI economics?&#8221;</p><p>Right now, that answer still looks like NVIDIA.</p><h2><strong>5. On Competitors: A Blunt Assessment</strong></h2><p>Most NVIDIA analysis either ignores competitors or wildly overstates them. Here&#8217;s the more grounded view.</p><ul><li><p><strong>AMD</strong> is the most credible challenger. 
The upcoming MI450 series is designed to go directly at Blackwell&#8209;class workloads, and ROCm is genuinely improving. But AMD is fighting on NVIDIA&#8217;s terms, trying to close a hardware gap while also building out a software ecosystem. Playing catch&#8209;up on both fronts simultaneously is a punishing strategic position.</p></li><li><p><strong>Cerebras</strong> has technically impressive wafer&#8209;scale systems. The CS&#8209;3 packs on the order of 4 trillion transistors and hundreds of thousands of AI&#8209;optimized cores. On their published benchmarks, they show 21x faster inference and roughly 32% lower cost per token than a DGX B200 Blackwell system, with materially lower power, for specific LLM workloads. That&#8217;s serious, but going from standout benchmarks to hyperscale, cloud&#8209;like ubiquity is a very different problem.</p></li><li><p>Other players - Fireworks AI, Together AI, SambaNova, Graphcore, and more - are building useful products in specific niches, especially around serving, fine&#8209;tuning, and verticalized stacks. They matter tactically, but they&#8217;re not yet structural threats to NVIDIA&#8217;s position at the platform layer.</p></li></ul><p>Then there&#8217;s NVIDIA&#8217;s own M&amp;A posture. The company recently struck a roughly $20 billion licensing and acqui&#8209;hire deal for Groq&#8217;s deterministic inference technology, which it is integrating into its upcoming Rubin platform. When you&#8217;re already dominant and still spending at that scale to deepen your technology stack, not just defend it, that&#8217;s an offensive move.</p><p>For most companies, the smartest move right now is not to compete with NVIDIA at the platform level but to build on top of it, while keeping an eye on alternatives and maintaining enough portability to pivot if economics or geopolitics force your hand.</p><h2><strong>The Bottom Line</strong></h2><p>NVIDIA isn&#8217;t just selling GPUs. 
It&#8217;s selling a complete AI infrastructure stack: hardware, software, libraries, frameworks, optimization services, and a developer ecosystem that&#8217;s been compounding for roughly 20 years.</p><p>$68.1 billion in quarterly revenue. Around $78&#8211;80 billion guided for the next quarter. Mid&#8209;70s gross margins. And a CEO who still reads emails at 5am and stays personally involved in the details that most leaders at his scale have long since delegated.</p><p>NVIDIA isn&#8217;t just winning the AI race.<br>It&#8217;s actually designing the track.</p><p>If this kind of practitioner&#8209;level analysis is useful to you, it&#8217;s what I publish here. Subscribe to get the next one.</p><p>Are you building on NVIDIA infrastructure, or actively betting on an alternative? What are you seeing on the ground?</p>]]></content:encoded></item><item><title><![CDATA[Consulting Firms Spent $10 Billion on AI. Their Business Model Didn’t Change.]]></title><description><![CDATA[PwC, McKinsey, Deloitte - they&#8217;re all in.]]></description><link>https://maxvotek.com/p/consulting-firms-spent-10-billion</link><guid isPermaLink="false">https://maxvotek.com/p/consulting-firms-spent-10-billion</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Thu, 05 Mar 2026 15:20:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6f6z!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1bdace3-ad0f-46f2-9d08-04ba4f79b858_800x800.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>PwC, McKinsey, Deloitte - they&#8217;re all in. But beneath the press releases, the billable&#8209;hour machine hums on. 
Here&#8217;s what that means if you&#8217;re the one who actually has to get AI into production.</strong></p><p style="text-align: center;"><strong>$10B+</strong></p><p style="text-align: center;">AI investment by Big Four &amp; McKinsey since 2023</p><p style="text-align: center;"><strong>75%</strong></p><p style="text-align: center;">of McKinsey fees still billed by the hour</p><p style="text-align: center;"><strong>30%</strong></p><p style="text-align: center;">research time saved by McKinsey&#8217;s AI chatbot Lilli</p><p style="text-align: center;"><strong>6-30%</strong></p><p style="text-align: center;">drop in graduate recruiting at Big Four (2024-25)</p><p>The numbers are staggering.</p><p>PwC dropped $1 billion over three years and became OpenAI&#8217;s largest enterprise customer.<br>KPMG locked in a $2 billion Microsoft alliance.<br>Deloitte launched a $2 billion &#8220;Industry Advantage&#8221; program.<br>EY invested $1.4 billion and built its own proprietary LLM platform, EYQ.<br>McKinsey deployed an internal AI chatbot called Lilli to 72% of its 45,000 employees by 2025.</p><p>In total, the Big Four and McKinsey have poured over $10 billion into AI since 2023.</p><p>And yet almost nothing about how consulting actually works has changed.</p><p>A recent deep dive from Future of Consulting calls this out in brutal detail. I want to unpack it from the perspective of someone who actually implements AI in enterprises.</p><h2><strong>The Productivity Paradox</strong></h2><p>Here&#8217;s the number worth noting: McKinsey&#8217;s Lilli saves consultants 30% of their research time.</p><p>30%. That&#8217;s enormous. In an implementation project, a 30% efficiency gain changes your entire delivery timeline and cost structure.</p><p>But here&#8217;s what McKinsey did with that 30%: almost nothing visible to clients. The savings stay inside the firm. The billing rates don&#8217;t change. The project timelines don&#8217;t shrink. 
The efficiency gain is pure margin, captured by the firm, invisible to the buyer.</p><p>This isn&#8217;t a McKinsey problem. It&#8217;s a structural one. As long as most revenue is still tied to time, any tool that makes consultants faster is more likely to pad margins than to show up as better value for buyers.</p><p>When your revenue model is built on billable hours, any tool that makes your people faster is a threat to your top line unless you quietly absorb the gains.</p><p>We see this from the other side at Customertimes. When we deploy AI that makes a pharma company&#8217;s processes 30% more efficient, the client sees it immediately. They measure it. They expect it. Because we&#8217;re building solutions, not selling time.</p><p>The consulting model incentivizes hiding efficiency. The implementation model incentivizes delivering it.</p><h2><strong>Only 25% of McKinsey&#8217;s Fees Are Tied to Outcomes</strong></h2><p>The most prestigious consulting firm in the world, the one that advises Fortune 500 companies on &#8220;digital transformation,&#8221; still collects roughly 75% of its fees based on time spent, not results delivered.</p><p>Yes, about a quarter of fees are now outcome&#8209;based and that&#8217;s real progress, but the core economics of the firm still runs on hours.</p><p>Everyone in enterprise AI knows the industry needs to move toward outcome&#8209;based pricing. Every conference panel says it. Every thought&#8209;leadership piece argues for it.</p><p>But the transition is stalled. And it&#8217;s stalled for a reason that anyone who&#8217;s worked inside large organizations will recognize: the people who would need to approve the change are the same people whose compensation depends on the current model.</p><p>If you&#8217;re a partner billing $500/hour and AI makes your team twice as fast, outcome&#8209;based pricing means you now need to deliver twice the value to maintain your revenue. Or accept that the same work is worth less. 
Neither option is appealing when you&#8217;re two years from retirement.</p><h2><strong>The Junior Layer Is Disappearing, and Nobody Has a Plan</strong></h2><p>This is the part that worries me most for the long term.</p><p>Graduate recruiting across the Big Four has been sliding, with double&#8209;digit drops in 2024&#8211;2025 at several firms. Firms are cutting entry&#8209;level positions because AI now handles the work juniors used to do: data gathering, initial analysis, deck formatting, research summaries.</p><p>On the surface, this sounds efficient. Why pay a first&#8209;year analyst $85,000 to do work that GPT&#8209;4 can do in seconds?</p><p>But consulting has always been an apprenticeship business. Juniors learned by doing the &#8220;grunt work.&#8221; They sat in client meetings. They built models that got torn apart by managers. They learned pattern recognition through repetition.</p><p>When AI drafts the first pass of every slide deck, junior staff lose the reps of structuring arguments, anticipating objections, and seeing which ideas survive partner review. That&#8217;s where a consultant&#8217;s judgment used to be formed.</p><p>Remove that layer, and you have a training crisis. In five years, who becomes the senior consultant? Who has the client instincts? Who can read a room and adjust a recommendation on the fly?</p><p>We face a similar challenge in enterprise AI implementation. When we automate validation workflows in pharma or quality checks in manufacturing, we need to deliberately design new learning paths for junior team members. The work that used to train them is gone. 
If you don&#8217;t create something to replace it, you end up with a bimodal workforce - senior experts and AI tools, with nothing in between.</p><p>In pharma implementations, for example, the junior who used to manually walk through validation logs now needs a different path to learn how deviations actually show up in the data and why QA pushes back.</p><h2><strong>PowerPoints Don&#8217;t Deploy Themselves</strong></h2><p>Here&#8217;s my biggest frustration with the current state of consulting AI: firms are using AI to produce recommendations faster, not to deliver solutions.</p><p>A consulting engagement in 2026 still ends the same way it did in 2016: a slide deck. Maybe a nicer one. Maybe it was drafted 30% faster. But the client still gets a PDF, a &#8220;roadmap,&#8221; and a wave goodbye.</p><p>Meanwhile, the client is left to actually build the thing. They hire implementation partners (like us). They discover that half the recommendations don&#8217;t account for their legacy systems, their regulatory constraints, or their organizational politics. They spend months translating strategy into working software.</p><p>The gap between &#8220;we made a deck&#8221; and &#8220;we shipped a system&#8221; is where most AI value now lives.</p><h2><strong>What This Means If You&#8217;re a Buyer</strong></h2><p>If you&#8217;re a healthcare executive, a pharma CTO, or a manufacturing leader evaluating whether to engage a Big Four firm for your AI initiative, here&#8217;s what I&#8217;d ask:</p><ol><li><p>What are you actually buying?<br>Are you buying a strategy deck or a working solution? 
If it&#8217;s a strategy, can your internal team execute it, or will you need another partner?</p></li><li><p>How is the engagement priced?<br>If it&#8217;s time&#8209;and&#8209;materials with no outcome guarantees, you&#8217;re paying for both the consultants&#8217; learning curve and the AI&#8209;driven efficiency gains they&#8217;re keeping.</p></li><li><p>Where&#8217;s the implementation plan?<br>Not just a &#8220;roadmap,&#8221; but an architecture, integration points, and a timeline that reflects your real systems and constraints.</p></li><li><p>What happens after the engagement ends?<br>The most expensive consulting engagement is the one that produces a strategy nobody can implement. Ask who will own, monitor, and evolve the AI systems once the consultants leave, and what budget and skills that requires on your side.</p></li></ol><h2><strong>The Real Opportunity</strong></h2><p>The article from Future of Consulting calls these firms &#8220;hollow cathedrals&#8221; - impressive from the outside, empty at the core. That&#8217;s a provocative phrase, and I think it&#8217;s partially right.</p><p>But here&#8217;s the opportunity: the gap between consulting recommendations and real&#8209;world AI implementation is massive. And it&#8217;s growing.</p><p>Enterprises are increasingly building internal AI teams. They&#8217;re questioning why they&#8217;re paying consulting rates for AI&#8209;augmented work. They&#8217;re looking for partners who deliver working systems, not slide decks.</p><p>This is exactly the shift we&#8217;ve been building toward at Customertimes for years: AI solutions that run in production, survive audits, and actually move the metrics that matter in pharma and manufacturing.</p><p>The $10 billion that consulting firms invested in AI? Most of it went toward making consultants more productive. Very little went toward making clients more successful.</p><p>That&#8217;s not an AI revolution. 
That&#8217;s an AI optimization of the same old model.</p><p>What are you seeing on your end?</p><p>Are consulting firms delivering real AI value in your organization: running systems, measurable lift or just better&#8209;produced versions of the same advice?</p><p>If you&#8217;re comfortable sharing details, I&#8217;m especially interested in where a consulting AI &#8220;strategy&#8221; died in implementation, or where an implementation partner actually rescued a stalled initiative. Reply or drop a comment below.</p>]]></content:encoded></item><item><title><![CDATA[I Self-Hosted an AI Agent Orchestrator. Here’s What I Learned About the Future]]></title><description><![CDATA[This weekend I set up OpenClaw in Docker on a small Linux box.]]></description><link>https://maxvotek.com/p/i-self-hosted-an-ai-agent-orchestrator</link><guid isPermaLink="false">https://maxvotek.com/p/i-self-hosted-an-ai-agent-orchestrator</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Wed, 25 Feb 2026 15:08:40 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2e9cf50b-fb15-463a-9745-a0005a97fe1f_1280x719.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This weekend I set up <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;OpenClaw&quot;,&quot;id&quot;:444575302,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/68ce6969-07fd-41e1-acc8-2d6ba4d07178_876x876.png&quot;,&quot;uuid&quot;:&quot;cfe6b692-5329-4ec5-9fb1-f3e37180fb99&quot;}" data-component-name="MentionToDOM"></span>
in Docker on a small Linux box. After a long back-and-forth with Claude and some config wrestling, I got it running inside Telegram. Then I fed the bot data from my Telegram channel so the agent could learn my style and preferences.</p><p>On a parallel track, I experimented with voice messages: instead of cloud-based ElevenLabs, I ran local TTS models on a beefy gaming PC. For speech synthesis, the quality is surprisingly solid.</p><h2><strong>OpenClaw is not &#8220;one smart bot&#8221;</strong></h2><p>Here&#8217;s what most people miss. OpenClaw is essentially an agent orchestration framework - a construction kit with a growing ecosystem of third-party skills and an app store. You don&#8217;t get a single chatbot. You get an environment where you wire together models, memory, tools, automations, and even multiple separate OpenClaw nodes into one system.</p><p>Think of it as the difference between buying a pre-built PC and having a full electronics workbench.</p><p>And the implications are bigger than most realize. Gokul Rajaram, product veteran from Google, Facebook, Square, and DoorDash, investor in 700+ companies, just posted a thread that nails the core question: did OpenClaw + Skills fundamentally change the architecture and math of AI?</p><p>His framing is sharp. Can perpetually running agents that train themselves on new capabilities via SKILL.md files reach the same level of expertise as the hand-crafted fine-tuned models that AI startups have spent years building? If the answer is even &#8220;partially yes&#8221; - and from my weekend experiments, it&#8217;s trending that way - the downstream effects are massive.</p><p>Rajaram asks what this means for horizontal AI agent builders like Glean, ServiceNow, and Sierra. What about verticals: legal, finance, healthcare?
Can we have an &#8220;OpenClaw for Legal&#8221; agent trained on a contract drafting skill file that&#8217;s as capable as what a specialized AI startup offers today?</p><p>And here&#8217;s the kicker he flags: if this self-learning agent paradigm makes expensive post-training less critical, it reshapes the entire economics of the AI startup ecosystem. The pricing models, the data companies, the venture math, all of it gets rewritten.</p><p>From what I&#8217;ve seen hands-on this weekend, I&#8217;d say the shift is real. Not finished, not polished but structurally real.</p><h2><strong>Big tech noticed. And they&#8217;re not happy.</strong></h2><p>What&#8217;s telling: last week Anthropic restricted Claude Code usage inside OpenClaw. Then Google reportedly pulled Gemini access too. The reasoning is transparent - autonomous development without human-in-the-loop feels too risky to the big players, and it undermines the role of their walled-garden ecosystems.</p><p>But the genie is out of the bottle. If the major players lock down, more open alternatives will emerge. That&#8217;s how this always works.</p><h2><strong>RAG changed everything for my assistant</strong></h2><p>For my AI assistant Katrina, I added Postgres and a vector database. RAG drastically cut token consumption and kept large context windows far more stable than the old approach of stuffing everything into markdown files.</p><p>This is starting to look like a prototype of a living institutional knowledge system with semantic search, always evolving, not a dead Jira / Confluence / SharePoint graveyard where information goes to die.</p><p>Rajaram has been saying that in an agentic future, infrastructure companies become application companies because agents don&#8217;t need a software UX. What I&#8217;m seeing in practice confirms this. The database layer is the product now. The vector store is the knowledge base. 
There&#8217;s no UI in between, just the agent talking to the data.</p><h2><strong>The honest take</strong></h2><p>For the mass market, the product is still raw. You have to dig into code, manage security, isolate environments, fix things when they break.</p><p>But as a sandbox for experiments, and for understanding how future agent systems will actually be built, this is an incredibly powerful experience.</p><p>The paradigm that previous-generation companies were built on? It already looks like the last century.</p><p><strong>A new era of agent orchestration is starting. And it&#8217;s not waiting for permission.</strong></p><p><em>If you&#8217;re experimenting with agent frameworks, local models, or building your own AI infrastructure, I&#8217;d love to hear what&#8217;s working for you. Drop a comment or reply to this email.</em></p>]]></content:encoded></item><item><title><![CDATA[I Don’t Need to Know Databricks. An AI Agent Will Do It for Me.]]></title><description><![CDATA[From Data Warehouse to Operating System for AI]]></description><link>https://maxvotek.com/p/i-dont-need-to-know-databricks-an</link><guid isPermaLink="false">https://maxvotek.com/p/i-dont-need-to-know-databricks-an</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Thu, 19 Feb 2026 13:52:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/159575c7-c761-4716-a998-2c1b5420eb52_1080x1080.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>No docs, no tutorials. Just Claude Code and a test dataset. Here&#8217;s what that tells us about the future of enterprise platforms.</p><p>Databricks just raised $7 billion at $5.4 billion in annual revenue and 65% growth. Impressive numbers, sure.
But what caught my attention isn&#8217;t the fundraise but what the company has quietly become.</p><p>And more importantly, what that transformation tells us about where all enterprise software is heading.</p><h3><strong>From Data Warehouse to Operating System for AI</strong></h3><p>If you still think of Databricks as a data warehouse with dashboards, you&#8217;re looking at a two-year-old snapshot.</p><p>Today, Databricks is an infrastructure platform of an entirely new class. It&#8217;s a place where you can store and process data, host modern LLMs, talk to your data in natural language, and most critically, build agentic solutions. All under one roof.</p><p>For large enterprises, this is a fundamentally different category of system. We&#8217;ve been working with Databricks in enterprise projects at Customertimes for a while now, and we see this shift from the inside every day. The platform isn&#8217;t just evolving, it&#8217;s being rebuilt around a new paradigm.</p><h3><strong>The Real Difference: Architecture Built for Agents, Not Humans</strong></h3><p>Here&#8217;s the thing that separates Databricks from legacy platforms like SAP or Oracle.</p><p>Old enterprise systems were designed for humans clicking through menus. Screen by screen. Form by form. The user interface <em>was</em> the product. If you wanted to automate anything, you had to reverse-engineer the UI layer, build fragile integrations, and pray nothing broke on the next update.</p><p>Databricks is built differently:</p><ul><li><p><strong>API-first</strong>: every capability is accessible programmatically</p></li><li><p><strong>Open architecture</strong>: no vendor lock-in traps, no proprietary black boxes</p></li><li><p><strong>Agentic-ready</strong>: designed so that software can interact with it as a first-class user</p></li></ul><p>This isn&#8217;t a minor technical distinction. It&#8217;s a philosophical one. 
The platform assumes its primary user might not be a person at all - it might be another program.</p><h3><strong>The Experiment: Zero Knowledge, Full Results</strong></h3><p>I recently ran an experiment that proved this point to myself.</p><p>I connected to Databricks through Claude Code, loaded a test dataset, and assembled dashboards with only surface-level knowledge of the platform. No documentation deep-dives. No tutorials. No certification courses.</p><p>Just an AI agent and a CLI.</p><p>The combination of agent + command-line interface genuinely changes the rules of engagement. I didn&#8217;t need to learn the platform&#8217;s UI conventions or memorize where settings live in nested menus. The agent understood the API surface, figured out the right calls, and got the job done.</p><p>This confirms something I&#8217;ve been thinking about for a while: modern systems win not because of beautiful UIs, but because of interfaces that agents can work with. The prettier your dashboard, the less it matters: if your API is solid, an agent can build whatever view a human needs on the fly.</p><h3><strong>Matching Technology to Your Stage</strong></h3><p>This also connects to something I&#8217;ve written about before: the importance of matching your technology choices to your company&#8217;s stage of development.</p><p>Databricks is a platform built for the AI era. Organizations that understand this gain an enormous advantage. They&#8217;re building their systems not on yesterday&#8217;s principles, but on the architecture of the future.</p><p>They&#8217;re not asking &#8220;what tool has the nicest interface?&#8221;
They&#8217;re asking &#8220;what platform gives my AI agents the most leverage?&#8221;</p><p>That&#8217;s a fundamentally different question, and it leads to fundamentally different decisions.</p><h3><strong>What&#8217;s Still Missing</strong></h3><p>Let&#8217;s be honest about the gap that still exists: documentation.</p><p>Almost every platform today, including Databricks, writes its docs for human developers. Step-by-step guides. Screenshots. UI walkthroughs. That&#8217;s fine for people, but it&#8217;s nearly useless for AI agents.</p><p>What agents need is different: clean API references, consistent schemas, predictable error handling, and machine-readable specifications. The platforms that figure this out first will have a massive adoption advantage.</p><p>But this is a matter of when, not if. Platform builders will catch on fast once they realize their main user is becoming software.</p><h3><strong>$7 Billion Says the Market Agrees</strong></h3><p>The $7 billion raise isn&#8217;t just an investment in a company. It&#8217;s a bet that all enterprise development and analytics will be built around AI. Databricks currently sits at the center of this new economy, offering not just a tool but an entire operating system for intelligent applications.</p><p>The market sees what&#8217;s coming:</p><ul><li><p>Enterprise platforms will be agent-first, human-second</p></li><li><p>The value shifts from UI polish to API depth</p></li><li><p>The winners will be platforms that treat software as their primary customer</p></li></ul><p>We&#8217;re at the beginning of this transition. Most enterprise software is still stuck in the old paradigm. 
But the signal is clear, and $7 billion of capital says the smart money agrees.</p><h3><strong>The Question for Your Business</strong></h3><p>Here&#8217;s what I&#8217;d challenge you to think about:</p><p>What platforms in your stack are agent-ready today?</p><p>Which ones could you hand off to an AI agent with a CLI and get meaningful results without deep platform expertise?</p><p>If the answer is &#8220;none&#8221; or &#8220;I don&#8217;t know,&#8221; that&#8217;s your starting point. The gap between agent-ready infrastructure and legacy systems is going to become the most important architectural decision of the next five years.</p><p>The platforms that are built for agents will compound in value. The ones that aren&#8217;t will become the new technical debt.</p><p><em>What platforms are you testing with AI agents? I&#8217;d love to hear what&#8217;s working and what&#8217;s not. Drop a comment or reply to this post.</em></p>]]></content:encoded></item><item><title><![CDATA[$240 Billion in Panic-Buying]]></title><description><![CDATA[$240 billion in pharma M&A.]]></description><link>https://maxvotek.com/p/240-billion-in-panic-buying</link><guid isPermaLink="false">https://maxvotek.com/p/240-billion-in-panic-buying</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Thu, 12 Feb 2026 16:14:28 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a606331a-dea9-4569-8e80-cf03583f62c8_790x961.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>$240 billion in pharma M&amp;A.<br>In one year.<br>81% more than 2024.</p><p>And the deal count actually went down more than 10%.</p><p>Meaning fewer deals, but massively bigger ones.<br>Mean deal size doubled to $2.1 billion.</p><p>On paper this looks like strategic expansion.<br>In practice, this is an industry buying time.</p><h2><strong>The cliff nobody&#8217;s talking about</strong></h2><p>Over $300 billion in branded drug revenue is at risk from loss of exclusivity 
through 2030.</p><p>Not gradually. Not gently.</p><p>The growth gap hits $100 billion by 2028.<br>Then expands to $370 billion by 2032.</p><p>The drugs losing protection aren&#8217;t niche.<br>They&#8217;re blockbusters:</p><p>&#8594; Eliquis &#8212; projected to lose &gt;$2.5 billion a year<br>&#8594; Entresto &#8212; &gt;$2.25 billion a year<br>&#8594; Stelara &#8212; &gt;$2.1 billion a year<br>&#8594; Keytruda, Merck&#8217;s crown jewel, ~$25 billion in annual revenue, loses key U.S. patent protection in 2028</p><p>When your highest-revenue product has a countdown clock, you don&#8217;t optimize R&amp;D pipelines.</p><p>You buy.</p><h2><strong>The buying spree</strong></h2><p>Merck agreed to acquire Verona Pharma for $10 billion.<br>Then Cidara for $9.2 billion.</p><p>And there&#8217;s still $2.1 trillion in firepower sitting on the sidelines.<br>That&#8217;s not a typo. Trillion.</p><p>Record levels of available capital, waiting for the next target.</p><p>But here&#8217;s the number that should concern everyone:</p><p>Only 32% of pharma acquisitions achieved at least 100% of expected revenue targets.</p><p>Two-thirds of these massive deals underperform.</p><p>And the success rate is even worse when acquirers step outside their existing therapeutic areas which is exactly what the patent cliff forces them to do.</p><h2><strong>The AI gold rush inside the panic</strong></h2><p>AI drug discovery deal value surged in 2025.</p><p>Recursion acquired Exscientia for $688 million.<br>AstraZeneca partnered with Tempus AI for $200 million.</p><p>The thesis is simple.<br>If organic R&amp;D can&#8217;t outrun the patent cliff, maybe AI can compress the timeline.</p><p>And it&#8217;s working for the molecule side.</p><p>AI is finding drug candidates faster than ever.<br>Protein folding. Target identification. 
Compound screening.</p><p>That part of the problem is getting solved.</p><h2><strong>The part that isn&#8217;t</strong></h2><p>At many academic centers, site activation alone can take 6&#8211;8 months.</p><p>Six to eight months before a single patient is enrolled.</p><p>The molecule can be designed in weeks by AI.<br>Then it sits in regulatory paperwork for years.</p><p>Paper-based enrollment processes.<br>Manual site matching.<br>Fax machines. In 2026.</p><p>AI cracked drug discovery.<br>Nobody cracked the paperwork.</p><p>This is where $240 billion in M&amp;A can&#8217;t help.<br>You can&#8217;t acquire your way out of a 1990s regulatory infrastructure.</p><h2><strong>The China variable</strong></h2><p>While U.S. and European pharma companies were buying each other, China quietly captured 34% of biopharma alliance investment.</p><p>Up from 4% in 2020.</p><p>&#8594; Pfizer&#8211;3SBio: up to $6 billion ($1.25B upfront)<br>&#8594; Takeda&#8211;Innovent: $1.2B upfront, up to $10.2B in milestones<br>&#8594; AstraZeneca&#8211;Jacobio: up to $1.91 billion<br>&#8594; Bristol Myers Squibb&#8211;Harbour BioMed: up to $1.1 billion</p><p>This isn&#8217;t outsourcing.<br>This is a fundamental shift in where the innovation pipeline lives.</p><p>And it happened while LinkedIn was debating which free AI tool to download.</p><h2><strong>What actually matters</strong></h2><p>57 novel drugs are expected to launch in the U.S. 
in 2026.<br>Combined fifth-year sales projection: ~$50 billion.</p><p>The pipeline isn&#8217;t empty.<br>The problem is the space between discovery and patient.</p><p>The companies that will win the next decade aren&#8217;t the ones buying the best molecules.</p><p>They&#8217;re the ones that figure out how to move a drug from lab to patient in months instead of years.</p><p>AI-powered site matching.<br>Automated pre-screening from medical records and registries.<br>Digital-first trial enrollment.</p><p>The molecule was never the bottleneck.<br>The system around it is.</p><p>$240 billion spent on acquisitions.<br>$370 billion evaporating regardless.</p><p>And the actual chokepoint, clinical trial operations, barely gets a headline.</p><p>The biggest opportunity in pharma right now isn&#8217;t a new drug.<br>It&#8217;s making the old process fast enough to matter.</p>]]></content:encoded></item><item><title><![CDATA[On First Technological Victories]]></title><description><![CDATA[I recently came across a photo on my phone of a device that significantly shaped my worldview.]]></description><link>https://maxvotek.com/p/on-first-technological-victories</link><guid isPermaLink="false">https://maxvotek.com/p/on-first-technological-victories</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Thu, 29 Jan 2026 13:45:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fb20e137-a6bf-4c6c-afa5-88cea3622d5d_951x1280.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I recently came across a photo on my phone of a device that significantly shaped my worldview.</p><p>The 386 computer I bought with my first earnings was an entire world, but an almost silent one.</p><p>The standard PC Speaker could only beep, and buying a Sound Blaster or AdLib, which cost about a third of the computer price, was just a dream.</p><p>This feeling of limitation, when the technology exists but is incomplete, became my first real 
challenge.</p><p>Then on a BBS, that era&#8217;s equivalent of the internet, my friend Anton and I stumbled upon a schematic for a Covox device - a simple digital-to-analog converter for the LPT port.</p><p>It was like finding a treasure map. The idea seemed brilliant in its simplicity, and we were fired up to build it.</p><p>What followed was a real adventure that I still remember to this day.</p><p>We traveled to the radio market, which was a universe unto itself with its own rules, hunting for the right resistors and connectors.</p><p>We etched the circuit board traces in an acid solution - an almost alchemical process. Then we soldered, assembled, and checked everything against the schematic.</p><p>Then came the long process of setting up DOS drivers. Something was constantly not working; we spent hours troubleshooting, and even tried modifying the code with nothing but a text editor and our own persistence.</p><p>And then came that moment. We launched Quest for Glory from Sierra.</p><p>Instead of the dreary beeping from the speaker, real sound effects rang out. Not Hi-Fi, of course, but there were the hero&#8217;s footsteps, ambient sounds, sword strikes. The world on screen suddenly came alive.</p><p>It felt like transitioning from silent film to talkies, and doing it yourself. Our joy was boundless.</p><p>This experience, which seems naive today, actually laid the foundation for my philosophy.</p><p>First, it&#8217;s about the ability to not accept limitations, but to seek solutions.</p><p><strong>Can&#8217;t buy it - build it. Doesn&#8217;t work - figure out why.</strong></p><p>This is the entrepreneurial spirit in its purest form.</p><p>Second, it&#8217;s about the value of deeply understanding how things work.
When you&#8217;re not just a user but a creator, you see technology completely differently.</p><p>This resonates with what I&#8217;ve written about how AI is changing the work of our consultants and developers.</p><p>Even today, in the era of ready-made AI solutions, those who win are the ones not afraid to look under the hood, understand the principles, and build something of their own that&#8217;s perfectly suited to the task.</p><p><a href="http://customertimes.com">At Customertimes</a>, we value exactly these kinds of people, capable not just of executing tasks but of creating new solutions.</p><p>The homemade sound card itself survives only in a phone photo, found while sorting through boxes during a move, but it taught me the main thing: the real magic of technology is born not at the moment of purchase, but at the moment when you yourself see an idea through to completion and hear the result of your work.</p><p>This feeling that you&#8217;ve made the impossible possible - that&#8217;s the main driver for any creator.</p><p>What&#8217;s your Covox moment? What technology did you build or hack together that changed how you think about creating versus consuming? I&#8217;d love to hear your stories.</p>]]></content:encoded></item><item><title><![CDATA[The Vertical AI Agent Opportunity.
And Why It Won’t Last Forever]]></title><description><![CDATA[I spent last week discussing AI use cases in pharma and CPG with our team at Customertimes.]]></description><link>https://maxvotek.com/p/the-vertical-ai-agent-opportunity</link><guid isPermaLink="false">https://maxvotek.com/p/the-vertical-ai-agent-opportunity</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Thu, 22 Jan 2026 18:50:22 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d83ef63d-8aac-49c2-b75c-164db7670a4a_1280x714.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I spent last week discussing AI use cases in pharma and CPG with our team at Customertimes. We hold strong positions in our niche, and we were first to solve certain problems using AI. But beyond pride in our success, I feel a low-grade tension that hasn&#8217;t let up for several weeks.</p><p>The reason: a new existential risk that few are talking about openly yet - universal long-horizon agents. These are horizontal AI systems not trained on our specific domain, but capable of working on a single task for hours or even days, self-correcting and seeing it through to completion.</p><p>Then I listened to a YC interview on vertical AI agents at Sergei Bulaev&#8217;s recommendation, and the pieces started clicking together. There&#8217;s a massive opportunity in vertical AI agents right now. But the window is narrower than most people think.</p><h2><strong>The Case for Vertical AI Agents</strong></h2><p>The YC interview makes a compelling argument: vertical AI agents can significantly outperform traditional SaaS solutions because they replace not just software, but entire teams of employees. This is a fundamentally different value proposition.</p><p>The best opportunities exist in sectors drowning in bureaucratic overhead: healthcare and financial services top the list. 
In these industries, compliance requirements don&#8217;t just add cost, they shape entire business models. In CPG and retail, the calculus is different but equally compelling: low margins will push management toward experiments and nonlinear productivity gains that they&#8217;d never consider in higher-margin businesses.</p><p>Early examples of successful implementations include customer support automation, debt collection, medical billing, and software testing. Market penetration is still under 1%, which points to enormous growth potential. The technology reminds me of early SaaS evolution: initial skepticism about capabilities gradually gives way to recognition of advantages. But we&#8217;re seeing substantial progress every three months now, not every few years.</p><p>The future belongs to narrowly specialized solutions focusing on complete workflow automation. This could lead to unicorn companies with just a dozen employees - a radical departure from traditional business scaling assumptions.</p><h2><strong>What the Data Actually Shows</strong></h2><p><a href="https://www.anthropic.com/research/economic-index-primitives">Anthropic just released</a> their Economic Index with new &#8220;economic primitives&#8221; that measure real Claude usage across millions of conversations. The data tells a more nuanced story than the hype suggests.</p><p>First, the good news for vertical specialists: Claude completes very different kinds of tasks in countries at different stages of economic development. In high-GDP countries, Claude is used primarily for work and personal tasks, while lower-income countries use it more for educational coursework. This fits an adoption curve where AI use diversifies toward personal purposes as countries get richer - and where domain expertise becomes more valuable as adoption matures.</p><p>The concentration data is striking: even with 3,000 unique work tasks on Claude.ai, the top ten account for 24% of usage, up from 21% in January 2025. 
Computer and mathematical tasks dominate - a third of all Claude.ai conversations and nearly half of API traffic.</p><p>But here&#8217;s where it gets interesting for vertical AI agents: the success rates vary dramatically by task complexity and time horizon. Claude successfully completes tasks requiring a college degree 66% of the time, compared to 70% for tasks requiring less than a high school education. More complex tasks see bigger speedups: tasks requiring college-level understanding are sped up by a factor of 12, versus 9x for high school-level tasks.</p><p>The time horizon data is the critical piece. METR&#8217;s benchmark shows Claude Sonnet 4.5 achieves 50% success rates on 2-hour tasks. Anthropic&#8217;s own API data shows 50% success at around 3.5 hours, and on Claude.ai, the duration extends to 19 hours. Users can break down complex tasks into smaller steps, creating a feedback loop that allows Claude to correct course, which is exactly how vertical AI agents work in practice.</p><h2><strong>The Cursor Browser: A Wake-Up Call</strong></h2><p><a href="https://www.anthropic.com/research/economic-index-primitives">Michael Truell, Cursor&#8217;s 25-year-old CEO, just demonstrated</a> exactly what I&#8217;ve been worried about. His team coordinated hundreds of GPT-5.2 agents to build a functional web browser from scratch in one week of uninterrupted operation. The result: 3 million lines of code across thousands of files, including a from-scratch Rust rendering engine with HTML parsing, CSS cascade, layout algorithms, text shaping, and a custom JavaScript virtual machine.</p><p>Truell&#8217;s candid assessment: &#8220;It kind of works.&#8221; Simple websites render quickly and largely correctly. It&#8217;s nowhere near production-ready - browsers like Chromium have over 35 million lines of code refined by expert teams over decades. But that&#8217;s not the point.</p><p>The point is the speed of progress. 
The Cursor team built this using a hierarchical multi-agent system - Planners, Workers, and Judges - that mirrors human software company organization. They successfully managed hundreds of agents collaborating on the same codebase for a week with minimal code conflicts. According to their blog post, they found that GPT-5.2 excels at maintaining focus and following instructions precisely over extended periods, while Claude Opus 4.5 tends to stop earlier and take shortcuts.</p><p>Building a browser kernel is traditionally compared in difficulty only to building an operating system. That an AI system could scaffold the basic architecture in a week suggests we&#8217;re entering new territory. Debating whether this represents the future of programming or just an impressive but impractical demonstration misses the real question: how long until it&#8217;s not just impressive, but competitive?</p><h2><strong>The Corporate Adoption Reality Check</strong></h2><p>Now for the skeptical part. In corporate America, AI agent adoption will move much slower than enthusiasts predict. I&#8217;ve watched enough enterprise transformations to know that management resistance, trust issues, and regulations create far more friction than technology limitations.</p><p>Take our pharma clients. Even when we demonstrate clear ROI from AI implementations, the path from pilot to production stretches months or years. Compliance requirements aren&#8217;t just checkboxes, they&#8217;re woven into every workflow, every approval chain, every documentation standard. You can&#8217;t just drop in an AI agent and declare victory.</p><p>But here&#8217;s what makes healthcare and financial services different: the bureaucracy itself is a massive cost center that directly impacts business models. When 30-40% of your operational costs come from compliance overhead and administrative work, suddenly the risk calculation shifts. 
The same dynamic plays out in CPG and retail, where thin margins force management to take bigger swings at productivity gains.</p><h2><strong>The Strategic Response</strong></h2><p>So where does this leave vertical AI specialists? I see two critical moves:</p><p><strong>First, build your vertical expertise into a systematic process for identifying high-value automation opportunities.</strong> At Customertimes, we&#8217;ve built technological practices with deep vertical specialization. This isn&#8217;t just domain knowledge, it&#8217;s a factory for mining inefficient processes where vertical AI agents can deliver immediate value. You need the expertise to know which processes are ripe for automation and which will require years of organizational change.</p><p><strong>Second, measure horizontal agent progress against your product religiously.</strong> This is the part most teams are avoiding because it&#8217;s uncomfortable. Take one of your strongest engineers and give them an ongoing task: use a long-horizon agent like Claude Code as an external competitor. Give it the same problem your product solves, but with minimal context.</p><p>My hypothesis: horizontal agents will rapidly learn from open data, find workarounds, and become increasingly accurate. Track this monthly. Not as a panic exercise, but as an early warning system that tells you when to shift strategy before you&#8217;re caught flat-footed.</p><p>The Anthropic data shows this is already happening. Tasks are getting more complex, success rates are improving, and time horizons are expanding. The 19-hour effective time horizon on Claude.ai today will be 50 hours in six months and 200 hours in a year. That&#8217;s not speculation, it&#8217;s the trajectory we&#8217;re seeing every quarter.</p><h2><strong>The Window Is Open, But Closing</strong></h2><p>There&#8217;s a genuine opportunity in vertical AI agents right now. 
The combination of domain expertise, workflow integration, and specialized training creates defensible value that horizontal systems can&#8217;t easily replicate. Companies like ours that already have vertical practices and client relationships are well-positioned to capitalize on this.</p><p>But the mistake would be assuming this advantage is permanent. The moats that seem reliable today can be crossed much faster than we think. Long-horizon agents are getting better at tasks that require extended focus and multiple iterations, exactly the territory where vertical specialists thought they were safe.</p><p>The danger isn&#8217;t that horizontal agents are better than vertical solutions today. The danger is in the speed of their progress. Every three months, we see capabilities that we thought were years away. The Cursor browser experiment isn&#8217;t impressive because it built a production-ready browser; it&#8217;s impressive because it showed what&#8217;s possible with sustained autonomous work over just one week.</p><p>My view: successful vertical AI companies will emerge from this period. They&#8217;ll be the ones that combine deep domain expertise with ruthless measurement of competitive threats. They&#8217;ll move fast to capture value while the window is open, but they&#8217;ll also be clear-eyed about when their advantage is eroding and what the next move needs to be.</p><p>The companies that fail will be the ones that convince themselves their domain moat is impenetrable, that generic AI &#8220;doesn&#8217;t understand our business,&#8221; that their proprietary data and trained models create permanent advantages. Those companies will wake up one day to find that horizontal agents have figured out their domain well enough to be competitive - and by then it will be too late to pivot.</p><p>AGI is coming for everyone, and long-horizon agents are its nearest harbinger. The question isn&#8217;t whether they&#8217;ll eventually match domain-specific solutions. 
The question is what you&#8217;re building while you still have time.</p>]]></content:encoded></item><item><title><![CDATA[About the irreversible risk of the Big Four]]></title><description><![CDATA[Until recently, AI at the Big Four lived in a sterile zone.]]></description><link>https://maxvotek.com/p/the-irreversible-risk-how-the-big</link><guid isPermaLink="false">https://maxvotek.com/p/the-irreversible-risk-how-the-big</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Fri, 16 Jan 2026 13:15:27 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/48575a3c-055b-4139-b761-30818b37b697_1200x675.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Until recently, AI at the Big Four lived in a sterile zone.</p><p>Innovation labs. Pilot projects. Beautiful slide decks for partners&#8217; offsite retreats. The technology was near the business, adjacent to it, occasionally touching it in controlled environments. And it wasn&#8217;t <em>inside</em> the machinery that generates billions in revenue and touches millions of compliance decisions.</p><p>In 2025, that glass wall shattered.</p><p>Deloitte, EY, KPMG, and PwC didn&#8217;t just experiment with agentic AI - they embedded it directly into their core operational processes. The places where mistakes mean <a href="https://fortune.com/2025/10/07/deloitte-ai-australia-government-report-hallucinations-technology-290000-refund/">real money</a>, reputation damage, and legal liability. Where a hallucination isn&#8217;t amusing, it&#8217;s a $290,000 refund to the Australian government.</p><p>Yes, that actually happened. 
More on that in a moment.</p><p>This is a story about what happens when institutions worth hundreds of billions collectively decide the risk of <em>not</em> adopting AI is greater than the risk of getting it wrong.</p><h2><strong>When Theory Became Practice</strong></h2><p>When <a href="https://www.deloitte.com/us/en/services/consulting/services/zora-generative-ai-agent.html">Deloitte rolled out Zora AI</a> to hundreds of thousands of employees globally, it stopped being an experiment. When EY announced plans to scale from 1,000 AI agents in development to 100,000 by 2028 - with 150 agents already supporting 80,000 tax professionals - that&#8217;s not a pilot. That&#8217;s infrastructure.</p><p><a href="https://maxvotek.com/p/the-100-page-prompt-that-changed?r=2m4r3n">KPMG built TaxBot with a 100-page prompt </a>crafted over months, consolidating partner-written tax advice that had been &#8220;stored all over the place&#8221; - often on individual laptops. The system now generates 25-page cross-border M&amp;A tax advice drafts in one day. Work that previously required two weeks.</p><p>Not a two-week job done a bit faster. One day <em>total</em>, down from fourteen.</p><p>PwC launched agent OS in March 2025 and deployed 25,000 intelligent agents across client operations by year&#8217;s end. Not chatbots. Not assistants. Agents that execute transactions, coordinate workflows, make decisions within defined parameters.</p><p>This is the Big Four accepting that AI agents are now part of their operating rhythm. Not someday. Now.</p><h2><strong>The $290,000 Reality Check</strong></h2><p>In October 2025, Deloitte Australia was caught with AI-generated errors in a government report they&#8217;d been paid $290,000 to produce. 
The 237-page document included references to non-existent academic research papers and a fabricated quote from a federal court judgment.</p><p>A Sydney University researcher named Chris Rudge caught it when he noticed a citation attributing a book to a professor that sounded &#8220;preposterous&#8221; for her area of expertise. He knew instantly it was either hallucinated or the world&#8217;s best-kept secret.</p><p>Deloitte reviewed the report, confirmed the errors, quietly published a revised version, and agreed to refund the final payment installment. The updated version now includes explicit disclosure that Azure OpenAI was used in its creation.</p><p>Here&#8217;s the part that matters: this was production work, client-facing deliverables, the kind of document that influences policy decisions.</p><p>And this is <em>exactly</em> what accepting the risk looks like in practice.</p><p>Australian Senator Barbara Pocock said what everyone was thinking: &#8220;The kinds of things that a first-year university student would be in deep trouble for.&#8221;</p><p>Deloitte didn&#8217;t stop using AI after this incident. Neither did any of the other firms. Because the alternative of watching competitors automate while you manually review every footnote is commercially untenable in 2025.</p><h2><strong>The Pyramid That Can&#8217;t Hold</strong></h2><p>I&#8217;ve written before about<a href="https://maxvotek.com/p/death-of-the-utilization-pyramid?r=2m4r3n"> the death of the utilization pyramid</a> - the economic model where Big Four firms built revenue on thousands of junior employees doing repeatable work at 75% billable utilization.</p><p>That pyramid didn&#8217;t collapse in 2025. But it definitely cracked.</p><p>When KPMG&#8217;s TaxBot compresses two weeks of tax analysis into 24 hours, what happens to the junior tax associates who would have spent those two weeks building Excel models and reviewing case law? 
When EY&#8217;s 150 tax agents handle data collection, document review, and compliance work for 80,000 professionals, where do the entry-level hires go?</p><p>The work is moving to agents. Not all of it, not yet, but enough that graduate hiring is slowing. Enough that the strongest specialists are leaving for places where AI gives them more leverage and velocity.</p><p>Why spend three years doing routine compliance work when you could be at a different firm, at a startup, or in-house, where you&#8217;re immediately designing and managing the agent systems that do the routine work?</p><p>The talent pipeline is fundamentally changing shape. The pyramid is becoming a column with a much wider top and a much narrower base.</p><h2><strong>The Economics Are Shifting, Quietly</strong></h2><p>Hourly billing is still alive. Technically. But underneath it, the economics are transforming.</p><p>Services are becoming platforms. People plus software plus data plus agents. The unit of delivery isn&#8217;t a body working forty hours a week anymore, it&#8217;s an outcome delivered by a hybrid team where some members are silicon and some are carbon-based.</p><p>PwC&#8217;s global commercial technology and innovation officer Matt Wood told Business Insider something revealing: in 2025, organizations fitted AI around existing workflows. In 2026, the work they&#8217;re doing is about &#8220;helping clients flip that model&#8221; - designing processes with AI in mind from the outset.</p><p>EY&#8217;s global managing partner for growth and innovation, Raj Sharma, said the power of AI agents is forcing his firm to reconsider its commercial model. Instead of charging based on hours and resources spent, they&#8217;re exploring &#8220;service-as-a-software&#8221; approaches where clients pay based on outcomes.</p><p>This is exactly what I described in the utilization pyramid series. 
When configuration, integration, testing, and documentation go from weeks to hours, time stops being a fair proxy for value. Pricing realigns to outcomes: revenue lift, cycle-time reduction, accuracy, SLAs.</p><p>The shift hasn&#8217;t fully arrived yet. But in 2025, the Big Four positioned themselves for it. They built the infrastructure. They trained the talent. They accepted the commercial risk of operating in this transitional period where they&#8217;re charging for hours while agents do more of the work.</p><p>This can&#8217;t last. The math doesn&#8217;t work. Within 18-24 months, we&#8217;ll see the commercial models start bending toward outcomes and platform economics, or we&#8217;ll see clients demanding significant hourly rate reductions to account for agent productivity.</p><h2><strong>Our Corner of the Universe</strong></h2><p>In our business - Salesforce implementation at Customertimes - a significant portion of work has always been fixed-price. We&#8217;ve always built accelerators: reusable components, configuration templates, deployment automation.</p><p>Now those accelerators include AI Factory capabilities for project setup, agent systems for documentation and testing, intelligent tools for data migration and validation.</p><p>For large clients, these tools still face mental resistance. Compliance concerns. Security reviews. IT architecture committees that want seventeen layers of approval before anything touches production data.</p><p>But that friction is decreasing. In late 2024, if you mentioned using AI for test data generation or documentation, you&#8217;d get weeks of security review. In late 2025, the question is more often &#8220;which tool are you using and how do we govern it?&#8221;</p><p>The infrastructure of trust is being built, project by project, contract by contract.</p><h2><strong>What Nobody&#8217;s Saying Clearly</strong></h2><p>AI didn&#8217;t kill the Big Four&#8217;s business model in 2025. 
It didn&#8217;t break smaller integrators either.</p><p>However, it knocked the whole industry off balance.</p><p>The hourly billing model still works today because we&#8217;re in a transitional period where:</p><ul><li><p>Clients haven&#8217;t yet figured out how to price outcomes properly</p></li><li><p>Firms can still justify rates based on expertise and brand while agents do more execution</p></li><li><p>The talent market hasn&#8217;t fully repriced to reflect that junior work is automating</p></li><li><p>Insurance and liability frameworks haven&#8217;t caught up to agent-driven work</p></li></ul><p>All four of these things are unstable. They&#8217;re resolving slowly, but they&#8217;re resolving in one direction: toward fewer bodies, more agents, outcome-based pricing, and new risk frameworks.</p><p>The firms that accepted this risk in 2025 did so because they understand the alternative is worse. If you wait until the commercial models are clear and the liability frameworks are settled, you&#8217;ve already lost. 
Your competitors are two years ahead in learning how to manage hybrid human-agent teams at scale.</p><h2><strong>The 2026 Race</strong></h2><p>Here&#8217;s what changed: the race is no longer about who implements AI fastest.</p><p>It&#8217;s about who learns to manage hybrid reality most effectively.</p><p>Who builds the trust infrastructure - the governance frameworks, the oversight mechanisms, the quality controls - that let clients feel confident in agent-driven work?</p><p>Who figures out the commercial models that don&#8217;t bankrupt their own firms while fairly pricing the value agents deliver?</p><p>Who develops the talent pipelines that produce professionals who can design, manage, and audit agent systems instead of doing the work agents now handle?</p><p>And critically: who navigates the inevitable mistakes - the hallucinations, the edge cases, the moments when agents confidently deliver wrong answers - without losing client trust or regulatory standing?</p><p>The Big Four crossed the Rubicon in 2025. They embedded agents into their core operations. They accepted the operational risk, the commercial uncertainty, and the reputational exposure.</p><p>They did this because standing on the shore watching was no longer an option.</p><p>The question for 2026 is whether the firms that committed to this transformation first will be the ones who figure out how to make it work reliably, profitably, and at scale.</p><p>Or whether they&#8217;ll spend the next two years debugging their bold commitments while clients get increasingly sophisticated about what they&#8217;ll pay for and what guarantees they expect.</p><p>The glass wall is broken. There&#8217;s no rebuilding it. 
The only direction is forward, into a hybrid reality that nobody has fully figured out yet.</p>]]></content:encoded></item><item><title><![CDATA[The Two Ways to Build Healthcare AI: Why OpenAI and Anthropic Made Opposite Bets on Patient Data]]></title><description><![CDATA[OpenAI and Anthropic just launched healthcare AI products within weeks of each other.]]></description><link>https://maxvotek.com/p/the-two-ways-to-build-healthcare</link><guid isPermaLink="false">https://maxvotek.com/p/the-two-ways-to-build-healthcare</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Tue, 13 Jan 2026 15:18:48 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0b936e59-e896-4fcd-942a-4694ecb8c818_1280x955.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>OpenAI and Anthropic just launched healthcare AI products within weeks of each other.</p><p>Same category. Same technology foundation. Opposite architectures.</p><p>ChatGPT Health pulls your medical records into OpenAI&#8217;s consumer app. You upload documents. Connect Apple Health. Share your history. Everything flows into their cloud. They store your &#8220;health memories&#8221; separately from regular chats.</p><p>For consumer plans - no Business Associate Agreement. Just Terms of Service.</p><p>Claude for Healthcare connects to clinical data inside the perimeter. Zero data retention, contractually guaranteed. The model queries CMS coverage databases, ICD-10, PubMed in real-time through Model Context Protocol. Data stays where it is. PHI never leaves the Virtual Private Cloud.</p><p>This isn&#8217;t a minor technical difference. 
It&#8217;s two fundamentally different theories about how AI should touch the most sensitive data humans generate.</p><h2><strong>Intelligence In, Not Data Out</strong></h2><p>Claude&#8217;s architecture inverts the traditional AI approach.</p><p>Most AI systems work by pulling data into a central repository, training or fine-tuning on it, then serving predictions back. This works fine for marketing content or customer service. It&#8217;s a compliance nightmare for healthcare.</p><p>Anthropic built Model Context Protocol specifically to avoid this. The AI connects to data sources: enterprise knowledge bases, clinical databases, coverage policies - and queries them in real-time. The model sees the data momentarily to answer a question, then the connection closes.</p><p>No data retention. No training. No persistent storage outside the health system&#8217;s infrastructure.</p><p>For enterprise deployments through AWS Bedrock, Google Cloud Vertex AI, or Azure OpenAI Service, the health system chooses where compute happens. The AI runs in their VPC. PHI doesn&#8217;t cross boundaries.</p><p>This is cloud-agnostic by design. No vendor lock-in. No requirement to move clinical data to a specific cloud provider.</p><p>OpenAI went the opposite direction.</p><p>ChatGPT Health is an aggregator. The value proposition is consolidation: bring all your health data into one place, let the AI see everything at once, and get personalized insights.</p><p>Your lab results from Quest. Your prescriptions from CVS. Your fitness data from Apple Watch. Your hospital discharge summaries. All in OpenAI&#8217;s infrastructure.</p><p>They&#8217;ve built separate storage for &#8220;health memories&#8221; and claim enhanced security. But the fundamental architecture is centralized aggregation. Your data lives in their cloud, under their Terms of Service.</p><p>For consumer accounts, which is what most people will use, there&#8217;s no BAA. OpenAI isn&#8217;t your business associate under HIPAA. 
You&#8217;re giving them your health data as a consumer, not as a patient receiving covered services.</p><h2><strong>The Geographic Tell</strong></h2><p>ChatGPT Health launched in the United States only.</p><p>EU, UK, Switzerland - explicitly excluded. Not &#8220;coming soon.&#8221; Not &#8220;rolling out later.&#8221; Excluded from launch.</p><p>This isn&#8217;t an accident.</p><p>GDPR Article 9 classifies health data as a special category requiring explicit consent and heightened protection. The centralized aggregation model - pull everything into our cloud, store it indefinitely, use it to improve our services - doesn&#8217;t comply.</p><p>OpenAI could probably build a GDPR-compliant version. They&#8217;d need separate infrastructure, different terms, clear data processing agreements, and demonstrated necessity for each use. But that&#8217;s not the product they built.</p><p>It&#8217;s built for the US market, where consumer health apps operate under FTC rules and state privacy laws, not HIPAA (unless they&#8217;re providing covered services).</p><p>Claude for Healthcare works globally. The enterprise architecture - data stays in your infrastructure, AI connects temporarily, zero retention - fits European data protection requirements.</p><p>This geographic split tells you everything about the two strategies.</p><h2><strong>What&#8217;s Actually Going On</strong></h2><p>OpenAI has 230 million weekly active users asking health questions.</p><p>That&#8217;s the distribution advantage. Millions of people are already using ChatGPT to interpret lab results, research symptoms, and understand diagnoses. They&#8217;re not waiting for their doctor to adopt AI, they&#8217;re bringing AI to their healthcare themselves.</p><p>ChatGPT Health formalizes this. Build the habit first with a free tier. Add premium features for $20/month. 
Once millions of patients have their health data aggregated in ChatGPT, you have leverage with health systems.</p><p>&#8220;Your patients are already using our AI. Want to integrate it properly? Here&#8217;s the enterprise offering.&#8221;</p><p>This is a consumer-first strategy. Own the patient relationship. Make health systems adapt to where patients already are.</p><p>Anthropic doesn&#8217;t have 230 million weekly users. Claude is growing, but it&#8217;s not consumer-default the way ChatGPT is.</p><p>So they&#8217;re entering through the back door: enterprise infrastructure.</p><p>Their partner Commure - a healthcare infrastructure company - estimates Claude&#8217;s pre-built skills for prior authorization review, claims appeals automation, and care triage from patient portals could save clinicians millions of hours annually.</p><p>These aren&#8217;t consumer features. These are workflow automation tools for health systems, payers, and pharma companies.</p><p>Prior authorization takes providers 13 hours per week on average. It&#8217;s pure administrative overhead: checking coverage policies, documenting medical necessity, appealing denials. Exactly the kind of structured, high-volume, rules-based work AI can handle.</p><p>Claims appeals run through similar workflows. Coverage policies change quarterly. Medical coding updates annually. Keeping track of which codes require which documentation for which payers is cognitive overhead that doesn&#8217;t require human judgment - it requires accurate retrieval and application of policies.</p><p>Claude connects to the authoritative sources in real-time. CMS coverage database for Medicare policies. Commercial payer guidelines through health plan APIs. ICD-10 and CPT codes from the official repositories.</p><p>This is boring infrastructure work. 
It&#8217;s also where billions of dollars of healthcare administrative costs live.</p><h2><strong>Two Theories of the Market</strong></h2><p>OpenAI is betting on the front door.</p><p>Patients are the entry point. They have the motivation - it&#8217;s their health. They have the data - it&#8217;s scattered across multiple systems, and they&#8217;re the only ones with access to all of it. They have the ability to pay - $20/month is less than one copay.</p><p>Build the consumer habit. Create the aggregated health record. Once patients expect AI-powered health insights, health systems will need to integrate or become the slow, frustrating alternative.</p><p>This works if:</p><ul><li><p>Patients trust OpenAI with their health data</p></li><li><p>The consumer experience is dramatically better than patient portals</p></li><li><p>Health systems eventually integrate rather than compete</p></li><li><p>Regulators don&#8217;t shut down the aggregation model</p></li></ul><p>Anthropic is betting on the plumbing.</p><p>Health systems are the entry point. They have the clinical data. They have the liability. They have the compliance requirements. They have the budget - healthcare IT spending is massive, and automation of administrative work has clear ROI.</p><p>Build enterprise infrastructure. Solve workflow problems. Make the AI indispensable to operations. Once health systems depend on your AI for prior auth, claims processing, and clinical documentation, you own critical infrastructure.</p><p>This works if:</p><ul><li><p>Health systems adopt fast enough to build defensible market position</p></li><li><p>Enterprise contracts generate enough revenue to compete with consumer scale</p></li><li><p>Clinical workflow automation proves more valuable than consumer convenience</p></li><li><p>Regulations favor data localization over aggregation</p></li></ul><p>Both could be right. 
Or both could be wrong.</p><h2><strong>For Pharma and Life Sciences, This Isn&#8217;t Academic</strong></h2><p>Your field reps capture HCP data every sales visit. Call notes, prescribing patterns, coverage challenges, formulary positions. That&#8217;s PHI if it includes patient-level information, even de-identified.</p><p>Your clinical trials process thousands of patient records. Inclusion/exclusion criteria checking. Adverse event monitoring. Protocol compliance verification. All require AI to scale efficiently.</p><p>Your patient support programs - copay assistance, adherence monitoring, nurse navigation - handle PHI daily. Every interaction generates data that could improve outcomes if analyzed properly.</p><p>The AI vendor you choose shapes your compliance posture.</p><p>Choose a vendor that pulls data into their cloud, and you need to:</p><ul><li><p>Verify their infrastructure meets your security requirements</p></li><li><p>Ensure BAAs cover all use cases</p></li><li><p>Monitor what they do with your data</p></li><li><p>Plan for vendor lock-in</p></li><li><p>Accept geographic restrictions</p></li></ul><p>Choose a vendor where data stays in your infrastructure, and you:</p><ul><li><p>Control where compute happens</p></li><li><p>Maintain data sovereignty</p></li><li><p>Keep multi-cloud optionality</p></li><li><p>Meet European requirements by default</p></li><li><p>Own the audit trail</p></li></ul><p>This isn&#8217;t just about HIPAA. It&#8217;s about GDPR, MDR (Medical Device Regulation), EU AI Act, and whatever comes next.</p><p>The architecture you choose today determines which regulations you can comply with tomorrow.</p><h2><strong>The Geographic Split I Keep Thinking About</strong></h2><p>The US might go consumer-first.</p><p>American healthcare is fragmented. Patients already act as their own care coordinators. They collect records from multiple providers. They research treatment options. 
They advocate for coverage.</p><p>A consumer AI that aggregates everything and helps navigate the system fits American healthcare&#8217;s reality.</p><p>Europe will almost certainly go enterprise-first.</p><p>European healthcare is more centralized. Electronic health records are more standardized. Data protection is stricter. The European Health Data Space regulation, applying from 2027, explicitly requires data minimization and purpose limitation.</p><p>An enterprise AI that queries authorized sources without moving data fits European regulatory philosophy.</p><p>This creates a strange situation: the same AI companies building for the same use cases will likely deploy completely different architectures depending on geography.</p><p>OpenAI might never launch consumer health aggregation in Europe. Anthropic might never need to - the enterprise model could be the only viable approach.</p><p>For global companies, this means managing two different AI strategies. Your US operations might use consumer-facing AI that patients bring to appointments. Your European operations might use enterprise AI that never touches patient devices.</p><p>The convergence everyone predicts - where AI seamlessly integrates consumer and clinical data - might not happen uniformly. It might fragment along regulatory boundaries.</p><h2><strong>What This Actually Means</strong></h2><p>We&#8217;re watching two different bets play out in real-time.</p><p>OpenAI is betting that convenience wins. That patients will trade data control for better insights. That regulatory frameworks will adapt to consumer demand. That health systems will integrate rather than compete.</p><p>Anthropic is betting that infrastructure wins. That health systems will pay for workflow automation. That regulations will favor data localization. That enterprise adoption creates defensible moats.</p><p>Both companies are playing to their strengths. OpenAI has consumer scale. 
Anthropic has enterprise trust.</p><p>The question isn&#8217;t which is better in some abstract sense. The question is: which architecture fits your risk profile, your regulatory environment, and your business model?</p><p>If you&#8217;re building consumer health tools, OpenAI&#8217;s aggregation model might be the only way to deliver the experience users expect.</p><p>If you&#8217;re operating clinical infrastructure, Anthropic&#8217;s zero-retention model might be the only way to satisfy compliance and security teams.</p><p>And if you&#8217;re doing both, which most healthcare companies eventually do, you might need both architectures, deployed differently depending on use case and geography.</p><p>The two ways to build healthcare AI aren&#8217;t complementary. They&#8217;re competing visions of what healthcare data architecture should look like.</p><p>One will probably win in the US. The other will probably win in Europe. And global healthcare companies will need to operate in both worlds simultaneously.</p><p>The question isn&#8217;t convenience or control anymore. 
It&#8217;s: which world are you building first?</p>]]></content:encoded></item><item><title><![CDATA[I Built a Truth Serum for AI Models (And Learned They Argue Like Consultants)]]></title><description><![CDATA[Over the weekend, I did something probably unnecessary but deeply satisfying: I built an app that runs the same question through 8 leading LLMs and makes them bet on each other&#8217;s answers.]]></description><link>https://maxvotek.com/p/i-built-a-truth-serum-for-ai-models</link><guid isPermaLink="false">https://maxvotek.com/p/i-built-a-truth-serum-for-ai-models</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Thu, 08 Jan 2026 15:58:18 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a43850e7-9098-457a-8261-dd6d73fa37e0_1280x708.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Over the weekend, I did something probably unnecessary but deeply satisfying: I built an app that runs the same question through 8 leading LLMs and makes them bet on each other&#8217;s answers.</p><p>Here&#8217;s how it works: I give all eight models - Claude 4.5, GPT-5.2, Gemini 2.5, Grok 4.1, DeepSeek V3, Qwen-Max, Perplexity, and Kimi K2 - the same question. They each answer it. But here&#8217;s the twist: I also ask each model to predict what the others would answer.</p><p>Then I run everything through Bayesian Truth Serum (BTS), a statistical method that finds the most likely true answer not by looking at what&#8217;s most popular, but by finding what&#8217;s surprisingly frequent - answers chosen more often than the models themselves predicted.</p><p>I threw 72 questions at them, covering everything from business strategy and SEO to medical ethics and philosophy. 
The Gemini 2.5 PRO bill made me wince, but what I learned about how these models think was worth it.</p><h2><strong>The Logic Is Beautiful (and Very Human)</strong></h2><p>The genius of BTS is that it exploits a fundamental truth about deception: lying is easy, but predicting how others will lie is nearly impossible.</p><p>Think about it. If I ask you a factual question you&#8217;re unsure about, you might guess wrong. But if I also ask you to predict what answer your colleague would give, and then what their colleague would give, suddenly the layers of uncertainty compound. The truth has internal consistency; lies don&#8217;t.</p><p>This is exactly how interrogators work, by the way. They don&#8217;t just ask &#8220;what happened?&#8221; They ask: &#8220;What do you think your partner told us? What would your boss say if we asked him? How would the security footage look?&#8221; Liars struggle with these nested predictions because they have to simulate increasingly complex false narratives.</p><p>We essentially forced the models not just to answer, but to place bets on each other. 
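</p><p>To make &#8220;surprisingly frequent&#8221; concrete, here is a minimal sketch of the scoring step. This is not the app&#8217;s actual code, and the answer labels and probabilities below are made up for illustration; it just shows how an answer chosen more often than the models predicted rises to the top:</p>

```python
import math

def bts_scores(answers, predictions):
    """Simplified Bayesian Truth Serum information scores.

    answers: each respondent's own answer (one option label per model).
    predictions: one dict per respondent mapping option -> the fraction of
    the other respondents it predicted would pick that option.
    Returns {option: log(actual frequency / geometric mean predicted frequency)}.
    The "surprisingly frequent" option, picked more often than predicted,
    gets the highest score.
    """
    options = sorted(set(answers))
    n = len(answers)
    eps = 1e-6  # floor to avoid log(0) for options nobody predicted
    scores = {}
    for opt in options:
        actual = answers.count(opt) / n
        # geometric mean of everyone's predicted frequency for this option,
        # computed as the average of the log-predictions
        log_pred = sum(math.log(max(p.get(opt, 0.0), eps)) for p in predictions) / n
        scores[opt] = math.log(max(actual, eps)) - log_pred
    return scores

# Toy run: 3 of 4 "models" answer A, yet all of them predicted B would dominate.
answers = ["A", "A", "A", "B"]
predictions = [{"A": 0.3, "B": 0.7}] * 4
scores = bts_scores(answers, predictions)
best = max(scores, key=scores.get)  # A is surprisingly frequent, so it wins
```

<p>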
And those bets revealed more than the answers themselves.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9tkS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87340108-b9ce-44b0-a477-f0d185e80ca9_1280x1219.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9tkS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87340108-b9ce-44b0-a477-f0d185e80ca9_1280x1219.heic 424w, https://substackcdn.com/image/fetch/$s_!9tkS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87340108-b9ce-44b0-a477-f0d185e80ca9_1280x1219.heic 848w, https://substackcdn.com/image/fetch/$s_!9tkS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87340108-b9ce-44b0-a477-f0d185e80ca9_1280x1219.heic 1272w, https://substackcdn.com/image/fetch/$s_!9tkS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87340108-b9ce-44b0-a477-f0d185e80ca9_1280x1219.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9tkS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87340108-b9ce-44b0-a477-f0d185e80ca9_1280x1219.heic" width="1280" height="1219" 
alt="" loading="lazy"></picture></div></a></figure></div><h2><strong>What I Found: AI Models Have Personalities (And Blind Spots)</strong></h2><p>After running 72 questions through this gauntlet, clear patterns emerged. The models sorted themselves into two distinct camps: the Consensus Builders and the Systematic Rebels.</p><p>The <strong>Consensus Builders</strong> - Gemini 2.5, Grok 4.1, Claude 4.5, DeepSeek V3, and Qwen-Max - consistently landed in the statistical sweet spot of &#8220;surprisingly frequent but not obviously popular.&#8221; These are your reliable experts. When they agree, you can usually trust them.</p><p>But each one gets there differently.</p><p><strong>Claude</strong> is the classic business analyst. Ask it anything and it immediately starts mapping structures, identifying risks, and building frameworks. 
It&#8217;s the consultant who walks into a meeting and starts drawing boxes and arrows on the whiteboard. Always cautious, always systematic, always asking &#8220;but have we considered the downside?&#8221;</p><p><strong>Grok</strong> is the aggressive practitioner. It loves numbers, ROI calculations, and cutting through bullshit. Where Claude builds frameworks, Grok builds spreadsheets. It&#8217;s the CFO who interrupts your strategy presentation to ask &#8220;okay but what&#8217;s the actual payback period here?&#8221;</p><p><strong>GPT-5.2</strong> is the systems architect. It thinks in platforms, ecosystems, and long-term infrastructure plays. Amazing for big-picture thinking, but sometimes too abstract when you need to know what to do tomorrow. It&#8217;s the person who responds to &#8220;we need to fix this bug&#8221; with &#8220;well, really we should rebuild our entire architecture.&#8221;</p><p>Then there are the <strong>Systematic Rebels</strong>: Perplexity, GPT-5.2 (yes, it plays both roles), and Kimi K2.</p><p>These are the models that consistently go against the grain. 
They&#8217;re the employees who, in every meeting, raise their hand and say: &#8220;Okay, but what if we&#8217;re looking at this completely wrong?&#8221;</p><p>And here&#8217;s the thing: you need them.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dUTR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb617bcf3-e589-4115-a473-5d135d4099cc_985x1280.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dUTR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb617bcf3-e589-4115-a473-5d135d4099cc_985x1280.heic 424w, https://substackcdn.com/image/fetch/$s_!dUTR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb617bcf3-e589-4115-a473-5d135d4099cc_985x1280.heic 848w, https://substackcdn.com/image/fetch/$s_!dUTR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb617bcf3-e589-4115-a473-5d135d4099cc_985x1280.heic 1272w, https://substackcdn.com/image/fetch/$s_!dUTR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb617bcf3-e589-4115-a473-5d135d4099cc_985x1280.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dUTR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb617bcf3-e589-4115-a473-5d135d4099cc_985x1280.heic" width="985" height="1280" 
alt="" loading="lazy"></picture></div></a></figure></div><h2><strong>Why Being Right Isn&#8217;t Enough</strong></h2><p>I wrote once about how being right is only half the battle. The real challenge starts when you need to convince others to accept your point of view, especially about strategy you&#8217;ve built from experience and deep knowledge.</p><p>Watching these models interact taught me something about this dynamic.</p><p>The Consensus Builders are usually right, but they&#8217;re right in <strong>predictable</strong> ways. They&#8217;ve internalized the same training data, the same patterns, the same conventional wisdom. They&#8217;re excellent at navigating known territory.</p><p>The Rebels? Their answers need double-checking. Sometimes they&#8217;re confidently wrong. 
But they&#8217;re also the ones who surface blind spots, who question the assumptions everyone else shares, who suggest the approach no one else considered.</p><p>In my experience, the most dangerous projects aren&#8217;t the ones where everyone disagrees - those force healthy debate. The dangerous ones are where everyone nods along to the same flawed assumption because it sounds reasonable and no one wants to be the contrarian.</p><p>This is why I always want at least one Perplexity in the room - someone constitutionally incapable of just going along with consensus.</p><h2><strong>The Practical Question: Which Model Should You Use?</strong></h2><p>Here&#8217;s where people always trip up. They want to know: which model is &#8220;the best&#8221;?</p><p>Wrong question.</p><p>The right question is: which model is best for this specific task?</p><p>If I&#8217;m building a business case that needs to survive CFO scrutiny: <strong>Grok</strong>. It&#8217;ll find the holes in my logic before the finance team does.</p><p>If I&#8217;m designing organizational change and need to map out all the stakeholders, risks, and contingencies: <strong>Claude</strong>. It won&#8217;t let me skip steps or ignore political realities.</p><p>If I&#8217;m thinking about platform strategy or long-term technical architecture: <strong>GPT-5.2</strong>. Just don&#8217;t expect tactical next steps.</p><p>But if I want to stress-test an idea? I run it past <strong>Perplexity</strong> or <strong>Kimi</strong>. They&#8217;ll poke holes I didn&#8217;t see coming. 
Half of what they say will be wrong, but the other half will save me from expensive mistakes.</p><h2><strong>The Real Lesson: Diversity of Thought Isn&#8217;t Optional</strong></h2><p>Running this experiment crystallized something I&#8217;ve learned over years of implementations: you cannot solve complex problems with a single perspective, no matter how &#8220;right&#8221; that perspective is.</p><p>I&#8217;ve seen enterprise software selections where everyone chose the &#8220;obviously correct&#8221; platform, and then spent two years fighting it because no one asked the contrarian question: &#8220;What if our process doesn&#8217;t actually fit this model?&#8221;</p><p>The Bayesian Truth Serum approach works because it exploits something fundamental: truth has a different statistical signature than consensus. Sometimes the right answer is surprising. Sometimes it contradicts what the experts agree on.</p><p>But you only find it if you&#8217;re willing to collect multiple perspectives, including the uncomfortable ones, and then apply rigorous thinking to sort signals from noise.</p><h2><strong>How to Build Your Own Truth Serum</strong></h2><p>You don&#8217;t need to code an app or pay for Gemini 2.5 PRO API calls to use this principle.</p><p>Here&#8217;s what actually works:</p><p><strong>1. Ask multiple models the same question.</strong> Not as a way to pick the &#8220;best&#8221; answer, but to understand the range of perspectives. Where do they agree? Where do they diverge? Why?</p><p><strong>2. Pay special attention to the outliers.</strong> When one model gives you a radically different answer, don&#8217;t dismiss it. Dig into <em>why</em> it&#8217;s different. Sometimes it&#8217;s wrong. Sometimes it saw something the others missed.</p><p><strong>3. Force models to explain each other&#8217;s reasoning.</strong> Take Claude&#8217;s answer and ask GPT: &#8220;Why might someone give this response? What assumptions are they making?&#8221; Then reverse it. 
This is the manual version of what BTS does automatically.</p><p><strong>4. Use consensus as a starting point, not an endpoint.</strong> When all models agree, you&#8217;ve probably found conventional wisdom. Which is often correct! But not always. The consensus said blockchain would revolutionize everything, remember?</p><p><strong>5. Keep your rebels close.</strong> Whether it&#8217;s an AI model or a human colleague, the person who consistently disagrees with everyone else is either a fool or seeing something important. Your job is to figure out which.</p><h2><strong>The Meta-Lesson</strong></h2><p>The most interesting finding from my weekend experiment wasn&#8217;t about AI models at all.</p><p>It was watching how my own confirmation bias kicked in. When the models agreed with my existing views, I immediately thought &#8220;see, the AI gets it.&#8221; When they disagreed, my first instinct was &#8220;well, the models don&#8217;t have full context.&#8221;</p><p>This is exactly what the BTS method is designed to counteract. It removes my judgment from the equation and asks: statistically, based purely on the pattern of answers and predictions, what&#8217;s most likely true?</p><p>Sometimes the answer aligns with what I wanted to hear. Sometimes it doesn&#8217;t.</p><p>Both are valuable. But the second type is more valuable, because it&#8217;s the only kind that actually teaches you something.</p><h2><strong>So What Am I Going to Do With This?</strong></h2><p>For now, I&#8217;m using this setup as a personal advisory board. Before I publish something, before I make a recommendation to a client, before I commit to a strategy - I run it through the gauntlet.</p><p>Eight different perspectives, forced to bet on each other, statistically analyzed for truth content.</p><p>Is it perfect? No. Will it stop me from being wrong? 
Definitely not.</p><p>But it will make sure that when I&#8217;m wrong, it&#8217;s because I ignored multiple warning signs, not because I never asked the question in the first place.</p><p>And in a world where being confidently wrong has never been easier, that feels like progress.</p><div><hr></div><p><em>If you&#8217;re interested in the technical details of the Bayesian Truth Serum method, or want to see the full breakdown of how different models performed across question categories, let me know.<br>Here is my <a href="https://www.linkedin.com/in/max-votek/">LinkedIn</a></em></p>]]></content:encoded></item><item><title><![CDATA[The Shadow AI Revolution in Medicine: What 1,000+ Physicians Really Think]]></title><description><![CDATA[Here&#8217;s something that should worry every hospital administrator: 67% of physicians are using AI daily in their practice.]]></description><link>https://maxvotek.com/p/the-shadow-ai-revolution-in-medicine</link><guid isPermaLink="false">https://maxvotek.com/p/the-shadow-ai-revolution-in-medicine</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Tue, 06 Jan 2026 18:01:08 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/532bb7f4-f0d4-4699-adee-70a968633393_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Here&#8217;s something that should worry every hospital administrator: 67% of physicians are using AI daily in their practice. And most of them are paying for it themselves.</p><p>We&#8217;re witnessing a classic case of bottom-up innovation in one of the world&#8217;s most conservative industries. While healthcare organizations hold endless meetings about AI governance frameworks, their physicians have already moved on. 
They&#8217;re using personal ChatGPT accounts, paying for Claude subscriptions with their own credit cards, and building workflows around tools their employers don&#8217;t even know exist.</p><p><a href="https://2025-physicians-ai-report.offcall.com/">The 2025 Physicians AI Report </a>surveyed over 1,000 physicians across 106 specialties, and the data tells a story I&#8217;ve seen play out in every industry I&#8217;ve worked with: the gap between what frontline professionals need and what organizations deliver creates a shadow market. In medicine, that market is already massive.</p><h2><strong>The Numbers</strong></h2><p>Here&#8217;s the paradox that should grab your attention: 84% of physicians say AI makes them better at their jobs. 78% believe it improves patient health. 42% say AI adoption makes them more likely to stay in medicine.</p><p>And yet, 81% are dissatisfied with how their employers handle AI.</p><p>That&#8217;s not a technology problem. That&#8217;s a trust problem.</p><p>The issue isn&#8217;t adoption, physicians are already there. The issue is control. 71% of physicians report having little to no influence on which AI tools their organizations adopt. Nearly half say their employer&#8217;s communication about AI is poor. And 89% believe they should receive dedicated funding for AI tools (most want between $500-1000 annually, though some are asking for $10,000+).</p><p>Think about what this means: physicians trust AI enough to pay for it themselves, but they don&#8217;t trust their organizations to choose the right tools or deploy them properly.</p><h2><strong>Why OpenEvidence Wins</strong></h2><p>The fragmented tool landscape tells you everything about where we are. The report identified 71 unique AI tools in active use. 
OpenEvidence leads at 37%, but that means 63% of the market is scattered across dozens of other solutions.</p><p>What makes OpenEvidence the leader isn&#8217;t sophisticated technology&#8212;it&#8217;s physician verification and focus on vetted sources. In an industry where the cost of error can be measured in lives, trust infrastructure matters more than features.</p><p>But here&#8217;s what&#8217;s interesting: ChatGPT comes in second at 15.6%. A general-purpose tool that physicians have adapted to their specific needs, despite it having no medical specialization whatsoever. This should tell you something about the power of flexibility and user control versus purpose-built solutions that don&#8217;t actually solve the right problems.</p><p>The specialized tools that are succeeding - Abridge (4.9%), DAX Copilot, Heidi, Freed - share a common characteristic: they solve one very specific problem really well. Usually documentation.</p><h2><strong>The Documentation Problem Nobody Wants to Talk About</strong></h2><p>Ask physicians what they want from AI and the answer is embarrassingly unglamorous: eliminate paperwork.</p><p>65% named documentation and administrative burden automation as their top priority. That&#8217;s one and a half times more important than clinical decision support (43%). The dream isn&#8217;t diagnostic AI or predictive analytics. The dream is &#8220;give me back the hours I lost to inbox management.&#8221;</p><p>This reminds me exactly of what I see in pharma and manufacturing. The most successful implementations are the ones that remove friction from daily work. Abridge&#8217;s value proposition is almost comically simple: put down your phone, record the appointment, get a structured medical record. But it solves a massive pain point.</p><p>The gap between what AI vendors pitch and what physicians actually need is enormous. 
While companies demo sophisticated diagnostic algorithms, physicians are drowning in administrative tasks that AI could handle today.</p><h2><strong>The Control Problem</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ux4D!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e50d040-da9a-4e2c-8139-7f05df17ac92_1600x1242.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ux4D!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e50d040-da9a-4e2c-8139-7f05df17ac92_1600x1242.png 424w, https://substackcdn.com/image/fetch/$s_!Ux4D!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e50d040-da9a-4e2c-8139-7f05df17ac92_1600x1242.png 848w, https://substackcdn.com/image/fetch/$s_!Ux4D!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e50d040-da9a-4e2c-8139-7f05df17ac92_1600x1242.png 1272w, https://substackcdn.com/image/fetch/$s_!Ux4D!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e50d040-da9a-4e2c-8139-7f05df17ac92_1600x1242.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ux4D!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e50d040-da9a-4e2c-8139-7f05df17ac92_1600x1242.png" width="1456" height="1130" 
alt="" loading="lazy"></picture></div></a></figure></div><p>Here&#8217;s where it gets really interesting. The physicians demanding more influence aren&#8217;t fresh graduates experimenting with new tech. They&#8217;re experienced professionals with 15-20 years of practice, working across all settings: hospitals, private practice, academic medicine.</p><p>These are people who know what works. And they&#8217;re being systematically excluded from decisions about which tools they&#8217;ll be required to use.</p><p>67% say having more influence over AI tool selection would increase their job satisfaction. When physicians choose their own AI tools, 95% of their colleagues have neutral or positive reactions. But when tools are imposed from above? That number drops dramatically.</p><p>Listen to what physicians actually say: &#8220;Despite physician productivity gains from AI tools, physicians will have a higher patient volume to care for with no proportional increase in compensation. 
The C-suite is incentivized to use AI as a cost-cutting strategy.&#8221;</p><p>Another: &#8220;The most sophisticated AI will end up in the hands of third-party payers and bureaucracy, not physicians.&#8221;</p><p>They&#8217;re not afraid AI will make them obsolete. They&#8217;re afraid AI will be used to extract more work from them while benefiting everyone except the physicians and patients.</p><h2><strong>The Shadow IT Problem in Healthcare</strong></h2><p>The scale of shadow AI usage should be a wake-up call. Physicians are paying personal subscription fees to ChatGPT, Claude, Grok, and Perplexity. They&#8217;re routing patient information through tools their compliance departments would have heart attacks over if they knew.</p><p>Why? Because the alternative is worse. The officially sanctioned tools are either non-existent, inadequate, or selected by administrators who&#8217;ve never treated a patient.</p><p>This exact pattern played out in enterprise software for years. Employees used personal Dropbox accounts because IT couldn&#8217;t provision file sharing fast enough. They bought their own SaaS subscriptions because the approved vendor list was three years out of date. Eventually, smart organizations realized they needed to meet their employees where they were, not where the procurement process wanted them to be.</p><p>Healthcare is having that same reckoning right now. The difference is the stakes are higher. 
Patient privacy, regulatory compliance, and clinical outcomes are all in play.</p><h2><strong>What Physicians Actually Want</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5dG7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf7f9393-7848-47aa-9701-28a64a8b0924_1600x927.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5dG7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf7f9393-7848-47aa-9701-28a64a8b0924_1600x927.png 424w, https://substackcdn.com/image/fetch/$s_!5dG7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf7f9393-7848-47aa-9701-28a64a8b0924_1600x927.png 848w, https://substackcdn.com/image/fetch/$s_!5dG7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf7f9393-7848-47aa-9701-28a64a8b0924_1600x927.png 1272w, https://substackcdn.com/image/fetch/$s_!5dG7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf7f9393-7848-47aa-9701-28a64a8b0924_1600x927.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5dG7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf7f9393-7848-47aa-9701-28a64a8b0924_1600x927.png" width="1456" height="844" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bf7f9393-7848-47aa-9701-28a64a8b0924_1600x927.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:844,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5dG7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf7f9393-7848-47aa-9701-28a64a8b0924_1600x927.png 424w, https://substackcdn.com/image/fetch/$s_!5dG7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf7f9393-7848-47aa-9701-28a64a8b0924_1600x927.png 848w, https://substackcdn.com/image/fetch/$s_!5dG7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf7f9393-7848-47aa-9701-28a64a8b0924_1600x927.png 1272w, https://substackcdn.com/image/fetch/$s_!5dG7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf7f9393-7848-47aa-9701-28a64a8b0924_1600x927.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Strip away all the noise and physicians are asking for four things:</p><p><strong>Control.</strong> Not complete autonomy, but meaningful input into which tools they&#8217;ll use daily. They want to be part of the decision, not just informed after the fact.</p><p><strong>Transparency.</strong> Where does the AI come from? How does it work? What are its limitations? Who&#8217;s liable when it&#8217;s wrong? These aren&#8217;t unreasonable questions.</p><p><strong>Time back.</strong> Automate the bureaucratic nonsense so they can focus on patients. 
One physician put it perfectly: &#8220;AI smooths out the clunkiness of present-day EMR systems and restores the physician-patient relationship.&#8221;</p><p><strong>Alignment of incentives.</strong> If AI makes physicians more productive, that benefit should be shared, not just captured by administrators to increase patient volume without increasing compensation.</p><p>When you read through the hundreds of physician comments in the report, a theme emerges: they&#8217;re not asking for the moon. They want tools that solve real problems, transparency about how those tools work, and a voice in decisions that affect their daily practice.</p><h2><strong>The Classic Innovation Adoption Pattern</strong></h2><p>We&#8217;re watching a textbook example of how innovation penetrates conservative industries.</p><p>First, individual practitioners experiment on their own. They find tools that work and share them with trusted colleagues. Use spreads through informal networks faster than official channels.</p><p>Second, organizations notice the gap between official policy and actual practice. Some adapt. Others double down on control and drive the shadow usage further underground.</p><p>Third, regulatory and competitive pressure forces broader adoption. But by this point, the organizations that moved early have already captured the benefits and built the institutional knowledge.</p><p>Healthcare is somewhere between stage one and stage two right now. 67% daily usage means we&#8217;re past the early adopter phase. But 81% dissatisfaction with organizational approach means most healthcare organizations haven&#8217;t figured out how to properly support and channel that adoption.</p><p>The question isn&#8217;t whether AI will transform medicine. Physicians have already answered that - they&#8217;re using it daily and report it makes them better at their jobs. 
The question is whether organizations will adapt to this reality or whether the gap will keep growing until something breaks.</p><h2><strong>What Needs to Happen</strong></h2><p>The path forward for organizations is to:</p><p><strong>Give physicians funding and agency.</strong> Start with modest stipends ($500-1000/year) that physicians can use for AI tools they find valuable. Track what they choose. Learn from their decisions.</p><p><strong>Include physicians in procurement.</strong> Not a token representative on a committee. Actual working physicians with a meaningful voice in which enterprise tools get adopted.</p><p><strong>Focus on documentation and administrative relief.</strong> These aren&#8217;t sexy AI applications, but they deliver immediate ROI in physician satisfaction and time saved. Build from there.</p><p><strong>Accept that the landscape will be fragmented.</strong> Different specialties have different needs. Emergency medicine physicians need different tools than radiologists. Stop trying to force everyone onto a single platform.</p><p><strong>Move faster.</strong> The 81% dissatisfaction rate exists because organizations are too slow. By the time a tool makes it through procurement, physicians have already found and adopted three alternatives on their own.</p><p>In the end, it comes down to healthcare organizations trusting the clinical judgment of their physicians enough to give them agency over their own tools. Every day that gap exists, the shadow AI market grows larger, more entrenched, and harder to bring back under proper governance.</p><p>The physicians are ready. The technology exists. 
The question is whether healthcare administration will move fast enough to close the gap before it becomes unbridgeable.</p>]]></content:encoded></item><item><title><![CDATA[Brooks’s Law in Reverse: How Four Engineers and an AI Built a Million-User App in 28 Days]]></title><description><![CDATA[&#8220;Adding people to a late software project makes it later.&#8221;]]></description><link>https://maxvotek.com/p/brookss-law-in-reverse-how-four-engineers</link><guid isPermaLink="false">https://maxvotek.com/p/brookss-law-in-reverse-how-four-engineers</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Mon, 29 Dec 2025 18:47:09 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1fccafff-fc29-4bb5-9ebe-ff1271d4c1c5_554x334.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p>&#8220;Adding people to a late software project makes it later.&#8221;</p></blockquote><p>Fred Brooks wrote this in 1975, and it became one of the iron laws of software development. When you&#8217;re trying to ship something complex under deadline, throwing more engineers at the problem usually backfires. Communication overhead multiplies. Tasks fragment. Integration gets messy. Nine women can&#8217;t have a baby in one month.</p><p>For fifty years, this has been gospel in software engineering. <a href="https://openai.com/index/shipping-sora-for-android-with-codex/">And then OpenAI shipped Sora for Android in 28 days with four engineers.</a></p><p>Number one in Google Play on launch day. 99.9% crash-free rate. Over a million downloads in the first weeks. Built from scratch to production by a team you could fit in a sedan.</p><p>The secret? They didn&#8217;t break Brooks&#8217;s Law. They understood it better than most people who quote it.</p><h2><strong>The Experiment Nobody Expected</strong></h2><p>When Sora exploded on iOS, the pressure to ship Android was immediate. Users were pre-registering on Google Play by the thousands. 
The company had only a small internal prototype. The timeline was aggressive - ship in a month or miss the momentum.</p><p>Traditional approach: assemble a larger team, divide the work, coordinate it, and hope the integration doesn&#8217;t kill you at the end. That&#8217;s what Brooks&#8217;s Law warns against. More people means exponentially more communication paths, more coordination overhead, more opportunities for things to go wrong.</p><p>OpenAI went the other direction. Four engineers. But each engineer got something that fundamentally changed the equation: Codex.</p><p>They consumed roughly 5 billion tokens over those 28 days. At current pricing, that&#8217;s somewhere between $15,000 and $75,000 in API costs. For a production app that hit number one in its category, that&#8217;s absurdly cheap. But the real story is what those four engineers were actually doing.</p><h2><strong>The First Failed Experiment</strong></h2><p>Here&#8217;s a detail most summaries skip: they tried the obvious thing first. They pointed Codex at the iOS codebase and told it to build the Android version autonomously. Let it run for twelve hours straight.</p><p>The result? Patrick Hum, one of the engineers, said it delivered something that &#8220;certainly wasn&#8217;t anything that we could show anybody.&#8221;</p><p>This is important. The fully autonomous approach didn&#8217;t work. The AI alone, even with access to the entire iOS codebase, couldn&#8217;t figure out what to build or how to structure it properly.</p><p>So they spent the first week doing something that sounds inefficient but turned out to be critical: writing code by hand. Not the app code, the architecture code. The patterns. The conventions. The examples of how things should be done.</p><p>They created what they called a &#8220;context-rich environment.&#8221; Text files documenting best practices. Exemplar features showing the right way to structure components. 
AGENT.md files that Codex could read to understand team standards.</p><p>This is the part that fascinates me. They treated Codex like what it actually is: a newly hired senior engineer who&#8217;s technically skilled but knows nothing about your specific project, your architecture preferences, or your product vision.</p><h2><strong>From Lines of Code to Systems of Code</strong></h2><p>Most teams today use AI for what I&#8217;d call &#8220;incremental assistance.&#8221; GitHub Copilot suggests the next line. Speed increases by maybe 20-30%. It&#8217;s helpful but not transformative.</p><p>OpenAI did something different. They used Codex for entire features, entire subsystems, entire architectural layers. Four large, intensive sessions where they&#8217;d lay out the architecture, explain the product logic, and then let Codex generate whole blocks of the application.</p><p>Codex wrote approximately 85% of the code. But what were the humans doing?</p><p>They were setting architectural boundaries. Explaining product requirements. Checking edge cases. Making judgment calls about tradeoffs. Reviewing what Codex produced. Deciding what to build next.</p><p>The role changed from &#8220;person who types code&#8221; to &#8220;person who designs systems and evaluates implementations.&#8221;</p><p>This is exactly what I keep seeing across industries. The valuable work shifts up the abstraction ladder. In manufacturing, you stop worrying about whether the weld is at the right temperature and start worrying about whether you&#8217;re welding the right thing in the right place. In software engineering, you stop worrying about syntax and start worrying about architecture.</p><h2><strong>The &#8220;Not Crazy Enough&#8221; Problem</strong></h2><p>Physicist Niels Bohr supposedly said: &#8220;Your theory is crazy, but not crazy enough to be true.&#8221;</p><p>Most companies using AI today aren&#8217;t crazy enough. 
They&#8217;re using it safely, incrementally, in ways that don&#8217;t fundamentally challenge how they work. Copilot for autocompletion. ChatGPT for documentation. AI as assistant, not as collaborator.</p><p>This is rational risk management. It&#8217;s also why they&#8217;re not seeing transformative results.</p><p>OpenAI went genuinely crazy with it. They let AI write the system - actually generate the bulk of the implementation, not just help write it or suggest improvements. That&#8217;s a level of trust that most organizations won&#8217;t accept, and most engineering teams aren&#8217;t prepared for.</p><p>But here&#8217;s what made it work: they were crazy in a structured way. They didn&#8217;t just turn Codex loose and hope. They built guardrails. They documented patterns. They created feedback loops. They ran multiple Codex sessions in parallel and coordinated the outputs.</p><p>Patrick Hum said they essentially ran four engineers like sixteen. Each engineer managed multiple Codex instances simultaneously, working on different features in parallel. And those sixteen &#8220;virtual engineers&#8221; were arguably more effective than sixteen humans would have been, because you didn&#8217;t have to align them all around a shared vision, they all read from the same documented architecture.</p><p>This is Brooks&#8217;s Law in reverse. Instead of adding people and multiplying coordination costs, they multiplied force per person without multiplying coordination costs.</p><h2><strong>What Actually Broke</strong></h2><p>Let&#8217;s talk about what didn&#8217;t work, because this is where you learn the real lessons.</p><p>Codex, left unguided, would drift on architecture. It would leak logic into the UI layer. It would solve immediate problems in ways that created long-term technical debt. Its instinct, as the team put it, is &#8220;to get something working, not to prioritize long-term cleanliness.&#8221;</p><p>Sound familiar? 
That&#8217;s exactly what junior engineers do. And it&#8217;s why you need senior engineers to review their work and maintain architectural discipline.</p><p>The solution wasn&#8217;t to stop using Codex; it was robust patterns, exemplar features, and constant review. Same as you&#8217;d do with human junior engineers, except Codex works 24/7 and doesn&#8217;t get offended when you reject its PRs.</p><p>The bottleneck shifted. Instead of &#8220;how fast can we write code,&#8221; it became &#8220;how fast can we make decisions, give feedback, and integrate changes.&#8221; The constraint moved from execution to judgment.</p><p>This is the pattern I keep seeing. When you properly leverage AI, the bottleneck moves upstream. It moves from doing to deciding. From implementation to design. From execution to strategy.</p><h2><strong>The Real Cost of This Approach</strong></h2><p>Here&#8217;s what this model demands that traditional development doesn&#8217;t:</p><p><strong>Clarity of thinking.</strong> You can&#8217;t hand Codex a vague idea and get something useful. You need to articulate exactly what you want and why. This is hard. Most people don&#8217;t actually know what they want until they see what they don&#8217;t want.</p><p><strong>Architectural discipline.</strong> When humans write code slowly, architectural mistakes reveal themselves gradually. When AI generates code quickly, bad architecture creates massive technical debt almost immediately. You need stronger upfront design.</p><p><strong>Constant review.</strong> You can&#8217;t just merge what Codex produces. You have to read it, understand it, and verify it does what you intended. The volume is higher, so this is actually harder than reviewing human code.</p><p><strong>Systemic thinking.</strong> You&#8217;re not managing people who can push back on bad ideas. You&#8217;re managing agents that will implement whatever you tell them to. 
If your system design is flawed, you&#8217;ll build the wrong thing very quickly.</p><h2><strong>Why This Isn&#8217;t Happening Everywhere</strong></h2><p>If four engineers can do this, why isn&#8217;t every company working this way?</p><p>First, organizational inertia. Most companies have processes built around traditional development. Code review workflows designed for human PRs. QA processes that assume slower change velocity. Management structures that equate headcount with capacity.</p><p>Second, skill mismatch. The engineers who succeed in this model aren&#8217;t necessarily the same engineers who succeed in traditional development. You need people who are good at architecture, good at articulation, good at review. People who can work at a higher abstraction level. That&#8217;s a different skill set than &#8220;good at implementing features.&#8221;</p><p>Third, risk tolerance. Most organizations won&#8217;t accept a model where AI generates 85% of their production code. The liability concerns alone would kill it in legal review. Never mind the cultural resistance from engineering teams who see this as threatening their expertise.</p><p>Fourth, and this is the one nobody talks about: it requires you to actually know what you&#8217;re building. When development is slow, you can figure it out as you go. When development is fast, unclear requirements become obvious immediately. A lot of organizations don&#8217;t actually have clear product vision, and fast development would expose that.</p><h2><strong>What I&#8217;m Doing With This</strong></h2><p>As for me, this isn&#8217;t theoretical. I already run two separate servers, each running multiple Claude Code sessions simultaneously. That&#8217;s the only way I can maintain the pace and complexity I need for my experiments.</p><p>It&#8217;s not about coding faster. It&#8217;s about testing ideas in parallel. 
I can spin up one session to explore approach A, another for approach B, a third to gather supporting data, and a fourth to analyze results. By the end of the day, I know which direction works. That used to take a week.</p><p>The workflow is genuinely different. I&#8217;m not writing code linearly anymore. I&#8217;m orchestrating multiple parallel exploration threads, synthesizing results, making decisions about which paths to pursue.</p><h2><strong>The Uncomfortable Question</strong></h2><p>If four engineers with AI can do what used to require sixteen engineers without it, what happens to the other twelve?</p><p>The optimistic answer: they work on different problems. The constraint shifts from &#8220;how much engineering capacity do we have&#8221; to &#8220;how many valuable problems have we identified.&#8221; If you have four engineers who can execute like sixteen, you should be finding more problems for them to solve, not firing twelve engineers.</p><p>The realistic answer: most organizations will try to do the same work with fewer people. That&#8217;s what always happens when productivity tools improve. Some of those displaced engineers will find work on new problems. Some won&#8217;t.</p><p>This is why I keep saying: the main question is how to use tools in ways that create value rather than just cutting costs. Companies that use AI purely for headcount reduction will find themselves competing against companies that use AI to expand what&#8217;s possible.</p><h2><strong>What Success Looks Like</strong></h2><p>The OpenAI example shows what&#8217;s possible when you commit fully to a new model rather than bolting AI onto an old one.</p><p>Four engineers. 28 days. 
A production app that hit number one in its category and maintained a 99.9% crash-free rate.</p><p>But notice what made it work: architectural discipline, clear documentation, constant review, and engineers who could work at a higher level of abstraction than traditional development requires.</p><p>The deeper point is that engineering work itself fundamentally changes. Less time typing, more time thinking. Less time implementing, more time architecting. Less time debugging syntax, more time designing systems.</p><p>If your approach to AI feels comfortable and doesn&#8217;t create any internal resistance, it&#8217;s probably already outdated. If it feels too bold, maybe uncomfortable, you might be looking at the future.</p>]]></content:encoded></item><item><title><![CDATA[The Storytelling Gold Rush: Why Every Company Suddenly Needs a Human Voice]]></title><description><![CDATA[I came across something revealing in the Wall Street Journal recently: the market is being flooded with &#8220;storyteller&#8221; job postings.]]></description><link>https://maxvotek.com/p/the-storytelling-gold-rush-why-every</link><guid isPermaLink="false">https://maxvotek.com/p/the-storytelling-gold-rush-why-every</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Mon, 22 Dec 2025 19:33:16 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/21d601e6-a617-442a-9ea5-e83a06f63c8a_1400x933.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="https://www.wsj.com/articles/companies-are-desperately-seeking-storytellers-7b79f54e">I came across something revealing in the Wall Street Journal </a>recently: the market is being flooded with &#8220;storyteller&#8221; job postings. Not copywriters. Not PR managers. Storytellers.</p><p>Google, Microsoft, Notion, fintech startups - they&#8217;re all suddenly hiring people who can tell stories. LinkedIn data shows these postings have doubled in the past year alone. 
More than 50,000 marketing positions and over 20,000 communications roles now use the term. Executive mentions of &#8220;storytelling&#8221; on earnings calls jumped from 147 in 2015 to 469 this year.</p><p>On the surface, this looks like another Silicon Valley rebranding exercise - remember when everyone needed &#8220;growth hackers&#8221; and &#8220;ninja developers&#8221;? But dig deeper and you see something more fundamental happening.</p><h2><strong>The Distribution Crisis Nobody Talks About</strong></h2><p>Here&#8217;s what&#8217;s really going on: the old publicity model is dead, and companies are scrambling.</p><p>For years corporate communications worked through a simple pipeline. You&#8217;d write a press release, pitch it to journalists, and if you were lucky, get coverage in trade publications or mainstream media. That coverage gave you credibility. Third-party validation. A way to reach customers without looking like you were selling.</p><p>That pipeline has collapsed. The US had roughly 66,000 journalists in the 2000s. Today it&#8217;s closer to 49,000. Print circulation is down around 70% since 2005. Traffic to major newspaper sites has dropped more than 40% in just four years.</p><p>Meanwhile, companies have gained something they never had before: direct distribution. Social media accounts. YouTube channels. Newsletters. Podcasts. The ability to publish without intermediaries.</p><p>But here&#8217;s the problem: most companies have no idea how to actually use these channels. They&#8217;re still thinking in press releases and quarterly campaigns, just distributed through new platforms.</p><h2><strong>Why &#8220;Storyteller&#8221; Actually Means Something Different Now</strong></h2><p>When <a href="https://www.chime.com/">Chime</a>, the fintech company, opened a position for director of corporate narratives, they got 500 applications - mostly former journalists from traditional media outlets. 
Their chief corporate affairs officer specifically avoided calling it an &#8220;editor&#8221; role because that felt too limiting. Stories, she explained, can be created through social media, podcasts, executive appearances, even events.</p><p><a href="https://www.vanta.com/">Vanta</a> is offering up to $274,000 for a head of storytelling. Google is hiring customer storytelling managers for Google Cloud. Microsoft&#8217;s security division wants a &#8220;senior director of narrative and storytelling&#8221; - part cybersecurity technologist, part communicator, part marketer.</p><p>These aren&#8217;t just rebranded PR positions. The expectations are different.</p><p>Companies don&#8217;t want product copy. They want scenarios. Case studies with real details. Honest accounts of implementations that include the problems encountered and how they were solved. Content that sounds like it came from a person, not a marketing department.</p><p>I see this directly in my work at Customertimes. When we&#8217;re talking about complex technology solutions - Salesforce implementations, AI adoption in manufacturing, digital transformation projects - the presentation that wins isn&#8217;t the one with the slickest slides and cleanest talking points.</p><p>It&#8217;s the one where we tell the actual story. The specific challenges a pharmaceutical company faced when moving to a mobile CRM platform. The unexpected obstacles during deployment. The moment when a skeptical sales director suddenly understood why this mattered for his team.</p><p>In pharma especially, I&#8217;ve noticed that clients absorb case studies much better when they&#8217;re framed through a specific professional&#8217;s experience. Not &#8220;our solution improved efficiency by 40%&#8221; but &#8220;Maria, the head of regulatory affairs, told us she was spending sixteen hours a week just tracking down approval documents across different systems. 
Here&#8217;s what changed after implementation, and here&#8217;s what didn&#8217;t work the way we expected.&#8221;</p><h2><strong>The AI Content Flood Makes Human Voice Scarce</strong></h2><p>There&#8217;s another reason this is happening now, and it&#8217;s the elephant in every boardroom: <strong>AI-generated content.</strong></p><p>We&#8217;re drowning in it. Every company can now produce endless blog posts, social media updates, white papers, case studies. The technical barriers to content creation have collapsed. You can spin up a thousand words on any topic in sixty seconds.</p><p>Which means the actual bottleneck is not production anymore. It&#8217;s trust.</p><p>When everything sounds polished and professional and slightly generic, people start tuning out. They&#8217;re looking for signals of authenticity. Something that sounds like it came from an actual human who knows what they&#8217;re talking about and isn&#8217;t just optimizing for SEO keywords.</p><p>One communications executive quoted in the WSJ put it well: generative AI creates so much information clutter that it breeds distrust. The brands that win are the ones that feel authentic, human, close to people.</p><p>This aligns with what I&#8217;ve been seeing across industries. The competitive advantage is who can produce content that feels real.</p><h2><strong>The Trap Most Companies Will Fall Into</strong></h2><p>But here&#8217;s where most companies are going to screw this up.</p><p>They&#8217;re going to hire someone with &#8220;storyteller&#8221; in their title, pay them $200k+, and then slot them into the exact same workflows and approval processes they had before. The same legal reviews. The same brand guidelines. The same quarterly campaign thinking. 
The same insistence that everything needs to &#8220;ladder up&#8221; to the corporate messaging framework.</p><p>You can&#8217;t bolt storytelling onto a campaign-driven culture and expect it to work.</p><p>Real storytelling requires editorial independence. It requires the ability to say things that are interesting, which sometimes means saying things that make the marketing team uncomfortable. It requires covering topics because they matter to your audience, not because they fit this quarter&#8217;s product launch schedule.</p><p>The companies that get this right are building editorial infrastructure. They&#8217;re thinking like media companies, not like brands that occasionally publish content.</p><h2><strong>What Storytelling Actually Is (And Isn&#8217;t)</strong></h2><p>Designer Stefan Sagmeister, quoted in the WSJ piece, pointed out something I&#8217;ve been thinking about: &#8220;People who actually tell stories - meaning people who write novels and make feature films - don&#8217;t see themselves as storytellers. It&#8217;s all the people who are not storytellers who suddenly now want to be storytellers.&#8221;</p><p>He&#8217;s not wrong to be skeptical. But I think he&#8217;s missing what&#8217;s actually happening at the better companies.</p><p>Nobody&#8217;s trying to become Hollywood. They&#8217;re trying to become the publication of record for their niche. They&#8217;re trying to build what trade magazines used to provide: authoritative, consistent, useful content that helps buyers make better decisions.</p><p>For me, storytelling in a business context is something more fundamental: the ability to make sense of reality and connect facts, decisions, mistakes, and values into a coherent picture.</p><p>It&#8217;s the difference between saying &#8220;our AI solution improves efficiency&#8221; and explaining &#8220;here&#8217;s what actually happened when a mid-sized manufacturer tried to implement predictive maintenance. 
Here&#8217;s what broke in week three. Here&#8217;s what we learned about change management that no vendor deck ever mentions.&#8221;</p><p>That second version is harder to write. It requires actual knowledge of the domain. It requires talking to real people and understanding their actual problems. It can&#8217;t be templatized or scaled through AI generation.</p><p>But it&#8217;s also the only version anyone actually wants to read.</p><h2><strong>The Distribution Problem Disguised as a Content Problem</strong></h2><p>There&#8217;s one more thing worth noting: the storytelling gold rush is actually a distribution crisis in disguise.</p><p>Companies are hiring storytellers because they&#8217;ve lost access to the traditional channels that gave them credibility and reach. But a good storyteller without distribution is just someone writing into the void.</p><p>The companies that will actually succeed with this are those who are building an audience. They&#8217;re thinking about how to reach people directly, how to build trust over time, how to create content that people actively seek out rather than content that interrupts them.</p><p>This connects back to what I&#8217;ve written before about the art of persuading others. In a world where everyone has the tools to publish, where AI can generate unlimited content, where traditional media gatekeepers have lost their power - the differentiator becomes whether anyone actually believes what you&#8217;re saying.</p><h2><strong>The Real Scarcity</strong></h2><p>In the end, what we&#8217;re seeing is a fundamental shift in what&#8217;s valuable.</p><p>For a long time the scarce resource was distribution. Getting your message in front of people required access to gatekeepers - publishers, editors, broadcast networks.</p><p>Then for a brief period, the scarce resource was content production itself. 
Creating good content at scale was expensive and required specialized skills.</p><p>Now, in the AI era, content production is essentially free. Distribution is still challenging but more accessible than ever. What&#8217;s actually scarce is meaning.</p><p>Human meaning. Lived experience. Honest accounts of what actually happened, told by people who were there and understand the implications.</p><p>That&#8217;s what companies are actually hiring for when they post those &#8220;storyteller&#8221; positions. Whether they realize it or not.</p><p>The question is whether they&#8217;re willing to create the conditions for that kind of storytelling to actually happen. Or whether &#8220;storyteller&#8221; will just become another fancy title for someone writing carefully massaged corporate content that nobody reads.</p><p>Based on what I&#8217;ve seen in consulting, most companies will choose the second path. Which means the ones who choose the first will have a genuine competitive advantage.</p><p>The irony is that the companies most desperate for storytellers are often the ones least willing to let anyone tell real stories.</p>]]></content:encoded></item><item><title><![CDATA[How AI is Changing How Programmers Work (and What It Means for the Rest of Us)]]></title><description><![CDATA[I came across a fascinating study from Anthropic about how their engineers use AI in their work, and honestly, it hit close to home.]]></description><link>https://maxvotek.com/p/how-ai-is-changing-how-programmers</link><guid isPermaLink="false">https://maxvotek.com/p/how-ai-is-changing-how-programmers</guid><dc:creator><![CDATA[Max Votek]]></dc:creator><pubDate>Fri, 19 Dec 2025 21:25:05 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5c190bfe-f97b-4259-8a9a-6b8c8f95220f_1280x720.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I came across a <a 
href="https://www.anthropic.com/research/how-ai-is-transforming-work-at-anthropic">fascinating study from Anthropic</a> about how their engineers use AI in their work, and honestly, it hit close to home. Here are the numbers: <strong>productivity increased by 50%, and Claude is used in 60% of work tasks</strong>. But the most interesting part isn&#8217;t just the speed-up, it&#8217;s the fundamental transformation of the work itself.</p><p>Engineers are becoming full-stack specialists, easily venturing into areas they previously avoided touching. A backend engineer building complex UIs. Researchers creating data visualizations. Security teams analyzing unfamiliar code. This is not just about doing the same work faster, it&#8217;s about doing entirely different work.</p><h2><strong>The Productivity Paradox</strong></h2><p>While productivity is clearly up, the picture is more nuanced than &#8220;AI makes everything faster.&#8221; When you dig into the data, engineers report spending <em>slightly less</em> time per task category, but producing <strong>considerably more output volume</strong>.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!ZpgY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F792ea5db-e842-4a46-9126-102bc2eb4b57_1280x720.jpeg" width="1280" height="720" alt=""></figure></div><p>Think about that for a second. 
You&#8217;re not necessarily finishing your debugging faster, you&#8217;re debugging way more things. You&#8217;re not writing each feature quicker, you&#8217;re writing more features, period.</p><p>And here&#8217;s the kicker: <strong>27% of Claude-assisted work consists of tasks that wouldn&#8217;t have been done otherwise</strong>. We&#8217;re talking about overnight demos, interactive dashboards, exploratory work that would never have been cost-effective manually. One researcher described running multiple versions of Claude simultaneously, each exploring different approaches: &#8220;People tend to think about super capable models as a single instance, like getting a faster car. But having a million horses&#8230; allows you to test a bunch of different ideas.&#8221;</p><h2><strong>The Hidden Costs Nobody Talks About</strong></h2><p>Obviously, productivity gains come with new challenges, and this is where the Anthropic research gets really honest.</p><p>Some engineers worry their skills might atrophy. When code is generated so easily, it becomes harder to truly learn something deeply. One engineer put it perfectly: &#8220;When producing output is so easy and fast, it gets harder and harder to actually take the time to learn something.&#8221;</p><p>I&#8217;ve noticed this myself. Using AI consciously while maintaining the ability to critically evaluate results is getting increasingly difficult, because the results often look really impressive at first glance. This is dangerous. As I&#8217;ve written before, <strong>one of the biggest mistakes in business is tolerance for mediocrity</strong>, and technology is inadvertently bringing more of it into our lives.</p><p>There&#8217;s also what researchers call the &#8220;paradox of supervision.&#8221; Effectively using Claude requires supervision. But supervising Claude requires the very coding skills that may atrophy from AI overuse. 
Think about that circular trap for a moment.</p><p>Some engineers are deliberately practicing without AI to stay sharp. Others argue we&#8217;re moving to higher levels of abstraction, similar to how we moved from assembly language to Python, and that this is progress, not loss.</p><h2><strong>The Social Fabric is Changing</strong></h2><p>Another point in the report hit me hard: <strong>the shift in social dynamics</strong>.</p><p>Previously, people would turn to senior colleagues for advice; now they turn to Claude. This reduces live communication and the quality of mentorship. One engineer admitted: &#8220;I like working with people and it&#8217;s sad that I &#8216;need&#8217; them less now&#8230; More junior people don&#8217;t come to me with questions as often.&#8221;</p><p>This is the erosion of something fundamental to how we learn, grow, and build culture in organizations. When you ask Claude instead of your colleague, you get an answer but you lose the conversation, the context, the relationship. You lose the informal knowledge transfer that happens when someone explains not just <em>what</em> to do but <em>why</em> they made certain decisions years ago.</p><p>Some engineers even admit they feel like they&#8217;re &#8220;automating themselves out of a job&#8221; and aren&#8217;t sure what their future role will be.</p><h2><strong>What Engineers Are Actually Learning About AI Delegation</strong></h2><p>The Anthropic engineers have developed remarkably consistent intuitions about what to delegate to AI and what to keep for themselves. 
They delegate tasks that are:</p><ul><li><p><strong>Outside their expertise but low complexity</strong> (&#8220;I don&#8217;t know Git or Linux very well&#8230; Claude does a good job covering for my lack of experience&#8221;)</p></li><li><p><strong>Easily verifiable</strong> (&#8220;It&#8217;s absolutely amazing for everything where validation effort isn&#8217;t large&#8221;)</p></li><li><p><strong>Repetitive or boring</strong> (&#8220;The more excited I am to do the task, the more likely I am to not use Claude&#8221;)</p></li><li><p><strong>Lower stakes</strong> (throwaway code, debugging, research scripts)</p></li></ul><p>What they keep for themselves: high-level thinking, strategic decisions, design choices requiring &#8220;taste&#8221; or organizational context.</p><p>But here&#8217;s the thing - this boundary is constantly moving. One engineer compared it to adopting Google Maps: at first you only use it for routes you don&#8217;t know, then for routes you mostly know, and eventually for everything, even your daily commute. 
The same progression is happening with AI.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!of6v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe86df869-40bb-4aef-a213-213baea147ed_1280x720.jpeg" width="1280" height="720" alt=""></figure></div><h2><strong>The Future Belongs to Strategic Thinkers</strong></h2><p>The developer&#8217;s role is evolving from a craftsman writing code to a strategist managing AI agents. This really resonates with my thoughts on how important it is to develop internal entrepreneurship within teams.</p><p>Look at the data: engineers are spending more time on high-level design and less on implementation details. They&#8217;re tackling increasingly complex tasks - average task complexity jumped from 3.2 to 3.8 on a 5-point scale in just six months. Claude Code now chains together 21 consecutive actions autonomously, up from 10 six months ago.</p><p>But complexity isn&#8217;t the same as strategic thinking. <strong>The winners will be those who can not just delegate tasks, but ask the right questions, see the bigger picture, and turn AI capabilities into real products.</strong></p><p>This is where most organizations will fail. 
They&#8217;ll focus on the productivity gains while missing the fundamental shift in what skills matter. They&#8217;ll celebrate engineers writing less code without asking whether those engineers are thinking more strategically. They&#8217;ll tolerate mediocre AI-generated output because it&#8217;s &#8220;good enough&#8221; and fast, forgetting that &#8220;good enough&#8221; compounds into deep mediocrity over time.</p><h2><strong>What This Means for Your Business</strong></h2><p>If you&#8217;re running a team, here are the uncomfortable questions you need to ask:</p><p><strong>Are your people developing judgment, or just learning to prompt?</strong> There&#8217;s a difference between someone who can get Claude to generate code and someone who can evaluate whether that code solves the right problem in the right way.</p><p><strong>Are you preserving the social fabric that enables knowledge transfer?</strong> When junior engineers stop asking senior engineers questions, you&#8217;re not just losing mentorship, you&#8217;re losing the next generation&#8217;s ability to become mentors themselves.</p><p><strong>Are you raising your bar or lowering it?</strong> Just because AI can do something doesn&#8217;t mean the result is excellent. Are you using AI to achieve things that were previously impossible, or are you using it to accept things that are merely adequate?</p><p><strong>Are you building entrepreneurial thinking into your team?</strong> The engineers who will thrive aren&#8217;t the ones who write the most code - they&#8217;re the ones who can see opportunities, make strategic bets, and orchestrate resources (including AI) to build something meaningful.</p><h2><strong>The Uncomfortable Truth</strong></h2><p>Some Anthropic engineers expressed a conflict between short-term optimism and long-term uncertainty. 
&#8220;I feel optimistic in the short term but in the long term I think AI will end up doing everything and make me and many others irrelevant,&#8221; one stated bluntly.</p><p>Others were more pragmatic: &#8220;The important thing is to just be really adaptable.&#8221;</p><p>Here&#8217;s my take: <strong>the future belongs to those who know how to think and not to those who write code</strong>.</p><p>AI is incredible at execution. It&#8217;s getting better at it every day. But execution without strategic thinking is just an expensive activity. The gap between activity and achievement is judgment: knowing what to build, why to build it, and how to know if it&#8217;s working.</p><p>This shift is happening faster than most people realize.</p><p><strong>What are you doing to ensure your team isn&#8217;t just working faster, but thinking better?</strong></p>]]></content:encoded></item></channel></rss>