The Two Ways to Build Healthcare AI: Why OpenAI and Anthropic Made Opposite Bets on Patient Data
OpenAI and Anthropic just launched healthcare AI products within weeks of each other.
Same category. Same technology foundation. Opposite architectures.
ChatGPT Health pulls your medical records into OpenAI’s consumer app. You upload documents. Connect Apple Health. Share your history. Everything flows into their cloud. They store your “health memories” separately from regular chats.
For consumer plans, there’s no Business Associate Agreement. Just Terms of Service.
Claude for Healthcare connects to clinical data inside the perimeter. Zero data retention, contractually guaranteed. The model queries CMS coverage databases, ICD-10 codes, and PubMed in real time through the Model Context Protocol. Data stays where it is. PHI never leaves the Virtual Private Cloud.
This isn’t a minor technical difference. It’s two fundamentally different theories about how AI should touch the most sensitive data humans generate.
Intelligence In, Not Data Out
Claude’s architecture inverts the traditional AI approach.
Most AI systems work by pulling data into a central repository, training or fine-tuning on it, then serving predictions back. This works fine for marketing content or customer service. It’s a compliance nightmare for healthcare.
Anthropic built the Model Context Protocol specifically to avoid this. The AI connects to data sources - enterprise knowledge bases, clinical databases, coverage policies - and queries them in real time. The model sees the data momentarily to answer a question, then the connection closes.
No data retention. No training. No persistent storage outside the health system’s infrastructure.
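Here’s what that pattern looks like at the code level - a minimal sketch using the MCP Python SDK, where the server script, tool name, and arguments are hypothetical stand-ins for whatever a health system actually exposes:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical MCP server fronting a payer coverage-policy database.
# It runs inside the health system's perimeter; only the question and
# the answer cross the wire, never the underlying records.
server = StdioServerParameters(command="python", args=["coverage_policy_server.py"])

async def lookup_policy(cpt_code: str, payer: str) -> str:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The model invokes the tool, reads the result, answers the
            # question - and the connection closes. Nothing is retained.
            result = await session.call_tool(
                "get_coverage_policy",
                {"cpt_code": cpt_code, "payer": payer},
            )
            return result.content[0].text

if __name__ == "__main__":
    print(asyncio.run(lookup_policy("93306", "medicare")))
```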
For enterprise deployments through AWS Bedrock or Google Cloud Vertex AI, the health system chooses where compute happens. The AI runs in their VPC. PHI doesn’t cross boundaries.
This is cloud-agnostic by design. No vendor lock-in. No requirement to move clinical data to a specific cloud provider.
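Concretely, the deployment point is a single API call into your own cloud account. A sketch assuming AWS Bedrock - the Vertex AI version is structurally identical - with an illustrative model ID and prompt:

```python
import json

import boto3

# Sketch of calling Claude inside your own AWS account via Bedrock.
# The model ID is illustrative - check what's enabled in your region.
# With a VPC interface endpoint (PrivateLink) for bedrock-runtime,
# the request never traverses the public internet.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20241022-v2:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{
            "role": "user",
            "content": "Summarize the documentation this payer policy "
                       "requires for CPT 93306: <policy text here>",
        }],
    }),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```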
OpenAI went the opposite direction.
ChatGPT Health is an aggregator. The value proposition is consolidation: bring all your health data into one place, let the AI see everything at once, and get personalized insights.
Your lab results from Quest. Your prescriptions from CVS. Your fitness data from Apple Watch. Your hospital discharge summaries. All in OpenAI’s infrastructure.
They’ve built separate storage for “health memories” and claim enhanced security. But the fundamental architecture is centralized aggregation. Your data lives in their cloud, under their Terms of Service.
For consumer accounts, which is what most people will use, there’s no BAA. OpenAI isn’t your business associate under HIPAA. You’re giving them your health data as a consumer, not as a patient receiving covered services.
The Geographic Tell
ChatGPT Health launched in the United States only.
EU, UK, Switzerland - explicitly excluded. Not “coming soon.” Not “rolling out later.” Excluded from launch.
This isn’t an accident.
GDPR Article 9 classifies health data as a special category requiring explicit consent and heightened protection. The centralized aggregation model - pull everything into our cloud, store it indefinitely, use it to improve our services - doesn’t comply.
OpenAI could probably build a GDPR-compliant version. They’d need separate infrastructure, different terms, clear data processing agreements, and demonstrated necessity for each use. But that’s not the product they built.
It’s built for the US market, where consumer health apps operate under FTC rules and state privacy laws, not HIPAA (unless they’re providing covered services).
Claude for Healthcare works globally. The enterprise architecture - data stays in your infrastructure, the AI connects temporarily, zero retention - fits European data protection requirements.
This geographic split tells you everything about the two strategies.
What’s Actually Going On
OpenAI has 230 million weekly active users asking health questions.
That’s the distribution advantage. Millions of people are already using ChatGPT to interpret lab results, research symptoms, and understand diagnoses. They’re not waiting for their doctor to adopt AI; they’re bringing AI to their healthcare themselves.
ChatGPT Health formalizes this. Build the habit first with a free tier. Add premium features for $20/month. Once millions of patients have their health data aggregated in ChatGPT, you have leverage with health systems.
“Your patients are already using our AI. Want to integrate it properly? Here’s the enterprise offering.”
This is a consumer-first strategy. Own the patient relationship. Make health systems adapt to where patients already are.
Anthropic doesn’t have 230 million weekly users. Claude is growing, but it’s not consumer-default the way ChatGPT is.
So they’re entering through the back door: enterprise infrastructure.
Their partner Commure - a healthcare infrastructure company - estimates Claude’s pre-built skills for prior authorization review, claims appeals automation, and care triage from patient portals could save clinicians millions of hours annually.
These aren’t consumer features. These are workflow automation tools for health systems, payers, and pharma companies.
Prior authorization takes providers 13 hours per week on average. It’s pure administrative overhead: checking coverage policies, documenting medical necessity, appealing denials. Exactly the kind of structured, high-volume, rules-based work AI can handle.
Claims appeals run through similar workflows. Coverage policies change quarterly. Medical coding updates annually. Keeping track of which codes require which documentation for which payers is cognitive overhead that doesn’t require human judgment - it requires accurate retrieval and application of policies.
Claude connects to the authoritative sources in real-time. CMS coverage database for Medicare policies. Commercial payer guidelines through health plan APIs. ICD-10 and CPT codes from the official repositories.
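To see why this is retrieval and application rather than judgment, here’s a deliberately simplified sketch of a prior-auth completeness check. Every policy field, code set, and document name below is invented for illustration; real payer policies are far messier:

```python
from dataclasses import dataclass

@dataclass
class CoveragePolicy:
    payer: str
    cpt_code: str
    required_docs: set[str]       # documentation the payer demands
    covered_diagnoses: set[str]   # ICD-10 codes establishing medical necessity

@dataclass
class PriorAuthRequest:
    cpt_code: str
    diagnosis_codes: set[str]
    attached_docs: set[str]

def review(request: PriorAuthRequest, policy: CoveragePolicy) -> list[str]:
    """Return the gaps that would trigger a denial, or [] if the request is clean."""
    gaps = []
    if not request.diagnosis_codes & policy.covered_diagnoses:
        gaps.append("no qualifying diagnosis on the request")
    for doc in sorted(policy.required_docs - request.attached_docs):
        gaps.append(f"missing documentation: {doc}")
    return gaps

# Illustrative values only - not a real Medicare policy.
policy = CoveragePolicy("medicare", "93306",
                        {"physician order", "clinical notes"},
                        {"I50.9", "I42.0"})
request = PriorAuthRequest("93306", {"I50.9"}, {"physician order"})
print(review(request, policy))  # ['missing documentation: clinical notes']
```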
This is boring infrastructure work. It’s also where billions of dollars of healthcare administrative costs live.
Two Theories of the Market
OpenAI is betting on the front door.
Patients are the entry point. They have the motivation - it’s their health. They have the data - it’s scattered across multiple systems, and they’re the only ones with access to all of it. They have the ability to pay - $20/month is less than one copay.
Build the consumer habit. Create the aggregated health record. Once patients expect AI-powered health insights, health systems will need to integrate or become the slow, frustrating alternative.
This works if:
- Patients trust OpenAI with their health data
- The consumer experience is dramatically better than patient portals
- Health systems eventually integrate rather than compete
- Regulators don’t shut down the aggregation model
Anthropic is betting on the plumbing.
Health systems are the entry point. They have the clinical data. They have the liability. They have the compliance requirements. They have the budget - healthcare IT spending is massive, and automation of administrative work has clear ROI.
Build enterprise infrastructure. Solve workflow problems. Make the AI indispensable to operations. Once health systems depend on your AI for prior auth, claims processing, and clinical documentation, you own critical infrastructure.
This works if:
- Health systems adopt fast enough to build a defensible market position
- Enterprise contracts generate enough revenue to compete with consumer scale
- Clinical workflow automation proves more valuable than consumer convenience
- Regulations favor data localization over aggregation
Both could be right. Or both could be wrong.
For Pharma and Life Sciences, This Isn’t Academic
Your field reps capture HCP data on every sales visit. Call notes, prescribing patterns, coverage challenges, formulary positions. If any of it includes patient-level information, that’s PHI - and even “de-identified” data carries re-identification risk.
Your clinical trials process thousands of patient records. Inclusion/exclusion criteria checking. Adverse event monitoring. Protocol compliance verification. All require AI to scale efficiently.
Your patient support programs - copay assistance, adherence monitoring, nurse navigation - handle PHI daily. Every interaction generates data that could improve outcomes if analyzed properly.
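A caveat before any of that analysis happens: patient-level detail should be scrubbed before data leaves your perimeter. A toy sketch of the idea - nowhere near HIPAA Safe Harbor, which requires removing 18 identifier categories, and real programs use vetted de-identification tools - but it shows where the step sits in the pipeline:

```python
import re

# Toy redaction pass. The patterns below catch only three obvious
# identifier shapes; production de-identification is a much harder problem.
PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(note: str) -> str:
    """Replace recognizable identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        note = pattern.sub(f"[{label}]", note)
    return note

note = "Pt (MRN: 4471823, DOB 03/14/1962) reports copay issues; cb 555-201-7733."
print(redact(note))
# Pt ([MRN], DOB [DOB]) reports copay issues; cb [PHONE].
```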
The AI vendor you choose shapes your compliance posture.
Choose a vendor that pulls data into their cloud, and you need to:
- Verify their infrastructure meets your security requirements
- Ensure BAAs cover all use cases
- Monitor what they do with your data
- Plan for vendor lock-in
- Accept geographic restrictions
Choose a vendor where data stays in your infrastructure, and you:
- Control where compute happens
- Maintain data sovereignty
- Keep multi-cloud optionality
- Meet European requirements by default
- Own the audit trail
This isn’t just about HIPAA. It’s about GDPR, MDR (Medical Device Regulation), EU AI Act, and whatever comes next.
The architecture you choose today determines which regulations you can comply with tomorrow.
The Geographic Split I Keep Thinking About
The US might go consumer-first.
American healthcare is fragmented. Patients already act as their own care coordinators. They collect records from multiple providers. They research treatment options. They advocate for coverage.
A consumer AI that aggregates everything and helps navigate the system fits American healthcare’s reality.
Europe will almost certainly go enterprise-first.
European healthcare is more centralized. Electronic health records are more standardized. Data protection is stricter. The European Health Data Space regulation, applying from 2027, explicitly requires data minimization and purpose limitation.
An enterprise AI that queries authorized sources without moving data fits European regulatory philosophy.
This creates a strange situation: the same AI companies building for the same use cases will likely deploy completely different architectures depending on geography.
OpenAI might never launch consumer health aggregation in Europe. Anthropic might never need to - the enterprise model could be the only viable approach.
For global companies, this means managing two different AI strategies. Your US operations might use consumer-facing AI that patients bring to appointments. Your European operations might use enterprise AI that never touches patient devices.
The convergence everyone predicts - AI that seamlessly integrates consumer and clinical data - might not happen uniformly. It might fragment along regulatory boundaries.
What This Actually Means
We’re watching two different bets play out in real-time.
OpenAI is betting that convenience wins. That patients will trade data control for better insights. That regulatory frameworks will adapt to consumer demand. That health systems will integrate rather than compete.
Anthropic is betting that infrastructure wins. That health systems will pay for workflow automation. That regulations will favor data localization. That enterprise adoption creates defensible moats.
Both companies are playing to their strengths. OpenAI has consumer scale. Anthropic has enterprise trust.
The question isn’t which is better in some abstract sense. The question is: which architecture fits your risk profile, your regulatory environment, and your business model?
If you’re building consumer health tools, OpenAI’s aggregation model might be the only way to deliver the experience users expect.
If you’re operating clinical infrastructure, Anthropic’s zero-retention model might be the only way to satisfy compliance and security teams.
And if you’re doing both, which most healthcare companies eventually do, you might need both architectures, deployed differently depending on use case and geography.
The two ways to build healthcare AI aren’t complementary. They’re competing visions of what healthcare data architecture should look like.
One will probably win in the US. The other will probably win in Europe. And global healthcare companies will need to operate in both worlds simultaneously.
The question isn’t convenience or control anymore. It’s: which world are you building first?