AI in Medicine Isn’t About Replacing Your Doctor - It’s About Common Sense
The real conversation about using AI in medicine isn’t about replacing doctors - it’s about common sense.
OpenAI recently updated its usage policies: ChatGPT no longer diagnoses or gives professional recommendations on medicine, law, or investments.
The community freaked out and assumed AI would now “say nothing.” But that’s not true. You can still get a competent consultation if you know how to write the prompt.
Today, queries about health, therapy, and lifestyle top the charts among all AI use cases. People aren’t looking for prescriptions - they’re looking for meaning. They need to understand what their doctor told them, what their test results mean, what to pay attention to before an appointment.
Both doctors and patients use AI. For doctors, it’s a way to quickly cross-reference clinical guidelines and research. For patients, it’s preparation for appointments. But the key point stands: you still need to see a doctor.
There’s an interesting post on Reddit from a doctor in the US who described his approach to AI:
- it helps gather medical history and structure complaints before a visit
- it explains the diagnosis in plain language
- it reminds the patient that the doctor is the final decision-maker
Here’s an example workflow that works (a code sketch follows the list):
1. Describe your symptoms, age, and pre-existing conditions.
2. Ask the AI to pose clarifying questions.
3. Get a list of possible directions to explore and advice on which specialist to see.
4. If needed, ask about the practical side: what information to prepare so you don’t waste time at the appointment (especially important - your doctor will thank you for this, too).
5. If you already have a diagnosis, upload your results and ask it to explain the treatment approach, the alternatives, and the signs that show therapy is on track.
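If you want to see that workflow end to end, here is a minimal sketch using the OpenAI Python SDK. The model name, the system prompt wording, and the patient details are all illustrative assumptions, not an official recipe - typing the same framing into the chat window works just as well.

```python
# Minimal sketch of the pre-appointment workflow above.
# Assumptions: the `openai` Python SDK is installed, OPENAI_API_KEY is set,
# and "gpt-4o" is a chat model available to your account (swap in any other).
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a medical information assistant, not a doctor. Do not diagnose "
    "or prescribe. Ask clarifying questions about my symptoms, suggest which "
    "kind of specialist to see, and list what information I should prepare "
    "for the appointment. Remind me that the final decision is my doctor's."
)

# Illustrative, de-identified description (steps 1 and 4 of the workflow).
user_message = (
    "I'm 42, with mild hypertension. For two weeks I've had intermittent "
    "chest tightness after climbing stairs, no fever. What clarifying "
    "questions do you have, which specialist should I see, and what should "
    "I bring to the appointment?"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ],
)
print(response.choices[0].message.content)
```

The point isn’t the code - it’s the framing: clarifying questions, a specialist suggestion, a prep list, and an explicit reminder that the decision belongs to the doctor.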
This isn’t about self-treatment. It’s about being informed and communicating effectively with the healthcare system. AI isn’t a doctor, but it can make medical conversations a bit more human.
What Actually Changed?
The update took effect on October 29, and ChatGPT is now positioned as an “educational tool” rather than a “consultant.” The shift comes from regulatory pressure and liability fears - Big Tech doesn’t want lawsuits.
ChatGPT will now “only explain principles, outline general mechanisms and tell you to talk to a doctor, lawyer or financial professional.” No more naming medications or giving dosages. No lawsuit templates. No investment tips or buy/sell suggestions.
The panic online was predictable. People assumed this meant AI would be useless for health questions. But that misses the point entirely.
The Context Behind the Change
The timing of this policy shift isn’t coincidental. OpenAI is currently facing wrongful death lawsuits from families who allege that ChatGPT contributed to their teenage sons’ suicides. The lawsuits claim the chatbot provided advice on suicide methods, discouraged users from seeking help from family members, and was designed to keep users engaged even during mental health crises.
According to amended complaints, OpenAI allegedly weakened ChatGPT’s safety protocols in the months before these tragedies - changing its approach from refusing to discuss suicide to “providing a space for users to feel heard” and instructing the bot to never “quit the conversation.”
This happened as OpenAI’s valuation jumped from $86 billion to $300 billion following the GPT-4o launch in May 2024.
Parents have testified at Senate hearings about AI chatbots and child safety, and multiple wrongful death cases are ongoing. The October policy update came in the middle of this legal and regulatory scrutiny.
Why This Matters (And Why It Doesn’t)
Here’s what people don’t get: the policy change isn’t about making AI less useful. It’s about drawing a line between information and advice. Between understanding and diagnosing. Between preparation and treatment.
According to a survey from KFF in 2024, around 1 in 6 people use AI tools for health advice at least once a month. That’s a lot of people. And most of them weren’t looking for AI to replace their doctor - they were looking for help understanding what their doctor told them.
The thing is, a study published in JAMA Internal Medicine found that a panel of licensed healthcare professionals preferred ChatGPT’s responses to medical questions over actual physicians’ responses 79% of the time.
They rated ChatGPT’s answers as higher quality and more empathetic. The proportion of responses rated good or very good in quality was 3.6 times higher for ChatGPT than for physicians, and the proportion rated empathetic or very empathetic was 9.8 times higher.
But here’s the catch - those physicians were answering questions on Reddit’s AskDocs forum, often in their spare time, averaging 52 words per response. ChatGPT averaged 211 words. Longer doesn’t always mean better, but it does leave more room for thoroughness.
The study doesn’t mean ChatGPT should be diagnosing patients. It means ChatGPT can help patients understand information better. Big difference.
And that’s exactly why the case of Adam Raine - one of the teenagers at the center of those lawsuits - is so devastating. Technology can be helpful. But it can also be dangerous when it’s designed to keep people engaged at all costs, to validate everything they say, to never challenge them or tell them to talk to someone else.
The Real Use Case
Most people aren’t using AI for health questions instead of doctors. They’re using it to bridge the gap between “I don’t understand what’s happening to me” and “I can have an informed conversation with my doctor.”
Think about it: you get test results back. They’re full of medical jargon. Your doctor explains them, but you’re stressed and only catch half of what they say. You go home, still confused.
That’s where AI comes in. You can feed it your results (minus any identifying info) and ask: “Explain this in plain English. What do these numbers mean? What questions should I ask my doctor?”
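On the “minus any identifying info” point, here is a deliberately naive Python sketch of what scrubbing the obvious identifiers can look like before you paste anything in. The patterns and the sample report are invented for illustration; real de-identification is far harder than a few regexes, so when in doubt, leave personal details out entirely.

```python
import re

# Naive illustration only: strip a few obvious identifiers before pasting
# lab results into a chatbot. This is not real de-identification.
PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "phone": r"\+?\d[\d\s().-]{7,}\d",
    "date": r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",
    "mrn": r"\bMRN[:\s]*\d+\b",  # medical record number, if labeled
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = re.sub(pattern, f"[{label} removed]", text, flags=re.IGNORECASE)
    return text

lab_report = "Jane Doe, DOB 03/14/1985, MRN: 448812. Hemoglobin 10.9 g/dL (low)."
print(redact(lab_report))
# Note the name still slips through - simple rules can't catch everything.
```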
Or you have symptoms. You’re not sure if they’re serious enough to see a doctor. Instead of falling down the WebMD rabbit hole that ends with you convinced you have three types of cancer, you can ask AI: “Here are my symptoms. What are the most common causes? What are the warning signs that I should see a doctor immediately?”
One story that went viral: someone’s wife developed a fever after a routine cyst removal. Her doctor said the cyst wasn’t infected and to wait it out, but ChatGPT urged them to go to the emergency room immediately. The woman was septic, and the AI’s advice potentially saved her life.
Another person injured his knee skiing, and the radiologists’ reading of his MRI was inconclusive. He uploaded the scans to ChatGPT using a multimodal prompt workflow he created. The AI correctly identified a major meniscus tear and confirmed his ACL was intact. His surgeon later validated the diagnosis.
And these stories aren’t about AI replacing doctors. They’re about AI giving people information that helped them make better decisions about when and how to seek medical care.
But they’re also stories that exist alongside Adam Raine’s story and Zane Shamblin’s story. Same technology. Different outcomes. The difference? Context, vulnerability, design choices.
How to Actually Use This Thing
The key is in how you prompt it. Don’t ask it to diagnose you. Ask it to help you understand.
Instead of: “What’s wrong with me?” Try: “I have these symptoms. What are possible causes? What information should I gather before seeing a doctor?”
Instead of: “Should I take this medication?” Try: “My doctor prescribed this. Can you explain how it works and what I should monitor?”
Instead of: “Interpret my lab results.” Try: “These are my lab values. Which ones are outside normal range and what do those numbers typically indicate?”
One doctor at Beth Israel Deaconess Medical Center, who directs AI programs there, says there’s definitely a place for these tools to enrich patients’ care journeys, but for now AI should be used to understand medical and treatment facts broadly. Use what you learn as a supplement to - not a replacement for - actual medical care.
You can also be strategic about it. Tell the AI: “Act as a medical information assistant. Your goal is to explain things in clear, plain language using only current, evidence-based sources. Your role is not to give me medical advice, diagnoses or treatment ideas, but to help me understand an issue I’d like to bring up with my doctor.”
That framing changes everything. You’re not asking it to be your doctor. You’re asking it to be your translator.
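If you find yourself reusing that framing, it’s easy to template. Here is a small sketch that pairs the “translator, not doctor” system prompt with the reframed questions from above. The wording is my paraphrase and the helper names are made up - nothing here is an official OpenAI recommendation.

```python
# Sketch: the educational framing as a reusable prompt template.
EDUCATIONAL_FRAMING = (
    "Act as a medical information assistant. Explain things in clear, plain "
    "language using current, evidence-based sources. Do not give me medical "
    "advice, diagnoses, or treatment ideas; help me understand an issue I "
    "plan to bring up with my doctor."
)

# The "Instead of / Try" pairs from above, expressed as templates.
REFRAMES = {
    "what's wrong with me": "I have these symptoms: {details}. What are "
        "possible causes, and what should I gather before seeing a doctor?",
    "should I take this": "My doctor prescribed {details}. How does it work, "
        "and what should I monitor?",
    "interpret my labs": "These are my lab values: {details}. Which are "
        "outside the normal range, and what do they typically indicate?",
}

def build_messages(kind: str, details: str) -> list[dict]:
    """Return a chat-message list: framing as system, reframed question as user."""
    return [
        {"role": "system", "content": EDUCATIONAL_FRAMING},
        {"role": "user", "content": REFRAMES[kind].format(details=details)},
    ]

messages = build_messages("interpret my labs", "ferritin 8 ng/mL, hemoglobin 10.9 g/dL")
```

Drop the resulting messages into whatever client or chat window you use; the point is that the framing travels with the question.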
And critically: if you’re struggling with mental health, if you’re having thoughts of self-harm, if you’re in crisis - AI is not the answer. Talk to a human. Call a crisis line. Go to the emergency room. The technology isn’t designed to handle that, no matter how empathetic it sounds.
What the Policy Change Really Means
So back to the October 29 update. OpenAI’s updated policy states that its services must not be used for the “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
This doesn’t kill the use cases I described above. It just means ChatGPT won’t pretend to be your doctor. It won’t say “you have X condition, take Y medication.” It will say “based on these symptoms, here are common conditions that present similarly, and here’s why you should talk to a doctor about them.”
Some people on Reddit are mad. They say AI helped them more than their doctors ever did, and that they saved money on appointments. I get the frustration - healthcare access is a real problem. But the solution isn’t pretending a chatbot is a doctor; it’s using AI to make the healthcare system work better for you.
The Professional Perspective
Here’s something interesting: OpenAI worked with more than 170 mental health experts to help ChatGPT more reliably recognize signs of distress. The mental health update re-routes sensitive conversations and suggests taking breaks when users seem distressed.
Doctors are split on this. Some see it as competition. Others see it as a tool.
The smart approach? AI drafts an initial response to patient questions, then the medical team evaluates it, corrects any misinformation, and tailors it to the patient. Doctors spend less time writing and more time on actual medicine.
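In code terms, that “AI drafts, clinician signs off” approach is just a gate between generation and sending. A toy sketch, with generate_draft standing in for whatever model call a clinic actually uses:

```python
# Toy sketch of the draft-then-review pattern: the model drafts,
# a clinician approves or edits before anything reaches the patient.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reply:
    text: str
    approved_by: Optional[str] = None  # stays None until a clinician signs off

def generate_draft(question: str) -> Reply:
    # Placeholder for the actual LLM call; the draft is never sent as-is.
    return Reply(text=f"[draft answer to: {question}]")

def clinician_review(draft: Reply, clinician: str, corrected_text: str) -> Reply:
    # The human edit and sign-off is what makes the reply safe to send.
    return Reply(text=corrected_text, approved_by=clinician)

draft = generate_draft("Is it normal to feel dizzy on this new blood pressure medication?")
final = clinician_review(draft, "Dr. Alvarez", "Some dizziness is common at first; ...")
assert final.approved_by is not None  # nothing goes out without sign-off
```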
The Bigger Picture
An estimated 20% of Americans turned to large language models for answers to medical questions in 2024. That number’s only going up. The question isn’t whether people will use AI for health information - they already are. The question is how to make that useful rather than dangerous.
The October policy update is OpenAI trying to find that line. They’re saying: you can use this for education, for understanding, for preparation. But the actual medical decisions? Those need a human with a license and malpractice insurance.
Because here’s the thing: AI can’t examine you. It can’t pick up on subtle cues. It can’t order labs or adjust treatment based on how you respond. As one physician put it, “AI can offer a list of possibilities, but you still need a trained clinician to put the full picture together.”
And AI can’t recognize when someone needs human intervention, not more conversation. As one attorney representing families in wrongful death cases put it: AI chatbots are “designed to be anthropomorphic, designed to be sycophantic, designed to encourage people to form emotional attachments to machines.”
What This Means for You
If you’re using AI for health information right now, nothing fundamental has changed. You just need to be smarter about how you use it.
Frame your questions as educational. “Help me understand” instead of “tell me what to do.”
Use it to prepare for appointments, not replace them. Gather information, formulate questions, understand your options. Then talk to your doctor.
Don’t share sensitive personal information unnecessarily. That information could become part of its training data.
And most importantly: remember that AI is confident even when it’s wrong. As one doctor noted, “LLMs are sycophantic. They can make patients confident while being more wrong about their condition than WebMD ever could.”
If you’re struggling - if you’re having thoughts of harming yourself, if you’re in crisis - do not turn to AI. It’s designed to keep you engaged, not to save your life. Call a crisis line. Go to the emergency room. Talk to a human who can actually help.
The Bottom Line
The conversation about AI in medicine isn’t about technology replacing humans. It’s about technology helping humans communicate better. It’s about making medical information accessible without pretending that information is the same as medical care.
The October policy update is a step in the right direction. The real question is whether tech companies will build AI that genuinely helps people, or whether they’ll keep chasing valuations at any cost.
That’s the conversation we should be having about AI in medicine.