On Memories That Never Were
One moment from Lex Fridman’s conversation with Julia Shaw has stayed with me - a simple but unsettling observation:
“Every memory we have is false. The only question is - to what degree.”
Memory isn’t an archive, it’s an editor. Each time we remember, we’re not retrieving a recording - we’re rewriting it. We add details, discard what seems unnecessary, change the context.
As a result, we don’t remember what actually happened; we remember our most recent retelling of it.
This leads to an “internal Wikipedia” effect: people we share our stories with become co-authors of our memories. Their versions gradually displace our own.
At some point, you’re no longer sure - did you actually experience this, or have you simply told yourself the same story many times over?
Memory as a Survival Tool
Shaw argues this mechanism of memory isn’t a defect, but a survival tool. It makes us adaptable, helps us come to terms with things and move forward.
In the interview, she explains that our brains are fundamentally unreliable narrators. We don’t just misremember minor details - we can be convinced of entire events that never happened. Through her research, Shaw has successfully implanted false memories of committing crimes in people who never committed them. Not small crimes either - assault with a weapon, theft with police involvement.
The process is surprisingly straightforward. Using what she calls the “false memory recipe,” she combines social pressure, repetition, and imagination exercises. Subjects are told that according to records (which don’t actually exist), they committed a crime as a teenager. They’re asked to imagine what might have happened, to fill in the blanks. And gradually, through multiple sessions, the brain does what it does best - it constructs a coherent narrative.
Within just a few sessions, around 70% of participants develop false memories. Not vague impressions, but detailed recollections complete with emotional content. They remember the weather, what they were wearing, how they felt.
The terrifying part? These aren’t people with compromised cognitive abilities. They’re ordinary individuals with normal memory function. Which means we’re all vulnerable.
The Collaborative Nature of Remembering
What makes this particularly relevant now is how we interact with information. Shaw points out that memory is inherently social. Every time we tell a story, every time someone responds with their own version or interpretation, we’re editing the master copy.
She describes this phenomenon of “memory conformity” - when we adopt details from other people’s accounts as our own memories. You and your friend remember the same party, but your friend insists the host wore a red dress. Next time you recall that party, suddenly you “remember” the red dress too, even though your original memory was different.
This isn’t lying. Your brain genuinely believes it remembers the red dress. It has integrated that detail into your memory seamlessly, indistinguishably from details you actually perceived firsthand.
But there’s another side to this: the more we rewrite the past, the further we drift from the truth.
The AI Memory Problem
Here an obvious new threat emerges - AI as a false memory machine.
Today’s generative models behave like an amplified version of the human brain: they don’t just fill in the gaps, they construct and imagine - confidently, fluently, in exactly the right tone.
When a person interacts with an AI that “remembers” their past responses, collaborative memory editing begins.
You and the machine create a shared version of your past that can no longer be separated from reality.
In the interview, Shaw explores this danger in depth. She calls AI “the ultimate false memory machine” and explains why: these systems don’t just recall information neutrally. They generate, they embellish, they fill in gaps with plausible-sounding details. They do exactly what our brains do, but with even less grounding in actual events.
Think about how this plays out in practice. You’re talking to an AI assistant about something that happened to you. The AI, drawing on statistical patterns in its training data, suggests details that seem to fit your story. “That must have been frustrating,” it might say, or “I imagine the room was quite crowded.”
These aren’t statements of fact - they’re probabilistic guesses. But presented with confidence and embedded in a seemingly personalized conversation, they start to feel like confirmation. Like evidence that yes, the room was crowded. You remember that now.
The AI becomes a co-author of your personal history. And unlike a human friend whose own biases and limited memory make their suggestions obviously subjective, the AI’s suggestions come wrapped in an aura of computational authority.
The Crisis of Verification
Shaw emphasizes a crucial point in her work: our brains are unreliable sources. This has always been true in courtrooms, where eyewitness testimony - once considered the gold standard - has been repeatedly shown to be shockingly inaccurate. The Innocence Project has exonerated hundreds of people wrongly convicted based on confident, detailed, completely false eyewitness identifications.
But now this unreliability is being amplified by technology that can generate false memories at scale.
She mentions her work on SPOT - the System for Preserving Offense and Trauma memories. The core insight is simple but profound: if you want accurate information, capture it immediately. Don’t rely on your brain to store it faithfully. Don’t let it marinate in the reconstructive soup of human memory.
“Write things down immediately. Don’t take your word for it,” Shaw says.
This becomes even more critical when AI enters the picture. If the AI is helping you “remember” what happened, who’s to say what actually occurred? There’s no ground truth anymore, just layers of reconstruction and generation.
Consider the implications for something like grief bots - AI systems trained on someone’s digital footprint to simulate conversation with them after they’ve died. Shaw finds this concept interesting and potentially valuable. But it raises profound questions: are you talking to a representation of the person, or are you creating new memories of conversations that never happened? Are you processing grief, or are you generating false memories that will eventually become indistinguishable from your real memories of the deceased?
Memory Conformity in the Age of AI
The interview returns to memory conformity in a forensic context: witnesses to the same event influence each other’s memories just by discussing what happened. This has massive implications for criminal investigations - it’s why good detectives separate witnesses and interview them individually, before they can contaminate each other’s accounts.
But with AI, we’re all potentially in a constant state of memory conformity with systems that have no actual memories, only probabilistic reconstructions.
The AI “remembers” your previous conversations not as discrete events but as patterns in a neural network. When it references something you told it, it’s not retrieving a recording - it’s generating text that fits the pattern of what you might have said, filtered through whatever biases exist in its training.
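To make that distinction concrete, here is a minimal sketch in Python (purely illustrative; `llm_complete` is a hypothetical stand-in for any text-generation API) contrasting retrieval with reconstruction:

```python
# A verbatim log: each entry is stored exactly as the user wrote it.
conversation_log = [
    {"timestamp": "2024-05-01T14:03:00Z",
     "text": "The meeting went badly and I left early."},
]

def recall_verbatim(log, index):
    """Retrieval: return the stored text unchanged, with its timestamp."""
    entry = log[index]
    return f'On {entry["timestamp"]} you wrote: "{entry["text"]}"'

def recall_generative(llm_complete, log):
    """Reconstruction: ask a model to describe what the user said.
    The output is a plausible paraphrase, not a record - it can add
    details ("the room was crowded") that appear nowhere in the log."""
    prompt = ("Summarize what the user previously told you:\n"
              + "\n".join(entry["text"] for entry in log))
    return llm_complete(prompt)  # hypothetical text-generation call
```

The first function can only repeat what was captured; the second can quietly invent.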
And you, the human user, hear this reflection of your words and think: “Yes, that’s right, that’s what I said.” But is it? Or is it what the AI’s statistical model predicted you probably said, given the overall context?
The distinction collapses. The AI’s version becomes your memory.
The Paradox of AI Memory Assistance
Here’s where it gets really interesting. Shaw points out that the same properties that make AI a false memory machine could potentially make it a tool for better memory.
If memory is inherently collaborative - if we’re always co-authoring our past with others - then maybe a well-designed AI system could help us remember more accurately, not less.
The key is in the implementation. Shaw’s work on the Self-Administered Interview shows how careful questioning can actually improve memory recall. Avoiding leading questions, encouraging free recall before moving to specific ones, having people sketch what they remember - these techniques help people access more accurate memories.
An AI system designed with these principles in mind could theoretically guide you through proper memory recall. It could timestamp your recollections. It could avoid suggesting details and instead encourage you to generate them yourself. It could help you distinguish between what you actually remember and what you’ve inferred or been told by others.
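As a rough sketch of what that could look like (illustrative only - this is not Shaw’s Self-Administered Interview, just a toy inspired by its principles), such an assistant would ask open-ended questions, never supply details of its own, and store each answer verbatim with a timestamp:

```python
from datetime import datetime, timezone

# Open-ended prompts in the spirit of free-recall interviewing:
# they invite detail without suggesting any.
PROMPTS = [
    "Describe what happened, in your own words, in as much detail as you can.",
    "What do you remember seeing? Include only what you actually recall.",
    "Is there anything you are unsure about or only inferred? Note it separately.",
]

def guided_recall():
    """Collect free recall, timestamp it, and store it verbatim."""
    record = []
    for prompt in PROMPTS:
        answer = input(prompt + "\n> ")
        record.append({
            "prompt": prompt,
            "answer": answer,  # stored exactly as written, never paraphrased
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        })
    return record
```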
The technology could go either way. It could become the ultimate false memory machine, subtly rewriting our personal histories one conversation at a time. Or it could become a tool for more honest, more accurate engagement with our own pasts.
Protecting Ourselves
Shaw’s advice is practical: acknowledge that your brain is an unreliable narrator. Don’t trust your memory on important things. Write it down. Record it. Time-stamp it. Create contemporaneous evidence that exists outside your reconstructive cognitive processes.
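Taking that advice literally, a contemporaneous record can be as simple as an append-only journal where each entry is timestamped and chained by hash to everything written before it, so later tampering is detectable. A minimal sketch (the `journal.log` path is arbitrary):

```python
import hashlib
import json
from datetime import datetime, timezone

JOURNAL = "journal.log"  # one JSON entry per line, append-only

def append_entry(text):
    """Append a timestamped entry chained to the prior contents by hash."""
    try:
        with open(JOURNAL, "rb") as f:
            prev_hash = hashlib.sha256(f.read()).hexdigest()
    except FileNotFoundError:
        prev_hash = "0" * 64  # first entry: nothing to chain to
    entry = {
        "written_at": datetime.now(timezone.utc).isoformat(),
        "text": text,
        "prev_hash": prev_hash,  # editing earlier entries breaks the chain
    }
    with open(JOURNAL, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

Rewriting an old entry changes the hash everything after it depends on - which is exactly the tamper-evidence human memory lacks.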
This advice becomes even more critical as AI systems become more sophisticated at mimicking human conversation, more convincing in their suggestions, more seamlessly integrated into our daily lives.
We need to develop new habits of epistemic hygiene. When an AI says “you told me last week that...” we need to be able to think: “Did I actually say that, or is this a plausible reconstruction?” When we’re using AI to help process memories or experiences, we need to maintain some skepticism about the collaborative narrative being constructed.
But we also need system design that respects the fragility of human memory. AI developers need to understand that they’re not just building conversation partners - they’re building systems that will become embedded in the formation and maintenance of human memories.
This requires input from people like Shaw, who understand how memory actually works, how easily it’s manipulated, and what safeguards might help preserve accuracy even as we delegate more of our cognitive work to machines.