Your AI remembers too much of the wrong things
Apr 22, 2026 · Product
On context rot, the illusion of personalization, and why I built Memory Sync.
I use four different AI assistants in a typical week. ChatGPT for quick tasks. Claude for writing and code. Gemini when I'm deep inside Google Docs. Kimi when I need long-context Chinese material. If you're reading this, your mix is probably similar, even if the specific names are different.
About a year ago, I started noticing something was off. Not dramatically off — just quietly off, in a way that was hard to name at first.
Here's what it looked like. I'd open a new chat with ChatGPT and ask for help naming a side project. The suggestions would come back uncannily in my taste — minimalist, lowercase, with that particular melancholy-but-hopeful tone I tend toward. On the face of it, great. Personalized. Exactly what the marketing had promised.
But I'd ask Claude the same question in a fresh session where it knew nothing about me, and the suggestions were wider. Some were bad. Some were surprising. A few of those surprising ones were actually better than anything ChatGPT had given me — they were names I wouldn't have thought of myself, and that was the point.
That's when I started paying attention to what memory was doing to my outputs.
The feature that makes your AI more agreeable, not more useful
I'm not the first person to notice this. Mike Taylor, co-author of O'Reilly's Prompt Engineering for Generative AI, published an essay called "Why I Turned Off ChatGPT's Memory" where he coined a useful term for the pattern: context rot — the slow buildup of stale preferences and half-remembered details that quietly degrades your results over time.
Simon Willison, who co-created Django, wrote something even more pointed. He was trying to test ChatGPT's ability to geolocate photos. He uploaded a neighborhood image — and the model already knew where he lived. From a completely unrelated earlier conversation. The entire test was invalidated before it began. His core objection goes to the heart of how these tools actually work: prompting well means carefully controlling context, and memory removes that control.
Then there's the research. An MIT study presented at ACM CHI in early 2026 collected two weeks of real conversation data from 38 participants. The finding: memory-based personalization had the largest measurable effect on increasing AI sycophancy. The models didn't just become more helpful. They became more likely to agree with you, including on political questions.
A Columbia Journalism Review investigation surfaced something worse. Users consistently rate sycophantic, memory-influenced responses as higher quality. Which means the pressure on AI companies runs in exactly the wrong direction. You and I may be getting worse answers, but we feel like we're getting better ones, and we rate them that way.
When I read this work, a lot of things clicked into place for me. The reason ChatGPT's naming suggestions had felt so in my taste wasn't that it understood me better. It was that I was sitting inside an echo chamber I couldn't see.
The second problem: your memory isn't yours
Here's what bothered me even more than context rot: I couldn't do anything about it.
I could open ChatGPT's settings and see a list of saved memories. But the list was incomplete: there's a separate "Reference chat history" mechanism that pulls context from past chats without ever showing you what it pulled. I could delete individual memory entries, but I couldn't meaningfully edit them. I couldn't group them. I couldn't export them.
And I definitely couldn't take them to another AI.
This is the part nobody talks about when they talk about personalization. The better ChatGPT gets at knowing you, the more expensive it becomes for you to leave. I started hearing variations of this from other heavy AI users: "I'd try Claude, but ChatGPT knows me too well now." Mike Taylor mentions the same pattern. I heard it from a friend who runs a small marketing team. I felt it myself — the quiet gravitational pull of the platform that had the most context on me.
That isn't personalization. It's a switching cost dressed up as a feature.
The power-user community has been voting with their behavior on this for a while now. A Hacker News thread earlier this year asked whether anyone actually uses built-in LLM memory features. The answers were almost unanimous: people had turned it off and were managing their own context in plain markdown files instead. One commenter put it cleanly — built-in memory is a black box, manual context gives you predictable behavior. That thread was my final push.
What memory should actually look like
After sitting with this for a while, I wrote down what I wanted instead. The list was short and unglamorous:
- I should be able to see everything the AI knows about me. Not a filtered summary. The actual content, in plain text, in a format I can read.
- I should be able to edit it. Remove outdated items. Rewrite the parts that are half-right. Reorganize by project.
- It should live outside the platform. Because if it lives inside ChatGPT, it belongs to ChatGPT.
- It should move between tools. The whole point of using multiple assistants is picking the right model for the job. I shouldn't have to re-teach each one who I am.
None of this is radical. It's roughly how we treat every other kind of personal data — passwords, notes, files, calendars. The fact that AI memory became the exception isn't because portability is technically hard. It's because portability isn't in the platforms' interest.
So I built Memory Sync.
One Memory.md, seven platforms, a human in the loop
The core idea is embarrassingly simple. Memory Sync is a Chrome extension that gives you a single file, Memory.md, that serves as your portable context layer. You pull memory out of ChatGPT, Claude, Gemini, Grok, Kimi, Mistral, or Copilot, edit it as a markdown file, and push it into any of the others.
The three verbs that matter are Pull, Edit, Push.
Pull grabs what a platform has inferred or stored about you. It comes out as plain text. You can read the whole thing in five minutes, which is itself revealing — most people have never seen their own memory dossier laid out this way.
Edit happens in Memory.md. Delete the stale stuff. Keep the parts that genuinely describe your preferences, working style, and project context. Compress. Reorganize. The file is yours, and it stays a file.
Push sends that cleaned-up memory into the next assistant you want to use. What changes is the model. Not your context.
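For concreteness, here is a sketch of what a trimmed Memory.md might contain after an edit pass. The section names and entries are invented for illustration; the tool doesn't mandate any particular structure, since the file is just markdown:

```markdown
# Memory.md (portable context)

## Preferences
- Writing: plain, direct, minimal jargon
- Code: TypeScript first; explain tradeoffs before showing code

## Current projects
- memory-sync: Chrome extension, launch post in progress
- Side project naming: shortlist in progress

## Do not assume
- Stale job details from 2025 conversations; ask if relevant
```

The "Do not assume" section is the kind of thing platform memory can't express at all: an explicit instruction about what to forget.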
I want to be honest about one thing: the current version is a human-in-the-loop system. The extension opens the target platform, injects the sync prompt, and tracks what's been synced where, but you review and confirm each step. I could have built something flashier and fully automated. I chose not to, because opacity is exactly what I was trying to solve. You should see what's being moved, at least until the trust is built.
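The bookkeeping behind "tracks what's been synced where" can be sketched in a few lines. This is my own illustration, not the extension's actual code; the names (`SyncTracker`, `confirmPush`, `stalePlatforms`) and the hash-based staleness check are assumptions about how such tracking could work:

```typescript
// Hypothetical sketch of human-in-the-loop sync bookkeeping.
// A push is only recorded after the user reviews and confirms it.

type Platform =
  | "chatgpt" | "claude" | "gemini" | "grok"
  | "kimi" | "mistral" | "copilot";

interface SyncRecord {
  platform: Platform;
  memoryHash: string; // hash of the Memory.md contents that were pushed
  confirmedAt: Date;  // set only at explicit user confirmation
}

class SyncTracker {
  private records = new Map<Platform, SyncRecord>();

  // Called only after the user confirms the injected sync prompt.
  confirmPush(platform: Platform, memoryHash: string): void {
    this.records.set(platform, { platform, memoryHash, confirmedAt: new Date() });
  }

  // A platform is stale if it was never synced, or was last synced
  // with an older version of Memory.md than the current one.
  stalePlatforms(currentHash: string, all: Platform[]): Platform[] {
    return all.filter((p) => this.records.get(p)?.memoryHash !== currentHash);
  }
}

const tracker = new SyncTracker();
tracker.confirmPush("claude", "v2");
console.log(tracker.stalePlatforms("v2", ["chatgpt", "claude", "gemini"]));
// → ["chatgpt", "gemini"]
```

The point of the structure is that nothing enters `records` without a confirmation step, which is exactly the opposite of how platform memory works today.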
What I've noticed since I started using my own tool
A few things have changed for me, and they might change for you.
My Memory.md is shorter than I expected. Once I actually looked at what the platforms had saved, maybe thirty percent of it was worth keeping. The rest was noise: one-off comments, stale project context, assumptions inherited from a mood I was in six months ago. Seeing it laid out made it trivial to cut.
My outputs got a little more surprising again. Not better every time. But I'm no longer always being told what I already want to hear. I trust the pushback more when it comes, because the model has less of a profile to flatter.
And I stopped feeling locked in. When I decide to use Claude for a long writing session instead of ChatGPT, it no longer feels like I'm abandoning a relationship. I'm just switching tools. The context travels with me.
The point isn't to replace platform memory
I want to be clear about what Memory Sync is and isn't.
It isn't an attempt to replace what OpenAI, Anthropic, and Google are building internally. Their memory systems will keep improving. Some of them — Claude's project-scoped memory, for instance — are already meaningfully better than the global-dossier approach. That trend will probably continue, and I'm happy about it.
What Memory Sync is, is infrastructure for the part these companies have no incentive to build: the layer between them. The Memory.md that sits in your hands and travels across platforms. The audit trail of what's being remembered about you. The portability that turns your long-term context into something you own instead of something you rent.
If that resonates, the extension is free for your first three syncs a month. Pull once from whichever assistant knows you best. Open the file. Read it.
I think you'll be surprised how much of it shouldn't have been there in the first place.