What’s of interest? DOC • To grow, we must forget… but now AI remembers everything
Tell me more!
AI’s infinite memory could endanger how we think, grow, and imagine. But we can do something about it.
Forgetting, then, begins to look like a flaw. A failure to retain. A bug in the code. Especially in our own lives, we treat memory loss as a tragedy, clinging to photo albums and cloud backups to preserve what time tries to erase.
But what if human forgetting is not a bug, but a feature? And what happens when we build machines that don’t forget, but are now helping shape the human minds that do?
Forgetting is a feature of human memory
“Infinite memory” runs against the very grain of what it means to be human. Cognitive science and evolutionary biology tell us that forgetting isn’t a design flaw, but a survival advantage. Our brains are not built to store everything. They’re built to let go: to blur the past, to misremember just enough to move forward.
This is one among many items I will regularly tag in Pinboard as oegconnect, and automatically post tagged as #OEGConnect to Mastodon. Do you know of something else we should share like this? Just reply below and we will check it out.
Thank you for sharing this—really interesting perspective.
About the idea of “forgetting”: I think the point is that human memory is designed to let go in order to function, grow, and adapt. AI, on the other hand, doesn’t “remember” in the human sense. It stores and retrieves information because it’s programmed to do so, and only within the boundaries set by developers, policies, and technical limits.
So AI’s “infinite memory” isn’t natural—it’s a design choice. And depending on how we use it, it can support or challenge how humans think and learn.
That’s how I understand it—curious to hear others’ thoughts too.
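The reply’s point that machine “memory” is a design choice, bounded by what developers decide to retain, can be sketched in a few lines. Everything here (the class name, the `max_turns` parameter) is hypothetical, meant only to illustrate retention as a policy knob:

```python
from collections import deque

class BoundedChatMemory:
    """Hypothetical sketch: a chat system's "memory" is whatever its
    developers choose to keep. Here retention is capped at max_turns;
    older exchanges are forgotten automatically, by design."""

    def __init__(self, max_turns=3):
        # deque with maxlen silently drops the oldest item when full
        self.turns = deque(maxlen=max_turns)

    def add(self, user_msg, assistant_msg):
        self.turns.append((user_msg, assistant_msg))

    def context(self):
        # the only "memory" the model ever sees again
        return list(self.turns)

mem = BoundedChatMemory(max_turns=2)
mem.add("hi", "hello")
mem.add("what is forgetting?", "letting go")
mem.add("and AI?", "a design choice")
print(mem.context())  # only the 2 most recent turns survive
```

Raise `max_turns` and the system “remembers everything”; lower it and it forgets like we do. Either way, the behavior comes from a developer’s choice, not from anything inherent to the model.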
As Tamim Ansary has pointed out in The Invention of Yesterday, we didn’t always need to remember what happened yesterday. And memory is very selective: it depends on the connections.
Cell phones are NOT the problem in U.S. classrooms; it’s what the cell phones are connected to. LLMs depend on connections, too. If you don’t want your questions, and the LLM’s answers, stored where others can see them, you can run an LLM that isn’t connected to the internet. We can choose.
Existing retrieval methods in Large Language Models show degraded accuracy on temporally distributed conversations, primarily because they rely on simple similarity-based retrieval. Unlike memory retrieval methods that depend solely on semantic similarity, our proposed SynapticRAG combines temporal association triggers with biologically inspired, synaptic-like stimulus propagation to identify relevant dialogue histories. A dynamic leaky integrate-and-fire mechanism then selects the most contextually appropriate memories. Experiments on four datasets in English, Chinese, and Japanese show that, compared with state-of-the-art memory retrieval methods, SynapticRAG achieves consistent improvements across multiple metrics, by up to 14.66 percentage points. This work bridges the gap between cognitive science and language model development, providing a new framework for memory management in conversational systems.
?? What are “biologically-inspired synaptic propagation mechanisms” or “A dynamic leaky integrate-and-fire mechanism” ??
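For what it’s worth: “leaky integrate-and-fire” (LIF) is a textbook computational-neuroscience model of a neuron. It integrates incoming stimulus, constantly “leaks” charge back toward rest, and “fires” when it crosses a threshold. Here is a minimal generic sketch of that idea; the parameter values are illustrative assumptions, and this is not the paper’s actual implementation:

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the classic model the
# abstract's "dynamic leaky integrate-and-fire mechanism" alludes to.
# Generic illustration only; all parameter values are assumptions.

def simulate_lif(inputs, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Return the time steps at which the neuron fires.

    Each step, the membrane potential v integrates the incoming
    stimulus but also "leaks" back toward its resting value; when v
    crosses the threshold, the neuron fires and v resets.
    """
    v = v_rest
    spike_times = []
    for t, stimulus in enumerate(inputs):
        v += dt * (-(v - v_rest) / tau + stimulus)  # leak + integrate
        if v >= v_threshold:
            spike_times.append(t)  # fire
            v = v_reset            # reset after firing
    return spike_times

# A strong sustained stimulus drives repeated firing; a weak one
# leaks away before ever reaching threshold.
print(simulate_lif([0.2] * 50))
print(simulate_lif([0.01] * 50))  # []
```

“Synaptic propagation” in the abstract presumably means stimulus spreading between associated memories the way activation spreads across synapses. Per the abstract, SynapticRAG seems to apply this kind of dynamics to retrieval: candidate memories accumulate stimulus from temporal and semantic association triggers, and the ones that “fire” are the memories selected.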