Hey, I’m Kenneth

I build personal AI systems (like KenLM), run experiments with models, and share what I learn about ML, tools, and creativity.

Giving My Local LLM a Memory

Introduction

LLMs are incredible at many things. If you stop and think about it for a moment, they're absolutely remarkable. Through a relatively simple decoder-only transformer architecture, we've created what is essentially a natural-language computer that mimics intelligence. With the right orchestration, these models can act as tools, reasoning engines, or even full agents that automate entire chunks of our personal and professional lives.

However, as powerful as they are, today's LLMs still fall short of anything resembling human intelligence. As this paper from Google points out, LLMs lack neuroplasticity—the ability to change their structure over time in response to new experiences. In practical terms, once an LLM is trained, its memory becomes frozen. The model ends up with long-term semantic knowledge baked into its weights and a short-term working memory constrained by its context window, but no built-in mechanism for converting useful short-term experiences back into lasting knowledge. ...
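One common way to bridge that gap is an external memory layer: persist short-term "experiences" outside the model and retrieve relevant ones into the context window at inference time. The sketch below is a minimal, hypothetical illustration of that idea (not the system described in the full post): notes are stored in a JSON file and recalled by simple word overlap, then prepended to the prompt. The names `MemoryStore`, `remember`, `recall`, and `build_prompt` are all assumptions for illustration.

```python
import json
from pathlib import Path

class MemoryStore:
    """Toy persistent memory: notes survive across sessions on disk."""

    def __init__(self, path: str = "memory.json"):
        self.path = Path(path)
        # Load prior notes if a previous session saved any.
        self.notes = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, note: str) -> None:
        # Append a short experience and persist it immediately.
        self.notes.append(note)
        self.path.write_text(json.dumps(self.notes))

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Rank notes by naive word overlap with the query; a real system
        # would use embeddings, but this keeps the sketch dependency-free.
        q = set(query.lower().split())
        scored = sorted(self.notes, key=lambda n: -len(q & set(n.lower().split())))
        return scored[:k]

def build_prompt(store: MemoryStore, user_msg: str) -> str:
    # Inject recalled memories into the context window before the user turn.
    facts = "\n".join(f"- {n}" for n in store.recall(user_msg))
    return f"Relevant memories:\n{facts}\n\nUser: {user_msg}"
```

The point of the sketch is the shape of the loop, not the retrieval method: short-term context gets written out as durable notes, and future prompts pull the relevant ones back in, approximating the consolidation step the model itself lacks.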

December 4, 2025 · 16 min · Kenneth Layton