Inside the agent's brain

What’s inside an agent’s brain?
I’ve been building AI agents for a while now, and for most of that time I treated them like tools++: kind
of impressive for a few seconds, and then they’d forget everything the moment the session ended.

I didn’t think much of it at first. That’s just how LLMs work, right? You spin them up, they do a thing, they die. Next session, fresh brain.

The pros of this approach are obvious: clean context on every session. Ask the previous session to summarize “everything” for you and keep going!

But it just isn’t that impressive if you do it like that.

The more I built, the more I kept running into the same wall: my agents were constantly re-learning the same lessons. I’d spend an afternoon teaching one my quirky way of structuring a repo… and the next morning it’d happily suggest the exact thing I’d told it not to do yesterday.

Then I figured it out: okay, let’s just make the agent read lessons from a markdown file.
This was in spring 2024. Agents.MD and Claude.MD followed this path, and finally SKILLS.MD became a really big thing in 2026.

So I started thinking about it differently.
What if the agent wasn’t just a short-lived process?

AI memory mapped like a human

That’s when I stumbled into what I’m now calling a human-like way of building agent memory.
Human memory doesn’t just go away: I still remember how my college days were spent.
Agents must become continuous things too.

We have to treat it as something that exists across sessions.

I am not talking about consciousness, calm down… just a little file somewhere that says “here’s who I am,
here’s what I’ve learned about the human I work with, here’s what I’m working on.”

Suddenly the agent can walk back into a conversation and actually remember what you did and how you did it TOGETHER.
It’s like when you see a friend and suddenly you know their favorite gelato flavor, the name of their dog, the name of their gf, maybe their address too…

It’s the difference between a coworker and a stranger who happens to have read your files on demand.
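In practice, “a little file somewhere” can be exactly that: one markdown file the agent reads at the start of every session. Here is a minimal sketch in Python; the path, filename, and section headings are all hypothetical, not any standard.

```python
from pathlib import Path

# Hypothetical location for the agent's persistent notes.
MEMORY_FILE = Path("memory/SELF.md")

DEFAULT_MEMORY = """# Who I am
A coding agent that works with one human over many sessions.

# What I've learned about them
(nothing yet)

# What I'm working on
(nothing yet)
"""

def load_memory() -> str:
    """Read the agent's notes, creating a default file on the very first run."""
    if not MEMORY_FILE.exists():
        MEMORY_FILE.parent.mkdir(parents=True, exist_ok=True)
        MEMORY_FILE.write_text(DEFAULT_MEMORY)
    return MEMORY_FILE.read_text()

# At session start, this text gets prepended to the system prompt,
# so the new session "walks in" already knowing the shared history.
notes = load_memory()
```

That’s the whole trick: the continuity lives on disk, not in the process.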

Memory isn’t unlimited, neither is context

Here’s the part that took me the longest to internalize: context windows are NOT UNLIMITED.
An agent starts with the context window of whatever LLM it runs on.

Every time you compact the context, you lose information, and things get worse. So you have to write yourself some notes.

I know, I know… we keep getting bigger ones. Opus, Sonnet, Gemini: a million tokens, ten million, whatever.
Doesn’t matter. They’re still finite.

Context hasn’t been solved. Tokens are currency and they cost something, and the more junk you cram into the context, the dumber the agent gets.

So the agent has a budget. It has to decide what stays in the hot path and what gets shipped off to long-term storage.
High-priority stuff lives in active memory (active context). Everything else goes to the file system. It is retrievable when needed, invisible when not.
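That budgeting decision can be sketched in a few lines. The priorities, token counts, and memory strings below are all made up for illustration; nothing here is a real API.

```python
# Greedy packing: keep the highest-priority memories "hot" until the
# token budget runs out, spill everything else to long-term storage.

def pack_context(memories, budget_tokens):
    """memories: list of (priority, token_count, text) tuples."""
    hot, cold = [], []
    used = 0
    for priority, tokens, text in sorted(memories, key=lambda m: -m[0]):
        if used + tokens <= budget_tokens:
            hot.append(text)          # stays in the active context
            used += tokens
        else:
            cold.append(text)         # would be written to the file system
    return hot, cold

hot, cold = pack_context(
    [(9, 300, "user hates nested ternaries"),
     (5, 800, "repo layout conventions"),
     (1, 2000, "full transcript of last session")],
    budget_tokens=1200,
)
```

The exact scoring doesn’t matter much; what matters is that the decision is explicit, so nothing silently falls off the end of the window.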

It feels a lot like being a human, actually.
I don’t hold my entire life in my head at once. I remember where to find things.
Have you heard of Rogan Josh? It’s a curry recipe I absolutely love! I don’t recall all the ingredients, but I have a piece of paper with the complete recipe and ingredients in a drawer in the kitchen.
That’s it.
The agent needs the same skill.

Learning from work

Normally when we talk about an AI learning, we mean self-supervised pretraining, then supervised fine-tuning, then reinforcement learning.
Big expensive jobs, tons of energy. But a memory-native agent learns a different way: by rewriting its own notes.

Made a mistake? Easy fix: update the memory block so you don’t make it again.
Discovered a better pattern? Write it down.
Noticed the user keeps asking for the same thing? Consolidate, compress, reorganize.

The agent doesn’t just append everything to a log.
It’s editing itself, summarizing and refining with intelligence.
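The “edit, don’t append” idea fits in a few lines. A sketch, with the note layout and lesson strings invented for the example:

```python
# A self-edit is a rewrite, not an append: find the stale lesson
# and replace it in place, so the correction overwrites the mistake.

def revise_lesson(notes: str, old: str, new: str) -> str:
    """Replace a stale lesson; append only if it was never recorded."""
    if old in notes:
        return notes.replace(old, new)
    return notes + "\n- " + new

notes = "- tests live in tests/\n- use tabs for indentation"
notes = revise_lesson(notes, "use tabs for indentation",
                      "use 4 spaces for indentation (corrected)")
```

The point is that the old, wrong lesson is gone afterward, not buried under a newer entry that the agent may or may not notice first.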

Not a black box

And because all of this lives in plain text, I can read it.
I can see exactly what my agent thinks it knows about me and my work. If it’s wrong, I fix the file. No rocket science, just text on disk.

Whether you use Obsidian or Git or something else to keep track of your agent’s memories doesn’t matter.
Use what works for you.
I use markdown files.
The agent can ls its own memories.
It can grep for something it vaguely remembers.
It can git log to see how its understanding of a project evolved over time.
It’s navigating its own past the same way I navigate a codebase.
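Because the memories are just files, the agent’s tools can be ordinary file operations. Here is a rough Python equivalent of that grep, with a throwaway directory standing in for the real memory folder (the directory layout is an assumption):

```python
import tempfile
from pathlib import Path

# The agent's "grep": scan its markdown memories for a half-remembered phrase.
def grep_memories(root, needle):
    hits = []
    for path in Path(root).rglob("*.md"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if needle.lower() in line.lower():
                hits.append((str(path), lineno, line.strip()))
    return hits

# Demo with a temporary memory folder.
root = tempfile.mkdtemp()
Path(root, "projects.md").write_text("- Gelato app: user prefers pistachio\n")
hits = grep_memories(root, "pistachio")
```

Swap in real ls, grep, and git log via a shell tool if your agent has one; the shape of the workflow is the same.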

Open, transparent…portable!

It’s open. Anyone can read my agent’s memories if I decide to share them; no private algorithm or tool required.
It’s transparent. I can see what the agent thinks it knows. I can audit it. I can correct it. No black box, no “the model just decided that.” The behavior is in the files, and the files are in git.
It’s portable. If my agent’s personality and knowledge live in text files, I can move it from one model to another.
No lock-in.

And best of all, it’s autonomous. The agent is managing its own memory smartly. It’s doing the housekeeping I used to do manually, and I get to spot-check the work.

I don’t think every agent needs this. If you’re building a one-shot script that summarizes a PDF, you don’t need this.

But for the agents I actually want to work with — the ones that sit next to me for months at a time, that know my projects, that I’m going to be annoyed at when they forget something — yeah. This is the direction.

Pick sqlite, text files and git and go wild. Go build your memory.
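And if sqlite is your pick, the whole store fits in a handful of lines. A minimal sketch using Python’s built-in sqlite3 module; the schema and function names are made up:

```python
import sqlite3

# One table, a writer, a recaller. Use a real file path instead of
# ":memory:" if you want the memories to survive the process.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE IF NOT EXISTS memories (topic TEXT, note TEXT)")

def remember(topic, note):
    """Write one note under a topic."""
    con.execute("INSERT INTO memories VALUES (?, ?)", (topic, note))
    con.commit()

def recall(topic):
    """Return every note recorded under a topic."""
    rows = con.execute("SELECT note FROM memories WHERE topic = ?", (topic,))
    return [note for (note,) in rows]

remember("user", "prefers small, focused pull requests")
```

Text files are easier to eyeball and diff; sqlite is nicer once you want queries. Either way, the memory is yours to read.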