
AgenticMemory, Part 1: The Memory Your Agent Was Always Missing


This is part one of a four-part series on AgenticMemory. Each post covers four capabilities. Start here if you want to understand what changed and why it matters.

Every agent today

Conversation ends. Memory gone. Start from zero next session. No reasoning trail. No history. No way to ask “why did you say that last week?”

AgenticMemory

One .amem file grows with every session. Facts, decisions, corrections all typed and linked. Any agent can read it. The reasoning is always there.

Every AI you have ever used has amnesia. Not partial amnesia. Complete, total, reset-every-conversation amnesia. The “memory” features that platforms advertise? They save preference snippets. “User likes Python.” That is the full extent of it. Whatever reasoning the agent used, whatever decisions it made, whatever it learned — it evaporates the moment you close the window.

I built AgenticMemory to fix that. Not to add a feature on top of existing systems, but to build the underlying thing that was never there.

These are the first four capabilities it has.


1. The Binary Brain

The .amem file — single-file cognitive graph, sub-millisecond queries

Right now, every AI agent has a notepad it throws away after every conversation. If it has any memory at all, it is a flat text file or a vector database someone bolted on. Neither of those is a brain. They are filing cabinets. You can store things in them but you cannot reason across them.

AgenticMemory gives the agent a binary graph. A single .amem file where every fact it learned, every decision it made, every mistake it corrected is a node, and every relationship between them is an edge. The file is memory-mapped by the OS, compressed with LZ4, indexed for direct lookup. It does not require a database. It does not require a cloud account. It lives on your machine.

The .amem file format: a 64-byte header, 72-byte node records, 32-byte edge records, and an LZ4-compressed content block.

276 nanoseconds to add a memory. Under 1 millisecond to query it. 10,000 memories fit in 8 megabytes. One file is the entire brain.
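To make that layout concrete, here is a minimal sketch of fixed-width records at those sizes using Python's struct module. The record sizes come from the format description above; the field names and their arrangement are illustrative assumptions, not the actual on-disk format.

```python
import struct

# Illustrative sketch of fixed-width .amem records. Only the sizes
# (64-byte header, 72-byte nodes, 32-byte edges) come from the post;
# the individual fields are assumptions, not the real format.
HEADER = struct.Struct("<8s I I Q Q Q Q Q 8x")  # magic, version, flags, node/edge
                                                # counts, section offsets -> 64 bytes
NODE = struct.Struct("<Q I I Q Q I Q 28x")      # id, event type, flags, timestamp,
                                                # content offset/len, first edge -> 72 bytes
EDGE = struct.Struct("<Q Q I f Q")              # source id, target id, edge type,
                                                # weight, timestamp -> 32 bytes

assert HEADER.size == 64 and NODE.size == 72 and EDGE.size == 32
```

Fixed-width records are what make memory-mapped queries cheap: the i-th node always sits at a computable offset (header size plus i times the node size), so a lookup is pointer arithmetic, not a parse.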

In plain terms: right now your AI has a whiteboard it erases at the end of every conversation. After this, it has a notebook it keeps. Every session adds pages. You can always go back.


2. The Portable Mind

Your memory travels with you across any LLM provider

Here is something nobody talks about when they explain AI memory: even when it works, the memory belongs to the platform, not to you. Your history with Claude is trapped inside Claude. Your history with ChatGPT is trapped inside ChatGPT. Switch providers? Start from nothing. Every preference, every context the agent had built up about how you work and what you need — gone.

AgenticMemory inverts this. The .amem file lives on your machine. It is yours. When you switch from Claude to GPT-4 to a local Ollama model, the file goes with you. The new agent reads it and picks up from where the old one left off. Same facts. Same decisions. Same context about who you are and what you are working on.

This is not a theoretical claim. The repo includes 21 cross-provider validation tests showing that Claude, GPT-4o, and Ollama all read the same file with identical results.

In plain terms: right now, switching AI providers is like getting amnesia and starting a new life. After this, it is like changing doctors. Your records follow you because they are yours.

The file is portable in every sense of the word. Save it. Copy it. Back it up. Move it between machines. The brain follows the person, not the platform.
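Because the entire brain is one ordinary file, backing it up or moving it is plain file handling. A minimal sketch, with hypothetical paths:

```python
import shutil
from pathlib import Path

# Hypothetical paths -- point these at wherever your .amem file actually lives.
brain = Path.home() / "agents" / "assistant.amem"
backup = Path.home() / "backups" / "assistant.amem.bak"

backup.parent.mkdir(parents=True, exist_ok=True)
shutil.copy2(brain, backup)  # the whole memory graph is this one file
```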


Capabilities one and two are about structure and portability. The next two are about what actually gets stored.


3. Cognitive Events

Six typed memory categories — not just text, but meaning

Every memory system that exists today stores text. Raw text. “User said they prefer Python.” A flat string floating in a void with no type, no context, no connection to anything else. There is no way to ask “show me all the decisions the agent made last week” because there is no concept of a “decision” in the system. It is all just text.

AgenticMemory stores typed events. Every memory has a category:

Fact: something the agent learned is true.
Decision: a choice the agent made, and why.
Inference: a conclusion drawn from other facts.
Correction: something the agent got wrong and fixed.
Skill: a procedure or capability the agent learned.
Episode: a compressed summary of a full session.

The agent does not just remember what you said. It remembers what kind of thing it learned. A Fact is different from a Decision. A Decision is different from a Correction. And that difference changes how you query it, how you weight it, how the decay model treats it over time.

“Show me all decisions from last week” is now a filter, not a guess.
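As a sketch of what that filter looks like in code: the six categories above become an enum, and the query becomes a comprehension. The class and field names here are illustrative assumptions, not the library's API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum, auto

class EventType(Enum):
    FACT = auto()
    DECISION = auto()
    INFERENCE = auto()
    CORRECTION = auto()
    SKILL = auto()
    EPISODE = auto()

@dataclass
class MemoryEvent:
    event_type: EventType
    content: str
    created_at: datetime

def decisions_from_last_week(events: list[MemoryEvent]) -> list[MemoryEvent]:
    """'Show me all decisions from last week' as a filter over typed events."""
    cutoff = datetime.now() - timedelta(days=7)
    return [e for e in events
            if e.event_type is EventType.DECISION and e.created_at >= cutoff]
```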

In plain terms: right now, your AI throws every memory into the same junk drawer. After this, it has labeled filing cabinets. Finding what you need takes a second.


4. Reasoning Chains

CAUSED_BY edges — the agent can walk backwards through its own thinking

Ask any AI “why did you recommend that?” and you will get a made-up answer. Not because the agent is dishonest. Because the reasoning existed in the context window when it generated the response, then evaporated. There is no record. There is nothing to walk back through.

AgenticMemory stores reasoning chains. Every Decision node has CAUSED_BY edges connecting it to the Fact nodes that led to it. The agent recommended Python? Walk backwards: caused_by “team has no Rust experience” + caused_by “deadline is two weeks” + caused_by “existing codebase is Python.” Three hops. Three facts. The chain is preserved as a traversable path in the graph.

Reasoning chain for "Decision: Use Python":
CAUSED_BY → Fact: Team has no Rust experience
CAUSED_BY → Fact: Deadline is two weeks
CAUSED_BY → Fact: Existing codebase is Python

This is not reconstructed from context. Not hallucinated. Stored. The chain is queryable. “What caused this decision?” has a real answer because the answer was written down when the decision was made.
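Here is a minimal traversal sketch over an in-memory view of that chain. The node and edge structures are illustrative assumptions; the point is that answering "why?" is a walk over stored CAUSED_BY edges, not a reconstruction.

```python
from collections import deque

# Toy in-memory view of the graph above; structures are illustrative, not the file format.
nodes = {
    1: {"type": "Decision", "content": "Use Python"},
    2: {"type": "Fact", "content": "Team has no Rust experience"},
    3: {"type": "Fact", "content": "Deadline is two weeks"},
    4: {"type": "Fact", "content": "Existing codebase is Python"},
}
edges = [(1, "CAUSED_BY", 2), (1, "CAUSED_BY", 3), (1, "CAUSED_BY", 4)]

def why(node_id: int) -> list[str]:
    """Walk CAUSED_BY edges backwards from a decision to the facts behind it."""
    reasons, queue, seen = [], deque([node_id]), {node_id}
    while queue:
        current = queue.popleft()
        for src, kind, dst in edges:
            if src == current and kind == "CAUSED_BY" and dst not in seen:
                seen.add(dst)
                reasons.append(nodes[dst]["content"])
                queue.append(dst)  # facts can themselves have causes; keep walking
    return reasons

print(why(1))
# ['Team has no Rust experience', 'Deadline is two weeks', 'Existing codebase is Python']
```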

In plain terms: right now, asking your AI “why?” is like asking someone who blacked out what they did last night. They will make something up. After this, there is a receipt. Every decision has a paper trail back to the facts that caused it.


That is the foundation. Four things working together: a binary file format fast enough to not get in the way, portability that makes your memory yours, typed events that give structure to what gets stored, and reasoning chains that make the agent’s logic auditable.

Tomorrow: how the agent handles being wrong, how it searches its own memory, and how it decides what matters most.


Part 1 of 4
Part 2: Search and Structure →
Source: github.com/agentic-revolution/agentic-memory
pip install agentic-memory