
AgenticMemory, Part 3: The Capabilities That Required a New Data Structure

Published:
• 9 min read

Part 1 covered the foundation. Part 2 covered search and structure. Today is different.

The four capabilities in this post are not improvements on existing things. They are things that did not exist before, and they are only possible because the memory is a typed causal graph. You cannot bolt these onto a vector store or a flat text file. The data structure has to be designed for them from the start.


9. Shortest Path

Find the connection chain between any two ideas in the knowledge graph

The agent knows about your budget constraints. It also knows about Kubernetes. Can it tell you how those two things are related in its own knowledge?

With a flat memory store: no. The memories exist, but there is no relationship between them. They are isolated nodes.

Shortest path — Budget to Kubernetes

    Budget constraint --CAUSED--> Decision: Cheaper hosting --REQUIRED--> Kubernetes

    Path length: 2 hops

With AgenticMemory: the shortest path algorithm traverses the edges between any two nodes. Budget constraint caused a decision to use cheaper hosting, which required Kubernetes. Two hops. The path exists in the graph because the edges were written when the reasoning happened.

This is more useful than it sounds. “How is my team’s skill gap connected to the deadline?” “How did the early architectural decision affect the current performance problem?” The agent can find the chain.

In plain terms: you know the game “Six Degrees of Kevin Bacon”? Any actor connected to any other in six steps? This is that for your AI’s knowledge. Any two ideas connected through their actual reasoning chain.
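The core of this is ordinary breadth-first search over a typed edge list. Here is a minimal sketch in plain Python, using the Budget → Kubernetes example from above; the triple format and node names are my own illustration, not AgenticMemory’s actual API:

```python
from collections import deque

# Typed edges as (source, relation, target) triples -- illustrative data
# mirroring the example above, not the library's real schema.
EDGES = [
    ("Budget constraint", "CAUSED", "Decision: Cheaper hosting"),
    ("Decision: Cheaper hosting", "REQUIRED", "Kubernetes"),
    ("Budget constraint", "CAUSED", "Decision: Smaller team"),
]

def shortest_path(edges, start, goal):
    """BFS over the typed graph; returns the hop chain with edge labels."""
    adjacency = {}
    for src, rel, dst in edges:
        adjacency.setdefault(src, []).append((rel, dst))
    queue = deque([[("", start)]])  # each path is a list of (relation, node)
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1][1]
        if node == goal:
            return path
        for rel, nxt in adjacency.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [(rel, nxt)])
    return None  # no chain exists between the two ideas

path = shortest_path(EDGES, "Budget constraint", "Kubernetes")
# Two hops: CAUSED, then REQUIRED.
```

Because BFS explores hop by hop, the first path that reaches the goal is guaranteed to be a shortest one.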


10. Belief Revision

Counterfactual propagation — “if I learn X, what breaks?”

This is the one that made me stop and think for a while. It is also the one nobody else has built.

When new information arrives, no current AI system can tell you what it invalidates. “Your team just learned Go.” Okay. Which of the agent’s past decisions assumed they did not know Go? Which recommendations were built on that assumption? Which inferences depended on it? Nobody knows. Not the agent. Not you. The information lands and you just hope nothing important breaks.

Counterfactual: “Team now knows Go”

- Decision #1042 (“Chose Rust — no Go experience”) is invalidated
- Inference #1567 (“Team is Rust-only”) drops from 90% to 30% confidence
- 2 other decisions flagged for review

No changes committed. This is a simulation only.

Belief Revision injects the hypothetical new fact into a simulation of the graph, then traces every CAUSED_BY and SUPPORTS edge forward to find what it affects. The agent reports exactly which stored beliefs are invalidated, which ones have reduced confidence, and which ones need review. Nothing is changed until you confirm.

It is a what-if analysis on the agent’s own belief system. Run it before you tell the agent anything important, and know what you are walking into.

In plain terms: right now, getting new information is like pulling a random block from a Jenga tower — you do not know what is going to fall. After this, the agent simulates pulling the block and shows you exactly which pieces wobble, before you touch anything.
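The propagation step can be sketched as a read-only forward traversal of the dependency edges. Everything below is a hypothetical schema of my own (node kinds, edge directions, IDs), chosen to mirror the example above; it is not AgenticMemory’s real data model:

```python
# Hypothetical node table -- kinds and IDs are illustrative only.
NODES = {
    "F1":    {"kind": "Fact", "text": "Team has no Go experience"},
    "D1042": {"kind": "Decision", "text": "Chose Rust"},
    "I1567": {"kind": "Inference", "text": "Team is Rust-only"},
}
# (source, relation, target): a CAUSED_BY edge means source depends on
# target; a SUPPORTS edge means target is supported by source.
EDGES = [
    ("D1042", "CAUSED_BY", "F1"),
    ("F1", "SUPPORTS", "I1567"),
]

def simulate_counterfactual(nodes, edges, changed_fact):
    """Trace everything that depends, directly or transitively, on the
    changed fact. Read-only: nothing in `nodes` is modified."""
    dependents = {}
    for src, rel, dst in edges:
        if rel == "CAUSED_BY":
            dependents.setdefault(dst, []).append(src)
        elif rel == "SUPPORTS":
            dependents.setdefault(src, []).append(dst)
    affected, stack, seen = [], [changed_fact], {changed_fact}
    while stack:
        node = stack.pop()
        for dep in dependents.get(node, []):
            if dep not in seen:
                seen.add(dep)
                affected.append((dep, nodes[dep]["kind"]))
                stack.append(dep)
    return affected  # flagged for review; no changes committed

report = simulate_counterfactual(NODES, EDGES, "F1")
```

The real feature also re-scores confidence along the way; this sketch only answers the structural question of which beliefs are downstream of the changed fact.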


11. Reasoning Gap Detection

Structural audit — “where am I guessing?”

You cannot see inside your AI’s reasoning right now. It makes decisions and you assume they are well-founded. Maybe a big decision was built on one shaky Inference. Maybe a conclusion was drawn from a single low-confidence Fact. You would never know, because the structure is invisible.

Gap Detection scans the entire knowledge graph and reports on structural weaknesses. Not content weaknesses. Structural ones. It is looking at the shape of the reasoning, not the topic.

Reasoning health audit — score: 0.73 / 1.0

- Unjustified decisions (zero supporting evidence): 3 found
- Fragile inferences (single-fact basis): 5 found
- Low-confidence foundations (many decisions depend on them): 4 found

A Decision with zero supporting Fact nodes is an unjustified decision. An Inference based on exactly one Fact is fragile. A Fact with low confidence but seven Decisions pointing to it through CAUSED_BY is a dangerous foundation. The audit surfaces all of these with a single call, along with an overall health score.

This is structural analysis of machine reasoning. There is nothing like it in any current system because no current system has the causal graph structure needed to run it.

In plain terms: right now, your AI’s thinking is a black box. After this, it is an X-ray machine for its own logic. “Show me every place where you are guessing” — and it does. Like a building inspector finding cracks before the house falls.
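Each of the three checks above is a simple structural query over the graph. A sketch, again with an invented node/edge schema and invented thresholds (the 0.5 confidence cutoff and the 3-dependent cutoff are my placeholders, not the library’s real parameters):

```python
def audit(nodes, edges):
    """Structural audit: looks at the shape of the reasoning, not its content."""
    supports = {}    # node -> list of supporting sources
    dependents = {}  # fact -> decisions that were CAUSED_BY it
    for src, rel, dst in edges:
        if rel == "SUPPORTS":
            supports.setdefault(dst, []).append(src)
        elif rel == "CAUSED_BY":
            dependents.setdefault(dst, []).append(src)
    report = {"unjustified_decisions": [], "fragile_inferences": [],
              "weak_foundations": []}
    for nid, node in nodes.items():
        # A Decision with zero supporting Facts is unjustified.
        if node["kind"] == "Decision" and not supports.get(nid):
            report["unjustified_decisions"].append(nid)
        # An Inference resting on exactly one Fact is fragile.
        if node["kind"] == "Inference" and len(supports.get(nid, [])) == 1:
            report["fragile_inferences"].append(nid)
        # A low-confidence Fact with many dependents is a dangerous foundation.
        if (node["kind"] == "Fact" and node.get("confidence", 1.0) < 0.5
                and len(dependents.get(nid, [])) >= 3):
            report["weak_foundations"].append(nid)
    return report

# Toy graph exercising all three weaknesses.
NODES = {
    "D1": {"kind": "Decision"},                  # no evidence at all
    "I1": {"kind": "Inference"},                 # rests on a single fact
    "F1": {"kind": "Fact", "confidence": 0.3},   # shaky, but load-bearing
    "F2": {"kind": "Fact", "confidence": 0.9},
    "D2": {"kind": "Decision"}, "D3": {"kind": "Decision"}, "D4": {"kind": "Decision"},
}
EDGES = [
    ("F2", "SUPPORTS", "I1"),
    ("F2", "SUPPORTS", "D2"),
    ("D2", "CAUSED_BY", "F1"), ("D3", "CAUSED_BY", "F1"), ("D4", "CAUSED_BY", "F1"),
]
report = audit(NODES, EDGES)
```

Note that none of these queries read the text of any node; they only count edges, which is what makes the audit structural rather than semantic.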


12. Analogical Reasoning

Subgraph pattern matching — recognizing the same shape across different domains

A senior engineer who has solved ten different migrations brings something to a new migration problem that a junior engineer does not. They can feel that it is the same shape as something they have seen before. “This reminds me of that time we…” — and they are right, even though the domain is completely different.

Right now, an AI agent can search its memory by topic. It cannot recognize structural patterns across different domains.

Same shape, different domain

Current problem: Migrate monolith to microservices
→ large system → ordered parts → dependency sequence

Found in memory (20 sessions ago): Migrate Flask app to FastAPI services
→ large system → ordered parts → dependency sequence

Analogical Reasoning uses subgraph pattern matching. The agent is not matching keywords. It is matching shapes. The structure of “decompose a large complex system into ordered parts and execute them by dependency” is identical whether you are migrating a codebase or planning a kitchen renovation. The agent finds the match and surfaces it: “Here is how I solved a structurally similar problem 20 sessions ago.”

Different words. Different domain. Same graph shape.

This requires the typed cognitive graph from the start. You cannot add structural fingerprinting to a flat memory store after the fact. The edges have to be there, typed and traversable, before this is possible.

In plain terms: right now, your AI can only find memories by what they are about. After this, it thinks like someone who has seen enough problems to recognize the pattern under the surface.
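One crude way to make “same shape” concrete is to reduce each subgraph to a fingerprint: the multiset of (source kind, relation, target kind) triples with all labels thrown away, so only the typed structure remains. This is a deliberate simplification of my own (full subgraph matching also handles partial and approximate matches), and the schema is again invented for illustration:

```python
def fingerprint(nodes, edges):
    """Shape of a subgraph: sorted (src_kind, relation, dst_kind) triples.
    The actual node labels are discarded -- only structure survives."""
    return tuple(sorted(
        (nodes[s]["kind"], rel, nodes[d]["kind"]) for s, rel, d in edges
    ))

# Two migrations in different domains, with identical structure.
MONOLITH = {
    "nodes": {"A": {"kind": "System"}, "B": {"kind": "Part"}, "C": {"kind": "Part"}},
    "edges": [("A", "DECOMPOSED_INTO", "B"), ("A", "DECOMPOSED_INTO", "C"),
              ("B", "PRECEDES", "C")],
}
FLASK = {
    "nodes": {"X": {"kind": "System"}, "Y": {"kind": "Part"}, "Z": {"kind": "Part"}},
    "edges": [("X", "DECOMPOSED_INTO", "Y"), ("X", "DECOMPOSED_INTO", "Z"),
              ("Y", "PRECEDES", "Z")],
}
# A structurally different memory, for contrast.
OTHER = {
    "nodes": {"P": {"kind": "Fact"}, "Q": {"kind": "Decision"}},
    "edges": [("P", "SUPPORTS", "Q")],
}

same = fingerprint(**MONOLITH) == fingerprint(**FLASK)  # True: same shape
```

Because the fingerprint never looks at the words, “migrate monolith” and “migrate Flask app” collapse to the same value, while a graph with a genuinely different structure does not.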


One more day. Tomorrow is the final four: how the agent maintains itself, how it tracks the evolution of its own beliefs over time, why it needs nothing beyond a single file, and the research paper that formalizes all of it.

← Part 2: Search and Structure
Part 3 of 4
Part 4: Infrastructure and the Bigger Picture →
Source
github.com/agentic-revolution/agentic-memory
pip install agentic-memory