Hi, I'm Omoshola.
AI/ML researcher and systems builder. I work on the seam between what AI can do and what institutions can trust — memory the agent owns, decisions that can be replayed, evidence a third party can verify.
The interesting questions in agentic systems have stopped being about capability; the open ones are about accountability. Whether an autonomous decision can be replayed. Whether a model's memory belongs to the user or the vendor. Whether a regulator can independently verify what happened, three years after the fact, without the original operator in the room.
Most of what I build, write, and review lives in that gap. The writing is about what the gap actually looks like in code; the work is the long version.
Recent Posts
- Verity: The Truth Engine Underneath XAP (14 min read)
  A close read of verity-engine — five Rust crates, seven trust properties, and the design invariants that turn an agent settlement decision into a record any third party can independently replay.
- XAP: The Settlement Layer for Autonomous Agents (10 min read)
  A close read of XAP (eXchange Agent Protocol): six primitive objects, one transaction flow, and what it takes to make agent-to-agent commerce verifiable without any human in the loop.
- AgenticMemory, Part 3: The Capabilities That Required a New Data Structure (9 min read)
  Four things that are only possible because of the typed cognitive graph — finding connections, simulating what breaks, auditing gaps in reasoning, and recognizing patterns across completely different domains.
- AgenticMemory, Part 2: How the Agent Searches and Knows What Matters (8 min read)
  Self-correction without data loss, exact-term search that actually works, hybrid retrieval that uses both signals, and a way to find which beliefs everything else is built on.
- AgenticMemory, Part 1: The Memory Your Agent Was Always Missing (10 min read)
  Four foundational capabilities that give AI agents a real brain — not a search bar. The binary format, the portable file, the typed events, and the reasoning chains.
- Building a Payment System That Proves Finality Instead of Asserting It (11 min read)
  Most payment platforms tell you a transaction succeeded. I am building one that proves it — cryptographically, deterministically, and in a way that can be replayed and audited years later. Notes from the work in progress.
- Building a Memory System for AI Agents (22 min read)
  Vector databases tell you what is similar. They do not tell you what happened, what was decided, or what the agent learned that overrides what it knew before. I needed something different.
- Building a Web Cartographer for AI Agents (21 min read)
  I wanted AI agents to understand the web the way a researcher does — not just fetch a page, but navigate structure, follow intention, and remember where they have been. Notes from building Cortex.
- Building Risk Intelligence Systems That Institutions Can Trust (12 min read)
  Lessons from building AI-powered vulnerability detection for financial institutions and supply chain operators. The hard part is never the algorithm.
- Why Explainable AI Matters for Financial Regulation (11 min read)
  The case for transparency in AI-driven credit and risk decisions affecting millions of Americans. Not just for compliance — for doing this right.