Omoshola Owolabi
Analytics engineer and AI/ML researcher. The work is about how complex organizations stay correct when decisions, data, and money move faster than confirmation. Most of it lives in regulated financial environments, critical supply chains, and the open infrastructure underneath both.
I am drawn to the problems that are hard precisely because the people inside them cannot afford for the AI to be wrong.
The technical side
Most enterprise systems break in the same place every time: the gap between when a decision gets made and when reality catches up to it. Inventory that exists in the system but not in the warehouse. Transactions that are processed but not fully reconciled. Decisions made before the confirmation arrives. The interesting failures are not the ones that happen when nothing is moving. They are the ones that happen when decisions, data, and money are all in motion at once.
That gap is what I work on. In supply chain planning it shows up as forecast reliability under uncertainty, traceable recommendations a procurement officer can interrogate, and decision clarity that survives a change in conditions. In financial systems it shows up as credit risk models whose adverse action reasons satisfy ECOA and FCRA before a regulator ever asks, and risk intelligence pipelines that can explain themselves under stress. The shape of the problem is the same in both domains. Speed matters, but clarity matters more. The most resilient systems are not the ones that never fail. They are the ones that leave no ambiguity when they do.
Explainability is not a feature you bolt onto a model at the end. It is a design constraint that shapes every decision from feature engineering to model selection to how the audit trail gets generated. The work runs across the full stack. From the mathematics of fairness constraints and uncertainty propagation, to the engineering of production inference pipelines, to the regulatory interpretation of what "explainability" actually requires in a given jurisdiction. Research and implementation inform each other. You cannot do one well without doing the other, and most of the field's interesting problems live exactly where the two have to meet.
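One way to see explainability as a design constraint rather than a bolt-on: in a linear credit model, adverse action reasons fall directly out of the score decomposition, so the explanation is available by construction. The sketch below is illustrative only; the feature names and weights are assumptions, not any production model.

```python
# Illustrative weights for a toy linear credit model.
# Negative weights penalize the score; these values are assumptions.
WEIGHTS = {"utilization": -2.0, "delinquencies": -1.5, "history_years": 0.8}

def adverse_action_reasons(applicant, top_n=2):
    """Return the top_n features with the most negative score
    contributions, i.e. the strongest reasons for an adverse decision."""
    contribs = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    # Sort negative contributions ascending: most damaging first.
    negatives = sorted((c, f) for f, c in contribs.items() if c < 0)
    return [f for c, f in negatives[:top_n]]

reasons = adverse_action_reasons(
    {"utilization": 0.9, "delinquencies": 2, "history_years": 1}
)
print(reasons)  # → ['delinquencies', 'utilization']
```

Because the reasons are derived from the same arithmetic that produced the score, the audit trail and the decision can never disagree, which is the property a regulator actually checks for.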
Published work in this area has accumulated over 100 citations across ethical frameworks for AI in financial decisioning, machine learning for credit risk prediction, blockchain in supply chain finance, and network analysis for systemic risk assessment. Contributions to IEEE international standards development cover AI ethics, cybersecurity, financial LLM requirements, and supply chain security.
The non-technical side
I grew up in Nigeria. My path into this field came through trade finance and supply chain operations, not through a straight line from school to research. I spent years working inside the systems that AI is now trying to improve before I started building the AI. That gave me something a purely academic route would not have: I understand what it feels like to be on the receiving end of a bad model's output.
That background shapes everything. When I write about algorithmic bias in credit scoring, I am not writing abstractly. I have seen what it means in communities the formal financial system has historically failed. The governance questions are personal before they are professional.
I review research, judge hackathons and statistics competitions, mentor practitioners earlier in their careers, and have submitted technical commentary to federal AI policy processes. The people who build these systems should be in the room where they get governed, not watching from outside.
Outside the work I think seriously about African knowledge systems and what they have to teach us about computation and intelligence. The connection between Ifa divination's binary structure and the mathematical foundations of computing is not a metaphor. It is history that most people in this field have simply never encountered.
What I am building
Everything on the desk points at the same question. What does it actually take for an AI system to be trustworthy over time? Not just at launch, but as the world changes around it, as decisions accumulate, as edge cases surface, as a regulator asks the right question a year later. The question has two halves. The domain systems that need to be trustworthy on the outside. The agent infrastructure that makes them run on the inside. I work on both because neither one survives without the other.
The hard problem in supply chain planning is not the algorithm. It is making the algorithm's reasoning visible enough that a human expert will act on it. That insight is what Nexus is built around. A supply chain intelligence platform in Rust that treats demand, lead times, and supplier reliability as probability distributions instead of point estimates, runs Monte Carlo simulation across thousands of scenarios, and surfaces recommendations through an agentic layer a procurement team can interrogate. The full architecture is in code. Temporal graph kernel, BOM/MRP/procurement modules, GraphQL API, React interface. The thing the system ships is not the recommendation. It is the trust the recommendation earns.
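The core move described above, replacing point estimates with distributions and simulating across scenarios, can be sketched in a few lines. This is a seeded toy in Python, not Nexus's actual Rust implementation; the lead-time and demand distributions are assumptions chosen for illustration.

```python
import random

def simulate_stockout_risk(on_hand, reorder_qty, n_scenarios=10_000, seed=42):
    """Toy Monte Carlo: treat weekly demand and supplier lead time as
    distributions rather than point estimates, and report the fraction
    of scenarios that end in a stockout. Illustrative sketch only."""
    rng = random.Random(seed)
    stockouts = 0
    for _ in range(n_scenarios):
        # Assumed lead-time distribution: ~3 weeks, sd 1, floor of 1.
        lead_time_weeks = max(1, round(rng.gauss(3, 1)))
        # Assumed weekly demand: ~100 units, sd 30, truncated at zero.
        demand = sum(max(0.0, rng.gauss(100, 30)) for _ in range(lead_time_weeks))
        if on_hand + reorder_qty < demand:
            stockouts += 1
    return stockouts / n_scenarios

risk = simulate_stockout_risk(on_hand=150, reorder_qty=200)
print(f"stockout probability: {risk:.1%}")
```

The point of the distributional framing is the output type: instead of a single reorder number, the procurement team gets a probability it can interrogate, and the recommendation carries its own uncertainty.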
Generalist models are wide. Specialists trained to reason like domain experts beat them on the work that actually matters in regulated industries, and that gap widens as the stakes go up. Solen (supply chain management), Verac (finance and settlement), and Axiom (financial markets) are a family of specialists I am fine-tuning on top of Gemma 4 around exactly that thesis. The training pipeline scores every example across reasoning depth, domain accuracy, calibration, and practical value before it enters training. The models learn from expert mistakes and corrections, not just from correct answers. Nexus is powered by Solen. ZexRail is powered by Verac.
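The scoring gate described above can be sketched as a simple admission filter: every candidate example is scored on each axis and must clear a floor on all of them before it enters the fine-tuning set. Field names and thresholds here are hypothetical, not the actual pipeline.

```python
# The four axes named in the text; scores per axis are assumed to be in [0, 1].
AXES = ("reasoning_depth", "domain_accuracy", "calibration", "practical_value")

def admit(example, floor=0.6):
    """Admit a training example only if it clears the floor on every axis."""
    scores = example["scores"]
    return all(scores.get(axis, 0.0) >= floor for axis in AXES)

candidates = [
    {"id": "ex-1", "scores": {"reasoning_depth": 0.9, "domain_accuracy": 0.8,
                              "calibration": 0.7, "practical_value": 0.75}},
    # Fails domain_accuracy, so it is rejected even with strong reasoning.
    {"id": "ex-2", "scores": {"reasoning_depth": 0.9, "domain_accuracy": 0.4,
                              "calibration": 0.8, "practical_value": 0.9}},
]
admitted = [c["id"] for c in candidates if admit(c)]
print(admitted)  # → ['ex-1']
```

A per-axis floor, rather than an averaged score, is what keeps a fluent but factually wrong example out of the training set.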
The next AI is not a model you call. It is an entity that grows alongside you, with memory you own and a constitution you can audit. Hydra is the most ambitious thing on the desk and the version of that thesis I am betting on. A living digital entity in Rust, built around 68 crates with a self-writing genome, persistent memory, and constitutional governance. Drop a TOML file in front of it. Hydra learns.
Once agents start paying each other, hiring each other, and splitting value across multi-step workflows with no human in the loop, cognition is only half the system. The other half is economics, and that half does not exist yet in any open form. XAP (the eXchange Agent Protocol) and the Verity Truth Engine are the layer I am building into that gap. An MIT-licensed open protocol for how autonomous agents discover, verify, negotiate, settle, and deterministically replay every economic decision they make. The protocol is at v0.2 with 115 validation tests passing, the Python SDK ships as pip install xap-sdk, and the truth engine is open source at agentra-commerce/verity-engine.
ZexRail is the production reference implementation that runs on top.
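The deterministic-replay idea can be sketched with a hash-chained decision log: every economic decision is appended with a hash of its predecessor, so a third party can replay the sequence and detect any alteration. This is an assumption-level toy, not the XAP v0.2 wire format.

```python
import hashlib
import json

def append_decision(log, decision):
    """Append a decision, chaining its hash to the previous entry."""
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(decision, sort_keys=True)  # canonical encoding
    h = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"decision": decision, "prev": prev, "hash": h})

def replay_valid(log):
    """Replay the chain from genesis; any tampering breaks a hash link."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["decision"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_decision(log, {"phase": "negotiate", "offer": 120})
append_decision(log, {"phase": "settle", "amount": 120})
print(replay_valid(log))  # → True
```

Determinism comes from the canonical JSON encoding: the same decisions always produce the same chain, which is what makes replay a verification tool rather than a log viewer.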
Agents need primitives we have never had to give software before. A memory the agent owns. An identity that can be cryptographically attested. A contract layer that enforces obligations. A communication layer that is structured rather than improvised. Each of those is a sister in the Agentra ecosystem. AgenticMemory for the persistent cognitive graph, plus Vision, Codebase, Identity, Time, Contract, Comm, Planning, Data, Workflow, Connect, Veritas, Cognition, and Reality. AgenticMemory is in active release with over 440 tests passing, dual Python and Rust distributions, and a peer-reviewed paper. The full ecosystem lives at github.com/agentralabs.
The thread connecting all of it is short. Agents operating in financial and regulated environments cannot just be capable. They have to be auditable, correctable, and honest about what they do not know. Every project on this page is one answer to that constraint at a different layer of the stack.
How I work
I think in systems.
I test ideas against the domain they live in.
I design for what actually breaks.
Get in touch
Open to speaking engagements, research collaborations, peer review, and advisory conversations. Particularly around AI governance, explainable AI for financial services, agentic systems infrastructure, and supply chain risk intelligence.
Reach me by email or on LinkedIn. Research output is indexed on ORCID.