Your AI forgets everything. OMEGA remembers.
Decisions, lessons, and context persist across every session. Set up in under 60 seconds. Zero cloud required.
Apache-2.0 · Local-first · Python 3.11+
The problem
Every session begins in darkness.
Context lost
Architecture decisions vanish when the window closes. You re-explain the same codebase every session.
Context preserved
Decisions, preferences, and architecture choices persist across every session, automatically.
Bugs re-solved
Your agent fixed that ECONNRESET last week. Today it debugs from scratch with zero memory of what worked.
Lessons recalled
Prior fixes surface automatically. OMEGA remembers what worked so your agent never re-solves the same bug.
Questions repeated
“What auth method?” gets asked every session. You become the bottleneck for your own tools.
Answers remembered
Preferences, constraints, and prior answers are stored once and recalled forever. You stop repeating yourself.
The evolution
From retrieval to real memory
The industry moved from static retrieval to agentic workflows. The next stage is memory that reads, writes, reflects, and forgets.
2023
RAG
Read-only, one-shot
“How to retrieve”
2024
Agentic RAG
Read-only via tools
“When to retrieve”
2025+
Agent Memory
Read-write via tools
“How to manage knowledge”
OMEGA operates here
Framework adapted from industry research on memory evolution in AI agents
The awakening
OMEGA remembers.
Capture
Hooks detect decisions, lessons, and errors during normal work. Nothing to tag manually.
Store & Index
Memories get embedded, indexed for full-text and semantic search, and stored locally in SQLite.
Surface
Next session, relevant context appears automatically. The agent picks up where you left off.
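The capture → store → surface loop above can be sketched in a few lines. This is a toy illustration, not OMEGA's implementation: a bag-of-words overlap stands in for the bge-small-en-v1.5 embeddings, and a plain SQLite table stands in for the FTS5 + sqlite-vec hybrid index; all class and function names here are hypothetical.

```python
import sqlite3

def embed(text: str) -> set[str]:
    # Stand-in for a real embedding model: a set of lowercase tokens.
    return set(text.lower().split())

def similarity(a: set[str], b: set[str]) -> float:
    # Jaccard overlap as a stand-in for cosine similarity.
    return len(a & b) / len(a | b) if a | b else 0.0

class MemoryStore:
    """Toy memory store: capture during work, surface next session."""

    def __init__(self) -> None:
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE mem (body TEXT)")

    def capture(self, body: str) -> None:
        # In OMEGA, hooks call this automatically; no manual tagging.
        self.db.execute("INSERT INTO mem (body) VALUES (?)", (body,))

    def surface(self, query: str, k: int = 3) -> list[str]:
        # Score every stored memory against the query and return the top k.
        q = embed(query)
        rows = [r[0] for r in self.db.execute("SELECT body FROM mem")]
        rows.sort(key=lambda r: similarity(q, embed(r)), reverse=True)
        return rows[:k]

store = MemoryStore()
store.capture("Decision: use PostgreSQL for the analytics service")
store.capture("Lesson: ECONNRESET fixed by raising the keep-alive timeout")
print(store.surface("ECONNRESET fix")[0])  # surfaces the ECONNRESET lesson
```

A real deployment replaces `embed` with an ONNX embedding model and `surface` with hybrid full-text plus vector retrieval, but the session-to-session shape is the same: write during work, read at startup.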
What you get
12 MCP tools. One server.
Not just storage - search, learning, and forgetting. Fully local.
bge-small-en-v1.5 embeddings + sqlite-vec. Finds relevant memories even when the wording is different.
omega_query(query="deployment steps")
Hook system captures decisions and lessons automatically during normal work. No manual tagging required.
[hook] auto-captured: CJS/ESM mismatch fix
Lessons, preferences, and error patterns accumulate over time. Agents learn from past mistakes.
omega_lessons(task="debug API timeout")
Decay curves, conflict detection, compaction, and a full audit trail. Memories don't just pile up.
omega_forgetting_log(limit=10)
After initial retrieval, the top 20 candidates are re-scored by a neural cross-encoder. Better precision, same speed.
[reranking] 20 candidates rescored in 12ms
New memories are checked against existing ones. Negations, antonyms, and preference changes get flagged automatically.
[contradiction] "prefers dark mode" conflicts
Integration guide
Memory for OpenClaw agents.
OpenClaw has 194K+ stars but ships with flat-file memory. OMEGA adds semantic retrieval, contradiction detection, and checkpoint/resume. Install both, configure MCP, and your agents learn across sessions instead of starting from scratch.
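The `mcpServers` block below shows the shape of a standard MCP client configuration; the server name `omega` and the `python -m omega_server` entry point are illustrative placeholders, not OMEGA's actual launch command (see the repo for the real values).

```json
{
  "mcpServers": {
    "omega": {
      "command": "python",
      "args": ["-m", "omega_server"]
    }
  }
}
```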
The economics
Memory that scales without the bill.
Some memory systems dump 70K tokens into every query. Others charge $249/mo for graph features. OMEGA retrieves only what's relevant - locally, for free.
Fewer tokens per query
Hybrid search retrieves 5–10 relevant memories (~1.5K tokens) vs. 70K token context dumps.
Embedding costs
Local ONNX model (bge-small-en-v1.5). No OpenAI API key. No per-query fees. Ever.
Platform fees
Mem0 Pro: $249/mo. Zep Cloud: $25–475/mo. OMEGA core: Apache-2.0, free forever.
Monthly context cost at scale
10,000 sessions/month · GPT-4 Turbo input pricing
That's $82,200/year saved - enough to hire another engineer.
Based on GPT-4 Turbo input pricing ($0.01/1K tokens). OMEGA's local ONNX embeddings add $0. See full cost comparison.
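The savings figure above follows directly from the stated assumptions (10,000 sessions/month, $0.01 per 1K input tokens, 70K-token context dumps vs. ~1.5K tokens of targeted retrieval):

```python
# Reproduce the headline number from the pricing assumptions above.
SESSIONS_PER_MONTH = 10_000
PRICE_PER_1K_TOKENS = 0.01  # GPT-4 Turbo input pricing, USD

def monthly_cost(tokens_per_session: int) -> float:
    """Monthly context spend for a given per-session token budget."""
    return SESSIONS_PER_MONTH * tokens_per_session / 1000 * PRICE_PER_1K_TOKENS

dump = monthly_cost(70_000)   # full-context dump: $7,000/month
omega = monthly_cost(1_500)   # targeted retrieval: $150/month
print(f"${(dump - omega) * 12:,.0f}/year saved")  # $82,200/year saved
```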
Beyond the infinite
#1 on LongMemEval - ICLR 2025 - 500 questions
LongMemEval (ICLR 2025) is the standard benchmark for AI memory systems. 500 questions testing extraction, reasoning, temporal understanding, and abstention.
Tested with GPT-4.1 + OMEGA v1.0.0. Full methodology and source available in the repo. *Zep/Graphiti score from their published evaluation.
See the full comparison with Mem0, Zep, and Letta or read how OMEGA works under the hood.
If OMEGA is useful, a star helps others find it.
Questions
Frequently asked.
Stay in the loop
Get updates on OMEGA.
New features, benchmarks, and launch news. No spam.
Get started in 60 seconds.
Two commands. No API keys. No Docker. No cloud signup.
Works with
Apache 2.0 · Foundation Governed · Local-first · Python 3.11+