#1 on LongMemEval · ICLR 2025

Your AI forgets everything. OMEGA remembers.

Decisions, lessons, and context persist across every session. Set up in under 60 seconds. Zero cloud required.

Apache-2.0 · Local-first · Python 3.11+

$pip install omega-memory && omega setup
12 MCP tools registered. Ready.
>The auth tests are failing again
[OMEGA]You fixed this Jan 12: stale JWT secret in .env.test.
Applied in 8 seconds. No re-debugging.

The problem

Every session begins in darkness.


Context lost

Architecture decisions vanish when the window closes. You re-explain the same codebase every session.

Context preserved

Decisions, preferences, and architecture choices persist across every session, automatically.

Bugs re-solved

Your agent fixed that ECONNRESET last week. Today it debugs from scratch with zero memory of what worked.

Lessons recalled

Prior fixes surface automatically. OMEGA remembers what worked so your agent never re-solves the same bug.


Questions repeated

“What auth method?” gets asked every session. You become the bottleneck for your own tools.

Answers remembered

Preferences, constraints, and prior answers are stored once and recalled forever. You stop repeating yourself.

The evolution

From retrieval to real memory

The industry moved from static retrieval to agentic workflows. The next stage is memory that reads, writes, reflects, and forgets.

2023 · RAG · Read-only, one-shot · How to retrieve

2024 · Agentic RAG · Read-only via tools · When to retrieve

2025+ · Agent Memory · Read-write via tools · How to manage knowledge

OMEGA operates here

Framework adapted from industry research on memory evolution in AI agents

The awakening

OMEGA remembers.

$omega_query("that webpack thing that broke prod")
[reranking]47 candidates → top 3:
1. "CJS/ESM mismatch in stripe lib" (0.96) Nov 12
FIX: dynamic import() + moduleResolution: 'bundler'
2. "Webpack 5 broke Firebase init" (0.89) Oct 28
3. "Can't resolve 'crypto'" (0.84) Oct 15
>Check for contradictions before I deploy
[OMEGA]Scanning 230 memories...
[⚠️ conflict]You said "never use Redis for sessions"
on Dec 3, but redis-session-store was just added.
Reason from Dec 3: "session data must survive crashes"
01

Capture

Hooks detect decisions, lessons, and errors during normal work. Nothing to tag manually.

02

Store & Index

Memories get embedded, indexed for full-text and semantic search, and stored locally in SQLite.

03

Surface

Next session, relevant context appears automatically. The agent picks up where you left off.
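The three steps above can be sketched end to end. This is an illustrative toy, not OMEGA's implementation: the hashed bag-of-words `embed` stands in for the real bge-small-en-v1.5 model, and a plain Python dict stands in for the sqlite-vec index.

```python
import re
import sqlite3
import zlib
from math import sqrt

DIM = 256

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model such as bge-small-en-v1.5:
    # a hashed bag-of-words vector, enough to illustrate the flow.
    vec = [0.0] * DIM
    for token in re.findall(r"[a-z0-9]+", text.lower()):
        vec[zlib.crc32(token.encode()) % DIM] += 1.0
    norm = sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, text TEXT)")
vectors: dict[int, list[float]] = {}  # a real system would use sqlite-vec

def capture(text: str) -> None:                      # 01 Capture
    cur = db.execute("INSERT INTO memories (text) VALUES (?)", (text,))
    vectors[cur.lastrowid] = embed(text)             # 02 Store & Index

def surface(query: str) -> str:                      # 03 Surface
    q = embed(query)
    best = max(vectors, key=lambda mid: cosine(q, vectors[mid]))
    return db.execute(
        "SELECT text FROM memories WHERE id = ?", (best,)
    ).fetchone()[0]

capture("Auth tests failing: stale JWT secret in .env.test")
capture("Deploys switched to blue-green to avoid downtime")
recalled = surface("the auth tests are failing again")
```

Even with a crude embedder, the semantically closest memory surfaces; swapping in a learned model and a vector index changes quality, not structure.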

What you get

12 MCP tools. One server.

Not just storage - search, learning, and forgetting. Fully local.

Semantic Search

bge-small-en-v1.5 embeddings + sqlite-vec. Finds relevant memories even when the wording is different.

omega_query(query="deployment steps")
Auto-Capture

Hook system captures decisions and lessons automatically during normal work. No manual tagging required.

[hook] auto-captured: CJS/ESM mismatch fix
Cross-Session Learning

Lessons, preferences, and error patterns accumulate over time. Agents learn from past mistakes.

omega_lessons(task="debug API timeout")
Intelligent Forgetting

Decay curves, conflict detection, compaction, and a full audit trail. Memories don't just pile up.

omega_forgetting_log(limit=10)
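A minimal sketch of decay-based scoring, assuming an exponential curve with a hypothetical 30-day half-life (OMEGA's actual decay parameters may differ):

```python
from math import exp, log

def retention(age_days: float, half_life_days: float = 30.0) -> float:
    # Exponential decay: a memory's weight halves every half_life_days.
    # The 30-day half-life is an illustrative assumption.
    return exp(-log(2) * age_days / half_life_days)

def forgetting_score(importance: float, age_days: float) -> float:
    # Low-importance, old memories sink toward zero and become
    # candidates for compaction or removal (with an audit entry).
    return importance * retention(age_days)

fresh = forgetting_score(importance=0.9, age_days=1)    # recent decision
stale = forgetting_score(importance=0.2, age_days=120)  # old trivia
```

A real system would also factor in recall frequency and conflicts, but the shape is the same: relevance decays unless reinforced.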
Cross-Encoder Reranking

After initial retrieval, the top 20 candidates are re-scored by a neural cross-encoder. Better precision, same speed.

[reranking] 20 candidates rescored in 12ms
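The reranking step can be sketched as follows. The token-overlap `cross_encoder_score` here is a stand-in for a real neural cross-encoder, which scores each (query, candidate) pair jointly rather than comparing precomputed embeddings:

```python
def cross_encoder_score(query: str, candidate: str) -> float:
    # Stand-in scorer: Jaccard overlap of tokens. A neural cross-encoder
    # would encode the concatenated pair and output a relevance score.
    q, c = set(query.lower().split()), set(candidate.lower().split())
    return len(q & c) / len(q | c) if q | c else 0.0

def rerank(query: str, candidates: list[str], top_k: int = 3) -> list[str]:
    # First-stage retrieval returns a broad candidate set; the reranker
    # re-scores each one against the query and keeps the best top_k.
    ranked = sorted(
        candidates,
        key=lambda c: cross_encoder_score(query, c),
        reverse=True,
    )
    return ranked[:top_k]

hits = rerank(
    "webpack broke prod",
    [
        "CJS/ESM mismatch in stripe lib",
        "Webpack 5 broke Firebase init",
        "Team lunch moved to Friday",
    ],
    top_k=2,
)
```

The two-stage design is the standard trade-off: cheap retrieval casts a wide net, and the more expensive pairwise scorer only runs on the shortlist.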
Contradiction Detection

New memories are checked against existing ones. Negations, antonyms, and preference changes get flagged automatically.

[contradiction] "prefers dark mode" conflicts
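A simplified sketch of polarity-based conflict flagging, in the spirit of the Redis example earlier on this page. Real contradiction detection also covers antonyms and preference changes, which this heuristic ignores:

```python
import re

NEGATORS = {"never", "not", "no", "avoid"}

def parse(memory: str) -> tuple[frozenset, bool]:
    # Split a memory into content words plus a negation flag.
    tokens = set(re.findall(r"[a-z]+", memory.lower()))
    return frozenset(tokens - NEGATORS), bool(tokens & NEGATORS)

def contradicts(a: str, b: str, min_overlap: int = 2) -> bool:
    # Flag a conflict when two memories share enough content words
    # but disagree on polarity.
    words_a, neg_a = parse(a)
    words_b, neg_b = parse(b)
    return len(words_a & words_b) >= min_overlap and neg_a != neg_b

conflict = contradicts(
    "never use Redis for sessions",
    "use Redis for sessions via redis-session-store",
)
```

Checking each new memory against existing ones this way is what lets a conflict surface before deploy rather than after.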

Integration guide

Memory for OpenClaw agents.

OpenClaw has 194K+ stars but ships with flat-file memory. OMEGA adds semantic retrieval, contradiction detection, and checkpoint/resume. Install both, configure MCP, and your agents learn across sessions instead of starting from scratch.

The economics

Memory that scales without the bill.

Some memory systems dump 70K tokens into every query. Others charge $249/mo for graph features. OMEGA retrieves only what's relevant - locally, for free.

~50×

Fewer tokens per query

Hybrid search retrieves 5–10 relevant memories (~1.5K tokens) vs. 70K token context dumps.

$0

Embedding costs

Local ONNX model (bge-small-en-v1.5). No OpenAI API key. No per-query fees. Ever.

$0/mo

Platform fees

Mem0 Pro: $249/mo. Zep Cloud: $25–475/mo. OMEGA core: Apache-2.0, free forever.

Monthly context cost at scale

10,000 sessions/month · GPT-4 Turbo input pricing

OMEGA: $150/mo
Zep Cloud: $1,000–1,475/mo
Observational: $7,000/mo

That's $82,200/year saved - enough to hire another engineer.

Based on GPT-4 Turbo input pricing ($0.01/1K tokens). OMEGA's local ONNX embeddings add $0. See full cost comparison.
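The figures above follow directly from the stated assumptions (10,000 sessions/month, $0.01 per 1K input tokens):

```python
PRICE_PER_1K_TOKENS = 0.01   # GPT-4 Turbo input pricing, per the note above
SESSIONS_PER_MONTH = 10_000

def monthly_cost(tokens_per_query: int) -> float:
    # Context tokens sent per session, priced per 1K input tokens.
    return SESSIONS_PER_MONTH * tokens_per_query / 1_000 * PRICE_PER_1K_TOKENS

omega = monthly_cost(1_500)       # ~1.5K tokens of targeted retrieval
full_dump = monthly_cost(70_000)  # 70K-token full-context dumps
yearly_savings = (full_dump - omega) * 12
```

Under these assumptions the monthly costs come out to $150 vs. $7,000, and the annual gap matches the $82,200 figure quoted above.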

Beyond the infinite


#1 on LongMemEval - ICLR 2025 - 500 questions

OMEGA: 95.4%
Zep / Graphiti*: 71.2%
No Memory: 49.6%

LongMemEval (ICLR 2025) is the standard benchmark for AI memory systems. 500 questions testing extraction, reasoning, temporal understanding, and abstention.

Tested with GPT-4.1 + OMEGA v1.0.0. Full methodology and source available in the repo. *Zep/Graphiti score from their published evaluation.

See the full comparison with Mem0, Zep, and Letta or read how OMEGA works under the hood.

Star on GitHub

If OMEGA is useful, a star helps others find it.

Questions

Frequently asked.

How is OMEGA different from Mem0?

Mem0 is cloud-first — it requires an API key and sends your data to their servers. Graph features cost $249/mo. OMEGA is local-first: everything runs on your machine, embeddings are computed locally with ONNX, and graph relationships are included free. OMEGA also scores 95.4% on LongMemEval; Mem0 hasn't published a score.

Is OMEGA production-ready?

Yes. OMEGA v1.0 ships with 12 MCP tools, AES-256 encryption at rest, intelligent forgetting with audit trails, and multi-agent coordination. It's been tested across thousands of sessions and is the #1 ranked system on the LongMemEval benchmark (ICLR 2025).

Does OMEGA need an API key or cloud account?

No. OMEGA uses a local ONNX embedding model (bge-small-en-v1.5) and SQLite for storage. Zero API keys, zero cloud dependencies, zero external calls. Your data never leaves your machine.

What's the performance overhead?

Minimal. Embedding a memory takes ~8ms on CPU. Queries return in under 50ms. The SQLite database and ONNX model add about 100MB to disk. There's no background server to run — OMEGA starts on demand via MCP.

Which tools does OMEGA work with?

Any MCP-compatible client: Claude Code, Cursor, Windsurf, Cline, and more. If your tool supports the Model Context Protocol, OMEGA works with it. Setup takes two commands.

Stay in the loop

Get updates on OMEGA.

New features, benchmarks, and launch news. No spam.

Get started in 60 seconds.

Two commands. No API keys. No Docker. No cloud signup.

$pip install omega-memory && omega setup
MCP server configured. Embeddings loaded.
Database initialized at ~/.omega/omega.db
# Next time Claude starts:
[OMEGA]No prior context. Starting fresh.
# The session after that:
[OMEGA]Welcome back. 7 decisions recalled.
You'll never start from zero again.

Works with

Claude Code · Cursor · Windsurf · Cline · Any MCP Client

Apache 2.0 · Foundation Governed · Local-first · Python 3.11+

Star on GitHub