
OMEGA

Persistent memory for AI agents

Decisions, lessons, and context persist across every session. Two commands to install. Zero cloud required.

The problem

Every session begins in darkness.


Context lost

Architecture decisions vanish when the window closes. You re-explain the same codebase every session.

Bugs re-solved

Your agent fixed that ECONNRESET last week. Today it debugs from scratch, with zero memory of what worked.


Questions repeated

"What auth method?" gets asked every session. You become the bottleneck for your own tools.

The awakening

OMEGA remembers.

$ pip install omega-memory && omega setup
15 MCP tools registered. Ready.

# New session - but OMEGA was here before:
[SessionStart] Welcome back. 3 decisions surfaced.
Auth: JWT with 15-min refresh (Dec 12)
API: REST → tRPC migration approved (Dec 14)
[lesson] ECONNRESET - connection pool fix (accessed 3×)
01

Capture

Hooks detect decisions, lessons, and errors during normal work. Nothing to tag manually.

02

Store & Index

Memories get embedded, indexed for full-text and semantic search, and stored locally in SQLite.

03

Surface

Next session, relevant context appears automatically. The agent picks up where you left off.
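The three steps above can be sketched as a toy capture/persist/surface loop. This is a stdlib-only illustration of the idea, not OMEGA's hook system; all names and the JSON store are invented for the example.

```python
# Toy version of the capture -> store -> surface loop. OMEGA uses hooks
# and SQLite; here a JSON file stands in. Names are illustrative.
import json, os, tempfile

STORE = os.path.join(tempfile.gettempdir(), "omega_demo.json")

def surface(store=STORE):
    # Step 3: at session start, load everything captured before.
    if not os.path.exists(store):
        return []
    with open(store) as f:
        return json.load(f)

def capture(kind, text, store=STORE):
    # Steps 1-2: a hook fires during normal work; the memory is persisted.
    items = surface(store)
    items.append({"kind": kind, "text": text})
    with open(store, "w") as f:
        json.dump(items, f)

# "Session 1" captures a decision; "session 2" surfaces it.
capture("decision", "REST -> tRPC migration approved")
print(surface()[-1]["text"])  # REST -> tRPC migration approved
```

The point is persistence across process boundaries: nothing in the second session re-explains the decision, it is simply read back.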

What you get

15 MCP tools. One server.

Not just storage - search, learning, and forgetting. Fully local.

Semantic Search

bge-small-en-v1.5 embeddings + sqlite-vec. Finds relevant memories even when the wording is different.

omega_query(query="deployment steps")
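In OMEGA the embeddings come from bge-small-en-v1.5 and the nearest-neighbor lookup runs through sqlite-vec. This stdlib-only sketch shows the underlying idea: rank stored memories by cosine similarity to a query vector. The toy 4-dimensional vectors stand in for real 384-dimensional model embeddings.

```python
# Semantic search reduced to its core: cosine similarity over vectors.
# Toy vectors stand in for model embeddings; memory texts are invented.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

memories = {
    "Deploy: docker compose up, then run migrations": [0.9, 0.1, 0.0, 0.0],
    "Auth decision: JWT with 15-min refresh":         [0.0, 0.1, 0.9, 0.0],
}

def query(query_vec, k=1):
    # Return the k memories whose embeddings are closest to the query.
    ranked = sorted(memories, key=lambda t: cosine(memories[t], query_vec),
                    reverse=True)
    return ranked[:k]

# A query embedded near the deployment memory matches it even though
# the wording ("deployment steps" vs "docker compose up") differs.
print(query([1.0, 0.0, 0.1, 0.0]))
```

This is why semantic search beats keyword search here: the match is made in embedding space, so paraphrases still retrieve the right memory.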
Auto-Capture

Hook system captures decisions and lessons automatically during normal work. No manual tagging required.

[hook] auto-captured: CJS/ESM mismatch fix
Cross-Session Learning

Lessons, preferences, and error patterns accumulate over time. Agents learn from past mistakes.

omega_lessons(task="debug API timeout")
Intelligent Forgetting

Decay curves, conflict detection, compaction, and a full audit trail. Memories don't just pile up.

omega_forgetting_log(limit=10)
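A decay curve of the kind named above can be sketched as a retention score: old, rarely accessed memories fall toward zero while frequently used ones persist. This is a minimal stdlib illustration assuming an exponential half-life; the function and parameters are invented for the example, not OMEGA's actual scoring.

```python
# Hypothetical retention score: exponential decay by age, boosted by
# access count. Illustrates decay-curve forgetting, not OMEGA's formula.
import math, time

def retention_score(last_access_ts: float, access_count: int,
                    half_life_days: float = 30.0) -> float:
    """Fresh score is ~1.0; it halves every half_life_days of disuse."""
    age_days = (time.time() - last_access_ts) / 86400
    decay = math.exp(-math.log(2) * age_days / half_life_days)
    # Each past access raises retention, with diminishing returns.
    boost = 1.0 + math.log1p(access_count)
    return decay * boost

now = time.time()
print(retention_score(now, 0))                  # fresh, unused: ~1.0
print(retention_score(now - 60 * 86400, 0))     # 60 days idle: ~0.25
```

Memories below some threshold become candidates for compaction or removal, and logging each such decision is what makes the forgetting auditable.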

Beyond the infinite

#1 on LongMemEval - ICLR 2025 - 500 questions

OMEGA: 95.4%
Mastra OM: 94.87%
Zep / Graphiti*: 71.2%
No Memory: 49.6%

LongMemEval (ICLR 2025) is the standard benchmark for AI memory systems. 500 questions testing extraction, reasoning, temporal understanding, and abstention.

Tested with GPT-4.1 + OMEGA v1.0.0. Full methodology and source available in the repo. *Zep/Graphiti score from their published evaluation. Mastra OM score (gpt-5-mini actor) from their published research.

OMEGA uses category-tuned prompts (different answer prompts per question type); Mastra does not. Different methodologies, not directly comparable.

Two commands. Infinite memory.

$ pip install omega-memory
$ omega setup
Done. Your next session has memory.

Works with Claude Code, Cursor, Windsurf, and any MCP client.

Apache 2.0 · Foundation Governed · Local-first · Python 3.11+ · Built in the open