How OMEGA Works
A single MCP server that connects your AI client to a local SQLite database with vector search and encryption.
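The storage layer described above can be sketched in a few lines. This is a minimal illustration, not OMEGA's actual schema: the table name, columns, and JSON-encoded embeddings are assumptions, and real embeddings would come from the bge-small-en-v1.5 model rather than hand-written vectors.

```python
import json
import math
import sqlite3

def cosine(a, b):
    # Cosine similarity between two plain-Python vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Local SQLite database; OMEGA's real schema and encryption are omitted.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, text TEXT, embedding TEXT)")

def store(text, vec):
    # Embeddings stored as JSON text for simplicity (illustrative choice).
    db.execute("INSERT INTO memories (text, embedding) VALUES (?, ?)",
               (text, json.dumps(vec)))

def search(query_vec, k=3):
    # Brute-force vector search: score every row, return the top k.
    rows = db.execute("SELECT text, embedding FROM memories").fetchall()
    scored = [(cosine(query_vec, json.loads(e)), t) for t, e in rows]
    return sorted(scored, reverse=True)[:k]

store("user prefers dark mode", [0.9, 0.1, 0.0])
store("project uses Python 3.11", [0.1, 0.9, 0.2])
results = search([0.85, 0.15, 0.0], k=1)
print(results[0][1])  # → user prefers dark mode
```

A production system would index the vectors rather than scan every row, but the shape of the pipeline is the same: embed, store, score, rank.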
12 Tools, 5 Capabilities
One memory system with semantic search, auto-capture, learning, forgetting, and graph relationships.
Beyond Storage
OMEGA doesn't just store and retrieve. It re-ranks results for precision and detects when new information contradicts what it already knows.
Initial retrieval uses fast vector similarity to find candidates. Then a neural cross-encoder (ms-marco-MiniLM-L-6-v2) re-scores the top 20 results by encoding each query-passage pair together, catching semantic relationships that separate embeddings miss.
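The two-stage pipeline can be sketched as below. `cross_encoder_score` is a toy lexical-overlap stand-in for ms-marco-MiniLM-L-6-v2; the real model encodes each query-passage pair jointly with a neural network. Everything else (field names, vectors) is illustrative.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def first_stage(query_vec, memories, k=20):
    # Stage 1: cheap similarity over precomputed embeddings finds candidates.
    return sorted(memories, key=lambda m: cosine(query_vec, m["vec"]), reverse=True)[:k]

def cross_encoder_score(query, passage):
    # Toy proxy (assumption): Jaccard word overlap. A real cross-encoder
    # sees the concatenated pair and catches relations embeddings miss.
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / max(len(q | p), 1)

def rerank(query, candidates):
    # Stage 2: re-score each query-passage pair together, then re-sort.
    return sorted(candidates,
                  key=lambda m: cross_encoder_score(query, m["text"]),
                  reverse=True)

memories = [
    {"text": "the user deploys with docker compose", "vec": [0.2, 0.8]},
    {"text": "the user prefers tabs over spaces", "vec": [0.9, 0.1]},
]
candidates = first_stage([0.3, 0.7], memories, k=2)
top = rerank("does the user use docker", candidates)[0]
print(top["text"])  # → the user deploys with docker compose
```

The design point: the bi-encoder is fast because documents are embedded once ahead of time, while the cross-encoder is slow but precise, so it only ever sees the short candidate list.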
When you store a new memory, OMEGA checks it against existing ones. Four heuristic signals detect conflicts: negation asymmetry, antonym pairs, preference value changes, and temporal overrides. Both memories get annotated with bidirectional metadata.
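The four signals can be sketched with simple rules. The word lists, the `prefers X` pattern, and the metadata shape below are illustrative assumptions, not OMEGA's actual heuristics.

```python
# Illustrative word lists; OMEGA's real rules are not reproduced here.
NEGATIONS = {"not", "no", "never", "don't", "doesn't"}
ANTONYMS = {("dark", "light"), ("enabled", "disabled"), ("likes", "dislikes")}
TEMPORAL = {"now", "currently", "anymore", "switched"}

def pref_value(text):
    # Naive "prefers X" slot extraction (assumption).
    toks = text.lower().split()
    if "prefers" in toks:
        i = toks.index("prefers")
        if i + 1 < len(toks):
            return toks[i + 1]
    return None

def detect_conflict(old, new):
    ow, nw = set(old.lower().split()), set(new.lower().split())
    shared = (ow - NEGATIONS) & (nw - NEGATIONS)
    signals = []
    # Signal 1: negation asymmetry, one side negates a shared statement.
    if (ow & NEGATIONS) != (nw & NEGATIONS) and shared:
        signals.append("negation_asymmetry")
    # Signal 2: an antonym pair split across the two memories.
    for a, b in ANTONYMS:
        if (a in ow and b in nw) or (b in ow and a in nw):
            signals.append("antonym_pair")
    # Signal 3: same "prefers X" slot, different value.
    old_pref, new_pref = pref_value(old), pref_value(new)
    if old_pref and new_pref and old_pref != new_pref:
        signals.append("preference_change")
    # Signal 4: temporal markers in the newer memory overriding the old.
    if (nw & TEMPORAL) and shared:
        signals.append("temporal_override")
    return signals

def annotate(mem_a, mem_b, signals):
    # Bidirectional metadata: each memory records the conflict and its peer.
    mem_a.setdefault("conflicts_with", []).append({"id": mem_b["id"], "signals": signals})
    mem_b.setdefault("conflicts_with", []).append({"id": mem_a["id"], "signals": signals})

old = {"id": 1, "text": "user prefers dark mode"}
new = {"id": 2, "text": "user prefers light mode"}
signals = detect_conflict(old["text"], new["text"])
if signals:
    annotate(old, new, signals)
print(signals)  # → ['antonym_pair', 'preference_change']
```

Because the annotation is bidirectional, retrieving either memory later surfaces the conflict, regardless of which one the query matches first.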
Full Comparison
Honest, side-by-side. Tool counts are approximate and based on publicly available documentation as of Feb 2026.
Benchmark Breakdown
LongMemEval (ICLR 2025) scores by category. 500 questions across 5 capability areas.
Temporal reasoning is a known weakness. I publish honest numbers because trust matters more than optics.
Category scores from our run: 95.4% task-averaged accuracy; 466/500 raw (93.2%). Methodology: LongMemEval.
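The gap between the two headline numbers comes from how they're averaged. The per-category counts below are hypothetical placeholders (only the 466/500 raw total matches the text), but they show why raw and task-averaged accuracy diverge when categories differ in size.

```python
# Hypothetical per-category (correct, total) pairs summing to 466/500.
# These are NOT OMEGA's published breakdown; they only illustrate the math.
per_category = {
    "information-extraction": (115, 120),
    "multi-session-reasoning": (100, 105),
    "knowledge-updates": (95, 100),
    "abstention": (90, 95),
    "temporal-reasoning": (66, 80),  # the weak category drags both metrics
}

raw_correct = sum(c for c, _ in per_category.values())
raw_total = sum(t for _, t in per_category.values())
raw_acc = raw_correct / raw_total  # pools all questions: 466 / 500

# Task-averaged: compute accuracy per category, then take the mean,
# so each category counts equally regardless of how many questions it has.
task_avg = sum(c / t for c, t in per_category.values()) / len(per_category)

print(f"raw: {raw_acc:.1%}  task-averaged: {task_avg:.1%}")
# → raw: 93.2%  task-averaged: 92.7%
```

With these placeholder counts the task average lands below the raw score; with a different distribution of category sizes it can land above, which is why both figures are worth reporting.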
Real Numbers
Measured on M1 MacBook Pro, ~240 memories, bge-small-en-v1.5 ONNX model. RSS via Activity Monitor.
Built for MCP
The Model Context Protocol is among the fastest-adopted standards in AI tooling. OMEGA provides the persistent memory infrastructure that the MCP spec doesn't yet standardize.
Works with any MCP client. No vendor lock-in. Ecosystem stats from Anthropic and community registries, Q1 2026.
What You Need
Minimal dependencies. Runs on hardware you already have.
System
- Python 3.11+
- macOS / Linux
- No GPU required

Clients
- Claude Code
- Cursor
- Windsurf
- Any MCP-compatible client

Footprint
- SQLite (built-in)
- ~337 MB RAM after first query
- ~10 MB per 250 memories