
Zero-Dependency AI Memory

No Docker. No Neo4j. No API keys. No cloud accounts. pip install omega-memory and you're done.

TL;DR: Most AI memory systems require external services — vector databases, embedding APIs, graph databases, Docker containers. OMEGA requires none of these. It uses SQLite for storage and ONNX for local embeddings. Everything runs on your machine with a single pip install. This matters for developer experience, regulated industries, and anyone who wants memory that works without infrastructure overhead.

What Zero Dependency Means

When we say “zero dependency,” we mean zero external infrastructure beyond Python itself. Specifically:

  • No Docker: OMEGA installs as a Python package. No container orchestration, no Dockerfile, no compose files.
  • No external databases: Storage uses SQLite, which ships with Python. No Neo4j, no PostgreSQL, no Pinecone, no Qdrant.
  • No embedding APIs: Embeddings are generated locally using ONNX Runtime. No OpenAI, Cohere, or Voyage API calls.
  • No cloud accounts: No sign-up required for any service. No cloud dashboard to manage. No billing to track.
  • No API keys: Zero API keys to any external service. No secrets to manage, rotate, or accidentally commit to git.

This is not a compromise. OMEGA scores 95.4% on LongMemEval and provides 25 MCP tools including semantic search, cross-encoder reranking, contradiction detection, intelligent forgetting, and multi-agent coordination. The zero-dependency architecture applies to the infrastructure, not the feature set.

Why This Matters

Developer Experience

One command to install. Zero configuration. No README with 15 setup steps, no environment variables to set, no Docker daemon to start. You go from zero to working memory in under 60 seconds.

Reliability

Every external service is a potential point of failure. When your embedding API is down, your memory system is down. When your vector DB has an outage, your agent forgets everything. Zero dependencies means zero external failure modes.

Cost

External embedding APIs charge per token. Vector database services charge per query or per stored vector. Cloud memory platforms charge per API call. OMEGA's ongoing cost after installation is exactly zero.

Portability

Works on any machine with Python 3.11+. Mac, Linux, Windows. Laptops, servers, CI runners. No platform-specific requirements. Copy the SQLite file and the entire memory travels with it.

SQLite + ONNX = Fully Self-Contained

OMEGA's architecture is deliberately simple. Two technologies handle all the heavy lifting:

SQLite

The world's most deployed database. Ships with Python. Requires no server process, no configuration, no administration. A single file contains the entire memory store.

  • Stores all memories, metadata, and graph edges
  • Full-text search for keyword queries
  • ACID transactions for data integrity
  • Single file — easy to back up, copy, or move
  • AES-256 encryption at rest
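Because the store is a plain SQLite file, Python's standard-library sqlite3 module is enough to look inside it. A minimal sketch — OMEGA's actual table names aren't documented here, so it enumerates whatever the schema defines rather than assuming any:

```python
import sqlite3

def list_tables(db_path: str) -> list[str]:
    """Return the names of all tables in a SQLite database file."""
    conn = sqlite3.connect(db_path)
    try:
        # sqlite_master is SQLite's built-in catalog of schema objects.
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
    finally:
        conn.close()
    return [name for (name,) in rows]
```

Point it at ~/.omega/memory.db (or any SQLite file) to see the schema — no server, no client library beyond the stdlib.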

ONNX Runtime

Open-source inference engine from Microsoft. Runs ML models on CPU without a GPU or cloud API. OMEGA uses it for both embedding generation and cross-encoder reranking.

  • bge-small-en-v1.5 for embeddings (130MB, 384-dim)
  • Cross-encoder for reranking search results
  • CPU-only — no GPU required
  • Model downloads once, runs locally forever
  • Zero API calls for any embedding operation
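Once a model like bge-small-en-v1.5 has mapped each text to a 384-dimensional vector, semantic search reduces to comparing vectors. The standard score is cosine similarity, which needs nothing beyond the stdlib — a sketch of the arithmetic (the toy vectors below are illustrative, not real model output):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Ranking stored memories against a query is then just sorting by this score — which is why the whole pipeline can run locally on CPU.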

Dependency Comparison

OMEGA

Required

  • Python 3.11+
  • pip install omega-memory

Not Required

  • No Docker
  • No Neo4j
  • No vector DB service
  • No embedding API
  • No cloud account
  • No API keys

Mem0

Required

  • Python
  • OpenAI API key (embeddings)
  • Qdrant or other vector DB
  • Cloud account (managed mode)

Zep / Graphiti

Required

  • Python
  • Neo4j database
  • OpenAI API key
  • Docker (recommended)

Letta

Required

  • Python
  • Docker
  • PostgreSQL or SQLite
  • OpenAI or other LLM API

Get Started in 60 Seconds

The entire installation process:

# Install
pip install omega-memory

# Run setup
omega setup

# That's it. 25 MCP tools are now available.

No environment variables to set. No configuration files to edit. No services to start. The first time you use an embedding tool, OMEGA downloads the ONNX model (~130MB) and caches it locally. Every subsequent operation is fully offline.

Your memory is stored in a single SQLite file at ~/.omega/memory.db. To back up your agent's entire knowledge: copy that file. To move it to another machine: copy that file. To inspect it: any SQLite client works.
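That backup story is literally a file copy. A minimal sketch using only the stdlib, with the ~/.omega/memory.db path taken from above:

```python
import shutil
from pathlib import Path

def backup_memory(db_path: str, dest_dir: str) -> Path:
    """Back up the memory store by copying the SQLite file."""
    src = Path(db_path).expanduser()
    dest = Path(dest_dir).expanduser()
    dest.mkdir(parents=True, exist_ok=True)
    target = dest / src.name
    # copy2 preserves timestamps; run this while no writer is active
    # so the file isn't copied mid-transaction.
    shutil.copy2(src, target)
    return target

# e.g. backup_memory("~/.omega/memory.db", "~/backups/omega")
```

The same function doubles as "move to another machine": copy the file anywhere Python runs and the memory travels with it.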

Frequently Asked

What does “zero dependency” actually mean?

Zero dependency means OMEGA requires no external services to function. No Docker containers, no Neo4j or other graph databases, no vector database services (Pinecone, Qdrant), no embedding APIs (OpenAI, Cohere), and no cloud accounts. Everything — storage, embeddings, search, reranking — runs locally using SQLite and ONNX. The only requirement is Python 3.11+.

How can OMEGA do embeddings without an API?

OMEGA uses ONNX Runtime to run embedding models locally on your CPU. The default model (bge-small-en-v1.5) is 130MB and produces high-quality 384-dimensional embeddings. No GPU required. No API calls. The model downloads once on first use and runs locally forever after. Cross-encoder reranking also runs locally via ONNX.

Does zero dependency mean limited features?

No. OMEGA has 25 MCP tools covering semantic search, cross-encoder reranking, typed memories, graph relationships, contradiction detection, intelligent forgetting, multi-agent coordination, behavioral analysis, and AES-256 encryption. Zero dependency refers to the infrastructure, not the feature set. The 95.4% LongMemEval score is achieved entirely with local components.

Can OMEGA work in air-gapped environments?

Yes. After the initial pip install (which downloads the package and ONNX models), OMEGA runs entirely offline. No network access is required for any operation — storage, embedding, search, and coordination all happen locally. This makes it suitable for air-gapped environments in defense, intelligence, and high-security financial systems.

One pip install. Zero dependencies.

Persistent memory that works out of the box. No infrastructure required.